Dataset schema (column name, type, and value range as reported by the dataset viewer):

| Column | Type | Values |
|---|---|---|
| problem_id | string | lengths 18–22 |
| source | string | 1 distinct value |
| task_type | string | 1 distinct value |
| in_source_id | string | lengths 13–58 |
| prompt | string | lengths 1.71k–18.9k |
| golden_diff | string | lengths 145–5.13k |
| verification_info | string | lengths 465–23.6k |
| num_tokens_prompt | int64 | 556–4.1k |
| num_tokens_diff | int64 | 47–1.02k |
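A record can be loaded and unpacked roughly as sketched below. This assumes the corpus is published on the Hugging Face Hub under the repository ID shown in the `source` column and exposes a `train` split; both are assumptions, so point `load_dataset` at a local copy or a different split name if they do not hold.

```python
# Minimal sketch of loading the corpus and unpacking one record.
# Assumptions: the dataset is available on the Hub as "rasdani/github-patches"
# (the value in the `source` column) and has a "train" split.
import json

from datasets import load_dataset

ds = load_dataset("rasdani/github-patches", split="train")

row = ds[0]
print(row["problem_id"], row["in_source_id"])
print(row["num_tokens_prompt"], row["num_tokens_diff"])

# `verification_info` is a JSON string carrying the issue text, the golden
# diff, and the pre-patch file contents ("before_files").
info = json.loads(row["verification_info"])
print(sorted(info.keys()))  # ['before_files', 'golden_diff', 'issue']
```

The example records below are shown with each field labeled.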

**problem_id:** `gh_patches_debug_3412` | **source:** `rasdani/github-patches` | **task_type:** `git_diff` | **in_source_id:** `dynaconf__dynaconf-767`

**prompt:**
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[bug] filter_strategy config not working
**Describe the bug**
It seems that the `filter_strategy` config (which by the way is misspelled int the [docs](https://github.com/rochacbruno/dynaconf/blob/master/docs/configuration.md)) is not taken into account when used in the `Dynaconf` constructor.
**To Reproduce**
Steps to reproduce the behavior:
1. Having the following folder structure
Just a plain python script.
2. Having the following config files:
No config file, just using env variables
4. Having the following app code:
<details>
<summary> Code </summary>
**/test/test.py**
```python
import dynaconf
class CustomFilter:
def __call__(self, data):
print("this is never called")
return {
k: v
for k, v in data.items()
if k.startswith("PREFIX")
}
if __name__ == "__main__":
dc = dynaconf.Dynaconf(
envvar_prefix=False,
filter_strategy=CustomFilter(),
)
print(dc.as_dict())
```
</details>
5. Executing under the following environment
<details>
<summary> Execution </summary>
```bash
PREFIX_VAR="HELLO" OTHER_VAR="WORLD" python test.py
```
</details>
**Expected behavior**
`CustomFilter` should be called ("this is never called" should be displayed) and only the `PREFIX_VAR` should be in dict, not `OTHER_VAR`
**Environment (please complete the following information):**
- OS: Linux version 5.10.60.1-microsoft-standard-WSL2
- Dynaconf Version 3.1.9
- Framework: None
**Context**
I was looking for a way to filter out empty environment variables.
</issue>
<code>
[start of dynaconf/loaders/env_loader.py]
1 from __future__ import annotations
2
3 from os import environ
4
5 from dynaconf.utils import missing
6 from dynaconf.utils import upperfy
7 from dynaconf.utils.parse_conf import parse_conf_data
8 from dynaconf.vendor.dotenv import cli as dotenv_cli
9
10
11 IDENTIFIER = "env"
12
13
14 def load(obj, env=None, silent=True, key=None):
15 """Loads envvars with prefixes:
16
17 `DYNACONF_` (default global) or `$(ENVVAR_PREFIX_FOR_DYNACONF)_`
18 """
19 global_prefix = obj.get("ENVVAR_PREFIX_FOR_DYNACONF")
20 if global_prefix is False or global_prefix.upper() != "DYNACONF":
21 load_from_env(obj, "DYNACONF", key, silent, IDENTIFIER + "_global")
22
23 # Load the global env if exists and overwrite everything
24 load_from_env(obj, global_prefix, key, silent, IDENTIFIER + "_global")
25
26
27 def load_from_env(
28 obj,
29 prefix=False,
30 key=None,
31 silent=False,
32 identifier=IDENTIFIER,
33 env=False, # backwards compatibility bc renamed param
34 ):
35 if prefix is False and env is not False:
36 prefix = env
37
38 env_ = ""
39 if prefix is not False:
40 if not isinstance(prefix, str):
41 raise TypeError("`prefix/env` must be str or False")
42
43 prefix = prefix.upper()
44 env_ = f"{prefix}_"
45
46 # Load a single environment variable explicitly.
47 if key:
48 key = upperfy(key)
49 value = environ.get(f"{env_}{key}")
50 if value:
51 try: # obj is a Settings
52 obj.set(key, value, loader_identifier=identifier, tomlfy=True)
53 except AttributeError: # obj is a dict
54 obj[key] = parse_conf_data(
55 value, tomlfy=True, box_settings=obj
56 )
57
58 # Load environment variables in bulk (when matching).
59 else:
60 # Only known variables should be loaded from environment?
61 ignore_unknown = obj.get("IGNORE_UNKNOWN_ENVVARS_FOR_DYNACONF")
62
63 trim_len = len(env_)
64 data = {
65 key[trim_len:]: parse_conf_data(
66 data, tomlfy=True, box_settings=obj
67 )
68 for key, data in environ.items()
69 if key.startswith(env_)
70 and not (
71 # Ignore environment variables that haven't been
72 # pre-defined in settings space.
73 ignore_unknown
74 and obj.get(key[trim_len:], default=missing) is missing
75 )
76 }
77 # Update the settings space based on gathered data from environment.
78 if data:
79 obj.update(data, loader_identifier=identifier)
80
81
82 def write(settings_path, settings_data, **kwargs):
83 """Write data to .env file"""
84 for key, value in settings_data.items():
85 quote_mode = (
86 isinstance(value, str)
87 and (value.startswith("'") or value.startswith('"'))
88 ) or isinstance(value, (list, dict))
89 dotenv_cli.set_key(
90 str(settings_path),
91 key,
92 str(value),
93 quote_mode="always" if quote_mode else "none",
94 )
95
[end of dynaconf/loaders/env_loader.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>

**golden_diff:**
diff --git a/dynaconf/loaders/env_loader.py b/dynaconf/loaders/env_loader.py
--- a/dynaconf/loaders/env_loader.py
+++ b/dynaconf/loaders/env_loader.py
@@ -76,6 +76,9 @@
}
# Update the settings space based on gathered data from environment.
if data:
+ filter_strategy = obj.get("FILTER_STRATEGY")
+ if filter_strategy:
+ data = filter_strategy(data)
obj.update(data, loader_identifier=identifier)

**verification_info:**
{"golden_diff": "diff --git a/dynaconf/loaders/env_loader.py b/dynaconf/loaders/env_loader.py\n--- a/dynaconf/loaders/env_loader.py\n+++ b/dynaconf/loaders/env_loader.py\n@@ -76,6 +76,9 @@\n }\n # Update the settings space based on gathered data from environment.\n if data:\n+ filter_strategy = obj.get(\"FILTER_STRATEGY\")\n+ if filter_strategy:\n+ data = filter_strategy(data)\n obj.update(data, loader_identifier=identifier)\n", "issue": "[bug] filter_strategy config not working\n**Describe the bug**\r\nIt seems that the `filter_strategy` config (which by the way is misspelled int the [docs](https://github.com/rochacbruno/dynaconf/blob/master/docs/configuration.md)) is not taken into account when used in the `Dynaconf` constructor.\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n\r\n1. Having the following folder structure\r\nJust a plain python script.\r\n\r\n2. Having the following config files:\r\nNo config file, just using env variables\r\n\r\n4. Having the following app code:\r\n\r\n<details>\r\n<summary> Code </summary>\r\n\r\n**/test/test.py**\r\n```python\r\nimport dynaconf\r\n\r\nclass CustomFilter:\r\n def __call__(self, data):\r\n print(\"this is never called\")\r\n return {\r\n k: v\r\n for k, v in data.items()\r\n if k.startswith(\"PREFIX\")\r\n }\r\n\r\n\r\nif __name__ == \"__main__\":\r\n dc = dynaconf.Dynaconf(\r\n envvar_prefix=False,\r\n filter_strategy=CustomFilter(),\r\n )\r\n print(dc.as_dict())\r\n```\r\n\r\n</details>\r\n\r\n5. Executing under the following environment\r\n\r\n<details>\r\n<summary> Execution </summary>\r\n\r\n```bash\r\nPREFIX_VAR=\"HELLO\" OTHER_VAR=\"WORLD\" python test.py\r\n```\r\n\r\n</details>\r\n\r\n**Expected behavior**\r\n`CustomFilter` should be called (\"this is never called\" should be displayed) and only the `PREFIX_VAR` should be in dict, not `OTHER_VAR`\r\n\r\n**Environment (please complete the following information):**\r\n - OS: Linux version 5.10.60.1-microsoft-standard-WSL2 \r\n - Dynaconf Version 3.1.9\r\n - Framework: None\r\n\r\n**Context**\r\n\r\nI was looking for a way to filter out empty environment variables.\r\n\n", "before_files": [{"content": "from __future__ import annotations\n\nfrom os import environ\n\nfrom dynaconf.utils import missing\nfrom dynaconf.utils import upperfy\nfrom dynaconf.utils.parse_conf import parse_conf_data\nfrom dynaconf.vendor.dotenv import cli as dotenv_cli\n\n\nIDENTIFIER = \"env\"\n\n\ndef load(obj, env=None, silent=True, key=None):\n \"\"\"Loads envvars with prefixes:\n\n `DYNACONF_` (default global) or `$(ENVVAR_PREFIX_FOR_DYNACONF)_`\n \"\"\"\n global_prefix = obj.get(\"ENVVAR_PREFIX_FOR_DYNACONF\")\n if global_prefix is False or global_prefix.upper() != \"DYNACONF\":\n load_from_env(obj, \"DYNACONF\", key, silent, IDENTIFIER + \"_global\")\n\n # Load the global env if exists and overwrite everything\n load_from_env(obj, global_prefix, key, silent, IDENTIFIER + \"_global\")\n\n\ndef load_from_env(\n obj,\n prefix=False,\n key=None,\n silent=False,\n identifier=IDENTIFIER,\n env=False, # backwards compatibility bc renamed param\n):\n if prefix is False and env is not False:\n prefix = env\n\n env_ = \"\"\n if prefix is not False:\n if not isinstance(prefix, str):\n raise TypeError(\"`prefix/env` must be str or False\")\n\n prefix = prefix.upper()\n env_ = f\"{prefix}_\"\n\n # Load a single environment variable explicitly.\n if key:\n key = upperfy(key)\n value = environ.get(f\"{env_}{key}\")\n if value:\n try: # obj is a Settings\n obj.set(key, value, 
loader_identifier=identifier, tomlfy=True)\n except AttributeError: # obj is a dict\n obj[key] = parse_conf_data(\n value, tomlfy=True, box_settings=obj\n )\n\n # Load environment variables in bulk (when matching).\n else:\n # Only known variables should be loaded from environment?\n ignore_unknown = obj.get(\"IGNORE_UNKNOWN_ENVVARS_FOR_DYNACONF\")\n\n trim_len = len(env_)\n data = {\n key[trim_len:]: parse_conf_data(\n data, tomlfy=True, box_settings=obj\n )\n for key, data in environ.items()\n if key.startswith(env_)\n and not (\n # Ignore environment variables that haven't been\n # pre-defined in settings space.\n ignore_unknown\n and obj.get(key[trim_len:], default=missing) is missing\n )\n }\n # Update the settings space based on gathered data from environment.\n if data:\n obj.update(data, loader_identifier=identifier)\n\n\ndef write(settings_path, settings_data, **kwargs):\n \"\"\"Write data to .env file\"\"\"\n for key, value in settings_data.items():\n quote_mode = (\n isinstance(value, str)\n and (value.startswith(\"'\") or value.startswith('\"'))\n ) or isinstance(value, (list, dict))\n dotenv_cli.set_key(\n str(settings_path),\n key,\n str(value),\n quote_mode=\"always\" if quote_mode else \"none\",\n )\n", "path": "dynaconf/loaders/env_loader.py"}]}
**num_tokens_prompt:** 1,793 | **num_tokens_diff:** 111
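The `verification_info` payload above is enough to rebuild the pre-patch tree and check a candidate diff against it. A rough sketch follows; it is illustrative only (the dataset does not ship a verifier) and assumes `git` is installed and on PATH.

```python
# Sketch: materialize `before_files` into a scratch repo and validate a
# candidate diff with `git apply --check`. Illustrative helper, not part of
# the dataset tooling; requires git to be installed.
import json
import pathlib
import subprocess
import tempfile


def candidate_applies(verification_info: str, candidate_diff: str) -> bool:
    info = json.loads(verification_info)
    with tempfile.TemporaryDirectory() as tmp:
        root = pathlib.Path(tmp)
        subprocess.run(["git", "init", "-q"], cwd=root, check=True)
        # Write each pre-patch file to its recorded path.
        for f in info["before_files"]:
            path = root / f["path"]
            path.parent.mkdir(parents=True, exist_ok=True)
            path.write_text(f["content"])
        # --check only validates that the patch would apply cleanly.
        result = subprocess.run(
            ["git", "apply", "--check", "-"],
            cwd=root,
            input=candidate_diff,
            text=True,
        )
        return result.returncode == 0
```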

**problem_id:** `gh_patches_debug_37526` | **source:** `rasdani/github-patches` | **task_type:** `git_diff` | **in_source_id:** `freqtrade__freqtrade-8273`

**prompt:**
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
stoploss_from_open has some ambiguous with leverage
<!--
Have you searched for similar issues before posting it?
If you have discovered a bug in the bot, please [search the issue tracker](https://github.com/freqtrade/freqtrade/issues?q=is%3Aissue).
If it hasn't been reported, please create a new issue.
Please do not use bug reports to request new features.
-->
## Describe your environment
* Operating system: ____ Linux
* Python Version: _____ (`python -V`) 3.10
* CCXT version: _____ (`pip freeze | grep ccxt`) 1.95
* Freqtrade Version: ____ (`freqtrade -V` or `docker compose run --rm freqtrade -V` for Freqtrade running in docker)
Note: All issues other than enhancement requests will be closed without further comment if the above template is deleted or not filled out.
## Describe the problem:
In the doc explanation, profit and stoploss is calculated with leverage.
And this function `stoploss_from_open` (in freqtrade/strategy/strategy_helper.py) may wrong when enter short with leverage. Because it's profit_rate may large than 1 , and but open_relative_stop still less than 1, and stoploss be always negative, and function return 0.0. But in fact, it has positive profit.
*Explain the problem you have encountered*
### Steps to reproduce:
1. _____
2. _____
3. _____
### Observed Results:
* What happened?
stoploss_from_open may have ambiguous when used leverage
* What did you expect to happen?
In function dive leverage, or argument explicitly explanates without leverage
### Relevant code exceptions or logs
``` Python
def stoploss_from_open(
open_relative_stop: float,
current_profit: float,
is_short: bool = False
) -> float:
"""
Given the current profit, and a desired stop loss value relative to the open price,
return a stop loss value that is relative to the current price, and which can be
returned from `custom_stoploss`.
The requested stop can be positive for a stop above the open price, or negative for
a stop below the open price. The return value is always >= 0.
Returns 0 if the resulting stop price would be above/below (longs/shorts) the current price
:param open_relative_stop: Desired stop loss percentage relative to open price
:param current_profit: The current profit percentage
:param is_short: When true, perform the calculation for short instead of long
:return: Stop loss value relative to current price
"""
# formula is undefined for current_profit -1 (longs) or 1 (shorts), return maximum value
if (current_profit == -1 and not is_short) or (is_short and current_profit == 1):
return 1
if is_short is True:
stoploss = -1 + ((1 - open_relative_stop) / (1 - current_profit))
else:
stoploss = 1 - ((1 + open_relative_stop) / (1 + current_profit))
# negative stoploss values indicate the requested stop price is higher/lower
# (long/short) than the current price
return max(stoploss, 0.0)
```
Note: Please copy/paste text of the messages, no screenshots of logs please.
```
// paste your log here
```
</issue>
<code>
[start of freqtrade/strategy/strategy_helper.py]
1 from typing import Optional
2
3 import pandas as pd
4
5 from freqtrade.exchange import timeframe_to_minutes
6
7
8 def merge_informative_pair(dataframe: pd.DataFrame, informative: pd.DataFrame,
9 timeframe: str, timeframe_inf: str, ffill: bool = True,
10 append_timeframe: bool = True,
11 date_column: str = 'date',
12 suffix: Optional[str] = None) -> pd.DataFrame:
13 """
14 Correctly merge informative samples to the original dataframe, avoiding lookahead bias.
15
16 Since dates are candle open dates, merging a 15m candle that starts at 15:00, and a
17 1h candle that starts at 15:00 will result in all candles to know the close at 16:00
18 which they should not know.
19
20 Moves the date of the informative pair by 1 time interval forward.
21 This way, the 14:00 1h candle is merged to 15:00 15m candle, since the 14:00 1h candle is the
22 last candle that's closed at 15:00, 15:15, 15:30 or 15:45.
23
24 Assuming inf_tf = '1d' - then the resulting columns will be:
25 date_1d, open_1d, high_1d, low_1d, close_1d, rsi_1d
26
27 :param dataframe: Original dataframe
28 :param informative: Informative pair, most likely loaded via dp.get_pair_dataframe
29 :param timeframe: Timeframe of the original pair sample.
30 :param timeframe_inf: Timeframe of the informative pair sample.
31 :param ffill: Forwardfill missing values - optional but usually required
32 :param append_timeframe: Rename columns by appending timeframe.
33 :param date_column: A custom date column name.
34 :param suffix: A string suffix to add at the end of the informative columns. If specified,
35 append_timeframe must be false.
36 :return: Merged dataframe
37 :raise: ValueError if the secondary timeframe is shorter than the dataframe timeframe
38 """
39
40 minutes_inf = timeframe_to_minutes(timeframe_inf)
41 minutes = timeframe_to_minutes(timeframe)
42 if minutes == minutes_inf:
43 # No need to forwardshift if the timeframes are identical
44 informative['date_merge'] = informative[date_column]
45 elif minutes < minutes_inf:
46 # Subtract "small" timeframe so merging is not delayed by 1 small candle
47 # Detailed explanation in https://github.com/freqtrade/freqtrade/issues/4073
48 informative['date_merge'] = (
49 informative[date_column] + pd.to_timedelta(minutes_inf, 'm') -
50 pd.to_timedelta(minutes, 'm')
51 )
52 else:
53 raise ValueError("Tried to merge a faster timeframe to a slower timeframe."
54 "This would create new rows, and can throw off your regular indicators.")
55
56 # Rename columns to be unique
57 date_merge = 'date_merge'
58 if suffix and append_timeframe:
59 raise ValueError("You can not specify `append_timeframe` as True and a `suffix`.")
60 elif append_timeframe:
61 date_merge = f'date_merge_{timeframe_inf}'
62 informative.columns = [f"{col}_{timeframe_inf}" for col in informative.columns]
63
64 elif suffix:
65 date_merge = f'date_merge_{suffix}'
66 informative.columns = [f"{col}_{suffix}" for col in informative.columns]
67
68 # Combine the 2 dataframes
69 # all indicators on the informative sample MUST be calculated before this point
70 if ffill:
71 # https://pandas.pydata.org/docs/user_guide/merging.html#timeseries-friendly-merging
72 # merge_ordered - ffill method is 2.5x faster than seperate ffill()
73 dataframe = pd.merge_ordered(dataframe, informative, fill_method="ffill", left_on='date',
74 right_on=date_merge, how='left')
75 else:
76 dataframe = pd.merge(dataframe, informative, left_on='date',
77 right_on=date_merge, how='left')
78 dataframe = dataframe.drop(date_merge, axis=1)
79
80 # if ffill:
81 # dataframe = dataframe.ffill()
82
83 return dataframe
84
85
86 def stoploss_from_open(
87 open_relative_stop: float,
88 current_profit: float,
89 is_short: bool = False
90 ) -> float:
91 """
92
93 Given the current profit, and a desired stop loss value relative to the open price,
94 return a stop loss value that is relative to the current price, and which can be
95 returned from `custom_stoploss`.
96
97 The requested stop can be positive for a stop above the open price, or negative for
98 a stop below the open price. The return value is always >= 0.
99
100 Returns 0 if the resulting stop price would be above/below (longs/shorts) the current price
101
102 :param open_relative_stop: Desired stop loss percentage relative to open price
103 :param current_profit: The current profit percentage
104 :param is_short: When true, perform the calculation for short instead of long
105 :return: Stop loss value relative to current price
106 """
107
108 # formula is undefined for current_profit -1 (longs) or 1 (shorts), return maximum value
109 if (current_profit == -1 and not is_short) or (is_short and current_profit == 1):
110 return 1
111
112 if is_short is True:
113 stoploss = -1 + ((1 - open_relative_stop) / (1 - current_profit))
114 else:
115 stoploss = 1 - ((1 + open_relative_stop) / (1 + current_profit))
116
117 # negative stoploss values indicate the requested stop price is higher/lower
118 # (long/short) than the current price
119 return max(stoploss, 0.0)
120
121
122 def stoploss_from_absolute(stop_rate: float, current_rate: float, is_short: bool = False) -> float:
123 """
124 Given current price and desired stop price, return a stop loss value that is relative to current
125 price.
126
127 The requested stop can be positive for a stop above the open price, or negative for
128 a stop below the open price. The return value is always >= 0.
129
130 Returns 0 if the resulting stop price would be above the current price.
131
132 :param stop_rate: Stop loss price.
133 :param current_rate: Current asset price.
134 :param is_short: When true, perform the calculation for short instead of long
135 :return: Positive stop loss value relative to current price
136 """
137
138 # formula is undefined for current_rate 0, return maximum value
139 if current_rate == 0:
140 return 1
141
142 stoploss = 1 - (stop_rate / current_rate)
143 if is_short:
144 stoploss = -stoploss
145
146 # negative stoploss values indicate the requested stop price is higher/lower
147 # (long/short) than the current price
148 # shorts can yield stoploss values higher than 1, so limit that as well
149 return max(min(stoploss, 1.0), 0.0)
150
[end of freqtrade/strategy/strategy_helper.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>

**golden_diff:**
diff --git a/freqtrade/strategy/strategy_helper.py b/freqtrade/strategy/strategy_helper.py
--- a/freqtrade/strategy/strategy_helper.py
+++ b/freqtrade/strategy/strategy_helper.py
@@ -86,37 +86,41 @@
def stoploss_from_open(
open_relative_stop: float,
current_profit: float,
- is_short: bool = False
+ is_short: bool = False,
+ leverage: float = 1.0
) -> float:
"""
-
- Given the current profit, and a desired stop loss value relative to the open price,
+ Given the current profit, and a desired stop loss value relative to the trade entry price,
return a stop loss value that is relative to the current price, and which can be
returned from `custom_stoploss`.
The requested stop can be positive for a stop above the open price, or negative for
a stop below the open price. The return value is always >= 0.
+ `open_relative_stop` will be considered as adjusted for leverage if leverage is provided..
Returns 0 if the resulting stop price would be above/below (longs/shorts) the current price
- :param open_relative_stop: Desired stop loss percentage relative to open price
+ :param open_relative_stop: Desired stop loss percentage, relative to the open price,
+ adjusted for leverage
:param current_profit: The current profit percentage
:param is_short: When true, perform the calculation for short instead of long
+ :param leverage: Leverage to use for the calculation
:return: Stop loss value relative to current price
"""
# formula is undefined for current_profit -1 (longs) or 1 (shorts), return maximum value
- if (current_profit == -1 and not is_short) or (is_short and current_profit == 1):
+ _current_profit = current_profit / leverage
+ if (_current_profit == -1 and not is_short) or (is_short and _current_profit == 1):
return 1
if is_short is True:
- stoploss = -1 + ((1 - open_relative_stop) / (1 - current_profit))
+ stoploss = -1 + ((1 - open_relative_stop / leverage) / (1 - _current_profit))
else:
- stoploss = 1 - ((1 + open_relative_stop) / (1 + current_profit))
+ stoploss = 1 - ((1 + open_relative_stop / leverage) / (1 + _current_profit))
# negative stoploss values indicate the requested stop price is higher/lower
# (long/short) than the current price
- return max(stoploss, 0.0)
+ return max(stoploss * leverage, 0.0)
def stoploss_from_absolute(stop_rate: float, current_rate: float, is_short: bool = False) -> float:

**verification_info:**
{"golden_diff": "diff --git a/freqtrade/strategy/strategy_helper.py b/freqtrade/strategy/strategy_helper.py\n--- a/freqtrade/strategy/strategy_helper.py\n+++ b/freqtrade/strategy/strategy_helper.py\n@@ -86,37 +86,41 @@\n def stoploss_from_open(\n open_relative_stop: float,\n current_profit: float,\n- is_short: bool = False\n+ is_short: bool = False,\n+ leverage: float = 1.0\n ) -> float:\n \"\"\"\n-\n- Given the current profit, and a desired stop loss value relative to the open price,\n+ Given the current profit, and a desired stop loss value relative to the trade entry price,\n return a stop loss value that is relative to the current price, and which can be\n returned from `custom_stoploss`.\n \n The requested stop can be positive for a stop above the open price, or negative for\n a stop below the open price. The return value is always >= 0.\n+ `open_relative_stop` will be considered as adjusted for leverage if leverage is provided..\n \n Returns 0 if the resulting stop price would be above/below (longs/shorts) the current price\n \n- :param open_relative_stop: Desired stop loss percentage relative to open price\n+ :param open_relative_stop: Desired stop loss percentage, relative to the open price,\n+ adjusted for leverage\n :param current_profit: The current profit percentage\n :param is_short: When true, perform the calculation for short instead of long\n+ :param leverage: Leverage to use for the calculation\n :return: Stop loss value relative to current price\n \"\"\"\n \n # formula is undefined for current_profit -1 (longs) or 1 (shorts), return maximum value\n- if (current_profit == -1 and not is_short) or (is_short and current_profit == 1):\n+ _current_profit = current_profit / leverage\n+ if (_current_profit == -1 and not is_short) or (is_short and _current_profit == 1):\n return 1\n \n if is_short is True:\n- stoploss = -1 + ((1 - open_relative_stop) / (1 - current_profit))\n+ stoploss = -1 + ((1 - open_relative_stop / leverage) / (1 - _current_profit))\n else:\n- stoploss = 1 - ((1 + open_relative_stop) / (1 + current_profit))\n+ stoploss = 1 - ((1 + open_relative_stop / leverage) / (1 + _current_profit))\n \n # negative stoploss values indicate the requested stop price is higher/lower\n # (long/short) than the current price\n- return max(stoploss, 0.0)\n+ return max(stoploss * leverage, 0.0)\n \n \n def stoploss_from_absolute(stop_rate: float, current_rate: float, is_short: bool = False) -> float:\n", "issue": "stoploss_from_open has some ambiguous with leverage\n<!-- \r\nHave you searched for similar issues before posting it?\r\n\r\nIf you have discovered a bug in the bot, please [search the issue tracker](https://github.com/freqtrade/freqtrade/issues?q=is%3Aissue). \r\nIf it hasn't been reported, please create a new issue.\r\n\r\nPlease do not use bug reports to request new features.\r\n-->\r\n\r\n## Describe your environment\r\n\r\n * Operating system: ____ Linux\r\n * Python Version: _____ (`python -V`) 3.10\r\n * CCXT version: _____ (`pip freeze | grep ccxt`) 1.95\r\n * Freqtrade Version: ____ (`freqtrade -V` or `docker compose run --rm freqtrade -V` for Freqtrade running in docker)\r\n \r\nNote: All issues other than enhancement requests will be closed without further comment if the above template is deleted or not filled out.\r\n\r\n## Describe the problem:\r\n\r\nIn the doc explanation, profit and stoploss is calculated with leverage.\r\n\r\nAnd this function `stoploss_from_open` (in freqtrade/strategy/strategy_helper.py) may wrong when enter short with leverage. 
Because it's profit_rate may large than 1 , and but open_relative_stop still less than 1, and stoploss be always negative, and function return 0.0. But in fact, it has positive profit.\r\n\r\n*Explain the problem you have encountered*\r\n\r\n### Steps to reproduce:\r\n\r\n 1. _____\r\n 2. _____\r\n 3. _____\r\n \r\n### Observed Results:\r\n\r\n * What happened? \r\n stoploss_from_open may have ambiguous when used leverage\r\n * What did you expect to happen?\r\n In function dive leverage, or argument explicitly explanates without leverage\r\n### Relevant code exceptions or logs\r\n``` Python\r\n def stoploss_from_open(\r\n open_relative_stop: float,\r\n current_profit: float,\r\n is_short: bool = False\r\n ) -> float:\r\n \"\"\"\r\n Given the current profit, and a desired stop loss value relative to the open price,\r\n return a stop loss value that is relative to the current price, and which can be\r\n returned from `custom_stoploss`.\r\n \r\n The requested stop can be positive for a stop above the open price, or negative for\r\n a stop below the open price. The return value is always >= 0.\r\n \r\n Returns 0 if the resulting stop price would be above/below (longs/shorts) the current price\r\n \r\n :param open_relative_stop: Desired stop loss percentage relative to open price\r\n :param current_profit: The current profit percentage\r\n :param is_short: When true, perform the calculation for short instead of long\r\n :return: Stop loss value relative to current price\r\n \"\"\"\r\n \r\n # formula is undefined for current_profit -1 (longs) or 1 (shorts), return maximum value\r\n if (current_profit == -1 and not is_short) or (is_short and current_profit == 1):\r\n return 1\r\n \r\n if is_short is True:\r\n stoploss = -1 + ((1 - open_relative_stop) / (1 - current_profit))\r\n else:\r\n stoploss = 1 - ((1 + open_relative_stop) / (1 + current_profit))\r\n \r\n # negative stoploss values indicate the requested stop price is higher/lower\r\n # (long/short) than the current price\r\n return max(stoploss, 0.0)\r\n```\r\nNote: Please copy/paste text of the messages, no screenshots of logs please.\r\n\r\n ```\r\n // paste your log here\r\n ```\r\n\n", "before_files": [{"content": "from typing import Optional\n\nimport pandas as pd\n\nfrom freqtrade.exchange import timeframe_to_minutes\n\n\ndef merge_informative_pair(dataframe: pd.DataFrame, informative: pd.DataFrame,\n timeframe: str, timeframe_inf: str, ffill: bool = True,\n append_timeframe: bool = True,\n date_column: str = 'date',\n suffix: Optional[str] = None) -> pd.DataFrame:\n \"\"\"\n Correctly merge informative samples to the original dataframe, avoiding lookahead bias.\n\n Since dates are candle open dates, merging a 15m candle that starts at 15:00, and a\n 1h candle that starts at 15:00 will result in all candles to know the close at 16:00\n which they should not know.\n\n Moves the date of the informative pair by 1 time interval forward.\n This way, the 14:00 1h candle is merged to 15:00 15m candle, since the 14:00 1h candle is the\n last candle that's closed at 15:00, 15:15, 15:30 or 15:45.\n\n Assuming inf_tf = '1d' - then the resulting columns will be:\n date_1d, open_1d, high_1d, low_1d, close_1d, rsi_1d\n\n :param dataframe: Original dataframe\n :param informative: Informative pair, most likely loaded via dp.get_pair_dataframe\n :param timeframe: Timeframe of the original pair sample.\n :param timeframe_inf: Timeframe of the informative pair sample.\n :param ffill: Forwardfill missing values - optional but usually required\n :param 
append_timeframe: Rename columns by appending timeframe.\n :param date_column: A custom date column name.\n :param suffix: A string suffix to add at the end of the informative columns. If specified,\n append_timeframe must be false.\n :return: Merged dataframe\n :raise: ValueError if the secondary timeframe is shorter than the dataframe timeframe\n \"\"\"\n\n minutes_inf = timeframe_to_minutes(timeframe_inf)\n minutes = timeframe_to_minutes(timeframe)\n if minutes == minutes_inf:\n # No need to forwardshift if the timeframes are identical\n informative['date_merge'] = informative[date_column]\n elif minutes < minutes_inf:\n # Subtract \"small\" timeframe so merging is not delayed by 1 small candle\n # Detailed explanation in https://github.com/freqtrade/freqtrade/issues/4073\n informative['date_merge'] = (\n informative[date_column] + pd.to_timedelta(minutes_inf, 'm') -\n pd.to_timedelta(minutes, 'm')\n )\n else:\n raise ValueError(\"Tried to merge a faster timeframe to a slower timeframe.\"\n \"This would create new rows, and can throw off your regular indicators.\")\n\n # Rename columns to be unique\n date_merge = 'date_merge'\n if suffix and append_timeframe:\n raise ValueError(\"You can not specify `append_timeframe` as True and a `suffix`.\")\n elif append_timeframe:\n date_merge = f'date_merge_{timeframe_inf}'\n informative.columns = [f\"{col}_{timeframe_inf}\" for col in informative.columns]\n\n elif suffix:\n date_merge = f'date_merge_{suffix}'\n informative.columns = [f\"{col}_{suffix}\" for col in informative.columns]\n\n # Combine the 2 dataframes\n # all indicators on the informative sample MUST be calculated before this point\n if ffill:\n # https://pandas.pydata.org/docs/user_guide/merging.html#timeseries-friendly-merging\n # merge_ordered - ffill method is 2.5x faster than seperate ffill()\n dataframe = pd.merge_ordered(dataframe, informative, fill_method=\"ffill\", left_on='date',\n right_on=date_merge, how='left')\n else:\n dataframe = pd.merge(dataframe, informative, left_on='date',\n right_on=date_merge, how='left')\n dataframe = dataframe.drop(date_merge, axis=1)\n\n # if ffill:\n # dataframe = dataframe.ffill()\n\n return dataframe\n\n\ndef stoploss_from_open(\n open_relative_stop: float,\n current_profit: float,\n is_short: bool = False\n) -> float:\n \"\"\"\n\n Given the current profit, and a desired stop loss value relative to the open price,\n return a stop loss value that is relative to the current price, and which can be\n returned from `custom_stoploss`.\n\n The requested stop can be positive for a stop above the open price, or negative for\n a stop below the open price. 
The return value is always >= 0.\n\n Returns 0 if the resulting stop price would be above/below (longs/shorts) the current price\n\n :param open_relative_stop: Desired stop loss percentage relative to open price\n :param current_profit: The current profit percentage\n :param is_short: When true, perform the calculation for short instead of long\n :return: Stop loss value relative to current price\n \"\"\"\n\n # formula is undefined for current_profit -1 (longs) or 1 (shorts), return maximum value\n if (current_profit == -1 and not is_short) or (is_short and current_profit == 1):\n return 1\n\n if is_short is True:\n stoploss = -1 + ((1 - open_relative_stop) / (1 - current_profit))\n else:\n stoploss = 1 - ((1 + open_relative_stop) / (1 + current_profit))\n\n # negative stoploss values indicate the requested stop price is higher/lower\n # (long/short) than the current price\n return max(stoploss, 0.0)\n\n\ndef stoploss_from_absolute(stop_rate: float, current_rate: float, is_short: bool = False) -> float:\n \"\"\"\n Given current price and desired stop price, return a stop loss value that is relative to current\n price.\n\n The requested stop can be positive for a stop above the open price, or negative for\n a stop below the open price. The return value is always >= 0.\n\n Returns 0 if the resulting stop price would be above the current price.\n\n :param stop_rate: Stop loss price.\n :param current_rate: Current asset price.\n :param is_short: When true, perform the calculation for short instead of long\n :return: Positive stop loss value relative to current price\n \"\"\"\n\n # formula is undefined for current_rate 0, return maximum value\n if current_rate == 0:\n return 1\n\n stoploss = 1 - (stop_rate / current_rate)\n if is_short:\n stoploss = -stoploss\n\n # negative stoploss values indicate the requested stop price is higher/lower\n # (long/short) than the current price\n # shorts can yield stoploss values higher than 1, so limit that as well\n return max(min(stoploss, 1.0), 0.0)\n", "path": "freqtrade/strategy/strategy_helper.py"}]}
**num_tokens_prompt:** 3,213 | **num_tokens_diff:** 642
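Because every record carries `num_tokens_prompt` and `num_tokens_diff`, a budget-bounded subset can be carved out before running anything expensive. Continuing from the loading sketch above; the 4,000/512 thresholds are arbitrary placeholders, not dataset defaults.

```python
# Sketch: keep only records whose prompt and golden diff fit a token budget.
# The 4_000 / 512 limits are arbitrary examples.
small = ds.filter(
    lambda r: r["num_tokens_prompt"] <= 4_000 and r["num_tokens_diff"] <= 512
)
print(len(small), "of", len(ds), "records fit the budget")
```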

**problem_id:** `gh_patches_debug_16832` | **source:** `rasdani/github-patches` | **task_type:** `git_diff` | **in_source_id:** `pantsbuild__pants-20984`

**prompt:**
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`stevedore_namespace` documentation shows `str`'s doc string
**Describe the bug**
The `stevedore_namespace` BUILD file symbol has a doc-string, but it isn't shown in `pants help-all`. It instead shows what looks like the doc string for `str`.
https://github.com/pantsbuild/pants/blob/ec86d19cd954cd49a9562880a7c0dbc45632778c/src/python/pants/backend/python/framework/stevedore/target_types.py#L13-L30
To reproduce, enable the stevedore backend and look at `help` or `help-all`:
```shell
PANTS_VERSION=2.22.0.dev3 pants --backend-packages=pants.backend.experimental.python.framework.stevedore help stevedore_namespace
```
```
`stevedore_namespace` BUILD file symbol
---------------------------------------
str(object='') -> str
str(bytes_or_buffer[, encoding[, errors]]) -> str
Create a new string object from the given object. If encoding or
errors is specified, then the object must expose a data buffer
that will be decoded using the given encoding and error handler.
Otherwise, returns the result of object.__str__() (if defined)
or repr(object).
encoding defaults to sys.getdefaultencoding().
errors defaults to 'strict'.
```
```shell
PANTS_VERSION=2.22.0.dev3 pants --backend-packages=pants.backend.experimental.python.framework.stevedore help-all | \
jq .name_to_build_file_info.stevedore_namespace
```
```json
{
"documentation": "str(object='') -> str\nstr(bytes_or_buffer[, encoding[, errors]]) -> str\n\nCreate a new string object from the given object. If encoding or\nerrors is specified, then the object must expose a data buffer\nthat will be decoded using the given encoding and error handler.\nOtherwise, returns the result of object.__str__() (if defined)\nor repr(object).\nencoding defaults to sys.getdefaultencoding().\nerrors defaults to 'strict'.",
"is_target": false,
"name": "stevedore_namespace",
"signature": null
}
```
**Pants version**
Seems to be visible in 2.16 through to the currently latest.
**OS**
both
**Additional info**
- Will appear in online docs too after https://github.com/pantsbuild/pantsbuild.org/pull/216
- Relevant issues:
- https://github.com/pantsbuild/pants/discussions/18117
- https://github.com/pantsbuild/pants/issues/14832
</issue>
<code>
[start of src/python/pants/backend/python/framework/stevedore/target_types.py]
1 # Copyright 2023 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 from __future__ import annotations
5
6 from dataclasses import dataclass
7
8 from pants.backend.python.target_types import PythonDistribution
9 from pants.engine.target import StringSequenceField, Targets
10 from pants.util.strutil import help_text
11
12
13 class StevedoreNamespace(str):
14 f"""Tag a namespace in entry_points as a stevedore namespace.
15
16 This is required for the entry_point to be visible to dep inference
17 based on the `stevedore_namespaces` field.
18
19 For example:
20 {PythonDistribution.alias}(
21 ...
22 entry_points={{
23 stevedore_namespace("a.b.c"): {{
24 "plugin_name": "some.entry:point",
25 }},
26 }},
27 )
28 """
29
30 alias = "stevedore_namespace"
31
32
33 # This is a lot like a SpecialCasedDependencies field, but it doesn't list targets directly.
34 class StevedoreNamespacesField(StringSequenceField):
35 alias = "stevedore_namespaces"
36 help = help_text(
37 f"""
38 List the stevedore namespaces required by this target.
39
40 Code for all `entry_points` on `{PythonDistribution.alias}` targets with
41 these namespaces will be added as dependencies so that they are
42 available on PYTHONPATH during tests. Note that this is only a subset
43 of the `{PythonDistribution.alias}`s dependencies, so the `entry_points`
44 only need to be defined on one `{PythonDistribution.alias}` even if the
45 test only needs some of the `entry_points` namespaces on it.
46
47 Plus, an `entry_points.txt` file will be generated in the sandbox so that
48 each of the `{PythonDistribution.alias}`s appear to be "installed". The
49 `entry_points.txt` file will only include the namespaces requested on this
50 field. Without this, stevedore would not be able to look up plugins in
51 the setuptools `entry_points` metadata.
52
53 NOTE: Each `{PythonDistribution.alias}` must opt-in to being included in
54 this repo-wide inference by tagging the namespaces with
55 `{StevedoreNamespace.alias}("my.stevedore.extension")`.
56
57 The stevedore namespace format (`my.stevedore.extension`) is similar
58 to a Python namespace.
59 """
60 )
61
62
63 class AllStevedoreExtensionTargets(Targets):
64 pass
65
66
67 @dataclass(frozen=True)
68 class StevedoreNamespacesProviderTargetsRequest:
69 stevedore_namespaces: StevedoreNamespacesField
70
71
72 class StevedoreExtensionTargets(Targets):
73 pass
74
[end of src/python/pants/backend/python/framework/stevedore/target_types.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>

**golden_diff:**
diff --git a/src/python/pants/backend/python/framework/stevedore/target_types.py b/src/python/pants/backend/python/framework/stevedore/target_types.py
--- a/src/python/pants/backend/python/framework/stevedore/target_types.py
+++ b/src/python/pants/backend/python/framework/stevedore/target_types.py
@@ -11,20 +11,22 @@
class StevedoreNamespace(str):
- f"""Tag a namespace in entry_points as a stevedore namespace.
+ """Tag a namespace in entry_points as a stevedore namespace.
This is required for the entry_point to be visible to dep inference
based on the `stevedore_namespaces` field.
For example:
- {PythonDistribution.alias}(
- ...
- entry_points={{
- stevedore_namespace("a.b.c"): {{
- "plugin_name": "some.entry:point",
- }},
- }},
- )
+ ```python
+ python_distribution(
+ ...
+ entry_points={
+ stevedore_namespace("a.b.c"): {
+ "plugin_name": "some.entry:point",
+ },
+ },
+ )
+ ```
"""
alias = "stevedore_namespace"

**verification_info:**
{"golden_diff": "diff --git a/src/python/pants/backend/python/framework/stevedore/target_types.py b/src/python/pants/backend/python/framework/stevedore/target_types.py\n--- a/src/python/pants/backend/python/framework/stevedore/target_types.py\n+++ b/src/python/pants/backend/python/framework/stevedore/target_types.py\n@@ -11,20 +11,22 @@\n \n \n class StevedoreNamespace(str):\n- f\"\"\"Tag a namespace in entry_points as a stevedore namespace.\n+ \"\"\"Tag a namespace in entry_points as a stevedore namespace.\n \n This is required for the entry_point to be visible to dep inference\n based on the `stevedore_namespaces` field.\n \n For example:\n- {PythonDistribution.alias}(\n- ...\n- entry_points={{\n- stevedore_namespace(\"a.b.c\"): {{\n- \"plugin_name\": \"some.entry:point\",\n- }},\n- }},\n- )\n+ ```python\n+ python_distribution(\n+ ...\n+ entry_points={\n+ stevedore_namespace(\"a.b.c\"): {\n+ \"plugin_name\": \"some.entry:point\",\n+ },\n+ },\n+ )\n+ ```\n \"\"\"\n \n alias = \"stevedore_namespace\"\n", "issue": "`stevedore_namespace` documentation shows `str`'s doc string\n**Describe the bug**\r\n\r\nThe `stevedore_namespace` BUILD file symbol has a doc-string, but it isn't shown in `pants help-all`. It instead shows what looks like the doc string for `str`.\r\n\r\nhttps://github.com/pantsbuild/pants/blob/ec86d19cd954cd49a9562880a7c0dbc45632778c/src/python/pants/backend/python/framework/stevedore/target_types.py#L13-L30\r\n\r\nTo reproduce, enable the stevedore backend and look at `help` or `help-all`:\r\n\r\n```shell\r\nPANTS_VERSION=2.22.0.dev3 pants --backend-packages=pants.backend.experimental.python.framework.stevedore help stevedore_namespace\r\n```\r\n```\r\n`stevedore_namespace` BUILD file symbol\r\n---------------------------------------\r\n\r\nstr(object='') -> str\r\nstr(bytes_or_buffer[, encoding[, errors]]) -> str\r\n\r\nCreate a new string object from the given object. If encoding or\r\nerrors is specified, then the object must expose a data buffer\r\nthat will be decoded using the given encoding and error handler.\r\nOtherwise, returns the result of object.__str__() (if defined)\r\nor repr(object).\r\nencoding defaults to sys.getdefaultencoding().\r\nerrors defaults to 'strict'.\r\n```\r\n\r\n```shell\r\nPANTS_VERSION=2.22.0.dev3 pants --backend-packages=pants.backend.experimental.python.framework.stevedore help-all | \\\r\n jq .name_to_build_file_info.stevedore_namespace\r\n```\r\n```json\r\n{\r\n \"documentation\": \"str(object='') -> str\\nstr(bytes_or_buffer[, encoding[, errors]]) -> str\\n\\nCreate a new string object from the given object. 
If encoding or\\nerrors is specified, then the object must expose a data buffer\\nthat will be decoded using the given encoding and error handler.\\nOtherwise, returns the result of object.__str__() (if defined)\\nor repr(object).\\nencoding defaults to sys.getdefaultencoding().\\nerrors defaults to 'strict'.\",\r\n \"is_target\": false,\r\n \"name\": \"stevedore_namespace\",\r\n \"signature\": null\r\n}\r\n```\r\n\r\n**Pants version**\r\nSeems to be visible in 2.16 through to the currently latest.\r\n\r\n**OS**\r\nboth\r\n\r\n**Additional info**\r\n\r\n- Will appear in online docs too after https://github.com/pantsbuild/pantsbuild.org/pull/216\r\n- Relevant issues:\r\n - https://github.com/pantsbuild/pants/discussions/18117\r\n - https://github.com/pantsbuild/pants/issues/14832\n", "before_files": [{"content": "# Copyright 2023 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\nfrom __future__ import annotations\n\nfrom dataclasses import dataclass\n\nfrom pants.backend.python.target_types import PythonDistribution\nfrom pants.engine.target import StringSequenceField, Targets\nfrom pants.util.strutil import help_text\n\n\nclass StevedoreNamespace(str):\n f\"\"\"Tag a namespace in entry_points as a stevedore namespace.\n\n This is required for the entry_point to be visible to dep inference\n based on the `stevedore_namespaces` field.\n\n For example:\n {PythonDistribution.alias}(\n ...\n entry_points={{\n stevedore_namespace(\"a.b.c\"): {{\n \"plugin_name\": \"some.entry:point\",\n }},\n }},\n )\n \"\"\"\n\n alias = \"stevedore_namespace\"\n\n\n# This is a lot like a SpecialCasedDependencies field, but it doesn't list targets directly.\nclass StevedoreNamespacesField(StringSequenceField):\n alias = \"stevedore_namespaces\"\n help = help_text(\n f\"\"\"\n List the stevedore namespaces required by this target.\n\n Code for all `entry_points` on `{PythonDistribution.alias}` targets with\n these namespaces will be added as dependencies so that they are\n available on PYTHONPATH during tests. Note that this is only a subset\n of the `{PythonDistribution.alias}`s dependencies, so the `entry_points`\n only need to be defined on one `{PythonDistribution.alias}` even if the\n test only needs some of the `entry_points` namespaces on it.\n\n Plus, an `entry_points.txt` file will be generated in the sandbox so that\n each of the `{PythonDistribution.alias}`s appear to be \"installed\". The\n `entry_points.txt` file will only include the namespaces requested on this\n field. Without this, stevedore would not be able to look up plugins in\n the setuptools `entry_points` metadata.\n\n NOTE: Each `{PythonDistribution.alias}` must opt-in to being included in\n this repo-wide inference by tagging the namespaces with\n `{StevedoreNamespace.alias}(\"my.stevedore.extension\")`.\n\n The stevedore namespace format (`my.stevedore.extension`) is similar\n to a Python namespace.\n \"\"\"\n )\n\n\nclass AllStevedoreExtensionTargets(Targets):\n pass\n\n\n@dataclass(frozen=True)\nclass StevedoreNamespacesProviderTargetsRequest:\n stevedore_namespaces: StevedoreNamespacesField\n\n\nclass StevedoreExtensionTargets(Targets):\n pass\n", "path": "src/python/pants/backend/python/framework/stevedore/target_types.py"}]}
**num_tokens_prompt:** 1,837 | **num_tokens_diff:** 272
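Each `prompt` wraps the issue statement in `<issue>...</issue>` and the pre-patch sources in `<code>...</code>`, so the pieces can be pulled apart with plain string handling. A sketch, continuing from the loading example above:

```python
# Sketch: split a prompt into its tagged sections, relying only on the
# <issue>/<code> markers visible in the prompts above.
def extract_section(prompt: str, tag: str) -> str:
    start = prompt.index(f"<{tag}>") + len(f"<{tag}>")
    end = prompt.index(f"</{tag}>", start)
    return prompt[start:end].strip()


issue_text = extract_section(row["prompt"], "issue")
code_context = extract_section(row["prompt"], "code")
print(issue_text.splitlines()[0])  # first line of the issue statement
```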

**problem_id:** `gh_patches_debug_37847` | **source:** `rasdani/github-patches` | **task_type:** `git_diff` | **in_source_id:** `kubeflow__pipelines-3912`

**prompt:**
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
mount_pvc failing when pipeline includes ResourceOps
### What steps did you take:
Created a persistent volume claim using a `VolumeOp`:
```python
vop = dsl.VolumeOp(
name="shared-data",
resource_name="shared-data",
size="100G",
storage_class="nfs",
).set_display_name("Prepare volume")
```
Attempted to mount the PVC to every operation in a pipeline using:
```python
dsl.get_pipeline_conf().add_op_transformer(
mount_pvc(pvc_name=vop.volume.persistent_volume_claim.claim_name, volume_mount_path=MOUNT_PATH)
)
```
### What happened:
During compilation, the `add_op_transformer` attempts to invoke the `add_volume_mount` method on the `VolumeOp`, producing an error:
```
AttributeError: 'VolumeOp' object has no attribute 'add_volume_mount'
```
### What did you expect to happen:
The documentation states that transfomers take and return a `ContainerOp`:
https://kubeflow-pipelines.readthedocs.io/en/latest/source/kfp.dsl.html#kfp.dsl.PipelineConf.add_op_transformer
Given that, I would expect them to be bypassed for `VolumeOp` (or other `ResourceOp` operations), and hence this compilation should succeed.
### Environment:
<!-- Please fill in those that seem relevant. -->
KFP version: Build commit: 9c16e12
KFP SDK version: <!-- Please attach the output of this shell command: $pip list | grep kfp -->
kfp 0.5.1
kfp-server-api 0.3.0
### Anything else you would like to add:
[Miscellaneous information that will assist in solving the issue.]
/kind bug
<!-- Please include labels by uncommenting them to help us better triage issues, choose from the following -->
<!--
// /area frontend
// /area backend
// /area sdk
// /area testing
// /area engprod
-->
</issue>
<code>
[start of sdk/python/kfp/dsl/_pipeline.py]
1 # Copyright 2018-2019 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15
16 from . import _container_op
17 from . import _resource_op
18 from . import _ops_group
19 from ._component_bridge import _create_container_op_from_component_and_arguments
20 from ..components import _components
21 from ..components._naming import _make_name_unique_by_adding_index
22 import sys
23
24
25 # This handler is called whenever the @pipeline decorator is applied.
26 # It can be used by command-line DSL compiler to inject code that runs for every pipeline definition.
27 _pipeline_decorator_handler = None
28
29
30 def pipeline(name : str = None, description : str = None):
31 """Decorator of pipeline functions.
32
33 Usage:
34 ```python
35 @pipeline(
36 name='my awesome pipeline',
37 description='Is it really awesome?'
38 )
39 def my_pipeline(a: PipelineParam, b: PipelineParam):
40 ...
41 ```
42 """
43 def _pipeline(func):
44 if name:
45 func._component_human_name = name
46 if description:
47 func._component_description = description
48
49 if _pipeline_decorator_handler:
50 return _pipeline_decorator_handler(func) or func
51 else:
52 return func
53
54 return _pipeline
55
56 class PipelineConf():
57 """PipelineConf contains pipeline level settings
58 """
59 def __init__(self):
60 self.image_pull_secrets = []
61 self.timeout = 0
62 self.ttl_seconds_after_finished = -1
63 self.op_transformers = []
64 self.default_pod_node_selector = {}
65 self.image_pull_policy = None
66 self.parallelism = None
67
68 def set_image_pull_secrets(self, image_pull_secrets):
69 """Configures the pipeline level imagepullsecret
70
71 Args:
72 image_pull_secrets: a list of Kubernetes V1LocalObjectReference
73 For detailed description, check Kubernetes V1LocalObjectReference definition
74 https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/V1LocalObjectReference.md
75 """
76 self.image_pull_secrets = image_pull_secrets
77 return self
78
79 def set_timeout(self, seconds: int):
80 """Configures the pipeline level timeout
81
82 Args:
83 seconds: number of seconds for timeout
84 """
85 self.timeout = seconds
86 return self
87
88 def set_parallelism(self, max_num_pods: int):
89 """Configures the max number of total parallel pods that can execute at the same time in a workflow.
90
91 Args:
92 max_num_pods (int): max number of total parallel pods.
93 """
94 self.parallelism = max_num_pods
95 return self
96
97 def set_ttl_seconds_after_finished(self, seconds: int):
98 """Configures the ttl after the pipeline has finished.
99
100 Args:
101 seconds: number of seconds for the workflow to be garbage collected after it is finished.
102 """
103 self.ttl_seconds_after_finished = seconds
104 return self
105
106 def set_default_pod_node_selector(self, label_name: str, value: str):
107 """Add a constraint for nodeSelector for a pipeline. Each constraint is a key-value pair label. For the
108 container to be eligible to run on a node, the node must have each of the constraints appeared
109 as labels.
110
111 Args:
112 label_name: The name of the constraint label.
113 value: The value of the constraint label.
114 """
115 self.default_pod_node_selector[label_name] = value
116 return self
117
118
119 def set_image_pull_policy(self, policy: str):
120 """Configures the default image pull policy
121
122 Args:
123 policy: the pull policy, has to be one of: Always, Never, IfNotPresent.
124 For more info: https://github.com/kubernetes-client/python/blob/10a7f95435c0b94a6d949ba98375f8cc85a70e5a/kubernetes/docs/V1Container.md
125 """
126 self.image_pull_policy = policy
127 return self
128
129 def add_op_transformer(self, transformer):
130 """Configures the op_transformers which will be applied to all ops in the pipeline.
131
132 Args:
133 transformer: a function that takes a ContainOp as input and returns a ContainerOp
134 """
135 self.op_transformers.append(transformer)
136
137
138 def get_pipeline_conf():
139 """Configure the pipeline level setting to the current pipeline
140 Note: call the function inside the user defined pipeline function.
141 """
142 return Pipeline.get_default_pipeline().conf
143
144 #TODO: Pipeline is in fact an opsgroup, refactor the code.
145 class Pipeline():
146 """A pipeline contains a list of operators.
147
148 This class is not supposed to be used by pipeline authors since pipeline authors can use
149 pipeline functions (decorated with @pipeline) to reference their pipelines. This class
150 is useful for implementing a compiler. For example, the compiler can use the following
151 to get the pipeline object and its ops:
152
153 ```python
154 with Pipeline() as p:
155 pipeline_func(*args_list)
156
157 traverse(p.ops)
158 ```
159 """
160
161 # _default_pipeline is set when it (usually a compiler) runs "with Pipeline()"
162 _default_pipeline = None
163
164 @staticmethod
165 def get_default_pipeline():
166 """Get default pipeline. """
167 return Pipeline._default_pipeline
168
169 @staticmethod
170 def add_pipeline(name, description, func):
171 """Add a pipeline function with the specified name and description."""
172 # Applying the @pipeline decorator to the pipeline function
173 func = pipeline(name=name, description=description)(func)
174
175 def __init__(self, name: str):
176 """Create a new instance of Pipeline.
177
178 Args:
179 name: the name of the pipeline. Once deployed, the name will show up in Pipeline System UI.
180 """
181 self.name = name
182 self.ops = {}
183 # Add the root group.
184 self.groups = [_ops_group.OpsGroup('pipeline', name=name)]
185 self.group_id = 0
186 self.conf = PipelineConf()
187 self._metadata = None
188
189 def __enter__(self):
190 if Pipeline._default_pipeline:
191 raise Exception('Nested pipelines are not allowed.')
192
193 Pipeline._default_pipeline = self
194 self._old_container_task_constructor = _components._container_task_constructor
195 _components._container_task_constructor = _create_container_op_from_component_and_arguments
196
197 def register_op_and_generate_id(op):
198 return self.add_op(op, op.is_exit_handler)
199
200 self._old__register_op_handler = _container_op._register_op_handler
201 _container_op._register_op_handler = register_op_and_generate_id
202 return self
203
204 def __exit__(self, *args):
205 Pipeline._default_pipeline = None
206 _container_op._register_op_handler = self._old__register_op_handler
207 _components._container_task_constructor = self._old_container_task_constructor
208
209 def add_op(self, op: _container_op.BaseOp, define_only: bool):
210 """Add a new operator.
211
212 Args:
213 op: An operator of ContainerOp, ResourceOp or their inherited types.
214
215 Returns
216 op_name: a unique op name.
217 """
218 #If there is an existing op with this name then generate a new name.
219 op_name = _make_name_unique_by_adding_index(op.human_name, list(self.ops.keys()), ' ')
220
221 self.ops[op_name] = op
222 if not define_only:
223 self.groups[-1].ops.append(op)
224
225 return op_name
226
227 def push_ops_group(self, group: _ops_group.OpsGroup):
228 """Push an OpsGroup into the stack.
229
230 Args:
231 group: An OpsGroup. Typically it is one of ExitHandler, Branch, and Loop.
232 """
233 self.groups[-1].groups.append(group)
234 self.groups.append(group)
235
236 def pop_ops_group(self):
237 """Remove the current OpsGroup from the stack."""
238 del self.groups[-1]
239
240 def remove_op_from_groups(self, op):
241 for group in self.groups:
242 group.remove_op_recursive(op)
243
244 def get_next_group_id(self):
245 """Get next id for a new group. """
246
247 self.group_id += 1
248 return self.group_id
249
250 def _set_metadata(self, metadata):
251 '''_set_metadata passes the containerop the metadata information
252 Args:
253 metadata (ComponentMeta): component metadata
254 '''
255 self._metadata = metadata
256
[end of sdk/python/kfp/dsl/_pipeline.py]
[start of sdk/python/kfp/onprem.py]
1
2 def mount_pvc(pvc_name='pipeline-claim', volume_name='pipeline', volume_mount_path='/mnt/pipeline'):
3 """
4 Modifier function to apply to a Container Op to simplify volume, volume mount addition and
5 enable better reuse of volumes, volume claims across container ops.
6 Usage:
7 train = train_op(...)
8 train.apply(mount_pvc('claim-name', 'pipeline', '/mnt/pipeline'))
9 """
10 def _mount_pvc(task):
11 from kubernetes import client as k8s_client
12 local_pvc = k8s_client.V1PersistentVolumeClaimVolumeSource(claim_name=pvc_name)
13 return (
14 task
15 .add_volume(
16 k8s_client.V1Volume(name=volume_name, persistent_volume_claim=local_pvc)
17 )
18 .add_volume_mount(
19 k8s_client.V1VolumeMount(mount_path=volume_mount_path, name=volume_name)
20 )
21 )
22 return _mount_pvc
23
[end of sdk/python/kfp/onprem.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/sdk/python/kfp/dsl/_pipeline.py b/sdk/python/kfp/dsl/_pipeline.py
--- a/sdk/python/kfp/dsl/_pipeline.py
+++ b/sdk/python/kfp/dsl/_pipeline.py
@@ -102,9 +102,9 @@
"""
self.ttl_seconds_after_finished = seconds
return self
-
- def set_default_pod_node_selector(self, label_name: str, value: str):
- """Add a constraint for nodeSelector for a pipeline. Each constraint is a key-value pair label. For the
+
+ def set_default_pod_node_selector(self, label_name: str, value: str):
+ """Add a constraint for nodeSelector for a pipeline. Each constraint is a key-value pair label. For the
container to be eligible to run on a node, the node must have each of the constraints appeared
as labels.
@@ -114,7 +114,7 @@
"""
self.default_pod_node_selector[label_name] = value
return self
-
+
def set_image_pull_policy(self, policy: str):
"""Configures the default image pull policy
@@ -128,9 +128,10 @@
def add_op_transformer(self, transformer):
"""Configures the op_transformers which will be applied to all ops in the pipeline.
+ The ops can be ResourceOp, VolumenOp, or ContainerOp.
Args:
- transformer: a function that takes a ContainOp as input and returns a ContainerOp
+ transformer: a function that takes a kfp Op as input and returns a kfp Op
"""
self.op_transformers.append(transformer)
diff --git a/sdk/python/kfp/onprem.py b/sdk/python/kfp/onprem.py
--- a/sdk/python/kfp/onprem.py
+++ b/sdk/python/kfp/onprem.py
@@ -1,7 +1,7 @@
def mount_pvc(pvc_name='pipeline-claim', volume_name='pipeline', volume_mount_path='/mnt/pipeline'):
"""
- Modifier function to apply to a Container Op to simplify volume, volume mount addition and
+ Modifier function to apply to a Container Op to simplify volume, volume mount addition and
enable better reuse of volumes, volume claims across container ops.
Usage:
train = train_op(...)
@@ -9,6 +9,10 @@
"""
def _mount_pvc(task):
from kubernetes import client as k8s_client
+ # there can be other ops in a pipeline (e.g. ResourceOp, VolumeOp)
+ # refer to #3906
+ if not hasattr(task, "add_volume") or not hasattr(task, "add_volume_mount"):
+ return task
local_pvc = k8s_client.V1PersistentVolumeClaimVolumeSource(claim_name=pvc_name)
return (
task
|
{"golden_diff": "diff --git a/sdk/python/kfp/dsl/_pipeline.py b/sdk/python/kfp/dsl/_pipeline.py\n--- a/sdk/python/kfp/dsl/_pipeline.py\n+++ b/sdk/python/kfp/dsl/_pipeline.py\n@@ -102,9 +102,9 @@\n \"\"\"\n self.ttl_seconds_after_finished = seconds\n return self\n- \n- def set_default_pod_node_selector(self, label_name: str, value: str): \n- \"\"\"Add a constraint for nodeSelector for a pipeline. Each constraint is a key-value pair label. For the \n+\n+ def set_default_pod_node_selector(self, label_name: str, value: str):\n+ \"\"\"Add a constraint for nodeSelector for a pipeline. Each constraint is a key-value pair label. For the\n container to be eligible to run on a node, the node must have each of the constraints appeared\n as labels.\n \n@@ -114,7 +114,7 @@\n \"\"\"\n self.default_pod_node_selector[label_name] = value\n return self\n- \n+\n \n def set_image_pull_policy(self, policy: str):\n \"\"\"Configures the default image pull policy\n@@ -128,9 +128,10 @@\n \n def add_op_transformer(self, transformer):\n \"\"\"Configures the op_transformers which will be applied to all ops in the pipeline.\n+ The ops can be ResourceOp, VolumenOp, or ContainerOp.\n \n Args:\n- transformer: a function that takes a ContainOp as input and returns a ContainerOp\n+ transformer: a function that takes a kfp Op as input and returns a kfp Op\n \"\"\"\n self.op_transformers.append(transformer)\n \ndiff --git a/sdk/python/kfp/onprem.py b/sdk/python/kfp/onprem.py\n--- a/sdk/python/kfp/onprem.py\n+++ b/sdk/python/kfp/onprem.py\n@@ -1,7 +1,7 @@\n \n def mount_pvc(pvc_name='pipeline-claim', volume_name='pipeline', volume_mount_path='/mnt/pipeline'):\n \"\"\"\n- Modifier function to apply to a Container Op to simplify volume, volume mount addition and \n+ Modifier function to apply to a Container Op to simplify volume, volume mount addition and\n enable better reuse of volumes, volume claims across container ops.\n Usage:\n train = train_op(...)\n@@ -9,6 +9,10 @@\n \"\"\"\n def _mount_pvc(task):\n from kubernetes import client as k8s_client\n+ # there can be other ops in a pipeline (e.g. 
ResourceOp, VolumeOp)\n+ # refer to #3906\n+ if not hasattr(task, \"add_volume\") or not hasattr(task, \"add_volume_mount\"):\n+ return task\n local_pvc = k8s_client.V1PersistentVolumeClaimVolumeSource(claim_name=pvc_name)\n return (\n task\n", "issue": "mount_pvc failing when pipeline includes ResourceOps\n### What steps did you take:\r\n\r\nCreated a persistent volume claim using a `VolumeOp`:\r\n\r\n```python\r\n vop = dsl.VolumeOp(\r\n name=\"shared-data\",\r\n resource_name=\"shared-data\",\r\n size=\"100G\",\r\n storage_class=\"nfs\",\r\n ).set_display_name(\"Prepare volume\")\r\n```\r\n\r\nAttempted to mount the PVC to every operation in a pipeline using:\r\n\r\n```python\r\n dsl.get_pipeline_conf().add_op_transformer(\r\n mount_pvc(pvc_name=vop.volume.persistent_volume_claim.claim_name, volume_mount_path=MOUNT_PATH)\r\n )\r\n```\r\n\r\n### What happened:\r\nDuring compilation, the `add_op_transformer` attempts to invoke the `add_volume_mount` method on the `VolumeOp`, producing an error:\r\n\r\n```\r\nAttributeError: 'VolumeOp' object has no attribute 'add_volume_mount'\r\n```\r\n\r\n### What did you expect to happen:\r\n\r\nThe documentation states that transfomers take and return a `ContainerOp`:\r\n\r\nhttps://kubeflow-pipelines.readthedocs.io/en/latest/source/kfp.dsl.html#kfp.dsl.PipelineConf.add_op_transformer\r\n\r\nGiven that, I would expect them to be bypassed for `VolumeOp` (or other `ResourceOp` operations), and hence this compilation should succeed.\r\n\r\n### Environment:\r\n<!-- Please fill in those that seem relevant. -->\r\n\r\nKFP version: Build commit: 9c16e12\r\n\r\nKFP SDK version: <!-- Please attach the output of this shell command: $pip list | grep kfp -->\r\nkfp 0.5.1\r\nkfp-server-api 0.3.0\r\n\r\n### Anything else you would like to add:\r\n[Miscellaneous information that will assist in solving the issue.]\r\n\r\n/kind bug\r\n<!-- Please include labels by uncommenting them to help us better triage issues, choose from the following -->\r\n<!--\r\n// /area frontend\r\n// /area backend\r\n// /area sdk\r\n// /area testing\r\n// /area engprod\r\n-->\r\n\n", "before_files": [{"content": "# Copyright 2018-2019 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\nfrom . import _container_op\nfrom . import _resource_op\nfrom . 
import _ops_group\nfrom ._component_bridge import _create_container_op_from_component_and_arguments\nfrom ..components import _components\nfrom ..components._naming import _make_name_unique_by_adding_index\nimport sys\n\n\n# This handler is called whenever the @pipeline decorator is applied.\n# It can be used by command-line DSL compiler to inject code that runs for every pipeline definition.\n_pipeline_decorator_handler = None\n\n\ndef pipeline(name : str = None, description : str = None):\n \"\"\"Decorator of pipeline functions.\n\n Usage:\n ```python\n @pipeline(\n name='my awesome pipeline',\n description='Is it really awesome?'\n )\n def my_pipeline(a: PipelineParam, b: PipelineParam):\n ...\n ```\n \"\"\"\n def _pipeline(func):\n if name:\n func._component_human_name = name\n if description:\n func._component_description = description\n\n if _pipeline_decorator_handler:\n return _pipeline_decorator_handler(func) or func\n else:\n return func\n\n return _pipeline\n\nclass PipelineConf():\n \"\"\"PipelineConf contains pipeline level settings\n \"\"\"\n def __init__(self):\n self.image_pull_secrets = []\n self.timeout = 0\n self.ttl_seconds_after_finished = -1\n self.op_transformers = []\n self.default_pod_node_selector = {}\n self.image_pull_policy = None\n self.parallelism = None\n\n def set_image_pull_secrets(self, image_pull_secrets):\n \"\"\"Configures the pipeline level imagepullsecret\n\n Args:\n image_pull_secrets: a list of Kubernetes V1LocalObjectReference\n For detailed description, check Kubernetes V1LocalObjectReference definition\n https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/V1LocalObjectReference.md\n \"\"\"\n self.image_pull_secrets = image_pull_secrets\n return self\n\n def set_timeout(self, seconds: int):\n \"\"\"Configures the pipeline level timeout\n\n Args:\n seconds: number of seconds for timeout\n \"\"\"\n self.timeout = seconds\n return self\n\n def set_parallelism(self, max_num_pods: int):\n \"\"\"Configures the max number of total parallel pods that can execute at the same time in a workflow.\n\n Args:\n max_num_pods (int): max number of total parallel pods.\n \"\"\"\n self.parallelism = max_num_pods\n return self\n\n def set_ttl_seconds_after_finished(self, seconds: int):\n \"\"\"Configures the ttl after the pipeline has finished.\n\n Args:\n seconds: number of seconds for the workflow to be garbage collected after it is finished.\n \"\"\"\n self.ttl_seconds_after_finished = seconds\n return self\n \n def set_default_pod_node_selector(self, label_name: str, value: str): \n \"\"\"Add a constraint for nodeSelector for a pipeline. Each constraint is a key-value pair label. 
For the \n container to be eligible to run on a node, the node must have each of the constraints appeared\n as labels.\n\n Args:\n label_name: The name of the constraint label.\n value: The value of the constraint label.\n \"\"\"\n self.default_pod_node_selector[label_name] = value\n return self\n \n\n def set_image_pull_policy(self, policy: str):\n \"\"\"Configures the default image pull policy\n\n Args:\n policy: the pull policy, has to be one of: Always, Never, IfNotPresent.\n For more info: https://github.com/kubernetes-client/python/blob/10a7f95435c0b94a6d949ba98375f8cc85a70e5a/kubernetes/docs/V1Container.md\n \"\"\"\n self.image_pull_policy = policy\n return self\n\n def add_op_transformer(self, transformer):\n \"\"\"Configures the op_transformers which will be applied to all ops in the pipeline.\n\n Args:\n transformer: a function that takes a ContainOp as input and returns a ContainerOp\n \"\"\"\n self.op_transformers.append(transformer)\n\n\ndef get_pipeline_conf():\n \"\"\"Configure the pipeline level setting to the current pipeline\n Note: call the function inside the user defined pipeline function.\n \"\"\"\n return Pipeline.get_default_pipeline().conf\n\n#TODO: Pipeline is in fact an opsgroup, refactor the code.\nclass Pipeline():\n \"\"\"A pipeline contains a list of operators.\n\n This class is not supposed to be used by pipeline authors since pipeline authors can use\n pipeline functions (decorated with @pipeline) to reference their pipelines. This class\n is useful for implementing a compiler. For example, the compiler can use the following\n to get the pipeline object and its ops:\n\n ```python\n with Pipeline() as p:\n pipeline_func(*args_list)\n\n traverse(p.ops)\n ```\n \"\"\"\n\n # _default_pipeline is set when it (usually a compiler) runs \"with Pipeline()\"\n _default_pipeline = None\n\n @staticmethod\n def get_default_pipeline():\n \"\"\"Get default pipeline. \"\"\"\n return Pipeline._default_pipeline\n\n @staticmethod\n def add_pipeline(name, description, func):\n \"\"\"Add a pipeline function with the specified name and description.\"\"\"\n # Applying the @pipeline decorator to the pipeline function\n func = pipeline(name=name, description=description)(func)\n\n def __init__(self, name: str):\n \"\"\"Create a new instance of Pipeline.\n\n Args:\n name: the name of the pipeline. 
Once deployed, the name will show up in Pipeline System UI.\n \"\"\"\n self.name = name\n self.ops = {}\n # Add the root group.\n self.groups = [_ops_group.OpsGroup('pipeline', name=name)]\n self.group_id = 0\n self.conf = PipelineConf()\n self._metadata = None\n\n def __enter__(self):\n if Pipeline._default_pipeline:\n raise Exception('Nested pipelines are not allowed.')\n\n Pipeline._default_pipeline = self\n self._old_container_task_constructor = _components._container_task_constructor\n _components._container_task_constructor = _create_container_op_from_component_and_arguments\n\n def register_op_and_generate_id(op):\n return self.add_op(op, op.is_exit_handler)\n\n self._old__register_op_handler = _container_op._register_op_handler\n _container_op._register_op_handler = register_op_and_generate_id\n return self\n\n def __exit__(self, *args):\n Pipeline._default_pipeline = None\n _container_op._register_op_handler = self._old__register_op_handler\n _components._container_task_constructor = self._old_container_task_constructor\n\n def add_op(self, op: _container_op.BaseOp, define_only: bool):\n \"\"\"Add a new operator.\n\n Args:\n op: An operator of ContainerOp, ResourceOp or their inherited types.\n\n Returns\n op_name: a unique op name.\n \"\"\"\n #If there is an existing op with this name then generate a new name.\n op_name = _make_name_unique_by_adding_index(op.human_name, list(self.ops.keys()), ' ')\n\n self.ops[op_name] = op\n if not define_only:\n self.groups[-1].ops.append(op)\n\n return op_name\n\n def push_ops_group(self, group: _ops_group.OpsGroup):\n \"\"\"Push an OpsGroup into the stack.\n\n Args:\n group: An OpsGroup. Typically it is one of ExitHandler, Branch, and Loop.\n \"\"\"\n self.groups[-1].groups.append(group)\n self.groups.append(group)\n\n def pop_ops_group(self):\n \"\"\"Remove the current OpsGroup from the stack.\"\"\"\n del self.groups[-1]\n\n def remove_op_from_groups(self, op):\n for group in self.groups:\n group.remove_op_recursive(op)\n\n def get_next_group_id(self):\n \"\"\"Get next id for a new group. \"\"\"\n\n self.group_id += 1\n return self.group_id\n\n def _set_metadata(self, metadata):\n '''_set_metadata passes the containerop the metadata information\n Args:\n metadata (ComponentMeta): component metadata\n '''\n self._metadata = metadata\n", "path": "sdk/python/kfp/dsl/_pipeline.py"}, {"content": "\ndef mount_pvc(pvc_name='pipeline-claim', volume_name='pipeline', volume_mount_path='/mnt/pipeline'):\n \"\"\"\n Modifier function to apply to a Container Op to simplify volume, volume mount addition and \n enable better reuse of volumes, volume claims across container ops.\n Usage:\n train = train_op(...)\n train.apply(mount_pvc('claim-name', 'pipeline', '/mnt/pipeline'))\n \"\"\"\n def _mount_pvc(task):\n from kubernetes import client as k8s_client\n local_pvc = k8s_client.V1PersistentVolumeClaimVolumeSource(claim_name=pvc_name)\n return (\n task\n .add_volume(\n k8s_client.V1Volume(name=volume_name, persistent_volume_claim=local_pvc)\n )\n .add_volume_mount(\n k8s_client.V1VolumeMount(mount_path=volume_mount_path, name=volume_name)\n )\n )\n return _mount_pvc\n", "path": "sdk/python/kfp/onprem.py"}]}
| 3,815 | 636 |
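The guard in the diff above is plain duck typing: only tasks exposing both `add_volume` and `add_volume_mount` get mutated, so `ResourceOp`/`VolumeOp`-style tasks pass through untouched. Below is a minimal, self-contained sketch of that pattern; the `Fake*` classes and `tolerant_transformer` are invented stand-ins for illustration, not kfp or kubernetes APIs.

```python
class FakeContainerOp:
    """Stands in for an op that supports container-style volume mounts."""
    def __init__(self):
        self.volumes = []
        self.volume_mounts = []

    def add_volume(self, volume):
        self.volumes.append(volume)
        return self

    def add_volume_mount(self, mount):
        self.volume_mounts.append(mount)
        return self


class FakeResourceOp:
    """Stands in for VolumeOp/ResourceOp, which lack add_volume_mount."""


def tolerant_transformer(task):
    # Same guard as the patch: only touch tasks that expose both methods.
    if not hasattr(task, "add_volume") or not hasattr(task, "add_volume_mount"):
        return task
    return task.add_volume("pvc-volume").add_volume_mount("/mnt/pipeline")


if __name__ == "__main__":
    ops = [FakeContainerOp(), FakeResourceOp()]
    transformed = [tolerant_transformer(op) for op in ops]
    assert transformed[0].volume_mounts == ["/mnt/pipeline"]
    assert transformed[1] is ops[1]  # resource-style op passes through untouched
    print("container-style op mounted; resource-style op left alone")
```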
gh_patches_debug_18816
|
rasdani/github-patches
|
git_diff
|
encode__uvicorn-646
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add support for --reload to monitor additional file types.
The "reload" process currently only monitors ".py" files in various directories. I have a changes that will pass in a list of additional "reload_suffixes" that the process will monitor. This allows the service to monitor data files in addition to code files.
Any feedback on whether this is useful to others?
</issue>
<code>
[start of uvicorn/supervisors/statreload.py]
1 import logging
2 import os
3 from pathlib import Path
4
5 from uvicorn.supervisors.basereload import BaseReload
6
7 logger = logging.getLogger("uvicorn.error")
8
9
10 class StatReload(BaseReload):
11 def __init__(self, config, target, sockets):
12 super().__init__(config, target, sockets)
13 self.reloader_name = "statreload"
14 self.mtimes = {}
15
16 def should_restart(self):
17 for filename in self.iter_py_files():
18 try:
19 mtime = os.path.getmtime(filename)
20 except OSError: # pragma: nocover
21 continue
22
23 old_time = self.mtimes.get(filename)
24 if old_time is None:
25 self.mtimes[filename] = mtime
26 continue
27 elif mtime > old_time:
28 display_path = os.path.normpath(filename)
29 if Path.cwd() in Path(filename).parents:
30 display_path = os.path.normpath(os.path.relpath(filename))
31 message = "Detected file change in '%s'. Reloading..."
32 logger.warning(message, display_path)
33 return True
34 return False
35
36 def iter_py_files(self):
37 for reload_dir in self.config.reload_dirs:
38 for subdir, dirs, files in os.walk(reload_dir):
39 for file in files:
40 if file.endswith(".py"):
41 yield subdir + os.sep + file
42
[end of uvicorn/supervisors/statreload.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/uvicorn/supervisors/statreload.py b/uvicorn/supervisors/statreload.py
--- a/uvicorn/supervisors/statreload.py
+++ b/uvicorn/supervisors/statreload.py
@@ -14,7 +14,7 @@
self.mtimes = {}
def should_restart(self):
- for filename in self.iter_py_files():
+ for filename in self.iter_files():
try:
mtime = os.path.getmtime(filename)
except OSError: # pragma: nocover
@@ -33,9 +33,9 @@
return True
return False
- def iter_py_files(self):
+ def iter_files(self):
for reload_dir in self.config.reload_dirs:
for subdir, dirs, files in os.walk(reload_dir):
for file in files:
- if file.endswith(".py"):
+ if not file.startswith("."):
yield subdir + os.sep + file
|
{"golden_diff": "diff --git a/uvicorn/supervisors/statreload.py b/uvicorn/supervisors/statreload.py\n--- a/uvicorn/supervisors/statreload.py\n+++ b/uvicorn/supervisors/statreload.py\n@@ -14,7 +14,7 @@\n self.mtimes = {}\n \n def should_restart(self):\n- for filename in self.iter_py_files():\n+ for filename in self.iter_files():\n try:\n mtime = os.path.getmtime(filename)\n except OSError: # pragma: nocover\n@@ -33,9 +33,9 @@\n return True\n return False\n \n- def iter_py_files(self):\n+ def iter_files(self):\n for reload_dir in self.config.reload_dirs:\n for subdir, dirs, files in os.walk(reload_dir):\n for file in files:\n- if file.endswith(\".py\"):\n+ if not file.startswith(\".\"):\n yield subdir + os.sep + file\n", "issue": "Add support for --reload to monitor additional file types.\nThe \"reload\" process currently only monitors \".py\" files in various directories. I have a changes that will pass in a list of additional \"reload_suffixes\" that the process will monitor. This allows the service to monitor data files in addition to code files.\r\n\r\nAny feedback on whether this is useful to others?\n\n<!-- POLAR PLEDGE BADGE START -->\n> [!IMPORTANT]\n> - We're using [Polar.sh](https://polar.sh/encode) so you can upvote and help fund this issue.\n> - We receive the funding once the issue is completed & confirmed by you.\n> - Thank you in advance for helping prioritize & fund our backlog.\n\n<a href=\"https://polar.sh/encode/uvicorn/issues/528\">\n<picture>\n <source media=\"(prefers-color-scheme: dark)\" srcset=\"https://polar.sh/api/github/encode/uvicorn/issues/528/pledge.svg?darkmode=1\">\n <img alt=\"Fund with Polar\" src=\"https://polar.sh/api/github/encode/uvicorn/issues/528/pledge.svg\">\n</picture>\n</a>\n<!-- POLAR PLEDGE BADGE END -->\n\n", "before_files": [{"content": "import logging\nimport os\nfrom pathlib import Path\n\nfrom uvicorn.supervisors.basereload import BaseReload\n\nlogger = logging.getLogger(\"uvicorn.error\")\n\n\nclass StatReload(BaseReload):\n def __init__(self, config, target, sockets):\n super().__init__(config, target, sockets)\n self.reloader_name = \"statreload\"\n self.mtimes = {}\n\n def should_restart(self):\n for filename in self.iter_py_files():\n try:\n mtime = os.path.getmtime(filename)\n except OSError: # pragma: nocover\n continue\n\n old_time = self.mtimes.get(filename)\n if old_time is None:\n self.mtimes[filename] = mtime\n continue\n elif mtime > old_time:\n display_path = os.path.normpath(filename)\n if Path.cwd() in Path(filename).parents:\n display_path = os.path.normpath(os.path.relpath(filename))\n message = \"Detected file change in '%s'. Reloading...\"\n logger.warning(message, display_path)\n return True\n return False\n\n def iter_py_files(self):\n for reload_dir in self.config.reload_dirs:\n for subdir, dirs, files in os.walk(reload_dir):\n for file in files:\n if file.endswith(\".py\"):\n yield subdir + os.sep + file\n", "path": "uvicorn/supervisors/statreload.py"}]}
| 1,167 | 210 |
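Pulled out of the class, the change above amounts to the directory walker no longer filtering on the `.py` suffix and instead skipping only hidden files, so data files also trigger a reload. A standalone sketch under that assumption; the `reload_dirs` argument stands in for `self.config.reload_dirs`, and this is not a drop-in replacement for uvicorn's own code.

```python
import os


def iter_files(reload_dirs):
    """Yield every non-hidden file under the watched directories."""
    for reload_dir in reload_dirs:
        for subdir, _dirs, files in os.walk(reload_dir):
            for file in files:
                if not file.startswith("."):  # was: file.endswith(".py")
                    yield os.path.join(subdir, file)


if __name__ == "__main__":
    # Usage example: list everything a stat-based reloader would now watch.
    for path in iter_files(["."]):
        print(path)
```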
gh_patches_debug_12915
|
rasdani/github-patches
|
git_diff
|
open-telemetry__opentelemetry-python-contrib-792
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add support for PyMySQL v1.x series
pymysql instrumentation does not work with the 1.x series as it lists <1.0 as the compatible versions. I've tested the instrumentation with a simple application that uses PyMySQL 1.0 and it seems to work. We should update the instrumentation to support <2.0.
original issue: https://github.com/open-telemetry/opentelemetry-python/issues/2259
</issue>
<code>
[start of instrumentation/opentelemetry-instrumentation-pymysql/src/opentelemetry/instrumentation/pymysql/package.py]
1 # Copyright The OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15
16 _instruments = ("PyMySQL ~= 0.10.1",)
17
[end of instrumentation/opentelemetry-instrumentation-pymysql/src/opentelemetry/instrumentation/pymysql/package.py]
[start of opentelemetry-instrumentation/src/opentelemetry/instrumentation/bootstrap_gen.py]
1 # Copyright The OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 # DO NOT EDIT. THIS FILE WAS AUTOGENERATED FROM INSTRUMENTATION PACKAGES.
16 # RUN `python scripts/generate_instrumentation_bootstrap.py` TO REGENERATE.
17
18 libraries = {
19 "aiohttp": {
20 "library": "aiohttp ~= 3.0",
21 "instrumentation": "opentelemetry-instrumentation-aiohttp-client==0.25b2",
22 },
23 "aiopg": {
24 "library": "aiopg >= 0.13.0, < 1.3.0",
25 "instrumentation": "opentelemetry-instrumentation-aiopg==0.25b2",
26 },
27 "asgiref": {
28 "library": "asgiref ~= 3.0",
29 "instrumentation": "opentelemetry-instrumentation-asgi==0.25b2",
30 },
31 "asyncpg": {
32 "library": "asyncpg >= 0.12.0",
33 "instrumentation": "opentelemetry-instrumentation-asyncpg==0.25b2",
34 },
35 "boto": {
36 "library": "boto~=2.0",
37 "instrumentation": "opentelemetry-instrumentation-boto==0.25b2",
38 },
39 "botocore": {
40 "library": "botocore ~= 1.0",
41 "instrumentation": "opentelemetry-instrumentation-botocore==0.25b2",
42 },
43 "celery": {
44 "library": "celery >= 4.0, < 6.0",
45 "instrumentation": "opentelemetry-instrumentation-celery==0.25b2",
46 },
47 "django": {
48 "library": "django >= 1.10",
49 "instrumentation": "opentelemetry-instrumentation-django==0.25b2",
50 },
51 "elasticsearch": {
52 "library": "elasticsearch >= 2.0",
53 "instrumentation": "opentelemetry-instrumentation-elasticsearch==0.25b2",
54 },
55 "falcon": {
56 "library": "falcon >= 2.0.0, < 4.0.0",
57 "instrumentation": "opentelemetry-instrumentation-falcon==0.25b2",
58 },
59 "fastapi": {
60 "library": "fastapi ~= 0.58",
61 "instrumentation": "opentelemetry-instrumentation-fastapi==0.25b2",
62 },
63 "flask": {
64 "library": "flask >= 1.0, < 3.0",
65 "instrumentation": "opentelemetry-instrumentation-flask==0.25b2",
66 },
67 "grpcio": {
68 "library": "grpcio ~= 1.27",
69 "instrumentation": "opentelemetry-instrumentation-grpc==0.25b2",
70 },
71 "httpx": {
72 "library": "httpx >= 0.18.0, < 0.19.0",
73 "instrumentation": "opentelemetry-instrumentation-httpx==0.25b2",
74 },
75 "jinja2": {
76 "library": "jinja2 >= 2.7, < 4.0",
77 "instrumentation": "opentelemetry-instrumentation-jinja2==0.25b2",
78 },
79 "mysql-connector-python": {
80 "library": "mysql-connector-python ~= 8.0",
81 "instrumentation": "opentelemetry-instrumentation-mysql==0.25b2",
82 },
83 "pika": {
84 "library": "pika >= 1.1.0",
85 "instrumentation": "opentelemetry-instrumentation-pika==0.25b2",
86 },
87 "psycopg2": {
88 "library": "psycopg2 >= 2.7.3.1",
89 "instrumentation": "opentelemetry-instrumentation-psycopg2==0.25b2",
90 },
91 "pymemcache": {
92 "library": "pymemcache ~= 1.3",
93 "instrumentation": "opentelemetry-instrumentation-pymemcache==0.25b2",
94 },
95 "pymongo": {
96 "library": "pymongo ~= 3.1",
97 "instrumentation": "opentelemetry-instrumentation-pymongo==0.25b2",
98 },
99 "PyMySQL": {
100 "library": "PyMySQL ~= 0.10.1",
101 "instrumentation": "opentelemetry-instrumentation-pymysql==0.25b2",
102 },
103 "pyramid": {
104 "library": "pyramid >= 1.7",
105 "instrumentation": "opentelemetry-instrumentation-pyramid==0.25b2",
106 },
107 "redis": {
108 "library": "redis >= 2.6",
109 "instrumentation": "opentelemetry-instrumentation-redis==0.25b2",
110 },
111 "requests": {
112 "library": "requests ~= 2.0",
113 "instrumentation": "opentelemetry-instrumentation-requests==0.25b2",
114 },
115 "scikit-learn": {
116 "library": "scikit-learn ~= 0.24.0",
117 "instrumentation": "opentelemetry-instrumentation-sklearn==0.25b2",
118 },
119 "sqlalchemy": {
120 "library": "sqlalchemy",
121 "instrumentation": "opentelemetry-instrumentation-sqlalchemy==0.25b2",
122 },
123 "starlette": {
124 "library": "starlette ~= 0.13.0",
125 "instrumentation": "opentelemetry-instrumentation-starlette==0.25b2",
126 },
127 "tornado": {
128 "library": "tornado >= 6.0",
129 "instrumentation": "opentelemetry-instrumentation-tornado==0.25b2",
130 },
131 "urllib3": {
132 "library": "urllib3 >= 1.0.0, < 2.0.0",
133 "instrumentation": "opentelemetry-instrumentation-urllib3==0.25b2",
134 },
135 }
136 default_instrumentations = [
137 "opentelemetry-instrumentation-aws-lambda==0.25b2",
138 "opentelemetry-instrumentation-dbapi==0.25b2",
139 "opentelemetry-instrumentation-logging==0.25b2",
140 "opentelemetry-instrumentation-sqlite3==0.25b2",
141 "opentelemetry-instrumentation-urllib==0.25b2",
142 "opentelemetry-instrumentation-wsgi==0.25b2",
143 ]
144
[end of opentelemetry-instrumentation/src/opentelemetry/instrumentation/bootstrap_gen.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/instrumentation/opentelemetry-instrumentation-pymysql/src/opentelemetry/instrumentation/pymysql/package.py b/instrumentation/opentelemetry-instrumentation-pymysql/src/opentelemetry/instrumentation/pymysql/package.py
--- a/instrumentation/opentelemetry-instrumentation-pymysql/src/opentelemetry/instrumentation/pymysql/package.py
+++ b/instrumentation/opentelemetry-instrumentation-pymysql/src/opentelemetry/instrumentation/pymysql/package.py
@@ -13,4 +13,4 @@
# limitations under the License.
-_instruments = ("PyMySQL ~= 0.10.1",)
+_instruments = ("PyMySQL < 2",)
diff --git a/opentelemetry-instrumentation/src/opentelemetry/instrumentation/bootstrap_gen.py b/opentelemetry-instrumentation/src/opentelemetry/instrumentation/bootstrap_gen.py
--- a/opentelemetry-instrumentation/src/opentelemetry/instrumentation/bootstrap_gen.py
+++ b/opentelemetry-instrumentation/src/opentelemetry/instrumentation/bootstrap_gen.py
@@ -97,7 +97,7 @@
"instrumentation": "opentelemetry-instrumentation-pymongo==0.25b2",
},
"PyMySQL": {
- "library": "PyMySQL ~= 0.10.1",
+ "library": "PyMySQL < 2",
"instrumentation": "opentelemetry-instrumentation-pymysql==0.25b2",
},
"pyramid": {
|
{"golden_diff": "diff --git a/instrumentation/opentelemetry-instrumentation-pymysql/src/opentelemetry/instrumentation/pymysql/package.py b/instrumentation/opentelemetry-instrumentation-pymysql/src/opentelemetry/instrumentation/pymysql/package.py\n--- a/instrumentation/opentelemetry-instrumentation-pymysql/src/opentelemetry/instrumentation/pymysql/package.py\n+++ b/instrumentation/opentelemetry-instrumentation-pymysql/src/opentelemetry/instrumentation/pymysql/package.py\n@@ -13,4 +13,4 @@\n # limitations under the License.\n \n \n-_instruments = (\"PyMySQL ~= 0.10.1\",)\n+_instruments = (\"PyMySQL < 2\",)\ndiff --git a/opentelemetry-instrumentation/src/opentelemetry/instrumentation/bootstrap_gen.py b/opentelemetry-instrumentation/src/opentelemetry/instrumentation/bootstrap_gen.py\n--- a/opentelemetry-instrumentation/src/opentelemetry/instrumentation/bootstrap_gen.py\n+++ b/opentelemetry-instrumentation/src/opentelemetry/instrumentation/bootstrap_gen.py\n@@ -97,7 +97,7 @@\n \"instrumentation\": \"opentelemetry-instrumentation-pymongo==0.25b2\",\n },\n \"PyMySQL\": {\n- \"library\": \"PyMySQL ~= 0.10.1\",\n+ \"library\": \"PyMySQL < 2\",\n \"instrumentation\": \"opentelemetry-instrumentation-pymysql==0.25b2\",\n },\n \"pyramid\": {\n", "issue": "Add support for PyMySQL v1.x series\npymysql instrumentation does not work with 1.x series as it lists <1.0 as the compatible versions. I've tested the instrumentatoin with a simple application that uses PyMySQL 1.0 and it seems to work. We should update the instrumentation to support <2.0.\r\n\r\noriginal issue: https://github.com/open-telemetry/opentelemetry-python/issues/2259\n", "before_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\n_instruments = (\"PyMySQL ~= 0.10.1\",)\n", "path": "instrumentation/opentelemetry-instrumentation-pymysql/src/opentelemetry/instrumentation/pymysql/package.py"}, {"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# DO NOT EDIT. 
THIS FILE WAS AUTOGENERATED FROM INSTRUMENTATION PACKAGES.\n# RUN `python scripts/generate_instrumentation_bootstrap.py` TO REGENERATE.\n\nlibraries = {\n \"aiohttp\": {\n \"library\": \"aiohttp ~= 3.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-aiohttp-client==0.25b2\",\n },\n \"aiopg\": {\n \"library\": \"aiopg >= 0.13.0, < 1.3.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-aiopg==0.25b2\",\n },\n \"asgiref\": {\n \"library\": \"asgiref ~= 3.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-asgi==0.25b2\",\n },\n \"asyncpg\": {\n \"library\": \"asyncpg >= 0.12.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-asyncpg==0.25b2\",\n },\n \"boto\": {\n \"library\": \"boto~=2.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-boto==0.25b2\",\n },\n \"botocore\": {\n \"library\": \"botocore ~= 1.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-botocore==0.25b2\",\n },\n \"celery\": {\n \"library\": \"celery >= 4.0, < 6.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-celery==0.25b2\",\n },\n \"django\": {\n \"library\": \"django >= 1.10\",\n \"instrumentation\": \"opentelemetry-instrumentation-django==0.25b2\",\n },\n \"elasticsearch\": {\n \"library\": \"elasticsearch >= 2.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-elasticsearch==0.25b2\",\n },\n \"falcon\": {\n \"library\": \"falcon >= 2.0.0, < 4.0.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-falcon==0.25b2\",\n },\n \"fastapi\": {\n \"library\": \"fastapi ~= 0.58\",\n \"instrumentation\": \"opentelemetry-instrumentation-fastapi==0.25b2\",\n },\n \"flask\": {\n \"library\": \"flask >= 1.0, < 3.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-flask==0.25b2\",\n },\n \"grpcio\": {\n \"library\": \"grpcio ~= 1.27\",\n \"instrumentation\": \"opentelemetry-instrumentation-grpc==0.25b2\",\n },\n \"httpx\": {\n \"library\": \"httpx >= 0.18.0, < 0.19.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-httpx==0.25b2\",\n },\n \"jinja2\": {\n \"library\": \"jinja2 >= 2.7, < 4.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-jinja2==0.25b2\",\n },\n \"mysql-connector-python\": {\n \"library\": \"mysql-connector-python ~= 8.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-mysql==0.25b2\",\n },\n \"pika\": {\n \"library\": \"pika >= 1.1.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-pika==0.25b2\",\n },\n \"psycopg2\": {\n \"library\": \"psycopg2 >= 2.7.3.1\",\n \"instrumentation\": \"opentelemetry-instrumentation-psycopg2==0.25b2\",\n },\n \"pymemcache\": {\n \"library\": \"pymemcache ~= 1.3\",\n \"instrumentation\": \"opentelemetry-instrumentation-pymemcache==0.25b2\",\n },\n \"pymongo\": {\n \"library\": \"pymongo ~= 3.1\",\n \"instrumentation\": \"opentelemetry-instrumentation-pymongo==0.25b2\",\n },\n \"PyMySQL\": {\n \"library\": \"PyMySQL ~= 0.10.1\",\n \"instrumentation\": \"opentelemetry-instrumentation-pymysql==0.25b2\",\n },\n \"pyramid\": {\n \"library\": \"pyramid >= 1.7\",\n \"instrumentation\": \"opentelemetry-instrumentation-pyramid==0.25b2\",\n },\n \"redis\": {\n \"library\": \"redis >= 2.6\",\n \"instrumentation\": \"opentelemetry-instrumentation-redis==0.25b2\",\n },\n \"requests\": {\n \"library\": \"requests ~= 2.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-requests==0.25b2\",\n },\n \"scikit-learn\": {\n \"library\": \"scikit-learn ~= 0.24.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-sklearn==0.25b2\",\n },\n \"sqlalchemy\": {\n \"library\": 
\"sqlalchemy\",\n \"instrumentation\": \"opentelemetry-instrumentation-sqlalchemy==0.25b2\",\n },\n \"starlette\": {\n \"library\": \"starlette ~= 0.13.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-starlette==0.25b2\",\n },\n \"tornado\": {\n \"library\": \"tornado >= 6.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-tornado==0.25b2\",\n },\n \"urllib3\": {\n \"library\": \"urllib3 >= 1.0.0, < 2.0.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-urllib3==0.25b2\",\n },\n}\ndefault_instrumentations = [\n \"opentelemetry-instrumentation-aws-lambda==0.25b2\",\n \"opentelemetry-instrumentation-dbapi==0.25b2\",\n \"opentelemetry-instrumentation-logging==0.25b2\",\n \"opentelemetry-instrumentation-sqlite3==0.25b2\",\n \"opentelemetry-instrumentation-urllib==0.25b2\",\n \"opentelemetry-instrumentation-wsgi==0.25b2\",\n]\n", "path": "opentelemetry-instrumentation/src/opentelemetry/instrumentation/bootstrap_gen.py"}]}
| 2,851 | 335 |
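The whole fix above is a one-line version-specifier change, which can be sanity-checked with the `packaging` library (assumed to be installed here; it is the library pip itself uses to evaluate such specifiers):

```python
from packaging.specifiers import SpecifierSet
from packaging.version import Version

old_spec = SpecifierSet("~=0.10.1")  # what the instrumentation used to declare
new_spec = SpecifierSet("<2")        # what the patch declares

for release in ("0.10.1", "0.10.9", "1.0.2"):
    v = Version(release)
    print(release, "old:", v in old_spec, "new:", v in new_spec)
```

The old `~= 0.10.1` compatible-release specifier rejects every 1.x release, which is why the instrumentation was skipped for PyMySQL 1.0; the new `< 2` specifier accepts it.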
gh_patches_debug_14284
|
rasdani/github-patches
|
git_diff
|
inventree__InvenTree-4721
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
"Install Plugin" via GUI fails to add plugin to plugins.txt correctly for first plugin
### Please verify that this bug has NOT been raised before.
- [X] I checked and didn't find a similar issue
### Describe the bug*
When trying to install a plugin via the GUI function "Install Plugin" without any previously installed plugins, the plugin is appended to the first line of plugins.txt.
As the first line is already populated with `# InvenTree Plugins (uses PIP framework to install)`, the result is e.g. `# InvenTree Plugins (uses PIP framework to install)inventree-zebra-plugin@git+https://github.com/SergeoLacruz/inventree-zebra-plugin # Installed 2023-04-28 06:55:40.146423+00:00 by xyz`, so the plugin is ignored on a docker env restart because of the leading `#`.
Subsequently installed plugins are correctly placed on their own lines in plugins.txt.
### Steps to Reproduce
1. Have a fresh install without installed plugins.
2. Go to "Settings" > "Plugin Settings" > "Plugins" > "Install Plugin"
3. Fill "Package Name" with `inventree-zebra-plugin`
4. Fill "Source URL" with `git+https://github.com/SergeoLacruz/inventree-zebra-plugin`
5. "Submit"
6. Check plugins.txt file
### Expected behaviour
The installed plugin should be added below the comment such that it actually will be parsed by the plugin management.
### Deployment Method
- [X] Docker
- [ ] Bare metal
### Version Information
# Version Information:
InvenTree-Version: 0.11.0
Django Version: 3.2.18
Commit Hash: 593a716
Commit Date: 2023-04-18
Database: postgresql
Debug-Mode: False
Deployed using Docker: True
Active plugins: [{'name': 'InvenTreeBarcode', 'slug': 'inventreebarcode', 'version': '2.0.0'}, {'name': 'InvenTreeCoreNotificationsPlugin', 'slug': 'inventreecorenotificationsplugin', 'version': '1.0.0'}]
### Relevant log output
_No response_
</issue>
<code>
[start of InvenTree/plugin/serializers.py]
1 """JSON serializers for plugin app."""
2
3 import subprocess
4
5 from django.conf import settings
6 from django.core.exceptions import ValidationError
7 from django.utils import timezone
8 from django.utils.translation import gettext_lazy as _
9
10 from rest_framework import serializers
11
12 from common.serializers import GenericReferencedSettingSerializer
13 from InvenTree.tasks import check_for_migrations, offload_task
14 from plugin.models import NotificationUserSetting, PluginConfig, PluginSetting
15
16
17 class MetadataSerializer(serializers.ModelSerializer):
18 """Serializer class for model metadata API access."""
19
20 metadata = serializers.JSONField(required=True)
21
22 class Meta:
23 """Metaclass options."""
24
25 fields = [
26 'metadata',
27 ]
28
29 def __init__(self, model_type, *args, **kwargs):
30 """Initialize the metadata serializer with information on the model type"""
31 self.Meta.model = model_type
32 super().__init__(*args, **kwargs)
33
34 def update(self, instance, data):
35 """Perform update on the metadata field:
36
37 - If this is a partial (PATCH) update, try to 'merge' data in
38 - Else, if it is a PUT update, overwrite any existing metadata
39 """
40 if self.partial:
41 # Default behaviour is to "merge" new data in
42 metadata = instance.metadata.copy() if instance.metadata else {}
43 metadata.update(data['metadata'])
44 data['metadata'] = metadata
45
46 return super().update(instance, data)
47
48
49 class PluginConfigSerializer(serializers.ModelSerializer):
50 """Serializer for a PluginConfig."""
51
52 class Meta:
53 """Meta for serializer."""
54 model = PluginConfig
55 fields = [
56 'key',
57 'name',
58 'active',
59 'meta',
60 'mixins',
61 ]
62
63 meta = serializers.DictField(read_only=True)
64 mixins = serializers.DictField(read_only=True)
65
66
67 class PluginConfigInstallSerializer(serializers.Serializer):
68 """Serializer for installing a new plugin."""
69
70 class Meta:
71 """Meta for serializer."""
72 fields = [
73 'url',
74 'packagename',
75 'confirm',
76 ]
77
78 url = serializers.CharField(
79 required=False,
80 allow_blank=True,
81 label=_('Source URL'),
82 help_text=_('Source for the package - this can be a custom registry or a VCS path')
83 )
84 packagename = serializers.CharField(
85 required=False,
86 allow_blank=True,
87 label=_('Package Name'),
88 help_text=_('Name for the Plugin Package - can also contain a version indicator'),
89 )
90 confirm = serializers.BooleanField(
91 label=_('Confirm plugin installation'),
92 help_text=_('This will install this plugin now into the current instance. The instance will go into maintenance.')
93 )
94
95 def validate(self, data):
96 """Validate inputs.
97
98 Make sure both confirm and url are provided.
99 """
100 super().validate(data)
101
102 # check the base requirements are met
103 if not data.get('confirm'):
104 raise ValidationError({'confirm': _('Installation not confirmed')})
105 if (not data.get('url')) and (not data.get('packagename')):
106 msg = _('Either packagename of URL must be provided')
107 raise ValidationError({'url': msg, 'packagename': msg})
108
109 return data
110
111 def save(self):
112 """Install a plugin from a package registry and set operational results as instance data."""
113 data = self.validated_data
114
115 packagename = data.get('packagename', '')
116 url = data.get('url', '')
117
118 # build up the command
119 install_name = []
120
121 if url:
122 # use custom registration / VCS
123 if True in [identifier in url for identifier in ['git+https', 'hg+https', 'svn+svn', ]]:
124 # using a VCS provider
125 if packagename:
126 install_name.append(f'{packagename}@{url}')
127 else:
128 install_name.append(url)
129 else: # pragma: no cover
130 # using a custom package repositories
131 # This is only for pypa compliant directory services (all current are tested above)
132 # and not covered by tests.
133 install_name.append('-i')
134 install_name.append(url)
135 install_name.append(packagename)
136
137 elif packagename:
138 # use pypi
139 install_name.append(packagename)
140
141 command = 'python -m pip install'.split()
142 command.extend(install_name)
143 ret = {'command': ' '.join(command)}
144 success = False
145 # execute pypi
146 try:
147 result = subprocess.check_output(command, cwd=settings.BASE_DIR.parent)
148 ret['result'] = str(result, 'utf-8')
149 ret['success'] = True
150 success = True
151 except subprocess.CalledProcessError as error: # pragma: no cover
152 ret['result'] = str(error.output, 'utf-8')
153 ret['error'] = True
154
155 # save plugin to plugin_file if installed successfull
156 if success:
157 with open(settings.PLUGIN_FILE, "a") as plugin_file:
158 plugin_file.write(f'{" ".join(install_name)} # Installed {timezone.now()} by {str(self.context["request"].user)}\n')
159
160 # Check for migrations
161 offload_task(check_for_migrations, worker=True)
162
163 return ret
164
165
166 class PluginConfigEmptySerializer(serializers.Serializer):
167 """Serializer for a PluginConfig."""
168
169
170 class PluginSettingSerializer(GenericReferencedSettingSerializer):
171 """Serializer for the PluginSetting model."""
172
173 MODEL = PluginSetting
174 EXTRA_FIELDS = [
175 'plugin',
176 ]
177
178 plugin = serializers.CharField(source='plugin.key', read_only=True)
179
180
181 class NotificationUserSettingSerializer(GenericReferencedSettingSerializer):
182 """Serializer for the PluginSetting model."""
183
184 MODEL = NotificationUserSetting
185 EXTRA_FIELDS = ['method', ]
186
187 method = serializers.CharField(read_only=True)
188
[end of InvenTree/plugin/serializers.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/InvenTree/plugin/serializers.py b/InvenTree/plugin/serializers.py
--- a/InvenTree/plugin/serializers.py
+++ b/InvenTree/plugin/serializers.py
@@ -154,7 +154,13 @@
# save plugin to plugin_file if installed successfull
if success:
+ # Read content of plugin file
+ plg_lines = open(settings.PLUGIN_FILE).readlines()
with open(settings.PLUGIN_FILE, "a") as plugin_file:
+ # Check if last line has a newline
+ if plg_lines[-1][-1:] != '\n':
+ plugin_file.write('\n')
+ # Write new plugin to file
plugin_file.write(f'{" ".join(install_name)} # Installed {timezone.now()} by {str(self.context["request"].user)}\n')
# Check for migrations
|
{"golden_diff": "diff --git a/InvenTree/plugin/serializers.py b/InvenTree/plugin/serializers.py\n--- a/InvenTree/plugin/serializers.py\n+++ b/InvenTree/plugin/serializers.py\n@@ -154,7 +154,13 @@\n \n # save plugin to plugin_file if installed successfull\n if success:\n+ # Read content of plugin file\n+ plg_lines = open(settings.PLUGIN_FILE).readlines()\n with open(settings.PLUGIN_FILE, \"a\") as plugin_file:\n+ # Check if last line has a newline\n+ if plg_lines[-1][-1:] != '\\n':\n+ plugin_file.write('\\n')\n+ # Write new plugin to file\n plugin_file.write(f'{\" \".join(install_name)} # Installed {timezone.now()} by {str(self.context[\"request\"].user)}\\n')\n \n # Check for migrations\n", "issue": "\"Install Plugin\" via GUI fails to add plugin to plugins.txt correctly for first plugin\n### Please verify that this bug has NOT been raised before.\n\n- [X] I checked and didn't find a similar issue\n\n### Describe the bug*\n\nWhen trying to install a plugin via the GUI function \"Install Plugin\" without any prior installed plugins, the plugin is appended to the first line of the plugins.txt.\r\n\r\nAs the first line is already populated with `# InvenTree Plugins (uses PIP framework to install)` the result is e.g. `# InvenTree Plugins (uses PIP framework to install)inventree-zebra-plugin@git+https://github.com/SergeoLacruz/inventree-zebra-plugin # Installed 2023-04-28 06:55:40.146423+00:00 by xyz` thus ignoring the plugin in a docker env restart because of the `#`.\r\n\r\nSubsequently installed plugins are correctly placed in an empty line of the plugins.txt.\n\n### Steps to Reproduce\n\n1. Have a fresh install without installed plugins.\r\n2. Go to \"Settings\" > \"Plugin Settings\" > \"Plugins\" > \"Install Plugin\"\r\n3. Fill \"Package Name\" with `inventree-zebra-plugin`\r\n4. Fill \"Source URL\" with `git+https://github.com/SergeoLacruz/inventree-zebra-plugin`\r\n5. \"Submit\"\r\n6. 
Check plugins.txt file\n\n### Expected behaviour\n\nThe installed plugin should be added below the comment such that it actually will be parsed by the plugin management.\n\n### Deployment Method\n\n- [X] Docker\n- [ ] Bare metal\n\n### Version Information\n\n# Version Information:\r\nInvenTree-Version: 0.11.0\r\nDjango Version: 3.2.18\r\nCommit Hash: 593a716\r\nCommit Date: 2023-04-18\r\nDatabase: postgresql\r\nDebug-Mode: False\r\nDeployed using Docker: True\r\nActive plugins: [{'name': 'InvenTreeBarcode', 'slug': 'inventreebarcode', 'version': '2.0.0'}, {'name': 'InvenTreeCoreNotificationsPlugin', 'slug': 'inventreecorenotificationsplugin', 'version': '1.0.0'}]\r\n\n\n### Relevant log output\n\n_No response_\n", "before_files": [{"content": "\"\"\"JSON serializers for plugin app.\"\"\"\n\nimport subprocess\n\nfrom django.conf import settings\nfrom django.core.exceptions import ValidationError\nfrom django.utils import timezone\nfrom django.utils.translation import gettext_lazy as _\n\nfrom rest_framework import serializers\n\nfrom common.serializers import GenericReferencedSettingSerializer\nfrom InvenTree.tasks import check_for_migrations, offload_task\nfrom plugin.models import NotificationUserSetting, PluginConfig, PluginSetting\n\n\nclass MetadataSerializer(serializers.ModelSerializer):\n \"\"\"Serializer class for model metadata API access.\"\"\"\n\n metadata = serializers.JSONField(required=True)\n\n class Meta:\n \"\"\"Metaclass options.\"\"\"\n\n fields = [\n 'metadata',\n ]\n\n def __init__(self, model_type, *args, **kwargs):\n \"\"\"Initialize the metadata serializer with information on the model type\"\"\"\n self.Meta.model = model_type\n super().__init__(*args, **kwargs)\n\n def update(self, instance, data):\n \"\"\"Perform update on the metadata field:\n\n - If this is a partial (PATCH) update, try to 'merge' data in\n - Else, if it is a PUT update, overwrite any existing metadata\n \"\"\"\n if self.partial:\n # Default behaviour is to \"merge\" new data in\n metadata = instance.metadata.copy() if instance.metadata else {}\n metadata.update(data['metadata'])\n data['metadata'] = metadata\n\n return super().update(instance, data)\n\n\nclass PluginConfigSerializer(serializers.ModelSerializer):\n \"\"\"Serializer for a PluginConfig.\"\"\"\n\n class Meta:\n \"\"\"Meta for serializer.\"\"\"\n model = PluginConfig\n fields = [\n 'key',\n 'name',\n 'active',\n 'meta',\n 'mixins',\n ]\n\n meta = serializers.DictField(read_only=True)\n mixins = serializers.DictField(read_only=True)\n\n\nclass PluginConfigInstallSerializer(serializers.Serializer):\n \"\"\"Serializer for installing a new plugin.\"\"\"\n\n class Meta:\n \"\"\"Meta for serializer.\"\"\"\n fields = [\n 'url',\n 'packagename',\n 'confirm',\n ]\n\n url = serializers.CharField(\n required=False,\n allow_blank=True,\n label=_('Source URL'),\n help_text=_('Source for the package - this can be a custom registry or a VCS path')\n )\n packagename = serializers.CharField(\n required=False,\n allow_blank=True,\n label=_('Package Name'),\n help_text=_('Name for the Plugin Package - can also contain a version indicator'),\n )\n confirm = serializers.BooleanField(\n label=_('Confirm plugin installation'),\n help_text=_('This will install this plugin now into the current instance. 
The instance will go into maintenance.')\n )\n\n def validate(self, data):\n \"\"\"Validate inputs.\n\n Make sure both confirm and url are provided.\n \"\"\"\n super().validate(data)\n\n # check the base requirements are met\n if not data.get('confirm'):\n raise ValidationError({'confirm': _('Installation not confirmed')})\n if (not data.get('url')) and (not data.get('packagename')):\n msg = _('Either packagename of URL must be provided')\n raise ValidationError({'url': msg, 'packagename': msg})\n\n return data\n\n def save(self):\n \"\"\"Install a plugin from a package registry and set operational results as instance data.\"\"\"\n data = self.validated_data\n\n packagename = data.get('packagename', '')\n url = data.get('url', '')\n\n # build up the command\n install_name = []\n\n if url:\n # use custom registration / VCS\n if True in [identifier in url for identifier in ['git+https', 'hg+https', 'svn+svn', ]]:\n # using a VCS provider\n if packagename:\n install_name.append(f'{packagename}@{url}')\n else:\n install_name.append(url)\n else: # pragma: no cover\n # using a custom package repositories\n # This is only for pypa compliant directory services (all current are tested above)\n # and not covered by tests.\n install_name.append('-i')\n install_name.append(url)\n install_name.append(packagename)\n\n elif packagename:\n # use pypi\n install_name.append(packagename)\n\n command = 'python -m pip install'.split()\n command.extend(install_name)\n ret = {'command': ' '.join(command)}\n success = False\n # execute pypi\n try:\n result = subprocess.check_output(command, cwd=settings.BASE_DIR.parent)\n ret['result'] = str(result, 'utf-8')\n ret['success'] = True\n success = True\n except subprocess.CalledProcessError as error: # pragma: no cover\n ret['result'] = str(error.output, 'utf-8')\n ret['error'] = True\n\n # save plugin to plugin_file if installed successfull\n if success:\n with open(settings.PLUGIN_FILE, \"a\") as plugin_file:\n plugin_file.write(f'{\" \".join(install_name)} # Installed {timezone.now()} by {str(self.context[\"request\"].user)}\\n')\n\n # Check for migrations\n offload_task(check_for_migrations, worker=True)\n\n return ret\n\n\nclass PluginConfigEmptySerializer(serializers.Serializer):\n \"\"\"Serializer for a PluginConfig.\"\"\"\n\n\nclass PluginSettingSerializer(GenericReferencedSettingSerializer):\n \"\"\"Serializer for the PluginSetting model.\"\"\"\n\n MODEL = PluginSetting\n EXTRA_FIELDS = [\n 'plugin',\n ]\n\n plugin = serializers.CharField(source='plugin.key', read_only=True)\n\n\nclass NotificationUserSettingSerializer(GenericReferencedSettingSerializer):\n \"\"\"Serializer for the PluginSetting model.\"\"\"\n\n MODEL = NotificationUserSetting\n EXTRA_FIELDS = ['method', ]\n\n method = serializers.CharField(read_only=True)\n", "path": "InvenTree/plugin/serializers.py"}]}
| 2,747 | 202 |
gh_patches_debug_31192
|
rasdani/github-patches
|
git_diff
|
meltano__meltano-6118
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Verify `meltano install` telemetry fires for malformed plugin entries
In #6109 @pnadolny13 noticed that with the following entry no events were fired:
```yaml
- name: tap-something-invalid
variant: meltanolabs
pip_url: git+https://github.com/foo/tap-something-invalid.git
```
I wasn't able to reproduce that at the time and did see two events (started/aborted) come across. We should double-check, though; it's entirely possible that my local dev setup had a separate issue that was triggering the `aborted` event.
</issue>
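To make the expected telemetry behaviour concrete, here is a minimal, self-contained sketch of the fire-events-then-re-raise pattern at stake — the `Tracker` class and event names below are illustrative stand-ins, not Meltano's real tracking API:

```python
# Illustrative sketch only — a stand-in tracker, not Meltano's actual API.
class Tracker:
    def track_command_event(self, event: str) -> None:
        print(f"telemetry event: {event}")


def resolve_plugins(load_plugins, tracker: Tracker):
    """Resolve plugins, making sure 'started' and 'aborted' both fire on failure."""
    try:
        plugins = load_plugins()  # may raise for a malformed plugin entry
    except Exception:
        # Even when resolution fails, record that the command started and aborted.
        tracker.track_command_event("started")
        tracker.track_command_event("aborted")
        raise
    tracker.track_command_event("started")
    return plugins
```

The golden diff further down takes the same shape: it widens the `try` block around plugin resolution so that any exception — not just the two previously anticipated ones — still produces the started/aborted pair before re-raising.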
<code>
[start of src/meltano/cli/install.py]
1 """CLI command `meltano install`."""
2 from __future__ import annotations
3
4 import click
5
6 from meltano.core.legacy_tracking import LegacyTracker
7 from meltano.core.plugin import PluginType
8 from meltano.core.plugin.error import PluginNotFoundError
9 from meltano.core.project_plugins_service import ProjectPluginsService
10 from meltano.core.tracking import PluginsTrackingContext, Tracker
11 from meltano.core.tracking import cli as cli_tracking
12 from meltano.core.tracking import cli_context_builder
13
14 from . import cli
15 from .params import pass_project
16 from .utils import CliError, install_plugins
17
18
19 @cli.command(short_help="Install project dependencies.")
20 @click.argument(
21 "plugin_type", type=click.Choice(PluginType.cli_arguments()), required=False
22 )
23 @click.argument("plugin_name", nargs=-1, required=False)
24 @click.option(
25 "--clean",
26 is_flag=True,
27 help="Completely reinstall a plugin rather than simply upgrading if necessary.",
28 )
29 @click.option(
30 "--parallelism",
31 "-p",
32 type=click.INT,
33 default=None,
34 help="Limit the number of plugins to install in parallel. Defaults to the number of cores.",
35 )
36 @pass_project(migrate=True)
37 def install(project, plugin_type, plugin_name, clean, parallelism):
38 """
39 Install all the dependencies of your project based on the meltano.yml file.
40
41 \b\nRead more at https://www.meltano.com/docs/command-line-interface.html#install
42 """
43 tracker = Tracker(project)
44 tracker.add_contexts(
45 cli_context_builder(
46 "install",
47 None,
48 clean=clean,
49 parallelism=parallelism,
50 )
51 )
52
53 plugins_service = ProjectPluginsService(project)
54
55 if plugin_type:
56 try:
57 plugin_type = PluginType.from_cli_argument(plugin_type)
58 except ValueError:
59 # if we fail because plugin_type is not valid we have no plugins to instrument
60 tracker.track_command_event(cli_tracking.STARTED)
61 tracker.track_command_event(cli_tracking.ABORTED)
62 raise
63 plugins = plugins_service.get_plugins_of_type(plugin_type)
64 if plugin_name:
65 plugins = [plugin for plugin in plugins if plugin.name in plugin_name]
66 else:
67 try:
68 plugins = list(plugins_service.plugins())
69 except PluginNotFoundError:
70 tracker.track_command_event(cli_tracking.STARTED)
71 tracker.track_command_event(cli_tracking.ABORTED)
72 raise
73
74 click.echo(f"Installing {len(plugins)} plugins...")
75 tracker.add_contexts(
76 PluginsTrackingContext([(candidate, None) for candidate in plugins])
77 )
78 tracker.track_command_event(cli_tracking.STARTED)
79
80 success = install_plugins(project, plugins, parallelism=parallelism, clean=clean)
81
82 legacy_tracker = LegacyTracker(project)
83 legacy_tracker.track_meltano_install()
84
85 if not success:
86 tracker.track_command_event(cli_tracking.FAILED)
87 raise CliError("Failed to install plugin(s)")
88 tracker.track_command_event(cli_tracking.COMPLETED)
89
[end of src/meltano/cli/install.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/meltano/cli/install.py b/src/meltano/cli/install.py
--- a/src/meltano/cli/install.py
+++ b/src/meltano/cli/install.py
@@ -5,7 +5,6 @@
from meltano.core.legacy_tracking import LegacyTracker
from meltano.core.plugin import PluginType
-from meltano.core.plugin.error import PluginNotFoundError
from meltano.core.project_plugins_service import ProjectPluginsService
from meltano.core.tracking import PluginsTrackingContext, Tracker
from meltano.core.tracking import cli as cli_tracking
@@ -52,24 +51,18 @@
plugins_service = ProjectPluginsService(project)
- if plugin_type:
- try:
+ try:
+ if plugin_type:
plugin_type = PluginType.from_cli_argument(plugin_type)
- except ValueError:
- # if we fail because plugin_type is not valid we have no plugins to instrument
- tracker.track_command_event(cli_tracking.STARTED)
- tracker.track_command_event(cli_tracking.ABORTED)
- raise
- plugins = plugins_service.get_plugins_of_type(plugin_type)
- if plugin_name:
- plugins = [plugin for plugin in plugins if plugin.name in plugin_name]
- else:
- try:
+ plugins = plugins_service.get_plugins_of_type(plugin_type)
+ if plugin_name:
+ plugins = [plugin for plugin in plugins if plugin.name in plugin_name]
+ else:
plugins = list(plugins_service.plugins())
- except PluginNotFoundError:
- tracker.track_command_event(cli_tracking.STARTED)
- tracker.track_command_event(cli_tracking.ABORTED)
- raise
+ except Exception:
+ tracker.track_command_event(cli_tracking.STARTED)
+ tracker.track_command_event(cli_tracking.ABORTED)
+ raise
click.echo(f"Installing {len(plugins)} plugins...")
tracker.add_contexts(
|
{"golden_diff": "diff --git a/src/meltano/cli/install.py b/src/meltano/cli/install.py\n--- a/src/meltano/cli/install.py\n+++ b/src/meltano/cli/install.py\n@@ -5,7 +5,6 @@\n \n from meltano.core.legacy_tracking import LegacyTracker\n from meltano.core.plugin import PluginType\n-from meltano.core.plugin.error import PluginNotFoundError\n from meltano.core.project_plugins_service import ProjectPluginsService\n from meltano.core.tracking import PluginsTrackingContext, Tracker\n from meltano.core.tracking import cli as cli_tracking\n@@ -52,24 +51,18 @@\n \n plugins_service = ProjectPluginsService(project)\n \n- if plugin_type:\n- try:\n+ try:\n+ if plugin_type:\n plugin_type = PluginType.from_cli_argument(plugin_type)\n- except ValueError:\n- # if we fail because plugin_type is not valid we have no plugins to instrument\n- tracker.track_command_event(cli_tracking.STARTED)\n- tracker.track_command_event(cli_tracking.ABORTED)\n- raise\n- plugins = plugins_service.get_plugins_of_type(plugin_type)\n- if plugin_name:\n- plugins = [plugin for plugin in plugins if plugin.name in plugin_name]\n- else:\n- try:\n+ plugins = plugins_service.get_plugins_of_type(plugin_type)\n+ if plugin_name:\n+ plugins = [plugin for plugin in plugins if plugin.name in plugin_name]\n+ else:\n plugins = list(plugins_service.plugins())\n- except PluginNotFoundError:\n- tracker.track_command_event(cli_tracking.STARTED)\n- tracker.track_command_event(cli_tracking.ABORTED)\n- raise\n+ except Exception:\n+ tracker.track_command_event(cli_tracking.STARTED)\n+ tracker.track_command_event(cli_tracking.ABORTED)\n+ raise\n \n click.echo(f\"Installing {len(plugins)} plugins...\")\n tracker.add_contexts(\n", "issue": "Verify `meltano install` telemetry fires for malformed plugin entries\nIn #6109 @pnadolny13 noticed that with the following entry no events where fired:\r\n\r\n```yaml\r\n - name: tap-something-invalid\r\n variant: meltanolabs\r\n pip_url: git+https://github.com/foo/tap-something-invalid.git\r\n```\r\n\r\nI wasn't able to reproduce that at the time and did see two events (started/aborted) come across. We should double check though, its entirely possible that my local dev setup had a seperate issue that was triggering the `aborted` event.\n", "before_files": [{"content": "\"\"\"CLI command `meltano install`.\"\"\"\nfrom __future__ import annotations\n\nimport click\n\nfrom meltano.core.legacy_tracking import LegacyTracker\nfrom meltano.core.plugin import PluginType\nfrom meltano.core.plugin.error import PluginNotFoundError\nfrom meltano.core.project_plugins_service import ProjectPluginsService\nfrom meltano.core.tracking import PluginsTrackingContext, Tracker\nfrom meltano.core.tracking import cli as cli_tracking\nfrom meltano.core.tracking import cli_context_builder\n\nfrom . import cli\nfrom .params import pass_project\nfrom .utils import CliError, install_plugins\n\n\[email protected](short_help=\"Install project dependencies.\")\[email protected](\n \"plugin_type\", type=click.Choice(PluginType.cli_arguments()), required=False\n)\[email protected](\"plugin_name\", nargs=-1, required=False)\[email protected](\n \"--clean\",\n is_flag=True,\n help=\"Completely reinstall a plugin rather than simply upgrading if necessary.\",\n)\[email protected](\n \"--parallelism\",\n \"-p\",\n type=click.INT,\n default=None,\n help=\"Limit the number of plugins to install in parallel. 
Defaults to the number of cores.\",\n)\n@pass_project(migrate=True)\ndef install(project, plugin_type, plugin_name, clean, parallelism):\n \"\"\"\n Install all the dependencies of your project based on the meltano.yml file.\n\n \\b\\nRead more at https://www.meltano.com/docs/command-line-interface.html#install\n \"\"\"\n tracker = Tracker(project)\n tracker.add_contexts(\n cli_context_builder(\n \"install\",\n None,\n clean=clean,\n parallelism=parallelism,\n )\n )\n\n plugins_service = ProjectPluginsService(project)\n\n if plugin_type:\n try:\n plugin_type = PluginType.from_cli_argument(plugin_type)\n except ValueError:\n # if we fail because plugin_type is not valid we have no plugins to instrument\n tracker.track_command_event(cli_tracking.STARTED)\n tracker.track_command_event(cli_tracking.ABORTED)\n raise\n plugins = plugins_service.get_plugins_of_type(plugin_type)\n if plugin_name:\n plugins = [plugin for plugin in plugins if plugin.name in plugin_name]\n else:\n try:\n plugins = list(plugins_service.plugins())\n except PluginNotFoundError:\n tracker.track_command_event(cli_tracking.STARTED)\n tracker.track_command_event(cli_tracking.ABORTED)\n raise\n\n click.echo(f\"Installing {len(plugins)} plugins...\")\n tracker.add_contexts(\n PluginsTrackingContext([(candidate, None) for candidate in plugins])\n )\n tracker.track_command_event(cli_tracking.STARTED)\n\n success = install_plugins(project, plugins, parallelism=parallelism, clean=clean)\n\n legacy_tracker = LegacyTracker(project)\n legacy_tracker.track_meltano_install()\n\n if not success:\n tracker.track_command_event(cli_tracking.FAILED)\n raise CliError(\"Failed to install plugin(s)\")\n tracker.track_command_event(cli_tracking.COMPLETED)\n", "path": "src/meltano/cli/install.py"}]}
| 1,465 | 406 |
gh_patches_debug_10915
|
rasdani/github-patches
|
git_diff
|
psf__black-2437
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Failing to parse single-quote f-string with backslash inside it
**Describe the bug**
The formatter is failing to parse a single-quote f-string, as it cannot include a backslash inside it.
**To Reproduce**
1. Take this test.py containing:
```python
b = "example1"
sts = "example2"
f"\"{b}\"{' ' * (long-len(b)+1)}: \"{sts}\",\n"
```
2. Run black without arguments.
3. See error -->
```
File "/var/task/black/__init__.py", line 986, in assert_equivalent
dst_ast = parse_ast(dst)
File "/var/task/black/parsing.py", line 125, in parse_ast
raise SyntaxError(
f'"{b}"{\' \' * (long-len(b)+1)}: "{sts}",\n'
```
**Expected behavior**
The formatter is expected to throw the said error and fail to format the said file.
**Environment (please complete the following information):**
- Version: [main]
- OS and Python version: [Linux/Python 3.9.5]
**Does this bug also happen on main?**
Yes
</issue>
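For context on the reported error: before Python 3.12, the expression part of an f-string may not contain a backslash, which is exactly what the single-quoted output shown above ends up doing. A minimal illustration (variable names are arbitrary):

```python
b = "example"

# Fine: no backslash inside the {...} expression part.
ok = f'"{b}"'

# Pre-Python 3.12 the next line is rejected at compile time, because the escaped
# quote puts a backslash inside the expression part of the f-string:
#   broken = f'{\' \' * 3}'
# SyntaxError: f-string expression part cannot include a backslash
```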
<code>
[start of src/black/strings.py]
1 """
2 Simple formatting on strings. Further string formatting code is in trans.py.
3 """
4
5 import regex as re
6 import sys
7 from typing import List, Pattern
8
9
10 STRING_PREFIX_CHARS = "furbFURB" # All possible string prefix characters.
11
12
13 def sub_twice(regex: Pattern[str], replacement: str, original: str) -> str:
14 """Replace `regex` with `replacement` twice on `original`.
15
16 This is used by string normalization to perform replaces on
17 overlapping matches.
18 """
19 return regex.sub(replacement, regex.sub(replacement, original))
20
21
22 def has_triple_quotes(string: str) -> bool:
23 """
24 Returns:
25 True iff @string starts with three quotation characters.
26 """
27 raw_string = string.lstrip(STRING_PREFIX_CHARS)
28 return raw_string[:3] in {'"""', "'''"}
29
30
31 def lines_with_leading_tabs_expanded(s: str) -> List[str]:
32 """
33 Splits string into lines and expands only leading tabs (following the normal
34 Python rules)
35 """
36 lines = []
37 for line in s.splitlines():
38 # Find the index of the first non-whitespace character after a string of
39 # whitespace that includes at least one tab
40 match = re.match(r"\s*\t+\s*(\S)", line)
41 if match:
42 first_non_whitespace_idx = match.start(1)
43
44 lines.append(
45 line[:first_non_whitespace_idx].expandtabs()
46 + line[first_non_whitespace_idx:]
47 )
48 else:
49 lines.append(line)
50 return lines
51
52
53 def fix_docstring(docstring: str, prefix: str) -> str:
54 # https://www.python.org/dev/peps/pep-0257/#handling-docstring-indentation
55 if not docstring:
56 return ""
57 lines = lines_with_leading_tabs_expanded(docstring)
58 # Determine minimum indentation (first line doesn't count):
59 indent = sys.maxsize
60 for line in lines[1:]:
61 stripped = line.lstrip()
62 if stripped:
63 indent = min(indent, len(line) - len(stripped))
64 # Remove indentation (first line is special):
65 trimmed = [lines[0].strip()]
66 if indent < sys.maxsize:
67 last_line_idx = len(lines) - 2
68 for i, line in enumerate(lines[1:]):
69 stripped_line = line[indent:].rstrip()
70 if stripped_line or i == last_line_idx:
71 trimmed.append(prefix + stripped_line)
72 else:
73 trimmed.append("")
74 return "\n".join(trimmed)
75
76
77 def get_string_prefix(string: str) -> str:
78 """
79 Pre-conditions:
80 * assert_is_leaf_string(@string)
81
82 Returns:
83 @string's prefix (e.g. '', 'r', 'f', or 'rf').
84 """
85 assert_is_leaf_string(string)
86
87 prefix = ""
88 prefix_idx = 0
89 while string[prefix_idx] in STRING_PREFIX_CHARS:
90 prefix += string[prefix_idx]
91 prefix_idx += 1
92
93 return prefix
94
95
96 def assert_is_leaf_string(string: str) -> None:
97 """
98 Checks the pre-condition that @string has the format that you would expect
99 of `leaf.value` where `leaf` is some Leaf such that `leaf.type ==
100 token.STRING`. A more precise description of the pre-conditions that are
101 checked are listed below.
102
103 Pre-conditions:
104 * @string starts with either ', ", <prefix>', or <prefix>" where
105 `set(<prefix>)` is some subset of `set(STRING_PREFIX_CHARS)`.
106 * @string ends with a quote character (' or ").
107
108 Raises:
109 AssertionError(...) if the pre-conditions listed above are not
110 satisfied.
111 """
112 dquote_idx = string.find('"')
113 squote_idx = string.find("'")
114 if -1 in [dquote_idx, squote_idx]:
115 quote_idx = max(dquote_idx, squote_idx)
116 else:
117 quote_idx = min(squote_idx, dquote_idx)
118
119 assert (
120 0 <= quote_idx < len(string) - 1
121 ), f"{string!r} is missing a starting quote character (' or \")."
122 assert string[-1] in (
123 "'",
124 '"',
125 ), f"{string!r} is missing an ending quote character (' or \")."
126 assert set(string[:quote_idx]).issubset(
127 set(STRING_PREFIX_CHARS)
128 ), f"{set(string[:quote_idx])} is NOT a subset of {set(STRING_PREFIX_CHARS)}."
129
130
131 def normalize_string_prefix(s: str, remove_u_prefix: bool = False) -> str:
132 """Make all string prefixes lowercase.
133
134 If remove_u_prefix is given, also removes any u prefix from the string.
135 """
136 match = re.match(r"^([" + STRING_PREFIX_CHARS + r"]*)(.*)$", s, re.DOTALL)
137 assert match is not None, f"failed to match string {s!r}"
138 orig_prefix = match.group(1)
139 new_prefix = orig_prefix.replace("F", "f").replace("B", "b").replace("U", "u")
140 if remove_u_prefix:
141 new_prefix = new_prefix.replace("u", "")
142 return f"{new_prefix}{match.group(2)}"
143
144
145 def normalize_string_quotes(s: str) -> str:
146 """Prefer double quotes but only if it doesn't cause more escaping.
147
148 Adds or removes backslashes as appropriate. Doesn't parse and fix
149 strings nested in f-strings.
150 """
151 value = s.lstrip(STRING_PREFIX_CHARS)
152 if value[:3] == '"""':
153 return s
154
155 elif value[:3] == "'''":
156 orig_quote = "'''"
157 new_quote = '"""'
158 elif value[0] == '"':
159 orig_quote = '"'
160 new_quote = "'"
161 else:
162 orig_quote = "'"
163 new_quote = '"'
164 first_quote_pos = s.find(orig_quote)
165 if first_quote_pos == -1:
166 return s # There's an internal error
167
168 prefix = s[:first_quote_pos]
169 unescaped_new_quote = re.compile(rf"(([^\\]|^)(\\\\)*){new_quote}")
170 escaped_new_quote = re.compile(rf"([^\\]|^)\\((?:\\\\)*){new_quote}")
171 escaped_orig_quote = re.compile(rf"([^\\]|^)\\((?:\\\\)*){orig_quote}")
172 body = s[first_quote_pos + len(orig_quote) : -len(orig_quote)]
173 if "r" in prefix.casefold():
174 if unescaped_new_quote.search(body):
175 # There's at least one unescaped new_quote in this raw string
176 # so converting is impossible
177 return s
178
179 # Do not introduce or remove backslashes in raw strings
180 new_body = body
181 else:
182 # remove unnecessary escapes
183 new_body = sub_twice(escaped_new_quote, rf"\1\2{new_quote}", body)
184 if body != new_body:
185 # Consider the string without unnecessary escapes as the original
186 body = new_body
187 s = f"{prefix}{orig_quote}{body}{orig_quote}"
188 new_body = sub_twice(escaped_orig_quote, rf"\1\2{orig_quote}", new_body)
189 new_body = sub_twice(unescaped_new_quote, rf"\1\\{new_quote}", new_body)
190 if "f" in prefix.casefold():
191 matches = re.findall(
192 r"""
193 (?:[^{]|^)\{ # start of the string or a non-{ followed by a single {
194 ([^{].*?) # contents of the brackets except if begins with {{
195 \}(?:[^}]|$) # A } followed by end of the string or a non-}
196 """,
197 new_body,
198 re.VERBOSE,
199 )
200 for m in matches:
201 if "\\" in str(m):
202 # Do not introduce backslashes in interpolated expressions
203 return s
204
205 if new_quote == '"""' and new_body[-1:] == '"':
206 # edge case:
207 new_body = new_body[:-1] + '\\"'
208 orig_escape_count = body.count("\\")
209 new_escape_count = new_body.count("\\")
210 if new_escape_count > orig_escape_count:
211 return s # Do not introduce more escaping
212
213 if new_escape_count == orig_escape_count and orig_quote == '"':
214 return s # Prefer double quotes
215
216 return f"{prefix}{new_quote}{new_body}{new_quote}"
217
[end of src/black/strings.py]
</code>
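As a quick orientation before the patch, here is an illustrative call of `normalize_string_quotes` — this assumes a Black checkout or installation in which the function is importable from `black.strings`, as in the listing above:

```python
# Illustrative only — assumes Black's black.strings module is importable.
from black.strings import normalize_string_quotes

print(normalize_string_quotes("'hello'"))  # "hello"  — single quotes switched to double
print(normalize_string_quotes('"hello"'))  # "hello"  — already double-quoted, returned unchanged
```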
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/black/strings.py b/src/black/strings.py
--- a/src/black/strings.py
+++ b/src/black/strings.py
@@ -190,9 +190,9 @@
if "f" in prefix.casefold():
matches = re.findall(
r"""
- (?:[^{]|^)\{ # start of the string or a non-{ followed by a single {
+ (?:(?<!\{)|^)\{ # start of the string or a non-{ followed by a single {
([^{].*?) # contents of the brackets except if begins with {{
- \}(?:[^}]|$) # A } followed by end of the string or a non-}
+ \}(?:(?!\})|$) # A } followed by end of the string or a non-}
""",
new_body,
re.VERBOSE,
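To see why the switch to zero-width lookaround matters, here is a small comparison using the stdlib `re` module (Black uses the third-party `regex` module, but these patterns behave the same); the sample body is illustrative:

```python
import re

body = '"{a}"{b}'  # two interpolations separated only by a quote character

old = re.compile(r"(?:[^{]|^)\{([^{].*?)\}(?:[^}]|$)")
new = re.compile(r"(?:(?<!\{)|^)\{([^{].*?)\}(?:(?!\})|$)")

print(old.findall(body))  # ['a'] — the trailing [^}] consumes the following character,
                          # so the second interpolation is never inspected for backslashes
print(new.findall(body))  # ['a', 'b'] — the lookaround consumes nothing, so both are found
```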
|
{"golden_diff": "diff --git a/src/black/strings.py b/src/black/strings.py\n--- a/src/black/strings.py\n+++ b/src/black/strings.py\n@@ -190,9 +190,9 @@\n if \"f\" in prefix.casefold():\n matches = re.findall(\n r\"\"\"\n- (?:[^{]|^)\\{ # start of the string or a non-{ followed by a single {\n+ (?:(?<!\\{)|^)\\{ # start of the string or a non-{ followed by a single {\n ([^{].*?) # contents of the brackets except if begins with {{\n- \\}(?:[^}]|$) # A } followed by end of the string or a non-}\n+ \\}(?:(?!\\})|$) # A } followed by end of the string or a non-}\n \"\"\",\n new_body,\n re.VERBOSE,\n", "issue": "Failing to parse single-quote f-string with backslash inside it\n**Describe the bug**\r\n\r\nThe formatter is failing to parse single quote f-string as it cannot include a backslash inside it.\r\n\r\n**To Reproduce**\r\n\r\n1. Take this test.py containing:\r\n```python\r\nb = \"example1\"\r\nsts = \"example2\"\r\nf\"\\\"{b}\\\"{' ' * (long-len(b)+1)}: \\\"{sts}\\\",\\n\"\r\n```\r\n2. Run black without arguments.\r\n3. See error -->\r\n```\r\n File \"/var/task/black/__init__.py\", line 986, in assert_equivalent\r\n dst_ast = parse_ast(dst)\r\n File \"/var/task/black/parsing.py\", line 125, in parse_ast\r\n raise SyntaxError(\r\nf'\"{b}\"{\\' \\' * (long-len(b)+1)}: \"{sts}\",\\n'\r\n```\r\n\r\n**Expected behavior**\r\nThe formatter is expected to throw the said error and fail to format the said file.\r\n\r\n**Environment (please complete the following information):**\r\n\r\n- Version: [main]\r\n- OS and Python version: [Linux/Python 3.9.5]\r\n\r\n**Does this bug also happen on main?**\r\nYes\r\n\n", "before_files": [{"content": "\"\"\"\nSimple formatting on strings. Further string formatting code is in trans.py.\n\"\"\"\n\nimport regex as re\nimport sys\nfrom typing import List, Pattern\n\n\nSTRING_PREFIX_CHARS = \"furbFURB\" # All possible string prefix characters.\n\n\ndef sub_twice(regex: Pattern[str], replacement: str, original: str) -> str:\n \"\"\"Replace `regex` with `replacement` twice on `original`.\n\n This is used by string normalization to perform replaces on\n overlapping matches.\n \"\"\"\n return regex.sub(replacement, regex.sub(replacement, original))\n\n\ndef has_triple_quotes(string: str) -> bool:\n \"\"\"\n Returns:\n True iff @string starts with three quotation characters.\n \"\"\"\n raw_string = string.lstrip(STRING_PREFIX_CHARS)\n return raw_string[:3] in {'\"\"\"', \"'''\"}\n\n\ndef lines_with_leading_tabs_expanded(s: str) -> List[str]:\n \"\"\"\n Splits string into lines and expands only leading tabs (following the normal\n Python rules)\n \"\"\"\n lines = []\n for line in s.splitlines():\n # Find the index of the first non-whitespace character after a string of\n # whitespace that includes at least one tab\n match = re.match(r\"\\s*\\t+\\s*(\\S)\", line)\n if match:\n first_non_whitespace_idx = match.start(1)\n\n lines.append(\n line[:first_non_whitespace_idx].expandtabs()\n + line[first_non_whitespace_idx:]\n )\n else:\n lines.append(line)\n return lines\n\n\ndef fix_docstring(docstring: str, prefix: str) -> str:\n # https://www.python.org/dev/peps/pep-0257/#handling-docstring-indentation\n if not docstring:\n return \"\"\n lines = lines_with_leading_tabs_expanded(docstring)\n # Determine minimum indentation (first line doesn't count):\n indent = sys.maxsize\n for line in lines[1:]:\n stripped = line.lstrip()\n if stripped:\n indent = min(indent, len(line) - len(stripped))\n # Remove indentation (first line is special):\n trimmed = [lines[0].strip()]\n if indent < 
sys.maxsize:\n last_line_idx = len(lines) - 2\n for i, line in enumerate(lines[1:]):\n stripped_line = line[indent:].rstrip()\n if stripped_line or i == last_line_idx:\n trimmed.append(prefix + stripped_line)\n else:\n trimmed.append(\"\")\n return \"\\n\".join(trimmed)\n\n\ndef get_string_prefix(string: str) -> str:\n \"\"\"\n Pre-conditions:\n * assert_is_leaf_string(@string)\n\n Returns:\n @string's prefix (e.g. '', 'r', 'f', or 'rf').\n \"\"\"\n assert_is_leaf_string(string)\n\n prefix = \"\"\n prefix_idx = 0\n while string[prefix_idx] in STRING_PREFIX_CHARS:\n prefix += string[prefix_idx]\n prefix_idx += 1\n\n return prefix\n\n\ndef assert_is_leaf_string(string: str) -> None:\n \"\"\"\n Checks the pre-condition that @string has the format that you would expect\n of `leaf.value` where `leaf` is some Leaf such that `leaf.type ==\n token.STRING`. A more precise description of the pre-conditions that are\n checked are listed below.\n\n Pre-conditions:\n * @string starts with either ', \", <prefix>', or <prefix>\" where\n `set(<prefix>)` is some subset of `set(STRING_PREFIX_CHARS)`.\n * @string ends with a quote character (' or \").\n\n Raises:\n AssertionError(...) if the pre-conditions listed above are not\n satisfied.\n \"\"\"\n dquote_idx = string.find('\"')\n squote_idx = string.find(\"'\")\n if -1 in [dquote_idx, squote_idx]:\n quote_idx = max(dquote_idx, squote_idx)\n else:\n quote_idx = min(squote_idx, dquote_idx)\n\n assert (\n 0 <= quote_idx < len(string) - 1\n ), f\"{string!r} is missing a starting quote character (' or \\\").\"\n assert string[-1] in (\n \"'\",\n '\"',\n ), f\"{string!r} is missing an ending quote character (' or \\\").\"\n assert set(string[:quote_idx]).issubset(\n set(STRING_PREFIX_CHARS)\n ), f\"{set(string[:quote_idx])} is NOT a subset of {set(STRING_PREFIX_CHARS)}.\"\n\n\ndef normalize_string_prefix(s: str, remove_u_prefix: bool = False) -> str:\n \"\"\"Make all string prefixes lowercase.\n\n If remove_u_prefix is given, also removes any u prefix from the string.\n \"\"\"\n match = re.match(r\"^([\" + STRING_PREFIX_CHARS + r\"]*)(.*)$\", s, re.DOTALL)\n assert match is not None, f\"failed to match string {s!r}\"\n orig_prefix = match.group(1)\n new_prefix = orig_prefix.replace(\"F\", \"f\").replace(\"B\", \"b\").replace(\"U\", \"u\")\n if remove_u_prefix:\n new_prefix = new_prefix.replace(\"u\", \"\")\n return f\"{new_prefix}{match.group(2)}\"\n\n\ndef normalize_string_quotes(s: str) -> str:\n \"\"\"Prefer double quotes but only if it doesn't cause more escaping.\n\n Adds or removes backslashes as appropriate. 
Doesn't parse and fix\n strings nested in f-strings.\n \"\"\"\n value = s.lstrip(STRING_PREFIX_CHARS)\n if value[:3] == '\"\"\"':\n return s\n\n elif value[:3] == \"'''\":\n orig_quote = \"'''\"\n new_quote = '\"\"\"'\n elif value[0] == '\"':\n orig_quote = '\"'\n new_quote = \"'\"\n else:\n orig_quote = \"'\"\n new_quote = '\"'\n first_quote_pos = s.find(orig_quote)\n if first_quote_pos == -1:\n return s # There's an internal error\n\n prefix = s[:first_quote_pos]\n unescaped_new_quote = re.compile(rf\"(([^\\\\]|^)(\\\\\\\\)*){new_quote}\")\n escaped_new_quote = re.compile(rf\"([^\\\\]|^)\\\\((?:\\\\\\\\)*){new_quote}\")\n escaped_orig_quote = re.compile(rf\"([^\\\\]|^)\\\\((?:\\\\\\\\)*){orig_quote}\")\n body = s[first_quote_pos + len(orig_quote) : -len(orig_quote)]\n if \"r\" in prefix.casefold():\n if unescaped_new_quote.search(body):\n # There's at least one unescaped new_quote in this raw string\n # so converting is impossible\n return s\n\n # Do not introduce or remove backslashes in raw strings\n new_body = body\n else:\n # remove unnecessary escapes\n new_body = sub_twice(escaped_new_quote, rf\"\\1\\2{new_quote}\", body)\n if body != new_body:\n # Consider the string without unnecessary escapes as the original\n body = new_body\n s = f\"{prefix}{orig_quote}{body}{orig_quote}\"\n new_body = sub_twice(escaped_orig_quote, rf\"\\1\\2{orig_quote}\", new_body)\n new_body = sub_twice(unescaped_new_quote, rf\"\\1\\\\{new_quote}\", new_body)\n if \"f\" in prefix.casefold():\n matches = re.findall(\n r\"\"\"\n (?:[^{]|^)\\{ # start of the string or a non-{ followed by a single {\n ([^{].*?) # contents of the brackets except if begins with {{\n \\}(?:[^}]|$) # A } followed by end of the string or a non-}\n \"\"\",\n new_body,\n re.VERBOSE,\n )\n for m in matches:\n if \"\\\\\" in str(m):\n # Do not introduce backslashes in interpolated expressions\n return s\n\n if new_quote == '\"\"\"' and new_body[-1:] == '\"':\n # edge case:\n new_body = new_body[:-1] + '\\\\\"'\n orig_escape_count = body.count(\"\\\\\")\n new_escape_count = new_body.count(\"\\\\\")\n if new_escape_count > orig_escape_count:\n return s # Do not introduce more escaping\n\n if new_escape_count == orig_escape_count and orig_quote == '\"':\n return s # Prefer double quotes\n\n return f\"{prefix}{new_quote}{new_body}{new_quote}\"\n", "path": "src/black/strings.py"}]}
| 3,174 | 196 |
gh_patches_debug_34990
|
rasdani/github-patches
|
git_diff
|
streamlink__streamlink-838
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
azubu.tv: remove plugin
http://www.azubu.tv/
`Soon a new future for Azubu and Hitbox, together as a single force in the world of eSports and competitive gaming, will be revealed. We will be launching a new brand, website, and mobile apps. There you will find the best offerings from both Azubu and Hitbox in one new place.`
</issue>
<code>
[start of src/streamlink/plugins/azubutv.py]
1 #!/usr/bin/env python
2 import json
3 import requests
4
5 import re
6
7 from io import BytesIO
8 from time import sleep
9
10 from streamlink.exceptions import PluginError
11
12 from streamlink.plugin import Plugin
13 from streamlink.plugin.api import http, validate
14 from streamlink.stream import HLSStream
15
16
17 HTTP_HEADERS = {
18 "User-Agent": ("Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 "
19 "(KHTML, like Gecko) Chrome/36.0.1944.9 Safari/537.36"),
20 'Accept': 'application/json;pk=BCpkADawqM1gvI0oGWg8dxQHlgT8HkdE2LnAlWAZkOlznO39bSZX726u4JqnDsK3MDXcO01JxXK2tZtJbgQChxgaFzEVdHRjaDoxaOu8hHOO8NYhwdxw9BzvgkvLUlpbDNUuDoc4E4wxDToV'
21
22 }
23
24 _url_re = re.compile(r"http(s)?://(\w+\.)?azubu.tv/(?P<domain>\w+)")
25
26 PARAMS_REGEX = r"(\w+)=({.+?}|\[.+?\]|\(.+?\)|'(?:[^'\\]|\\')*'|\"(?:[^\"\\]|\\\")*\"|\S+)"
27 stream_video_url = "http://api.azubu.tv/public/channel/{}/player"
28
29
30 class AzubuTV(Plugin):
31 @classmethod
32 def can_handle_url(cls, url):
33 return _url_re.match(url)
34
35 @classmethod
36 def stream_weight(cls, stream):
37 if stream == "source":
38 weight = 1080
39 else:
40 weight, group = Plugin.stream_weight(stream)
41
42 return weight, "azubutv"
43
44 def _parse_params(self, params):
45 rval = {}
46 matches = re.findall(PARAMS_REGEX, params)
47
48 for key, value in matches:
49 try:
50 value = ast.literal_eval(value)
51 except Exception:
52 pass
53
54 rval[key] = value
55
56 return rval
57
58 def _get_stream_url(self, o):
59
60 match = _url_re.match(self.url)
61 channel = match.group('domain')
62
63 channel_info = requests.get(stream_video_url.format(channel))
64 j = json.loads(channel_info.text)
65
66 if j["data"]["is_live"] is not True:
67 return "", False
68 else:
69 is_live = True
70
71 stream_url = 'https://edge.api.brightcove.com/playback/v1/accounts/3361910549001/videos/ref:{0}'
72
73 r = requests.get(stream_url.format(j["data"]["stream_video"]["reference_id"]), headers=HTTP_HEADERS)
74 t = json.loads(r.text)
75
76 stream_url = t["sources"][0]["src"]
77 return stream_url, is_live
78
79 def _get_streams(self):
80 hls_url, is_live = self._get_stream_url(self)
81
82 if not is_live:
83 return
84
85 split = self.url.split(" ")
86 params = (" ").join(split[1:])
87 params = self._parse_params(params)
88
89 try:
90 streams = HLSStream.parse_variant_playlist(self.session, hls_url, **params)
91 except IOError as err:
92 raise PluginError(err)
93
94 return streams
95
96
97 __plugin__ = AzubuTV
98
[end of src/streamlink/plugins/azubutv.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/streamlink/plugins/azubutv.py b/src/streamlink/plugins/azubutv.py
deleted file mode 100644
--- a/src/streamlink/plugins/azubutv.py
+++ /dev/null
@@ -1,97 +0,0 @@
-#!/usr/bin/env python
-import json
-import requests
-
-import re
-
-from io import BytesIO
-from time import sleep
-
-from streamlink.exceptions import PluginError
-
-from streamlink.plugin import Plugin
-from streamlink.plugin.api import http, validate
-from streamlink.stream import HLSStream
-
-
-HTTP_HEADERS = {
- "User-Agent": ("Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 "
- "(KHTML, like Gecko) Chrome/36.0.1944.9 Safari/537.36"),
- 'Accept': 'application/json;pk=BCpkADawqM1gvI0oGWg8dxQHlgT8HkdE2LnAlWAZkOlznO39bSZX726u4JqnDsK3MDXcO01JxXK2tZtJbgQChxgaFzEVdHRjaDoxaOu8hHOO8NYhwdxw9BzvgkvLUlpbDNUuDoc4E4wxDToV'
-
-}
-
-_url_re = re.compile(r"http(s)?://(\w+\.)?azubu.tv/(?P<domain>\w+)")
-
-PARAMS_REGEX = r"(\w+)=({.+?}|\[.+?\]|\(.+?\)|'(?:[^'\\]|\\')*'|\"(?:[^\"\\]|\\\")*\"|\S+)"
-stream_video_url = "http://api.azubu.tv/public/channel/{}/player"
-
-
-class AzubuTV(Plugin):
- @classmethod
- def can_handle_url(cls, url):
- return _url_re.match(url)
-
- @classmethod
- def stream_weight(cls, stream):
- if stream == "source":
- weight = 1080
- else:
- weight, group = Plugin.stream_weight(stream)
-
- return weight, "azubutv"
-
- def _parse_params(self, params):
- rval = {}
- matches = re.findall(PARAMS_REGEX, params)
-
- for key, value in matches:
- try:
- value = ast.literal_eval(value)
- except Exception:
- pass
-
- rval[key] = value
-
- return rval
-
- def _get_stream_url(self, o):
-
- match = _url_re.match(self.url)
- channel = match.group('domain')
-
- channel_info = requests.get(stream_video_url.format(channel))
- j = json.loads(channel_info.text)
-
- if j["data"]["is_live"] is not True:
- return "", False
- else:
- is_live = True
-
- stream_url = 'https://edge.api.brightcove.com/playback/v1/accounts/3361910549001/videos/ref:{0}'
-
- r = requests.get(stream_url.format(j["data"]["stream_video"]["reference_id"]), headers=HTTP_HEADERS)
- t = json.loads(r.text)
-
- stream_url = t["sources"][0]["src"]
- return stream_url, is_live
-
- def _get_streams(self):
- hls_url, is_live = self._get_stream_url(self)
-
- if not is_live:
- return
-
- split = self.url.split(" ")
- params = (" ").join(split[1:])
- params = self._parse_params(params)
-
- try:
- streams = HLSStream.parse_variant_playlist(self.session, hls_url, **params)
- except IOError as err:
- raise PluginError(err)
-
- return streams
-
-
-__plugin__ = AzubuTV
|
{"golden_diff": "diff --git a/src/streamlink/plugins/azubutv.py b/src/streamlink/plugins/azubutv.py\ndeleted file mode 100644\n--- a/src/streamlink/plugins/azubutv.py\n+++ /dev/null\n@@ -1,97 +0,0 @@\n-#!/usr/bin/env python\n-import json\n-import requests\n-\n-import re\n-\n-from io import BytesIO\n-from time import sleep\n-\n-from streamlink.exceptions import PluginError\n-\n-from streamlink.plugin import Plugin\n-from streamlink.plugin.api import http, validate\n-from streamlink.stream import HLSStream\n-\n-\n-HTTP_HEADERS = {\n- \"User-Agent\": (\"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 \"\n- \"(KHTML, like Gecko) Chrome/36.0.1944.9 Safari/537.36\"),\n- 'Accept': 'application/json;pk=BCpkADawqM1gvI0oGWg8dxQHlgT8HkdE2LnAlWAZkOlznO39bSZX726u4JqnDsK3MDXcO01JxXK2tZtJbgQChxgaFzEVdHRjaDoxaOu8hHOO8NYhwdxw9BzvgkvLUlpbDNUuDoc4E4wxDToV'\n-\n-}\n-\n-_url_re = re.compile(r\"http(s)?://(\\w+\\.)?azubu.tv/(?P<domain>\\w+)\")\n-\n-PARAMS_REGEX = r\"(\\w+)=({.+?}|\\[.+?\\]|\\(.+?\\)|'(?:[^'\\\\]|\\\\')*'|\\\"(?:[^\\\"\\\\]|\\\\\\\")*\\\"|\\S+)\"\n-stream_video_url = \"http://api.azubu.tv/public/channel/{}/player\"\n-\n-\n-class AzubuTV(Plugin):\n- @classmethod\n- def can_handle_url(cls, url):\n- return _url_re.match(url)\n-\n- @classmethod\n- def stream_weight(cls, stream):\n- if stream == \"source\":\n- weight = 1080\n- else:\n- weight, group = Plugin.stream_weight(stream)\n-\n- return weight, \"azubutv\"\n-\n- def _parse_params(self, params):\n- rval = {}\n- matches = re.findall(PARAMS_REGEX, params)\n-\n- for key, value in matches:\n- try:\n- value = ast.literal_eval(value)\n- except Exception:\n- pass\n-\n- rval[key] = value\n-\n- return rval\n-\n- def _get_stream_url(self, o):\n-\n- match = _url_re.match(self.url)\n- channel = match.group('domain')\n-\n- channel_info = requests.get(stream_video_url.format(channel))\n- j = json.loads(channel_info.text)\n-\n- if j[\"data\"][\"is_live\"] is not True:\n- return \"\", False\n- else:\n- is_live = True\n-\n- stream_url = 'https://edge.api.brightcove.com/playback/v1/accounts/3361910549001/videos/ref:{0}'\n-\n- r = requests.get(stream_url.format(j[\"data\"][\"stream_video\"][\"reference_id\"]), headers=HTTP_HEADERS)\n- t = json.loads(r.text)\n-\n- stream_url = t[\"sources\"][0][\"src\"]\n- return stream_url, is_live\n-\n- def _get_streams(self):\n- hls_url, is_live = self._get_stream_url(self)\n-\n- if not is_live:\n- return\n-\n- split = self.url.split(\" \")\n- params = (\" \").join(split[1:])\n- params = self._parse_params(params)\n-\n- try:\n- streams = HLSStream.parse_variant_playlist(self.session, hls_url, **params)\n- except IOError as err:\n- raise PluginError(err)\n-\n- return streams\n-\n-\n-__plugin__ = AzubuTV\n", "issue": "azubu.tv: remove plugin\nhttp://www.azubu.tv/\r\n`Soon a new future for Azubu and Hitbox, together as a single force in the world of eSports and competitive gaming, will be revealed. We will be launching a new brand, website, and mobile apps. 
There you will find the best offerings from both Azubu and Hitbox in one new place.`\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\nimport json\nimport requests\n\nimport re\n\nfrom io import BytesIO\nfrom time import sleep\n\nfrom streamlink.exceptions import PluginError\n\nfrom streamlink.plugin import Plugin\nfrom streamlink.plugin.api import http, validate\nfrom streamlink.stream import HLSStream\n\n\nHTTP_HEADERS = {\n \"User-Agent\": (\"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 \"\n \"(KHTML, like Gecko) Chrome/36.0.1944.9 Safari/537.36\"),\n 'Accept': 'application/json;pk=BCpkADawqM1gvI0oGWg8dxQHlgT8HkdE2LnAlWAZkOlznO39bSZX726u4JqnDsK3MDXcO01JxXK2tZtJbgQChxgaFzEVdHRjaDoxaOu8hHOO8NYhwdxw9BzvgkvLUlpbDNUuDoc4E4wxDToV'\n\n}\n\n_url_re = re.compile(r\"http(s)?://(\\w+\\.)?azubu.tv/(?P<domain>\\w+)\")\n\nPARAMS_REGEX = r\"(\\w+)=({.+?}|\\[.+?\\]|\\(.+?\\)|'(?:[^'\\\\]|\\\\')*'|\\\"(?:[^\\\"\\\\]|\\\\\\\")*\\\"|\\S+)\"\nstream_video_url = \"http://api.azubu.tv/public/channel/{}/player\"\n\n\nclass AzubuTV(Plugin):\n @classmethod\n def can_handle_url(cls, url):\n return _url_re.match(url)\n\n @classmethod\n def stream_weight(cls, stream):\n if stream == \"source\":\n weight = 1080\n else:\n weight, group = Plugin.stream_weight(stream)\n\n return weight, \"azubutv\"\n\n def _parse_params(self, params):\n rval = {}\n matches = re.findall(PARAMS_REGEX, params)\n\n for key, value in matches:\n try:\n value = ast.literal_eval(value)\n except Exception:\n pass\n\n rval[key] = value\n\n return rval\n\n def _get_stream_url(self, o):\n\n match = _url_re.match(self.url)\n channel = match.group('domain')\n\n channel_info = requests.get(stream_video_url.format(channel))\n j = json.loads(channel_info.text)\n\n if j[\"data\"][\"is_live\"] is not True:\n return \"\", False\n else:\n is_live = True\n\n stream_url = 'https://edge.api.brightcove.com/playback/v1/accounts/3361910549001/videos/ref:{0}'\n\n r = requests.get(stream_url.format(j[\"data\"][\"stream_video\"][\"reference_id\"]), headers=HTTP_HEADERS)\n t = json.loads(r.text)\n\n stream_url = t[\"sources\"][0][\"src\"]\n return stream_url, is_live\n\n def _get_streams(self):\n hls_url, is_live = self._get_stream_url(self)\n\n if not is_live:\n return\n\n split = self.url.split(\" \")\n params = (\" \").join(split[1:])\n params = self._parse_params(params)\n\n try:\n streams = HLSStream.parse_variant_playlist(self.session, hls_url, **params)\n except IOError as err:\n raise PluginError(err)\n\n return streams\n\n\n__plugin__ = AzubuTV\n", "path": "src/streamlink/plugins/azubutv.py"}]}
| 1,595 | 898 |
gh_patches_debug_5927
|
rasdani/github-patches
|
git_diff
|
readthedocs__readthedocs.org-3504
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Old version banner uses a version number format present nowhere else
## Details
- Project URL: https://github.com/dfhack/dfhack/
- Build URL: http://dfhack.readthedocs.io/en/v0.40.24-r5/ and http://dfhack.readthedocs.io/en/0.42.06-r1/
- Read the Docs username: lethosor
## Expected result
The banner on documentation pages for old versions of our project should refer to the newest version using the same version number format used elsewhere - in this case, "0.43.03-r1"
## Actual result
The banner refers to "0.43.3-post1", which isn't used anywhere else, as far as I can tell.
## Additional Information
It would be great if this banner could use the version numbers that our project uses instead of the ones with "post1" at the end. I realize the scheme we use is a bit unconventional, but it's tied to another project which uses the "0.43.03" format that we don't really have control over.
I made a tentative patch at https://github.com/lethosor/readthedocs.org/commit/7d84130471885905a3f663324af602b7be1f7f64, although I haven't tested it enough to be confident that it'll work for other projects. In particular, I'm not sure in what situations (if any) `slugs` could contain multiple items.
</issue>
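For background on where the "post1" spelling comes from: the comparison uses a PEP 440-style parse, and PEP 440 normalization drops leading zeros and rewrites an `r1`/`rev1` suffix as a post-release. A small illustration with the `packaging` library (the exact rendering Read the Docs produced may differ slightly, e.g. a hyphen instead of a dot):

```python
from packaging.version import Version

# PEP 440 normalization: leading zero dropped, "r1" interpreted as a post-release.
print(Version("0.43.03-r1"))  # 0.43.3.post1
```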
<code>
[start of readthedocs/restapi/views/footer_views.py]
1 # -*- coding: utf-8 -*-
2 """Endpoint to generate footer HTML."""
3
4 from __future__ import (
5 absolute_import, division, print_function, unicode_literals)
6
7 import six
8 from django.conf import settings
9 from django.shortcuts import get_object_or_404
10 from django.template import loader as template_loader
11 from rest_framework import decorators, permissions
12 from rest_framework.renderers import JSONRenderer
13 from rest_framework.response import Response
14 from rest_framework_jsonp.renderers import JSONPRenderer
15
16 from readthedocs.builds.constants import LATEST, TAG
17 from readthedocs.builds.models import Version
18 from readthedocs.projects.models import Project
19 from readthedocs.projects.version_handling import (
20 highest_version, parse_version_failsafe)
21 from readthedocs.restapi.signals import footer_response
22
23
24 def get_version_compare_data(project, base_version=None):
25 """
26 Retrieve metadata about the highest version available for this project.
27
28 :param base_version: We assert whether or not the base_version is also the
29 highest version in the resulting "is_highest" value.
30 """
31 versions_qs = project.versions.public().filter(active=True)
32
33 # Take preferences over tags only if the project has at least one tag
34 if versions_qs.filter(type=TAG).exists():
35 versions_qs = versions_qs.filter(type=TAG)
36
37 highest_version_obj, highest_version_comparable = highest_version(
38 versions_qs)
39 ret_val = {
40 'project': six.text_type(highest_version_obj),
41 'version': six.text_type(highest_version_comparable),
42 'is_highest': True,
43 }
44 if highest_version_obj:
45 ret_val['url'] = highest_version_obj.get_absolute_url()
46 ret_val['slug'] = (highest_version_obj.slug,)
47 if base_version and base_version.slug != LATEST:
48 try:
49 base_version_comparable = parse_version_failsafe(
50 base_version.verbose_name)
51 if base_version_comparable:
52 # This is only place where is_highest can get set. All error
53 # cases will be set to True, for non- standard versions.
54 ret_val['is_highest'] = (
55 base_version_comparable >= highest_version_comparable)
56 else:
57 ret_val['is_highest'] = True
58 except (Version.DoesNotExist, TypeError):
59 ret_val['is_highest'] = True
60 return ret_val
61
62
63 @decorators.api_view(['GET'])
64 @decorators.permission_classes((permissions.AllowAny,))
65 @decorators.renderer_classes((JSONRenderer, JSONPRenderer))
66 def footer_html(request):
67 """Render and return footer markup."""
68 # TODO refactor this function
69 # pylint: disable=too-many-locals
70 project_slug = request.GET.get('project', None)
71 version_slug = request.GET.get('version', None)
72 page_slug = request.GET.get('page', None)
73 theme = request.GET.get('theme', False)
74 docroot = request.GET.get('docroot', '')
75 subproject = request.GET.get('subproject', False)
76 source_suffix = request.GET.get('source_suffix', '.rst')
77
78 # Hack in a fix for missing version slug deploy that went out a while back
79 if version_slug == '':
80 version_slug = LATEST
81
82 new_theme = (theme == 'sphinx_rtd_theme')
83 using_theme = (theme == 'default')
84 project = get_object_or_404(Project, slug=project_slug)
85 version = get_object_or_404(
86 Version.objects.public(
87 request.user, project=project, only_active=False),
88 slug__iexact=version_slug)
89 main_project = project.main_language_project or project
90
91 if page_slug and page_slug != 'index':
92 if (main_project.documentation_type == 'sphinx_htmldir' or
93 main_project.documentation_type == 'mkdocs'):
94 path = page_slug + '/'
95 elif main_project.documentation_type == 'sphinx_singlehtml':
96 path = 'index.html#document-' + page_slug
97 else:
98 path = page_slug + '.html'
99 else:
100 path = ''
101
102 version_compare_data = get_version_compare_data(project, version)
103
104 context = {
105 'project': project,
106 'version': version,
107 'path': path,
108 'downloads': version.get_downloads(pretty=True),
109 'current_version': version.verbose_name,
110 'versions': project.ordered_active_versions(user=request.user),
111 'main_project': main_project,
112 'translations': main_project.translations.all(),
113 'current_language': project.language,
114 'using_theme': using_theme,
115 'new_theme': new_theme,
116 'settings': settings,
117 'subproject': subproject,
118 'github_edit_url': version.get_github_url(
119 docroot,
120 page_slug,
121 source_suffix,
122 'edit',
123 ),
124 'github_view_url': version.get_github_url(
125 docroot,
126 page_slug,
127 source_suffix,
128 'view',
129 ),
130 'gitlab_edit_url': version.get_gitlab_url(
131 docroot,
132 page_slug,
133 source_suffix,
134 'edit',
135 ),
136 'gitlab_view_url': version.get_gitlab_url(
137 docroot,
138 page_slug,
139 source_suffix,
140 'view',
141 ),
142 'bitbucket_url': version.get_bitbucket_url(
143 docroot,
144 page_slug,
145 source_suffix,
146 ),
147 'theme': theme,
148 }
149
150 html = template_loader.get_template('restapi/footer.html').render(
151 context,
152 request,
153 )
154 resp_data = {
155 'html': html,
156 'version_active': version.active,
157 'version_compare': version_compare_data,
158 'version_supported': version.supported,
159 }
160
161 # Allow folks to hook onto the footer response for various information
162 # collection, or to modify the resp_data.
163 footer_response.send(
164 sender=None,
165 request=request,
166 context=context,
167 resp_data=resp_data,
168 )
169
170 return Response(resp_data)
171
[end of readthedocs/restapi/views/footer_views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/readthedocs/restapi/views/footer_views.py b/readthedocs/restapi/views/footer_views.py
--- a/readthedocs/restapi/views/footer_views.py
+++ b/readthedocs/restapi/views/footer_views.py
@@ -43,7 +43,7 @@
}
if highest_version_obj:
ret_val['url'] = highest_version_obj.get_absolute_url()
- ret_val['slug'] = (highest_version_obj.slug,)
+ ret_val['slug'] = highest_version_obj.slug
if base_version and base_version.slug != LATEST:
try:
base_version_comparable = parse_version_failsafe(
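A side note on the one-character change above: `(highest_version_obj.slug,)` is a one-element tuple (the trailing comma makes it so), while consumers of the footer data presumably expect a plain string — a quick illustration:

```python
slug = "v0.43.03-r5"

as_tuple = (slug,)  # one-element tuple — note the trailing comma
as_str = slug       # what the patched line returns instead

print(as_tuple)     # ('v0.43.03-r5',)
print(as_str)       # v0.43.03-r5
```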
|
{"golden_diff": "diff --git a/readthedocs/restapi/views/footer_views.py b/readthedocs/restapi/views/footer_views.py\n--- a/readthedocs/restapi/views/footer_views.py\n+++ b/readthedocs/restapi/views/footer_views.py\n@@ -43,7 +43,7 @@\n }\n if highest_version_obj:\n ret_val['url'] = highest_version_obj.get_absolute_url()\n- ret_val['slug'] = (highest_version_obj.slug,)\n+ ret_val['slug'] = highest_version_obj.slug\n if base_version and base_version.slug != LATEST:\n try:\n base_version_comparable = parse_version_failsafe(\n", "issue": "Old version banner uses a version number format present nowhere else\n## Details\n- Project URL: https://github.com/dfhack/dfhack/\n- Build URL: http://dfhack.readthedocs.io/en/v0.40.24-r5/ and http://dfhack.readthedocs.io/en/0.42.06-r1/\n- Read the Docs username: lethosor\n## Expected result\n\nThe banner on documentation pages for old versions of our project should refer to the newest version using the same version number format present used elsewhere - in this case, \"0.43.03-r1\"\n## Actual result\n\nThe banner refers to \"0.43.3-post1\", which isn't used anywhere else, as far as I can tell.\n## Additional Information\n\nIt would be great if this banner could use the version numbers that our project uses instead of the ones with \"post1\" at the end. I realize the scheme we use is a bit unconventional, but it's tied to another project which uses the \"0.43.03\" format that we don't really have control over.\n\nI made a tentative patch at https://github.com/lethosor/readthedocs.org/commit/7d84130471885905a3f663324af602b7be1f7f64, although I haven't tested it enough to be confident that it'll work for other projects. In particular, I'm not sure in what situations (if any) `slugs` could contain multiple items.\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"Endpoint to generate footer HTML.\"\"\"\n\nfrom __future__ import (\n absolute_import, division, print_function, unicode_literals)\n\nimport six\nfrom django.conf import settings\nfrom django.shortcuts import get_object_or_404\nfrom django.template import loader as template_loader\nfrom rest_framework import decorators, permissions\nfrom rest_framework.renderers import JSONRenderer\nfrom rest_framework.response import Response\nfrom rest_framework_jsonp.renderers import JSONPRenderer\n\nfrom readthedocs.builds.constants import LATEST, TAG\nfrom readthedocs.builds.models import Version\nfrom readthedocs.projects.models import Project\nfrom readthedocs.projects.version_handling import (\n highest_version, parse_version_failsafe)\nfrom readthedocs.restapi.signals import footer_response\n\n\ndef get_version_compare_data(project, base_version=None):\n \"\"\"\n Retrieve metadata about the highest version available for this project.\n\n :param base_version: We assert whether or not the base_version is also the\n highest version in the resulting \"is_highest\" value.\n \"\"\"\n versions_qs = project.versions.public().filter(active=True)\n\n # Take preferences over tags only if the project has at least one tag\n if versions_qs.filter(type=TAG).exists():\n versions_qs = versions_qs.filter(type=TAG)\n\n highest_version_obj, highest_version_comparable = highest_version(\n versions_qs)\n ret_val = {\n 'project': six.text_type(highest_version_obj),\n 'version': six.text_type(highest_version_comparable),\n 'is_highest': True,\n }\n if highest_version_obj:\n ret_val['url'] = highest_version_obj.get_absolute_url()\n ret_val['slug'] = (highest_version_obj.slug,)\n if base_version and base_version.slug != 
LATEST:\n try:\n base_version_comparable = parse_version_failsafe(\n base_version.verbose_name)\n if base_version_comparable:\n # This is only place where is_highest can get set. All error\n # cases will be set to True, for non- standard versions.\n ret_val['is_highest'] = (\n base_version_comparable >= highest_version_comparable)\n else:\n ret_val['is_highest'] = True\n except (Version.DoesNotExist, TypeError):\n ret_val['is_highest'] = True\n return ret_val\n\n\[email protected]_view(['GET'])\[email protected]_classes((permissions.AllowAny,))\[email protected]_classes((JSONRenderer, JSONPRenderer))\ndef footer_html(request):\n \"\"\"Render and return footer markup.\"\"\"\n # TODO refactor this function\n # pylint: disable=too-many-locals\n project_slug = request.GET.get('project', None)\n version_slug = request.GET.get('version', None)\n page_slug = request.GET.get('page', None)\n theme = request.GET.get('theme', False)\n docroot = request.GET.get('docroot', '')\n subproject = request.GET.get('subproject', False)\n source_suffix = request.GET.get('source_suffix', '.rst')\n\n # Hack in a fix for missing version slug deploy that went out a while back\n if version_slug == '':\n version_slug = LATEST\n\n new_theme = (theme == 'sphinx_rtd_theme')\n using_theme = (theme == 'default')\n project = get_object_or_404(Project, slug=project_slug)\n version = get_object_or_404(\n Version.objects.public(\n request.user, project=project, only_active=False),\n slug__iexact=version_slug)\n main_project = project.main_language_project or project\n\n if page_slug and page_slug != 'index':\n if (main_project.documentation_type == 'sphinx_htmldir' or\n main_project.documentation_type == 'mkdocs'):\n path = page_slug + '/'\n elif main_project.documentation_type == 'sphinx_singlehtml':\n path = 'index.html#document-' + page_slug\n else:\n path = page_slug + '.html'\n else:\n path = ''\n\n version_compare_data = get_version_compare_data(project, version)\n\n context = {\n 'project': project,\n 'version': version,\n 'path': path,\n 'downloads': version.get_downloads(pretty=True),\n 'current_version': version.verbose_name,\n 'versions': project.ordered_active_versions(user=request.user),\n 'main_project': main_project,\n 'translations': main_project.translations.all(),\n 'current_language': project.language,\n 'using_theme': using_theme,\n 'new_theme': new_theme,\n 'settings': settings,\n 'subproject': subproject,\n 'github_edit_url': version.get_github_url(\n docroot,\n page_slug,\n source_suffix,\n 'edit',\n ),\n 'github_view_url': version.get_github_url(\n docroot,\n page_slug,\n source_suffix,\n 'view',\n ),\n 'gitlab_edit_url': version.get_gitlab_url(\n docroot,\n page_slug,\n source_suffix,\n 'edit',\n ),\n 'gitlab_view_url': version.get_gitlab_url(\n docroot,\n page_slug,\n source_suffix,\n 'view',\n ),\n 'bitbucket_url': version.get_bitbucket_url(\n docroot,\n page_slug,\n source_suffix,\n ),\n 'theme': theme,\n }\n\n html = template_loader.get_template('restapi/footer.html').render(\n context,\n request,\n )\n resp_data = {\n 'html': html,\n 'version_active': version.active,\n 'version_compare': version_compare_data,\n 'version_supported': version.supported,\n }\n\n # Allow folks to hook onto the footer response for various information\n # collection, or to modify the resp_data.\n footer_response.send(\n sender=None,\n request=request,\n context=context,\n resp_data=resp_data,\n )\n\n return Response(resp_data)\n", "path": "readthedocs/restapi/views/footer_views.py"}]}
| 2,554 | 136 |
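As a side note on the row above: its patch replaces `ret_val['slug'] = (highest_version_obj.slug,)` with a plain string assignment. A minimal sketch of why the trailing comma matters (illustrative values only, not taken from the dataset):

```python
slug = "0.43.03-r1"

as_written = (slug,)  # trailing comma builds a 1-tuple -> renders as ('0.43.03-r1',)
as_fixed = slug       # plain string -> renders as 0.43.03-r1

print(as_written)
print(as_fixed)
```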
gh_patches_debug_48199
|
rasdani/github-patches
|
git_diff
|
secdev__scapy-1779
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Scapy crashes when tcpdump is not in $PATH
Here is the fix:
```diff
git diff scapy/arch/common.py
diff --git a/scapy/arch/common.py b/scapy/arch/common.py
index 9da19141..f103bebe 100644
--- a/scapy/arch/common.py
+++ b/scapy/arch/common.py
@@ -27,8 +27,11 @@ import scapy.modules.six as six
def _check_tcpdump():
with open(os.devnull, 'wb') as devnull:
- proc = subprocess.Popen([conf.prog.tcpdump, "--version"],
- stdout=devnull, stderr=subprocess.STDOUT)
+ try:
+ proc = subprocess.Popen([conf.prog.tcpdump, "--version"],
+ stdout=devnull, stderr=subprocess.STDOUT)
+ except OSError:
+ return False
return proc.wait() == 0
```
</issue>
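The snippet below is an editorial sketch of the failure mode described above, not part of the scapy code base: `subprocess.Popen` raises `OSError` (a `FileNotFoundError` on Python 3) when the named binary is absent from `$PATH`. It assumes Python 3.3+ for `subprocess.DEVNULL`, and the binary names are placeholders:

```python
import subprocess

def can_run(binary):
    """Return True only if `binary` starts successfully and exits with status 0."""
    try:
        proc = subprocess.Popen(
            [binary, "--version"],
            stdout=subprocess.DEVNULL,
            stderr=subprocess.STDOUT,
        )
    except OSError:
        # Raised when the executable cannot be found or started.
        return False
    return proc.wait() == 0

print(can_run("tcpdump"))             # False (not a crash) if tcpdump is missing
print(can_run("no-such-binary-xyz"))  # False
```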
<code>
[start of scapy/arch/common.py]
1 # This file is part of Scapy
2 # See http://www.secdev.org/projects/scapy for more information
3 # Copyright (C) Philippe Biondi <[email protected]>
4 # This program is published under a GPLv2 license
5
6 """
7 Functions common to different architectures
8 """
9
10 # Important Note: This file is not needed on Windows, and mustn't be loaded
11
12 import socket
13 import subprocess
14 from fcntl import ioctl
15 import os
16 import struct
17 import ctypes
18 from ctypes import POINTER, Structure
19 from ctypes import c_uint, c_uint32, c_ushort, c_ubyte
20 from scapy.config import conf
21 from scapy.data import MTU
22 from scapy.error import Scapy_Exception
23 import scapy.modules.six as six
24
25 # BOOT
26
27
28 def _check_tcpdump():
29 with open(os.devnull, 'wb') as devnull:
30 proc = subprocess.Popen([conf.prog.tcpdump, "--version"],
31 stdout=devnull, stderr=subprocess.STDOUT)
32 return proc.wait() == 0
33
34
35 TCPDUMP = _check_tcpdump()
36
37 # UTILS
38
39
40 def get_if(iff, cmd):
41 """Ease SIOCGIF* ioctl calls"""
42
43 sck = socket.socket()
44 ifreq = ioctl(sck, cmd, struct.pack("16s16x", iff.encode("utf8")))
45 sck.close()
46 return ifreq
47
48 # BPF HANDLERS
49
50
51 class bpf_insn(Structure):
52 """"The BPF instruction data structure"""
53 _fields_ = [("code", c_ushort),
54 ("jt", c_ubyte),
55 ("jf", c_ubyte),
56 ("k", c_uint32)]
57
58
59 class bpf_program(Structure):
60 """"Structure for BIOCSETF"""
61 _fields_ = [("bf_len", c_uint),
62 ("bf_insns", POINTER(bpf_insn))]
63
64
65 def _legacy_bpf_pointer(tcpdump_lines):
66 """Get old-format BPF Pointer. Deprecated"""
67 X86_64 = os.uname()[4] in ['x86_64', 'aarch64']
68 size = int(tcpdump_lines[0])
69 bpf = b""
70 for l in tcpdump_lines[1:]:
71 if six.PY2:
72 int_type = long # noqa: F821
73 else:
74 int_type = int
75 bpf += struct.pack("HBBI", *map(int_type, l.split()))
76
77 # Thanks to http://www.netprojects.de/scapy-with-pypy-solved/ for the pypy trick # noqa: E501
78 if conf.use_pypy:
79 str_buffer = ctypes.create_string_buffer(bpf)
80 return struct.pack('HL', size, ctypes.addressof(str_buffer))
81 else:
82 # XXX. Argl! We need to give the kernel a pointer on the BPF,
83 # Python object header seems to be 20 bytes. 36 bytes for x86 64bits arch. # noqa: E501
84 if X86_64:
85 return struct.pack("HL", size, id(bpf) + 36)
86 else:
87 return struct.pack("HI", size, id(bpf) + 20)
88
89
90 def get_bpf_pointer(tcpdump_lines):
91 """Create a BPF Pointer for TCPDump filter"""
92 if conf.use_pypy:
93 return _legacy_bpf_pointer(tcpdump_lines)
94
95 # Allocate BPF instructions
96 size = int(tcpdump_lines[0])
97 bpf_insn_a = bpf_insn * size
98 bip = bpf_insn_a()
99
100 # Fill the BPF instruction structures with the byte code
101 tcpdump_lines = tcpdump_lines[1:]
102 i = 0
103 for line in tcpdump_lines:
104 values = [int(v) for v in line.split()]
105 bip[i].code = c_ushort(values[0])
106 bip[i].jt = c_ubyte(values[1])
107 bip[i].jf = c_ubyte(values[2])
108 bip[i].k = c_uint(values[3])
109 i += 1
110
111 # Create the BPF program
112 return bpf_program(size, bip)
113
114
115 def compile_filter(bpf_filter, iface=None):
116 """Asks Tcpdump to parse the filter, then build the matching
117 BPF bytecode using get_bpf_pointer.
118 """
119 if not TCPDUMP:
120 raise Scapy_Exception("tcpdump is not available. Cannot use filter !")
121 try:
122 process = subprocess.Popen([
123 conf.prog.tcpdump,
124 "-p",
125 "-i", (conf.iface if iface is None else iface),
126 "-ddd",
127 "-s", str(MTU),
128 bpf_filter],
129 stdout=subprocess.PIPE,
130 stderr=subprocess.PIPE
131 )
132 except OSError as ex:
133 raise Scapy_Exception("Failed to attach filter: %s" % ex)
134 lines, err = process.communicate()
135 ret = process.returncode
136 if ret:
137 raise Scapy_Exception(
138 "Failed to attach filter: tcpdump returned: %s" % err
139 )
140 lines = lines.strip().split(b"\n")
141 return get_bpf_pointer(lines)
142
[end of scapy/arch/common.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/scapy/arch/common.py b/scapy/arch/common.py
--- a/scapy/arch/common.py
+++ b/scapy/arch/common.py
@@ -26,9 +26,15 @@
def _check_tcpdump():
+ """
+ Return True if the tcpdump command can be started
+ """
with open(os.devnull, 'wb') as devnull:
- proc = subprocess.Popen([conf.prog.tcpdump, "--version"],
- stdout=devnull, stderr=subprocess.STDOUT)
+ try:
+ proc = subprocess.Popen([conf.prog.tcpdump, "--version"],
+ stdout=devnull, stderr=subprocess.STDOUT)
+ except OSError:
+ return False
return proc.wait() == 0
|
{"golden_diff": "diff --git a/scapy/arch/common.py b/scapy/arch/common.py\n--- a/scapy/arch/common.py\n+++ b/scapy/arch/common.py\n@@ -26,9 +26,15 @@\n \n \n def _check_tcpdump():\n+ \"\"\"\n+ Return True if the tcpdump command can be started\n+ \"\"\"\n with open(os.devnull, 'wb') as devnull:\n- proc = subprocess.Popen([conf.prog.tcpdump, \"--version\"],\n- stdout=devnull, stderr=subprocess.STDOUT)\n+ try:\n+ proc = subprocess.Popen([conf.prog.tcpdump, \"--version\"],\n+ stdout=devnull, stderr=subprocess.STDOUT)\n+ except OSError:\n+ return False\n return proc.wait() == 0\n", "issue": "Scapy crashes when tcpdump is not in $PATH\nHere is the fix:\r\n\r\n```diff\r\ngit diff scapy/arch/common.py\r\ndiff --git a/scapy/arch/common.py b/scapy/arch/common.py\r\nindex 9da19141..f103bebe 100644\r\n--- a/scapy/arch/common.py\r\n+++ b/scapy/arch/common.py\r\n@@ -27,8 +27,11 @@ import scapy.modules.six as six\r\n \r\n def _check_tcpdump():\r\n with open(os.devnull, 'wb') as devnull:\r\n- proc = subprocess.Popen([conf.prog.tcpdump, \"--version\"],\r\n- stdout=devnull, stderr=subprocess.STDOUT)\r\n+ try:\r\n+ proc = subprocess.Popen([conf.prog.tcpdump, \"--version\"],\r\n+ stdout=devnull, stderr=subprocess.STDOUT)\r\n+ except OSError:\r\n+ return False\r\n return proc.wait() == 0\r\n```\n", "before_files": [{"content": "# This file is part of Scapy\n# See http://www.secdev.org/projects/scapy for more information\n# Copyright (C) Philippe Biondi <[email protected]>\n# This program is published under a GPLv2 license\n\n\"\"\"\nFunctions common to different architectures\n\"\"\"\n\n# Important Note: This file is not needed on Windows, and mustn't be loaded\n\nimport socket\nimport subprocess\nfrom fcntl import ioctl\nimport os\nimport struct\nimport ctypes\nfrom ctypes import POINTER, Structure\nfrom ctypes import c_uint, c_uint32, c_ushort, c_ubyte\nfrom scapy.config import conf\nfrom scapy.data import MTU\nfrom scapy.error import Scapy_Exception\nimport scapy.modules.six as six\n\n# BOOT\n\n\ndef _check_tcpdump():\n with open(os.devnull, 'wb') as devnull:\n proc = subprocess.Popen([conf.prog.tcpdump, \"--version\"],\n stdout=devnull, stderr=subprocess.STDOUT)\n return proc.wait() == 0\n\n\nTCPDUMP = _check_tcpdump()\n\n# UTILS\n\n\ndef get_if(iff, cmd):\n \"\"\"Ease SIOCGIF* ioctl calls\"\"\"\n\n sck = socket.socket()\n ifreq = ioctl(sck, cmd, struct.pack(\"16s16x\", iff.encode(\"utf8\")))\n sck.close()\n return ifreq\n\n# BPF HANDLERS\n\n\nclass bpf_insn(Structure):\n \"\"\"\"The BPF instruction data structure\"\"\"\n _fields_ = [(\"code\", c_ushort),\n (\"jt\", c_ubyte),\n (\"jf\", c_ubyte),\n (\"k\", c_uint32)]\n\n\nclass bpf_program(Structure):\n \"\"\"\"Structure for BIOCSETF\"\"\"\n _fields_ = [(\"bf_len\", c_uint),\n (\"bf_insns\", POINTER(bpf_insn))]\n\n\ndef _legacy_bpf_pointer(tcpdump_lines):\n \"\"\"Get old-format BPF Pointer. Deprecated\"\"\"\n X86_64 = os.uname()[4] in ['x86_64', 'aarch64']\n size = int(tcpdump_lines[0])\n bpf = b\"\"\n for l in tcpdump_lines[1:]:\n if six.PY2:\n int_type = long # noqa: F821\n else:\n int_type = int\n bpf += struct.pack(\"HBBI\", *map(int_type, l.split()))\n\n # Thanks to http://www.netprojects.de/scapy-with-pypy-solved/ for the pypy trick # noqa: E501\n if conf.use_pypy:\n str_buffer = ctypes.create_string_buffer(bpf)\n return struct.pack('HL', size, ctypes.addressof(str_buffer))\n else:\n # XXX. Argl! We need to give the kernel a pointer on the BPF,\n # Python object header seems to be 20 bytes. 36 bytes for x86 64bits arch. 
# noqa: E501\n if X86_64:\n return struct.pack(\"HL\", size, id(bpf) + 36)\n else:\n return struct.pack(\"HI\", size, id(bpf) + 20)\n\n\ndef get_bpf_pointer(tcpdump_lines):\n \"\"\"Create a BPF Pointer for TCPDump filter\"\"\"\n if conf.use_pypy:\n return _legacy_bpf_pointer(tcpdump_lines)\n\n # Allocate BPF instructions\n size = int(tcpdump_lines[0])\n bpf_insn_a = bpf_insn * size\n bip = bpf_insn_a()\n\n # Fill the BPF instruction structures with the byte code\n tcpdump_lines = tcpdump_lines[1:]\n i = 0\n for line in tcpdump_lines:\n values = [int(v) for v in line.split()]\n bip[i].code = c_ushort(values[0])\n bip[i].jt = c_ubyte(values[1])\n bip[i].jf = c_ubyte(values[2])\n bip[i].k = c_uint(values[3])\n i += 1\n\n # Create the BPF program\n return bpf_program(size, bip)\n\n\ndef compile_filter(bpf_filter, iface=None):\n \"\"\"Asks Tcpdump to parse the filter, then build the matching\n BPF bytecode using get_bpf_pointer.\n \"\"\"\n if not TCPDUMP:\n raise Scapy_Exception(\"tcpdump is not available. Cannot use filter !\")\n try:\n process = subprocess.Popen([\n conf.prog.tcpdump,\n \"-p\",\n \"-i\", (conf.iface if iface is None else iface),\n \"-ddd\",\n \"-s\", str(MTU),\n bpf_filter],\n stdout=subprocess.PIPE,\n stderr=subprocess.PIPE\n )\n except OSError as ex:\n raise Scapy_Exception(\"Failed to attach filter: %s\" % ex)\n lines, err = process.communicate()\n ret = process.returncode\n if ret:\n raise Scapy_Exception(\n \"Failed to attach filter: tcpdump returned: %s\" % err\n )\n lines = lines.strip().split(b\"\\n\")\n return get_bpf_pointer(lines)\n", "path": "scapy/arch/common.py"}]}
| 2,193 | 166 |
gh_patches_debug_5984
|
rasdani/github-patches
|
git_diff
|
pydantic__pydantic-8110
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`CallableDiscriminator` isn't in the docs -- note in release notes (it's `Discriminator`)
### Initial Checks
- [X] I have searched Google & GitHub for similar requests and couldn't find anything
- [X] I have read and followed [the docs](https://docs.pydantic.dev) and still think this feature is missing
### Description
It's a trap!
The release notes for 2.5.0 say `CallableDiscriminator`, but the class was renamed to `Discriminator` before release - so searching the docs for "CallableDiscriminator" doesn't find it.
Maybe worth a note on that line of the release notes to indicate it is now called `Discriminator`, to direct users to this cool new feature.
### Affected Components
- [ ] [Compatibility between releases](https://docs.pydantic.dev/changelog/)
- [ ] [Data validation/parsing](https://docs.pydantic.dev/concepts/models/#basic-model-usage)
- [ ] [Data serialization](https://docs.pydantic.dev/concepts/serialization/) - `.model_dump()` and `.model_dump_json()`
- [ ] [JSON Schema](https://docs.pydantic.dev/concepts/json_schema/)
- [ ] [Dataclasses](https://docs.pydantic.dev/concepts/dataclasses/)
- [ ] [Model Config](https://docs.pydantic.dev/concepts/config/)
- [ ] [Field Types](https://docs.pydantic.dev/api/types/) - adding or changing a particular data type
- [ ] [Function validation decorator](https://docs.pydantic.dev/concepts/validation_decorator/)
- [ ] [Generic Models](https://docs.pydantic.dev/concepts/models/#generic-models)
- [ ] [Other Model behaviour](https://docs.pydantic.dev/concepts/models/) - `model_construct()`, pickling, private attributes, ORM mode
- [ ] [Plugins](https://docs.pydantic.dev/) and integration with other tools - mypy, FastAPI, python-devtools, Hypothesis, VS Code, PyCharm, etc.
</issue>
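For readers landing here from the report above, a rough usage sketch of the renamed class; this assumes pydantic >= 2.5, and the models, field names, and tags are invented for illustration:

```python
from typing import Any, Optional, Union

from typing_extensions import Annotated

from pydantic import BaseModel, Discriminator, Tag


def pet_kind(value: Any) -> Optional[str]:
    # Works for raw dicts as well as already-built model instances.
    if isinstance(value, dict):
        return value.get("kind")
    return getattr(value, "kind", None)


class Cat(BaseModel):
    kind: str = "cat"
    meows: int


class Dog(BaseModel):
    kind: str = "dog"
    barks: int


class Owner(BaseModel):
    pet: Annotated[
        Union[Annotated[Cat, Tag("cat")], Annotated[Dog, Tag("dog")]],
        Discriminator(pet_kind),
    ]


print(Owner.model_validate({"pet": {"kind": "dog", "barks": 3}}))
```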
<code>
[start of pydantic/_internal/_validate_call.py]
1 from __future__ import annotations as _annotations
2
3 import inspect
4 from dataclasses import dataclass
5 from functools import partial
6 from typing import Any, Awaitable, Callable
7
8 import pydantic_core
9
10 from ..config import ConfigDict
11 from ..plugin._schema_validator import create_schema_validator
12 from . import _generate_schema, _typing_extra
13 from ._config import ConfigWrapper
14
15
16 @dataclass
17 class CallMarker:
18 function: Callable[..., Any]
19 validate_return: bool
20
21
22 class ValidateCallWrapper:
23 """This is a wrapper around a function that validates the arguments passed to it, and optionally the return value.
24
25 It's partially inspired by `wraps` which in turn uses `partial`, but extended to be a descriptor so
26 these functions can be applied to instance methods, class methods, static methods, as well as normal functions.
27 """
28
29 __slots__ = (
30 'raw_function',
31 '_config',
32 '_validate_return',
33 '__pydantic_core_schema__',
34 '__pydantic_validator__',
35 '__signature__',
36 '__name__',
37 '__qualname__',
38 '__annotations__',
39 '__dict__', # required for __module__
40 )
41
42 def __init__(self, function: Callable[..., Any], config: ConfigDict | None, validate_return: bool):
43 self.raw_function = function
44 self._config = config
45 self._validate_return = validate_return
46 self.__signature__ = inspect.signature(function)
47 if isinstance(function, partial):
48 func = function.func
49 schema_type = func
50 self.__name__ = f'partial({func.__name__})'
51 self.__qualname__ = f'partial({func.__qualname__})'
52 self.__annotations__ = func.__annotations__
53 self.__module__ = func.__module__
54 self.__doc__ = func.__doc__
55 else:
56 schema_type = function
57 self.__name__ = function.__name__
58 self.__qualname__ = function.__qualname__
59 self.__annotations__ = function.__annotations__
60 self.__module__ = function.__module__
61 self.__doc__ = function.__doc__
62
63 namespace = _typing_extra.add_module_globals(function, None)
64 config_wrapper = ConfigWrapper(config)
65 gen_schema = _generate_schema.GenerateSchema(config_wrapper, namespace)
66 schema = gen_schema.clean_schema(gen_schema.generate_schema(function))
67 self.__pydantic_core_schema__ = schema
68 core_config = config_wrapper.core_config(self)
69
70 self.__pydantic_validator__ = create_schema_validator(
71 schema,
72 schema_type,
73 self.__module__,
74 self.__qualname__,
75 'validate_call',
76 core_config,
77 config_wrapper.plugin_settings,
78 )
79
80 if self._validate_return:
81 return_type = (
82 self.__signature__.return_annotation
83 if self.__signature__.return_annotation is not self.__signature__.empty
84 else Any
85 )
86 gen_schema = _generate_schema.GenerateSchema(config_wrapper, namespace)
87 schema = gen_schema.clean_schema(gen_schema.generate_schema(return_type))
88 self.__return_pydantic_core_schema__ = schema
89 validator = create_schema_validator(
90 schema,
91 schema_type,
92 self.__module__,
93 self.__qualname__,
94 'validate_call',
95 core_config,
96 config_wrapper.plugin_settings,
97 )
98 if inspect.iscoroutinefunction(self.raw_function):
99
100 async def return_val_wrapper(aw: Awaitable[Any]) -> None:
101 return validator.validate_python(await aw)
102
103 self.__return_pydantic_validator__ = return_val_wrapper
104 else:
105 self.__return_pydantic_validator__ = validator.validate_python
106 else:
107 self.__return_pydantic_core_schema__ = None
108 self.__return_pydantic_validator__ = None
109
110 self._name: str | None = None # set by __get__, used to set the instance attribute when decorating methods
111
112 def __call__(self, *args: Any, **kwargs: Any) -> Any:
113 res = self.__pydantic_validator__.validate_python(pydantic_core.ArgsKwargs(args, kwargs))
114 if self.__return_pydantic_validator__:
115 return self.__return_pydantic_validator__(res)
116 return res
117
118 def __get__(self, obj: Any, objtype: type[Any] | None = None) -> ValidateCallWrapper:
119 """Bind the raw function and return another ValidateCallWrapper wrapping that."""
120 if obj is None:
121 try:
122 # Handle the case where a method is accessed as a class attribute
123 return objtype.__getattribute__(objtype, self._name) # type: ignore
124 except AttributeError:
125 # This will happen the first time the attribute is accessed
126 pass
127
128 bound_function = self.raw_function.__get__(obj, objtype)
129 result = self.__class__(bound_function, self._config, self._validate_return)
130
131 # skip binding to instance when obj or objtype has __slots__ attribute
132 if hasattr(obj, '__slots__') or hasattr(objtype, '__slots__'):
133 return result
134
135 if self._name is not None:
136 if obj is not None:
137 object.__setattr__(obj, self._name, result)
138 else:
139 object.__setattr__(objtype, self._name, result)
140 return result
141
142 def __set_name__(self, owner: Any, name: str) -> None:
143 self._name = name
144
145 def __repr__(self) -> str:
146 return f'ValidateCallWrapper({self.raw_function})'
147
148 def __eq__(self, other):
149 return self.raw_function == other.raw_function
150
[end of pydantic/_internal/_validate_call.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pydantic/_internal/_validate_call.py b/pydantic/_internal/_validate_call.py
--- a/pydantic/_internal/_validate_call.py
+++ b/pydantic/_internal/_validate_call.py
@@ -145,5 +145,12 @@
def __repr__(self) -> str:
return f'ValidateCallWrapper({self.raw_function})'
- def __eq__(self, other):
- return self.raw_function == other.raw_function
+ def __eq__(self, other) -> bool:
+ return (
+ (self.raw_function == other.raw_function)
+ and (self._config == other._config)
+ and (self._validate_return == other._validate_return)
+ )
+
+ def __hash__(self):
+ return hash(self.raw_function)
|
{"golden_diff": "diff --git a/pydantic/_internal/_validate_call.py b/pydantic/_internal/_validate_call.py\n--- a/pydantic/_internal/_validate_call.py\n+++ b/pydantic/_internal/_validate_call.py\n@@ -145,5 +145,12 @@\n def __repr__(self) -> str:\n return f'ValidateCallWrapper({self.raw_function})'\n \n- def __eq__(self, other):\n- return self.raw_function == other.raw_function\n+ def __eq__(self, other) -> bool:\n+ return (\n+ (self.raw_function == other.raw_function)\n+ and (self._config == other._config)\n+ and (self._validate_return == other._validate_return)\n+ )\n+\n+ def __hash__(self):\n+ return hash(self.raw_function)\n", "issue": "`CallableDiscriminator` isn't in the docs -- note in release notes (it's `Discriminator`)\n### Initial Checks\n\n- [X] I have searched Google & GitHub for similar requests and couldn't find anything\n- [X] I have read and followed [the docs](https://docs.pydantic.dev) and still think this feature is missing\n\n### Description\n\nIt's a trap!\r\n\r\nThe release notes for 2.5.0 say `CallableDiscriminator`, but the class was renamed to `Discriminator` before release - so searching the docs for \"CallableDiscriminator\" doesn't find it.\r\n\r\nMaybe worth a note in the line of the release notes to indicate it is now called `Discriminator` to direct users to this cool new feature.\n\n### Affected Components\n\n- [ ] [Compatibility between releases](https://docs.pydantic.dev/changelog/)\n- [ ] [Data validation/parsing](https://docs.pydantic.dev/concepts/models/#basic-model-usage)\n- [ ] [Data serialization](https://docs.pydantic.dev/concepts/serialization/) - `.model_dump()` and `.model_dump_json()`\n- [ ] [JSON Schema](https://docs.pydantic.dev/concepts/json_schema/)\n- [ ] [Dataclasses](https://docs.pydantic.dev/concepts/dataclasses/)\n- [ ] [Model Config](https://docs.pydantic.dev/concepts/config/)\n- [ ] [Field Types](https://docs.pydantic.dev/api/types/) - adding or changing a particular data type\n- [ ] [Function validation decorator](https://docs.pydantic.dev/concepts/validation_decorator/)\n- [ ] [Generic Models](https://docs.pydantic.dev/concepts/models/#generic-models)\n- [ ] [Other Model behaviour](https://docs.pydantic.dev/concepts/models/) - `model_construct()`, pickling, private attributes, ORM mode\n- [ ] [Plugins](https://docs.pydantic.dev/) and integration with other tools - mypy, FastAPI, python-devtools, Hypothesis, VS Code, PyCharm, etc.\n", "before_files": [{"content": "from __future__ import annotations as _annotations\n\nimport inspect\nfrom dataclasses import dataclass\nfrom functools import partial\nfrom typing import Any, Awaitable, Callable\n\nimport pydantic_core\n\nfrom ..config import ConfigDict\nfrom ..plugin._schema_validator import create_schema_validator\nfrom . 
import _generate_schema, _typing_extra\nfrom ._config import ConfigWrapper\n\n\n@dataclass\nclass CallMarker:\n function: Callable[..., Any]\n validate_return: bool\n\n\nclass ValidateCallWrapper:\n \"\"\"This is a wrapper around a function that validates the arguments passed to it, and optionally the return value.\n\n It's partially inspired by `wraps` which in turn uses `partial`, but extended to be a descriptor so\n these functions can be applied to instance methods, class methods, static methods, as well as normal functions.\n \"\"\"\n\n __slots__ = (\n 'raw_function',\n '_config',\n '_validate_return',\n '__pydantic_core_schema__',\n '__pydantic_validator__',\n '__signature__',\n '__name__',\n '__qualname__',\n '__annotations__',\n '__dict__', # required for __module__\n )\n\n def __init__(self, function: Callable[..., Any], config: ConfigDict | None, validate_return: bool):\n self.raw_function = function\n self._config = config\n self._validate_return = validate_return\n self.__signature__ = inspect.signature(function)\n if isinstance(function, partial):\n func = function.func\n schema_type = func\n self.__name__ = f'partial({func.__name__})'\n self.__qualname__ = f'partial({func.__qualname__})'\n self.__annotations__ = func.__annotations__\n self.__module__ = func.__module__\n self.__doc__ = func.__doc__\n else:\n schema_type = function\n self.__name__ = function.__name__\n self.__qualname__ = function.__qualname__\n self.__annotations__ = function.__annotations__\n self.__module__ = function.__module__\n self.__doc__ = function.__doc__\n\n namespace = _typing_extra.add_module_globals(function, None)\n config_wrapper = ConfigWrapper(config)\n gen_schema = _generate_schema.GenerateSchema(config_wrapper, namespace)\n schema = gen_schema.clean_schema(gen_schema.generate_schema(function))\n self.__pydantic_core_schema__ = schema\n core_config = config_wrapper.core_config(self)\n\n self.__pydantic_validator__ = create_schema_validator(\n schema,\n schema_type,\n self.__module__,\n self.__qualname__,\n 'validate_call',\n core_config,\n config_wrapper.plugin_settings,\n )\n\n if self._validate_return:\n return_type = (\n self.__signature__.return_annotation\n if self.__signature__.return_annotation is not self.__signature__.empty\n else Any\n )\n gen_schema = _generate_schema.GenerateSchema(config_wrapper, namespace)\n schema = gen_schema.clean_schema(gen_schema.generate_schema(return_type))\n self.__return_pydantic_core_schema__ = schema\n validator = create_schema_validator(\n schema,\n schema_type,\n self.__module__,\n self.__qualname__,\n 'validate_call',\n core_config,\n config_wrapper.plugin_settings,\n )\n if inspect.iscoroutinefunction(self.raw_function):\n\n async def return_val_wrapper(aw: Awaitable[Any]) -> None:\n return validator.validate_python(await aw)\n\n self.__return_pydantic_validator__ = return_val_wrapper\n else:\n self.__return_pydantic_validator__ = validator.validate_python\n else:\n self.__return_pydantic_core_schema__ = None\n self.__return_pydantic_validator__ = None\n\n self._name: str | None = None # set by __get__, used to set the instance attribute when decorating methods\n\n def __call__(self, *args: Any, **kwargs: Any) -> Any:\n res = self.__pydantic_validator__.validate_python(pydantic_core.ArgsKwargs(args, kwargs))\n if self.__return_pydantic_validator__:\n return self.__return_pydantic_validator__(res)\n return res\n\n def __get__(self, obj: Any, objtype: type[Any] | None = None) -> ValidateCallWrapper:\n \"\"\"Bind the raw function and return another 
ValidateCallWrapper wrapping that.\"\"\"\n if obj is None:\n try:\n # Handle the case where a method is accessed as a class attribute\n return objtype.__getattribute__(objtype, self._name) # type: ignore\n except AttributeError:\n # This will happen the first time the attribute is accessed\n pass\n\n bound_function = self.raw_function.__get__(obj, objtype)\n result = self.__class__(bound_function, self._config, self._validate_return)\n\n # skip binding to instance when obj or objtype has __slots__ attribute\n if hasattr(obj, '__slots__') or hasattr(objtype, '__slots__'):\n return result\n\n if self._name is not None:\n if obj is not None:\n object.__setattr__(obj, self._name, result)\n else:\n object.__setattr__(objtype, self._name, result)\n return result\n\n def __set_name__(self, owner: Any, name: str) -> None:\n self._name = name\n\n def __repr__(self) -> str:\n return f'ValidateCallWrapper({self.raw_function})'\n\n def __eq__(self, other):\n return self.raw_function == other.raw_function\n", "path": "pydantic/_internal/_validate_call.py"}]}
| 2,526 | 182 |
gh_patches_debug_22072
|
rasdani/github-patches
|
git_diff
|
dask__distributed-3056
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
cpu cores estimate based on cgroups
I was reading the announcement for 2.4.0 and got interested in https://github.com/dask/distributed/pull/3039 by @jcrist
That led me to this part of the code:
https://github.com/dask/distributed/blob/7d017c467590c758fa4b8cb2b1193205fe5aa7ad/distributed/system.py#L62
Just by looking at it (and although I'm not an expert, I think I know what's going on), I have two observations -- halfway between a question and a bug report.
1. In the Docker environment I have here (Ubuntu 18.04), the filename is different:
```
~$ cat /sys/fs/cgroup/cpu,cpuacct/cpu.cfs_period_us
100000
~$ cat /sys/fs/cgroup/cpu,cpuacct/cpu.cfs_quota_us
220000
```
in the code is that path:
```
~$ ls /sys/fs/cgroup/cpuacct,cpu/cpu.cfs_quota_us
ls: cannot access '/sys/fs/cgroup/cpuacct,cpu/cpu.cfs_quota_us': No such file or directory
```
2. The actual calculation is `int(quota / period)`. I think this should round up, not down. The point is, if you have a fraction like "2.5 cores", it will report 2 cores and keep half a core unused. It would be better if it reported 3 cores and then let cgroups limit usage to the actual 2.5.
</issue>
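To make observation 2 above concrete, a small editorial sketch using the quota/period values quoted in the issue (the paths in the comments are the cgroup v1 files from the report):

```python
import math

# /sys/fs/cgroup/cpu,cpuacct/cpu.cfs_period_us -> 100000
# /sys/fs/cgroup/cpu,cpuacct/cpu.cfs_quota_us  -> 220000   ("2.2 cores")
period = 100000
quota = 220000

print(int(quota / period))        # 2 -> rounding down leaves 0.2 of a core unused
print(math.ceil(quota / period))  # 3 -> rounding up; cgroups still caps usage at 2.2
```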
<code>
[start of distributed/system.py]
1 import os
2 import sys
3
4 import psutil
5
6 __all__ = ("memory_limit", "cpu_count", "MEMORY_LIMIT", "CPU_COUNT")
7
8
9 def memory_limit():
10 """Get the memory limit (in bytes) for this system.
11
12 Takes the minimum value from the following locations:
13
14 - Total system host memory
15 - Cgroups limit (if set)
16 - RSS rlimit (if set)
17 """
18 limit = psutil.virtual_memory().total
19
20 # Check cgroups if available
21 if sys.platform == "linux":
22 try:
23 with open("/sys/fs/cgroup/memory/memory.limit_in_bytes") as f:
24 cgroups_limit = int(f.read())
25 if cgroups_limit > 0:
26 limit = min(limit, cgroups_limit)
27 except Exception:
28 pass
29
30 # Check rlimit if available
31 try:
32 import resource
33
34 hard_limit = resource.getrlimit(resource.RLIMIT_RSS)[1]
35 if hard_limit > 0:
36 limit = min(limit, hard_limit)
37 except (ImportError, OSError):
38 pass
39
40 return limit
41
42
43 def cpu_count():
44 """Get the available CPU count for this system.
45
46 Takes the minimum value from the following locations:
47
48 - Total system cpus available on the host.
49 - CPU Affinity (if set)
50 - Cgroups limit (if set)
51 """
52 count = os.cpu_count()
53
54 # Check CPU affinity if available
55 try:
56 affinity_count = len(psutil.Process().cpu_affinity())
57 if affinity_count > 0:
58 count = min(count, affinity_count)
59 except Exception:
60 pass
61
62 # Check cgroups if available
63 if sys.platform == "linux":
64 try:
65 with open("/sys/fs/cgroup/cpuacct,cpu/cpu.cfs_quota_us") as f:
66 quota = int(f.read())
67 with open("/sys/fs/cgroup/cpuacct,cpu/cpu.cfs_period_us") as f:
68 period = int(f.read())
69 cgroups_count = int(quota / period)
70 if cgroups_count > 0:
71 count = min(count, cgroups_count)
72 except Exception:
73 pass
74
75 return count
76
77
78 MEMORY_LIMIT = memory_limit()
79 CPU_COUNT = cpu_count()
80
[end of distributed/system.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/distributed/system.py b/distributed/system.py
--- a/distributed/system.py
+++ b/distributed/system.py
@@ -1,3 +1,4 @@
+import math
import os
import sys
@@ -61,16 +62,20 @@
# Check cgroups if available
if sys.platform == "linux":
- try:
- with open("/sys/fs/cgroup/cpuacct,cpu/cpu.cfs_quota_us") as f:
- quota = int(f.read())
- with open("/sys/fs/cgroup/cpuacct,cpu/cpu.cfs_period_us") as f:
- period = int(f.read())
- cgroups_count = int(quota / period)
- if cgroups_count > 0:
- count = min(count, cgroups_count)
- except Exception:
- pass
+ # The directory name isn't standardized across linux distros, check both
+ for dirname in ["cpuacct,cpu", "cpu,cpuacct"]:
+ try:
+ with open("/sys/fs/cgroup/%s/cpu.cfs_quota_us" % dirname) as f:
+ quota = int(f.read())
+ with open("/sys/fs/cgroup/%s/cpu.cfs_period_us" % dirname) as f:
+ period = int(f.read())
+ # We round up on fractional CPUs
+ cgroups_count = math.ceil(quota / period)
+ if cgroups_count > 0:
+ count = min(count, cgroups_count)
+ break
+ except Exception:
+ pass
return count
|
{"golden_diff": "diff --git a/distributed/system.py b/distributed/system.py\n--- a/distributed/system.py\n+++ b/distributed/system.py\n@@ -1,3 +1,4 @@\n+import math\n import os\n import sys\n \n@@ -61,16 +62,20 @@\n \n # Check cgroups if available\n if sys.platform == \"linux\":\n- try:\n- with open(\"/sys/fs/cgroup/cpuacct,cpu/cpu.cfs_quota_us\") as f:\n- quota = int(f.read())\n- with open(\"/sys/fs/cgroup/cpuacct,cpu/cpu.cfs_period_us\") as f:\n- period = int(f.read())\n- cgroups_count = int(quota / period)\n- if cgroups_count > 0:\n- count = min(count, cgroups_count)\n- except Exception:\n- pass\n+ # The directory name isn't standardized across linux distros, check both\n+ for dirname in [\"cpuacct,cpu\", \"cpu,cpuacct\"]:\n+ try:\n+ with open(\"/sys/fs/cgroup/%s/cpu.cfs_quota_us\" % dirname) as f:\n+ quota = int(f.read())\n+ with open(\"/sys/fs/cgroup/%s/cpu.cfs_period_us\" % dirname) as f:\n+ period = int(f.read())\n+ # We round up on fractional CPUs\n+ cgroups_count = math.ceil(quota / period)\n+ if cgroups_count > 0:\n+ count = min(count, cgroups_count)\n+ break\n+ except Exception:\n+ pass\n \n return count\n", "issue": "cpu cores estimate based on cgroups\nI was reading the announcement for 2.4.0 and got interested in https://github.com/dask/distributed/pull/3039 by @jcrist \r\n\r\nThat did lead me to this part of the code:\r\nhttps://github.com/dask/distributed/blob/7d017c467590c758fa4b8cb2b1193205fe5aa7ad/distributed/system.py#L62\r\n\r\nJust by looking at it (and although I'm not an expert I think I know what's going on), I have to observations -- half way between a question and a bugreport.\r\n\r\n1. in my docker environment I have here (ubuntu 18.04), the filename is different:\r\n\r\n```\r\n~$ cat /sys/fs/cgroup/cpu,cpuacct/cpu.cfs_period_us \r\n100000\r\n~$ cat /sys/fs/cgroup/cpu,cpuacct/cpu.cfs_quota_us \r\n220000\r\n```\r\nin the code is that path:\r\n```\r\n~$ ls /sys/fs/cgroup/cpuacct,cpu/cpu.cfs_quota_us\r\nls: cannot access '/sys/fs/cgroup/cpuacct,cpu/cpu.cfs_quota_us': No such file or directory\r\n```\r\n\r\n2. The actual calculation is `int(quota / period)`. I think this should round up, not down. The point is, if you have a fraction like \"2.5 cores\", it will report 2 cores and keep half a core unused. 
It would be better if it reports 3 cores and then cgroups limits to the actual 2.5.\r\n\r\n\n", "before_files": [{"content": "import os\nimport sys\n\nimport psutil\n\n__all__ = (\"memory_limit\", \"cpu_count\", \"MEMORY_LIMIT\", \"CPU_COUNT\")\n\n\ndef memory_limit():\n \"\"\"Get the memory limit (in bytes) for this system.\n\n Takes the minimum value from the following locations:\n\n - Total system host memory\n - Cgroups limit (if set)\n - RSS rlimit (if set)\n \"\"\"\n limit = psutil.virtual_memory().total\n\n # Check cgroups if available\n if sys.platform == \"linux\":\n try:\n with open(\"/sys/fs/cgroup/memory/memory.limit_in_bytes\") as f:\n cgroups_limit = int(f.read())\n if cgroups_limit > 0:\n limit = min(limit, cgroups_limit)\n except Exception:\n pass\n\n # Check rlimit if available\n try:\n import resource\n\n hard_limit = resource.getrlimit(resource.RLIMIT_RSS)[1]\n if hard_limit > 0:\n limit = min(limit, hard_limit)\n except (ImportError, OSError):\n pass\n\n return limit\n\n\ndef cpu_count():\n \"\"\"Get the available CPU count for this system.\n\n Takes the minimum value from the following locations:\n\n - Total system cpus available on the host.\n - CPU Affinity (if set)\n - Cgroups limit (if set)\n \"\"\"\n count = os.cpu_count()\n\n # Check CPU affinity if available\n try:\n affinity_count = len(psutil.Process().cpu_affinity())\n if affinity_count > 0:\n count = min(count, affinity_count)\n except Exception:\n pass\n\n # Check cgroups if available\n if sys.platform == \"linux\":\n try:\n with open(\"/sys/fs/cgroup/cpuacct,cpu/cpu.cfs_quota_us\") as f:\n quota = int(f.read())\n with open(\"/sys/fs/cgroup/cpuacct,cpu/cpu.cfs_period_us\") as f:\n period = int(f.read())\n cgroups_count = int(quota / period)\n if cgroups_count > 0:\n count = min(count, cgroups_count)\n except Exception:\n pass\n\n return count\n\n\nMEMORY_LIMIT = memory_limit()\nCPU_COUNT = cpu_count()\n", "path": "distributed/system.py"}]}
| 1,524 | 346 |
gh_patches_debug_1615
|
rasdani/github-patches
|
git_diff
|
urllib3__urllib3-987
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
urllib3 fails to install on centos7 due to old setuptools not supporting <=, < environment markers.
Current urllib3 fails to install on centos7. This bug was most likely introduced after https://github.com/shazow/urllib3/commit/9f5454eac808a105307b2d363c99ce97e5109821.
centos7 ships a very old version of setuptools (0.9.8) which does not support `<=` as an environment marker. See https://github.com/pypa/setuptools/issues/380.
```
$ python --version
Python 2.7.5
$ rpm -qa python-setuptools
python-setuptools-0.9.8-4.el7.noarch
$ lsb_release -a
...
Description: CentOS Linux release 7.2.1511 (Core)
Release: 7.2.1511
$ virtualenv venv
...
$ venv/bin/pip install urllib3
Downloading/unpacking urllib3
Downloading urllib3-1.18.tar.gz (183kB): 183kB downloaded
Running setup.py egg_info for package urllib3
error in urllib3 setup command: Invalid environment marker: python_version <= "2.7"
Complete output from command python setup.py egg_info:
error in urllib3 setup command: Invalid environment marker: python_version <= "2.7"
----------------------------------------
Cleaning up...
Command python setup.py egg_info failed with error code 1 in /home/rene/src/venv/build/urllib3
Storing complete log in /home/rene/.pip/pip.log
```
Installing https://github.com/shazow/urllib3/commit/f620d997134708b09560ca5797aa79a59a2ef4c0 (commit before 9f5454eac808a105307b2d363c99ce97e5109821) works fine.
```
$ venv/bin/pip install git+git://github.com/shazow/urllib3.git@f620d997134708b09560ca5797aa79a59a2ef4c0
...
Successfully installed urllib3
Cleaning up...
```
But 9f5454eac808a105307b2d363c99ce97e5109821 fails.
```
$ venv/bin/pip install git+git://github.com/shazow/urllib3.git@9f5454eac808a105307b2d363c99ce97e5109821
Downloading/unpacking git+git://github.com/shazow/urllib3.git@9f5454eac808a105307b2d363c99ce97e5109821
Cloning git://github.com/shazow/urllib3.git (to 9f5454eac808a105307b2d363c99ce97e5109821) to /tmp/pip-lnVDAG-build
Could not find a tag or branch '9f5454eac808a105307b2d363c99ce97e5109821', assuming commit.
Running setup.py egg_info for package from git+git://github.com/shazow/urllib3.git@9f5454eac808a105307b2d363c99ce97e5109821
error in urllib3 setup command: Invalid environment marker: python_version < "3.3"
Complete output from command python setup.py egg_info:
error in urllib3 setup command: Invalid environment marker: python_version < "3.3"
----------------------------------------
Cleaning up...
Command python setup.py egg_info failed with error code 1 in /tmp/pip-lnVDAG-build
Storing complete log in /home/rene/.pip/pip.log
```
urllib3 1.17 setup.py does not ship with < or <= markers so my workaround right now is to install urllib3==1.17.
</issue>
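As an aside on the report above (not part of the urllib3 code base): the failing construct is a PEP 508 environment marker used as an `extras_require` key. One way to probe whether the installed setuptools can parse such a marker, assuming `pkg_resources` is importable:

```python
import pkg_resources

marker = 'python_version <= "2.7"'
try:
    print(pkg_resources.evaluate_marker(marker))  # True or False on modern setuptools
except Exception as exc:
    # Very old setuptools (e.g. 0.9.8 as shipped on CentOS 7) rejects the "<=" operator.
    print("marker not supported:", exc)
```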
<code>
[start of setup.py]
1 #!/usr/bin/env python
2
3 from setuptools import setup
4
5 import os
6 import re
7 import codecs
8
9 base_path = os.path.dirname(__file__)
10
11 # Get the version (borrowed from SQLAlchemy)
12 with open(os.path.join(base_path, 'urllib3', '__init__.py')) as fp:
13 VERSION = re.compile(r".*__version__ = '(.*?)'",
14 re.S).match(fp.read()).group(1)
15
16 with codecs.open('README.rst', encoding='utf-8') as fp:
17 readme = fp.read()
18 with codecs.open('CHANGES.rst', encoding='utf-8') as fp:
19 changes = fp.read()
20 version = VERSION
21
22 setup(name='urllib3',
23 version=version,
24 description="HTTP library with thread-safe connection pooling, file post, and more.",
25 long_description=u'\n\n'.join([readme, changes]),
26 classifiers=[
27 'Environment :: Web Environment',
28 'Intended Audience :: Developers',
29 'License :: OSI Approved :: MIT License',
30 'Operating System :: OS Independent',
31 'Programming Language :: Python',
32 'Programming Language :: Python :: 2',
33 'Programming Language :: Python :: 3',
34 'Topic :: Internet :: WWW/HTTP',
35 'Topic :: Software Development :: Libraries',
36 ],
37 keywords='urllib httplib threadsafe filepost http https ssl pooling',
38 author='Andrey Petrov',
39 author_email='[email protected]',
40 url='https://urllib3.readthedocs.io/',
41 license='MIT',
42 packages=['urllib3',
43 'urllib3.packages', 'urllib3.packages.ssl_match_hostname',
44 'urllib3.packages.backports', 'urllib3.contrib',
45 'urllib3.util',
46 ],
47 requires=[],
48 tests_require=[
49 # These are a less-specific subset of dev-requirements.txt, for the
50 # convenience of distro package maintainers.
51 'nose',
52 'mock',
53 'tornado',
54 ],
55 test_suite='test',
56 extras_require={
57 'secure': [
58 'pyOpenSSL>=0.14',
59 'cryptography>=1.3.4',
60 'idna>=2.0.0',
61 'certifi',
62 ],
63 'secure:python_version <= "2.7"': [
64 "ipaddress",
65 ],
66 'socks': [
67 'PySocks>=1.5.6,<2.0,!=1.5.7',
68 ]
69 },
70 )
71
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -59,8 +59,6 @@
'cryptography>=1.3.4',
'idna>=2.0.0',
'certifi',
- ],
- 'secure:python_version <= "2.7"': [
"ipaddress",
],
'socks': [
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -59,8 +59,6 @@\n 'cryptography>=1.3.4',\n 'idna>=2.0.0',\n 'certifi',\n- ],\n- 'secure:python_version <= \"2.7\"': [\n \"ipaddress\",\n ],\n 'socks': [\n", "issue": "urllib3 fails to install on centos7 due to old setuptools not supporting <=, < environment markers.\nCurrent urllib3 fails to install on centos7. This bug was most likely introduced after https://github.com/shazow/urllib3/commit/9f5454eac808a105307b2d363c99ce97e5109821.\n\ncentos7 ships a very old version of setuptools (0.9.8) which does not support `<=` as an environment marker. See https://github.com/pypa/setuptools/issues/380.\n\n```\n$ python --version\nPython 2.7.5\n\n$ rpm -qa python-setuptools\npython-setuptools-0.9.8-4.el7.noarch\n\n$ lsb_release -a\n...\nDescription: CentOS Linux release 7.2.1511 (Core) \nRelease: 7.2.1511\n\n$ virtualenv venv\n...\n\n$ venv/bin/pip install urllib3\nDownloading/unpacking urllib3\n Downloading urllib3-1.18.tar.gz (183kB): 183kB downloaded\n Running setup.py egg_info for package urllib3\n error in urllib3 setup command: Invalid environment marker: python_version <= \"2.7\"\n Complete output from command python setup.py egg_info:\n error in urllib3 setup command: Invalid environment marker: python_version <= \"2.7\"\n\n----------------------------------------\nCleaning up...\nCommand python setup.py egg_info failed with error code 1 in /home/rene/src/venv/build/urllib3\nStoring complete log in /home/rene/.pip/pip.log\n```\n\nInstalling https://github.com/shazow/urllib3/commit/f620d997134708b09560ca5797aa79a59a2ef4c0 (commit before 9f5454eac808a105307b2d363c99ce97e5109821) works fine.\n\n```\n$ venv/bin/pip install git+git://github.com/shazow/urllib3.git@f620d997134708b09560ca5797aa79a59a2ef4c0\n...\nSuccessfully installed urllib3\nCleaning up...\n```\n\nBut 9f5454eac808a105307b2d363c99ce97e5109821 fails.\n\n```\n$ venv/bin/pip install git+git://github.com/shazow/urllib3.git@9f5454eac808a105307b2d363c99ce97e5109821\nDownloading/unpacking git+git://github.com/shazow/urllib3.git@9f5454eac808a105307b2d363c99ce97e5109821\n Cloning git://github.com/shazow/urllib3.git (to 9f5454eac808a105307b2d363c99ce97e5109821) to /tmp/pip-lnVDAG-build\n Could not find a tag or branch '9f5454eac808a105307b2d363c99ce97e5109821', assuming commit.\n Running setup.py egg_info for package from git+git://github.com/shazow/urllib3.git@9f5454eac808a105307b2d363c99ce97e5109821\n error in urllib3 setup command: Invalid environment marker: python_version < \"3.3\"\n Complete output from command python setup.py egg_info:\n error in urllib3 setup command: Invalid environment marker: python_version < \"3.3\"\n\n----------------------------------------\nCleaning up...\nCommand python setup.py egg_info failed with error code 1 in /tmp/pip-lnVDAG-build\nStoring complete log in /home/rene/.pip/pip.log\n```\n\nurllib3 1.17 setup.py does not ship with < or <= markers so my workaround right now is to install urllib3==1.17.\n\n", "before_files": [{"content": "#!/usr/bin/env python\n\nfrom setuptools import setup\n\nimport os\nimport re\nimport codecs\n\nbase_path = os.path.dirname(__file__)\n\n# Get the version (borrowed from SQLAlchemy)\nwith open(os.path.join(base_path, 'urllib3', '__init__.py')) as fp:\n VERSION = re.compile(r\".*__version__ = '(.*?)'\",\n re.S).match(fp.read()).group(1)\n\nwith codecs.open('README.rst', encoding='utf-8') as fp:\n readme = fp.read()\nwith codecs.open('CHANGES.rst', encoding='utf-8') as fp:\n changes = 
fp.read()\nversion = VERSION\n\nsetup(name='urllib3',\n version=version,\n description=\"HTTP library with thread-safe connection pooling, file post, and more.\",\n long_description=u'\\n\\n'.join([readme, changes]),\n classifiers=[\n 'Environment :: Web Environment',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: MIT License',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 3',\n 'Topic :: Internet :: WWW/HTTP',\n 'Topic :: Software Development :: Libraries',\n ],\n keywords='urllib httplib threadsafe filepost http https ssl pooling',\n author='Andrey Petrov',\n author_email='[email protected]',\n url='https://urllib3.readthedocs.io/',\n license='MIT',\n packages=['urllib3',\n 'urllib3.packages', 'urllib3.packages.ssl_match_hostname',\n 'urllib3.packages.backports', 'urllib3.contrib',\n 'urllib3.util',\n ],\n requires=[],\n tests_require=[\n # These are a less-specific subset of dev-requirements.txt, for the\n # convenience of distro package maintainers.\n 'nose',\n 'mock',\n 'tornado',\n ],\n test_suite='test',\n extras_require={\n 'secure': [\n 'pyOpenSSL>=0.14',\n 'cryptography>=1.3.4',\n 'idna>=2.0.0',\n 'certifi',\n ],\n 'secure:python_version <= \"2.7\"': [\n \"ipaddress\",\n ],\n 'socks': [\n 'PySocks>=1.5.6,<2.0,!=1.5.7',\n ]\n },\n )\n", "path": "setup.py"}]}
| 2,223 | 90 |
gh_patches_debug_13457
|
rasdani/github-patches
|
git_diff
|
modin-project__modin-3382
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
JSON dispatcher data file split correction
Originated from https://github.com/modin-project/modin/pull/2607#discussion_r571989125.
</issue>
<code>
[start of modin/engines/base/io/text/json_dispatcher.py]
1 # Licensed to Modin Development Team under one or more contributor license agreements.
2 # See the NOTICE file distributed with this work for additional information regarding
3 # copyright ownership. The Modin Development Team licenses this file to you under the
4 # Apache License, Version 2.0 (the "License"); you may not use this file except in
5 # compliance with the License. You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software distributed under
10 # the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
11 # ANY KIND, either express or implied. See the License for the specific language
12 # governing permissions and limitations under the License.
13
14 """Module houses `JSONDispatcher` class, that is used for reading `.json` files."""
15
16 from modin.engines.base.io.text.text_file_dispatcher import TextFileDispatcher
17 from io import BytesIO
18 import pandas
19 import numpy as np
20 from csv import QUOTE_NONE
21
22 from modin.config import NPartitions
23
24
25 class JSONDispatcher(TextFileDispatcher):
26 """
27 Class handles utils for reading `.json` files.
28
29 Inherits some common for text files util functions from `TextFileDispatcher` class.
30 """
31
32 @classmethod
33 def _read(cls, path_or_buf, **kwargs):
34 """
35 Read data from `path_or_buf` according to the passed `read_json` `kwargs` parameters.
36
37 Parameters
38 ----------
39 path_or_buf : str, path object or file-like object
40 `path_or_buf` parameter of `read_json` function.
41 **kwargs : dict
42 Parameters of `read_json` function.
43
44 Returns
45 -------
46 BaseQueryCompiler
47 Query compiler with imported data for further processing.
48 """
49 path_or_buf = cls.get_path_or_buffer(path_or_buf)
50 if isinstance(path_or_buf, str):
51 if not cls.file_exists(path_or_buf):
52 return cls.single_worker_read(path_or_buf, **kwargs)
53 path_or_buf = cls.get_path(path_or_buf)
54 elif not cls.pathlib_or_pypath(path_or_buf):
55 return cls.single_worker_read(path_or_buf, **kwargs)
56 if not kwargs.get("lines", False):
57 return cls.single_worker_read(path_or_buf, **kwargs)
58 columns = pandas.read_json(
59 BytesIO(b"" + open(path_or_buf, "rb").readline()), lines=True
60 ).columns
61 kwargs["columns"] = columns
62 empty_pd_df = pandas.DataFrame(columns=columns)
63
64 with cls.file_open(path_or_buf, "rb", kwargs.get("compression", "infer")) as f:
65 partition_ids = []
66 index_ids = []
67 dtypes_ids = []
68
69 column_widths, num_splits = cls._define_metadata(empty_pd_df, columns)
70
71 args = {"fname": path_or_buf, "num_splits": num_splits, **kwargs}
72
73 splits = cls.partitioned_file(
74 f,
75 num_partitions=NPartitions.get(),
76 is_quoting=(args.get("quoting", "") != QUOTE_NONE),
77 )
78 for start, end in splits:
79 args.update({"start": start, "end": end})
80 partition_id = cls.deploy(cls.parse, num_splits + 3, args)
81 partition_ids.append(partition_id[:-3])
82 index_ids.append(partition_id[-3])
83 dtypes_ids.append(partition_id[-2])
84
85 # partition_id[-1] contains the columns for each partition, which will be useful
86 # for implementing when `lines=False`.
87 row_lengths = cls.materialize(index_ids)
88 new_index = pandas.RangeIndex(sum(row_lengths))
89
90 dtypes = cls.get_dtypes(dtypes_ids)
91 partition_ids = cls.build_partition(partition_ids, row_lengths, column_widths)
92
93 if isinstance(dtypes, pandas.Series):
94 dtypes.index = columns
95 else:
96 dtypes = pandas.Series(dtypes, index=columns)
97
98 new_frame = cls.frame_cls(
99 np.array(partition_ids),
100 new_index,
101 columns,
102 row_lengths,
103 column_widths,
104 dtypes=dtypes,
105 )
106 new_frame.synchronize_labels(axis=0)
107 return cls.query_compiler_cls(new_frame)
108
[end of modin/engines/base/io/text/json_dispatcher.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/modin/engines/base/io/text/json_dispatcher.py b/modin/engines/base/io/text/json_dispatcher.py
--- a/modin/engines/base/io/text/json_dispatcher.py
+++ b/modin/engines/base/io/text/json_dispatcher.py
@@ -17,7 +17,6 @@
from io import BytesIO
import pandas
import numpy as np
-from csv import QUOTE_NONE
from modin.config import NPartitions
@@ -73,7 +72,6 @@
splits = cls.partitioned_file(
f,
num_partitions=NPartitions.get(),
- is_quoting=(args.get("quoting", "") != QUOTE_NONE),
)
for start, end in splits:
args.update({"start": start, "end": end})
|
{"golden_diff": "diff --git a/modin/engines/base/io/text/json_dispatcher.py b/modin/engines/base/io/text/json_dispatcher.py\n--- a/modin/engines/base/io/text/json_dispatcher.py\n+++ b/modin/engines/base/io/text/json_dispatcher.py\n@@ -17,7 +17,6 @@\n from io import BytesIO\n import pandas\n import numpy as np\n-from csv import QUOTE_NONE\n \n from modin.config import NPartitions\n \n@@ -73,7 +72,6 @@\n splits = cls.partitioned_file(\n f,\n num_partitions=NPartitions.get(),\n- is_quoting=(args.get(\"quoting\", \"\") != QUOTE_NONE),\n )\n for start, end in splits:\n args.update({\"start\": start, \"end\": end})\n", "issue": "JSON dispatcher data file split correction\nOriginated from https://github.com/modin-project/modin/pull/2607#discussion_r571989125.\n", "before_files": [{"content": "# Licensed to Modin Development Team under one or more contributor license agreements.\n# See the NOTICE file distributed with this work for additional information regarding\n# copyright ownership. The Modin Development Team licenses this file to you under the\n# Apache License, Version 2.0 (the \"License\"); you may not use this file except in\n# compliance with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software distributed under\n# the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF\n# ANY KIND, either express or implied. See the License for the specific language\n# governing permissions and limitations under the License.\n\n\"\"\"Module houses `JSONDispatcher` class, that is used for reading `.json` files.\"\"\"\n\nfrom modin.engines.base.io.text.text_file_dispatcher import TextFileDispatcher\nfrom io import BytesIO\nimport pandas\nimport numpy as np\nfrom csv import QUOTE_NONE\n\nfrom modin.config import NPartitions\n\n\nclass JSONDispatcher(TextFileDispatcher):\n \"\"\"\n Class handles utils for reading `.json` files.\n\n Inherits some common for text files util functions from `TextFileDispatcher` class.\n \"\"\"\n\n @classmethod\n def _read(cls, path_or_buf, **kwargs):\n \"\"\"\n Read data from `path_or_buf` according to the passed `read_json` `kwargs` parameters.\n\n Parameters\n ----------\n path_or_buf : str, path object or file-like object\n `path_or_buf` parameter of `read_json` function.\n **kwargs : dict\n Parameters of `read_json` function.\n\n Returns\n -------\n BaseQueryCompiler\n Query compiler with imported data for further processing.\n \"\"\"\n path_or_buf = cls.get_path_or_buffer(path_or_buf)\n if isinstance(path_or_buf, str):\n if not cls.file_exists(path_or_buf):\n return cls.single_worker_read(path_or_buf, **kwargs)\n path_or_buf = cls.get_path(path_or_buf)\n elif not cls.pathlib_or_pypath(path_or_buf):\n return cls.single_worker_read(path_or_buf, **kwargs)\n if not kwargs.get(\"lines\", False):\n return cls.single_worker_read(path_or_buf, **kwargs)\n columns = pandas.read_json(\n BytesIO(b\"\" + open(path_or_buf, \"rb\").readline()), lines=True\n ).columns\n kwargs[\"columns\"] = columns\n empty_pd_df = pandas.DataFrame(columns=columns)\n\n with cls.file_open(path_or_buf, \"rb\", kwargs.get(\"compression\", \"infer\")) as f:\n partition_ids = []\n index_ids = []\n dtypes_ids = []\n\n column_widths, num_splits = cls._define_metadata(empty_pd_df, columns)\n\n args = {\"fname\": path_or_buf, \"num_splits\": num_splits, **kwargs}\n\n splits = cls.partitioned_file(\n f,\n num_partitions=NPartitions.get(),\n 
is_quoting=(args.get(\"quoting\", \"\") != QUOTE_NONE),\n )\n for start, end in splits:\n args.update({\"start\": start, \"end\": end})\n partition_id = cls.deploy(cls.parse, num_splits + 3, args)\n partition_ids.append(partition_id[:-3])\n index_ids.append(partition_id[-3])\n dtypes_ids.append(partition_id[-2])\n\n # partition_id[-1] contains the columns for each partition, which will be useful\n # for implementing when `lines=False`.\n row_lengths = cls.materialize(index_ids)\n new_index = pandas.RangeIndex(sum(row_lengths))\n\n dtypes = cls.get_dtypes(dtypes_ids)\n partition_ids = cls.build_partition(partition_ids, row_lengths, column_widths)\n\n if isinstance(dtypes, pandas.Series):\n dtypes.index = columns\n else:\n dtypes = pandas.Series(dtypes, index=columns)\n\n new_frame = cls.frame_cls(\n np.array(partition_ids),\n new_index,\n columns,\n row_lengths,\n column_widths,\n dtypes=dtypes,\n )\n new_frame.synchronize_labels(axis=0)\n return cls.query_compiler_cls(new_frame)\n", "path": "modin/engines/base/io/text/json_dispatcher.py"}]}
| 1,710 | 170 |
gh_patches_debug_7993 | rasdani/github-patches | git_diff | pytorch__vision-2954 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Feature Pyramid Network code bug
https://github.com/pytorch/vision/blob/c2e8a00885e68ae1200eb6440f540e181d9125de/torchvision/ops/feature_pyramid_network.py#L60
This line does not work; I think it should be modified to use self.modules() instead of self.children().
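For context, a minimal sketch of the difference (assuming standard `torch.nn` behaviour; the `Toy` module below is illustrative, not from torchvision): `children()` only yields the immediate submodules, so the `isinstance(m, nn.Conv2d)` check never fires when the convolutions sit inside an `nn.ModuleList`, while `modules()` recurses into them.

```python
import torch.nn as nn

class Toy(nn.Module):
    def __init__(self):
        super().__init__()
        # like FeaturePyramidNetwork, the conv layers live inside a ModuleList
        self.blocks = nn.ModuleList([nn.Conv2d(3, 8, 1), nn.Conv2d(8, 8, 3, padding=1)])

t = Toy()
# children() yields only the immediate child (the ModuleList itself),
# so an isinstance(m, nn.Conv2d) check never matches anything
print(any(isinstance(m, nn.Conv2d) for m in t.children()))  # False
# modules() recurses into submodules and does reach the Conv2d layers
print(any(isinstance(m, nn.Conv2d) for m in t.modules()))   # True
```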
</issue>
<code>
[start of torchvision/ops/feature_pyramid_network.py]
1 from collections import OrderedDict
2
3 import torch
4 import torch.nn.functional as F
5 from torch import nn, Tensor
6
7 from torch.jit.annotations import Tuple, List, Dict, Optional
8
9
10 class ExtraFPNBlock(nn.Module):
11 """
12 Base class for the extra block in the FPN.
13
14 Arguments:
15 results (List[Tensor]): the result of the FPN
16 x (List[Tensor]): the original feature maps
17 names (List[str]): the names for each one of the
18 original feature maps
19
20 Returns:
21 results (List[Tensor]): the extended set of results
22 of the FPN
23 names (List[str]): the extended set of names for the results
24 """
25 def forward(
26 self,
27 results: List[Tensor],
28 x: List[Tensor],
29 names: List[str],
30 ) -> Tuple[List[Tensor], List[str]]:
31 pass
32
33
34 class FeaturePyramidNetwork(nn.Module):
35 """
36 Module that adds a FPN from on top of a set of feature maps. This is based on
37 `"Feature Pyramid Network for Object Detection" <https://arxiv.org/abs/1612.03144>`_.
38
39 The feature maps are currently supposed to be in increasing depth
40 order.
41
42 The input to the model is expected to be an OrderedDict[Tensor], containing
43 the feature maps on top of which the FPN will be added.
44
45 Arguments:
46 in_channels_list (list[int]): number of channels for each feature map that
47 is passed to the module
48 out_channels (int): number of channels of the FPN representation
49 extra_blocks (ExtraFPNBlock or None): if provided, extra operations will
50 be performed. It is expected to take the fpn features, the original
51 features and the names of the original features as input, and returns
52 a new list of feature maps and their corresponding names
53
54 Examples::
55
56 >>> m = torchvision.ops.FeaturePyramidNetwork([10, 20, 30], 5)
57 >>> # get some dummy data
58 >>> x = OrderedDict()
59 >>> x['feat0'] = torch.rand(1, 10, 64, 64)
60 >>> x['feat2'] = torch.rand(1, 20, 16, 16)
61 >>> x['feat3'] = torch.rand(1, 30, 8, 8)
62 >>> # compute the FPN on top of x
63 >>> output = m(x)
64 >>> print([(k, v.shape) for k, v in output.items()])
65 >>> # returns
66 >>> [('feat0', torch.Size([1, 5, 64, 64])),
67 >>> ('feat2', torch.Size([1, 5, 16, 16])),
68 >>> ('feat3', torch.Size([1, 5, 8, 8]))]
69
70 """
71 def __init__(
72 self,
73 in_channels_list: List[int],
74 out_channels: int,
75 extra_blocks: Optional[ExtraFPNBlock] = None,
76 ):
77 super(FeaturePyramidNetwork, self).__init__()
78 self.inner_blocks = nn.ModuleList()
79 self.layer_blocks = nn.ModuleList()
80 for in_channels in in_channels_list:
81 if in_channels == 0:
82 raise ValueError("in_channels=0 is currently not supported")
83 inner_block_module = nn.Conv2d(in_channels, out_channels, 1)
84 layer_block_module = nn.Conv2d(out_channels, out_channels, 3, padding=1)
85 self.inner_blocks.append(inner_block_module)
86 self.layer_blocks.append(layer_block_module)
87
88 # initialize parameters now to avoid modifying the initialization of top_blocks
89 for m in self.children():
90 if isinstance(m, nn.Conv2d):
91 nn.init.kaiming_uniform_(m.weight, a=1)
92 nn.init.constant_(m.bias, 0)
93
94 if extra_blocks is not None:
95 assert isinstance(extra_blocks, ExtraFPNBlock)
96 self.extra_blocks = extra_blocks
97
98 def get_result_from_inner_blocks(self, x: Tensor, idx: int) -> Tensor:
99 """
100 This is equivalent to self.inner_blocks[idx](x),
101 but torchscript doesn't support this yet
102 """
103 num_blocks = 0
104 for m in self.inner_blocks:
105 num_blocks += 1
106 if idx < 0:
107 idx += num_blocks
108 i = 0
109 out = x
110 for module in self.inner_blocks:
111 if i == idx:
112 out = module(x)
113 i += 1
114 return out
115
116 def get_result_from_layer_blocks(self, x: Tensor, idx: int) -> Tensor:
117 """
118 This is equivalent to self.layer_blocks[idx](x),
119 but torchscript doesn't support this yet
120 """
121 num_blocks = 0
122 for m in self.layer_blocks:
123 num_blocks += 1
124 if idx < 0:
125 idx += num_blocks
126 i = 0
127 out = x
128 for module in self.layer_blocks:
129 if i == idx:
130 out = module(x)
131 i += 1
132 return out
133
134 def forward(self, x: Dict[str, Tensor]) -> Dict[str, Tensor]:
135 """
136 Computes the FPN for a set of feature maps.
137
138 Arguments:
139 x (OrderedDict[Tensor]): feature maps for each feature level.
140
141 Returns:
142 results (OrderedDict[Tensor]): feature maps after FPN layers.
143 They are ordered from highest resolution first.
144 """
145 # unpack OrderedDict into two lists for easier handling
146 names = list(x.keys())
147 x = list(x.values())
148
149 last_inner = self.get_result_from_inner_blocks(x[-1], -1)
150 results = []
151 results.append(self.get_result_from_layer_blocks(last_inner, -1))
152
153 for idx in range(len(x) - 2, -1, -1):
154 inner_lateral = self.get_result_from_inner_blocks(x[idx], idx)
155 feat_shape = inner_lateral.shape[-2:]
156 inner_top_down = F.interpolate(last_inner, size=feat_shape, mode="nearest")
157 last_inner = inner_lateral + inner_top_down
158 results.insert(0, self.get_result_from_layer_blocks(last_inner, idx))
159
160 if self.extra_blocks is not None:
161 results, names = self.extra_blocks(results, x, names)
162
163 # make it back an OrderedDict
164 out = OrderedDict([(k, v) for k, v in zip(names, results)])
165
166 return out
167
168
169 class LastLevelMaxPool(ExtraFPNBlock):
170 """
171 Applies a max_pool2d on top of the last feature map
172 """
173 def forward(
174 self,
175 x: List[Tensor],
176 y: List[Tensor],
177 names: List[str],
178 ) -> Tuple[List[Tensor], List[str]]:
179 names.append("pool")
180 x.append(F.max_pool2d(x[-1], 1, 2, 0))
181 return x, names
182
183
184 class LastLevelP6P7(ExtraFPNBlock):
185 """
186 This module is used in RetinaNet to generate extra layers, P6 and P7.
187 """
188 def __init__(self, in_channels: int, out_channels: int):
189 super(LastLevelP6P7, self).__init__()
190 self.p6 = nn.Conv2d(in_channels, out_channels, 3, 2, 1)
191 self.p7 = nn.Conv2d(out_channels, out_channels, 3, 2, 1)
192 for module in [self.p6, self.p7]:
193 nn.init.kaiming_uniform_(module.weight, a=1)
194 nn.init.constant_(module.bias, 0)
195 self.use_P5 = in_channels == out_channels
196
197 def forward(
198 self,
199 p: List[Tensor],
200 c: List[Tensor],
201 names: List[str],
202 ) -> Tuple[List[Tensor], List[str]]:
203 p5, c5 = p[-1], c[-1]
204 x = p5 if self.use_P5 else c5
205 p6 = self.p6(x)
206 p7 = self.p7(F.relu(p6))
207 p.extend([p6, p7])
208 names.extend(["p6", "p7"])
209 return p, names
210
[end of torchvision/ops/feature_pyramid_network.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/torchvision/ops/feature_pyramid_network.py b/torchvision/ops/feature_pyramid_network.py
--- a/torchvision/ops/feature_pyramid_network.py
+++ b/torchvision/ops/feature_pyramid_network.py
@@ -86,7 +86,7 @@
self.layer_blocks.append(layer_block_module)
# initialize parameters now to avoid modifying the initialization of top_blocks
- for m in self.children():
+ for m in self.modules():
if isinstance(m, nn.Conv2d):
nn.init.kaiming_uniform_(m.weight, a=1)
nn.init.constant_(m.bias, 0)
|
{"golden_diff": "diff --git a/torchvision/ops/feature_pyramid_network.py b/torchvision/ops/feature_pyramid_network.py\n--- a/torchvision/ops/feature_pyramid_network.py\n+++ b/torchvision/ops/feature_pyramid_network.py\n@@ -86,7 +86,7 @@\n self.layer_blocks.append(layer_block_module)\n \n # initialize parameters now to avoid modifying the initialization of top_blocks\n- for m in self.children():\n+ for m in self.modules():\n if isinstance(m, nn.Conv2d):\n nn.init.kaiming_uniform_(m.weight, a=1)\n nn.init.constant_(m.bias, 0)\n", "issue": "Feature Pyramid Network code bug\nhttps://github.com/pytorch/vision/blob/c2e8a00885e68ae1200eb6440f540e181d9125de/torchvision/ops/feature_pyramid_network.py#L60\r\n\r\nthis line does not work, I think it should be modified as self.modules() instead of self.children()\n", "before_files": [{"content": "from collections import OrderedDict\n\nimport torch\nimport torch.nn.functional as F\nfrom torch import nn, Tensor\n\nfrom torch.jit.annotations import Tuple, List, Dict, Optional\n\n\nclass ExtraFPNBlock(nn.Module):\n \"\"\"\n Base class for the extra block in the FPN.\n\n Arguments:\n results (List[Tensor]): the result of the FPN\n x (List[Tensor]): the original feature maps\n names (List[str]): the names for each one of the\n original feature maps\n\n Returns:\n results (List[Tensor]): the extended set of results\n of the FPN\n names (List[str]): the extended set of names for the results\n \"\"\"\n def forward(\n self,\n results: List[Tensor],\n x: List[Tensor],\n names: List[str],\n ) -> Tuple[List[Tensor], List[str]]:\n pass\n\n\nclass FeaturePyramidNetwork(nn.Module):\n \"\"\"\n Module that adds a FPN from on top of a set of feature maps. This is based on\n `\"Feature Pyramid Network for Object Detection\" <https://arxiv.org/abs/1612.03144>`_.\n\n The feature maps are currently supposed to be in increasing depth\n order.\n\n The input to the model is expected to be an OrderedDict[Tensor], containing\n the feature maps on top of which the FPN will be added.\n\n Arguments:\n in_channels_list (list[int]): number of channels for each feature map that\n is passed to the module\n out_channels (int): number of channels of the FPN representation\n extra_blocks (ExtraFPNBlock or None): if provided, extra operations will\n be performed. 
It is expected to take the fpn features, the original\n features and the names of the original features as input, and returns\n a new list of feature maps and their corresponding names\n\n Examples::\n\n >>> m = torchvision.ops.FeaturePyramidNetwork([10, 20, 30], 5)\n >>> # get some dummy data\n >>> x = OrderedDict()\n >>> x['feat0'] = torch.rand(1, 10, 64, 64)\n >>> x['feat2'] = torch.rand(1, 20, 16, 16)\n >>> x['feat3'] = torch.rand(1, 30, 8, 8)\n >>> # compute the FPN on top of x\n >>> output = m(x)\n >>> print([(k, v.shape) for k, v in output.items()])\n >>> # returns\n >>> [('feat0', torch.Size([1, 5, 64, 64])),\n >>> ('feat2', torch.Size([1, 5, 16, 16])),\n >>> ('feat3', torch.Size([1, 5, 8, 8]))]\n\n \"\"\"\n def __init__(\n self,\n in_channels_list: List[int],\n out_channels: int,\n extra_blocks: Optional[ExtraFPNBlock] = None,\n ):\n super(FeaturePyramidNetwork, self).__init__()\n self.inner_blocks = nn.ModuleList()\n self.layer_blocks = nn.ModuleList()\n for in_channels in in_channels_list:\n if in_channels == 0:\n raise ValueError(\"in_channels=0 is currently not supported\")\n inner_block_module = nn.Conv2d(in_channels, out_channels, 1)\n layer_block_module = nn.Conv2d(out_channels, out_channels, 3, padding=1)\n self.inner_blocks.append(inner_block_module)\n self.layer_blocks.append(layer_block_module)\n\n # initialize parameters now to avoid modifying the initialization of top_blocks\n for m in self.children():\n if isinstance(m, nn.Conv2d):\n nn.init.kaiming_uniform_(m.weight, a=1)\n nn.init.constant_(m.bias, 0)\n\n if extra_blocks is not None:\n assert isinstance(extra_blocks, ExtraFPNBlock)\n self.extra_blocks = extra_blocks\n\n def get_result_from_inner_blocks(self, x: Tensor, idx: int) -> Tensor:\n \"\"\"\n This is equivalent to self.inner_blocks[idx](x),\n but torchscript doesn't support this yet\n \"\"\"\n num_blocks = 0\n for m in self.inner_blocks:\n num_blocks += 1\n if idx < 0:\n idx += num_blocks\n i = 0\n out = x\n for module in self.inner_blocks:\n if i == idx:\n out = module(x)\n i += 1\n return out\n\n def get_result_from_layer_blocks(self, x: Tensor, idx: int) -> Tensor:\n \"\"\"\n This is equivalent to self.layer_blocks[idx](x),\n but torchscript doesn't support this yet\n \"\"\"\n num_blocks = 0\n for m in self.layer_blocks:\n num_blocks += 1\n if idx < 0:\n idx += num_blocks\n i = 0\n out = x\n for module in self.layer_blocks:\n if i == idx:\n out = module(x)\n i += 1\n return out\n\n def forward(self, x: Dict[str, Tensor]) -> Dict[str, Tensor]:\n \"\"\"\n Computes the FPN for a set of feature maps.\n\n Arguments:\n x (OrderedDict[Tensor]): feature maps for each feature level.\n\n Returns:\n results (OrderedDict[Tensor]): feature maps after FPN layers.\n They are ordered from highest resolution first.\n \"\"\"\n # unpack OrderedDict into two lists for easier handling\n names = list(x.keys())\n x = list(x.values())\n\n last_inner = self.get_result_from_inner_blocks(x[-1], -1)\n results = []\n results.append(self.get_result_from_layer_blocks(last_inner, -1))\n\n for idx in range(len(x) - 2, -1, -1):\n inner_lateral = self.get_result_from_inner_blocks(x[idx], idx)\n feat_shape = inner_lateral.shape[-2:]\n inner_top_down = F.interpolate(last_inner, size=feat_shape, mode=\"nearest\")\n last_inner = inner_lateral + inner_top_down\n results.insert(0, self.get_result_from_layer_blocks(last_inner, idx))\n\n if self.extra_blocks is not None:\n results, names = self.extra_blocks(results, x, names)\n\n # make it back an OrderedDict\n out = OrderedDict([(k, v) for k, v 
in zip(names, results)])\n\n return out\n\n\nclass LastLevelMaxPool(ExtraFPNBlock):\n \"\"\"\n Applies a max_pool2d on top of the last feature map\n \"\"\"\n def forward(\n self,\n x: List[Tensor],\n y: List[Tensor],\n names: List[str],\n ) -> Tuple[List[Tensor], List[str]]:\n names.append(\"pool\")\n x.append(F.max_pool2d(x[-1], 1, 2, 0))\n return x, names\n\n\nclass LastLevelP6P7(ExtraFPNBlock):\n \"\"\"\n This module is used in RetinaNet to generate extra layers, P6 and P7.\n \"\"\"\n def __init__(self, in_channels: int, out_channels: int):\n super(LastLevelP6P7, self).__init__()\n self.p6 = nn.Conv2d(in_channels, out_channels, 3, 2, 1)\n self.p7 = nn.Conv2d(out_channels, out_channels, 3, 2, 1)\n for module in [self.p6, self.p7]:\n nn.init.kaiming_uniform_(module.weight, a=1)\n nn.init.constant_(module.bias, 0)\n self.use_P5 = in_channels == out_channels\n\n def forward(\n self,\n p: List[Tensor],\n c: List[Tensor],\n names: List[str],\n ) -> Tuple[List[Tensor], List[str]]:\n p5, c5 = p[-1], c[-1]\n x = p5 if self.use_P5 else c5\n p6 = self.p6(x)\n p7 = self.p7(F.relu(p6))\n p.extend([p6, p7])\n names.extend([\"p6\", \"p7\"])\n return p, names\n", "path": "torchvision/ops/feature_pyramid_network.py"}]}
| 2,966 | 144 |
gh_patches_debug_33856 | rasdani/github-patches | git_diff | hpcaitech__ColossalAI-2674 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[tensor] fix some unittests
[FEATURE]: Patch meta information of `torch.nn.functional.softmax()`
This is a part of issue #2628; we will patch the meta information of `torch.nn.functional.softmax()`.
</issue>
<code>
[start of colossalai/auto_parallel/meta_profiler/meta_registry/activation.py]
1 from typing import List, Tuple
2
3 import torch
4
5 from colossalai.auto_parallel.tensor_shard.sharding_strategy import MemoryCost, OperationDataType, TrainCycleItem
6 from colossalai.fx.profiler.memory_utils import activation_size
7 from colossalai.fx.profiler.opcount import flop_mapping
8
9 from ..registry import meta_register
10
11 __all__ = ["relu_meta_info"]
12
13
14 @meta_register.register(torch.nn.ReLU)
15 def relu_meta_info(*args, **kwargs) -> Tuple[TrainCycleItem, TrainCycleItem, List[torch.Tensor]]:
16 """torch.nn.ReLU metainfo generator
17 The aten graph of torch.nn.ReLU is
18 graph():
19 %input_2 : [#users=1] = placeholder[target=placeholder](default=)
20 %relu_default : [#users=2] = call_function[target=torch.ops.aten.relu.default](args = (%input_2,), kwargs = {})
21 %zeros_like_default : [#users=1] = call_function[target=torch.ops.aten.zeros_like.default](args = (%relu_default,), kwargs = {dtype: None, layout: None, device: None, pin_memory: None})
22 %detach_default : [#users=1] = call_function[target=torch.ops.aten.detach.default](args = (%relu_default,), kwargs = {})
23 %threshold_backward_default : [#users=1] = call_function[target=torch.ops.aten.threshold_backward.default](args = (%zeros_like_default, %detach_default, None), kwargs = {})
24 %detach_default_1 : [#users=1] = call_function[target=torch.ops.aten.detach.default](args = (%threshold_backward_default,), kwargs = {})
25 %detach_default_2 : [#users=0] = call_function[target=torch.ops.aten.detach.default](args = (%detach_default_1,), kwargs = {})
26
27 Returns:
28 Tuple[TrainCycleItem, TrainCycleItem, List[torch.Tensor]]: compute cost, memory cost and forward inputs
29 """
30
31 input_tensor = args[0].data
32 output_tensor = next(filter(lambda x: x.type == OperationDataType.OUTPUT, args)).data
33 is_inplace = kwargs.get("inplace", False)
34
35 # construct input args for forward
36 fwd_in_args = [input_tensor]
37
38 # construct input args for backward
39 bwd_in_args = [output_tensor]
40
41 # calculate cost
42 # the fwd op with compute cost is relu.default
43 # the bwd op with compute cost is threshold_backward
44
45 # calculate compute cost
46 fwd_compute_cost = flop_mapping[torch.ops.aten.relu.default](fwd_in_args, (output_tensor,))
47 bwd_compute_cost = flop_mapping[torch.ops.aten.threshold_backward.default](bwd_in_args, (input_tensor,))
48 compute_cost = TrainCycleItem(fwd=fwd_compute_cost, bwd=bwd_compute_cost, total=fwd_compute_cost + bwd_compute_cost)
49
50 # calculate memory cost
51 # NOTE: the inplace ReLU don't have forward memory cost
52 # NOTE: currently in SPMD solver we always believe that there will be a new tensor created in forward
53 fwd_memory_cost = MemoryCost(
54 activation=activation_size(input_tensor) if is_inplace else activation_size([output_tensor, input_tensor]),
55 parameter=0,
56 temp=0,
57 buffer=0)
58
59 bwd_memory_cost = MemoryCost(activation=activation_size(input_tensor), parameter=0, temp=0, buffer=0)
60
61 # total cost is the sum of forward and backward cost
62 total_cost = MemoryCost(activation=fwd_memory_cost.activation + bwd_memory_cost.activation,
63 parameter=fwd_memory_cost.parameter + bwd_memory_cost.parameter)
64
65 memory_cost = TrainCycleItem(fwd=fwd_memory_cost, bwd=bwd_memory_cost, total=total_cost)
66
67 # store fwd_in, fwd_buffer, fwd_out
68 # NOTE: It might seems a little bit weird here, we just want to align it with the older version
69 # of MetaInfoProp. In the future we might modify this part to make it clearer.
70 fwd_in = []
71 fwd_buffer = [torch.zeros_like(output_tensor, device='meta')]
72 fwd_out = [torch.zeros_like(output_tensor, device='meta')]
73
74 return compute_cost, memory_cost, fwd_in, fwd_buffer, fwd_out
75
[end of colossalai/auto_parallel/meta_profiler/meta_registry/activation.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/colossalai/auto_parallel/meta_profiler/meta_registry/activation.py b/colossalai/auto_parallel/meta_profiler/meta_registry/activation.py
--- a/colossalai/auto_parallel/meta_profiler/meta_registry/activation.py
+++ b/colossalai/auto_parallel/meta_profiler/meta_registry/activation.py
@@ -72,3 +72,53 @@
fwd_out = [torch.zeros_like(output_tensor, device='meta')]
return compute_cost, memory_cost, fwd_in, fwd_buffer, fwd_out
+
+
+@meta_register.register(torch.nn.Softmax)
+@meta_register.register(torch.nn.functional.softmax)
+def softmax_meta_info(*args, **kwargs) -> Tuple[TrainCycleItem, TrainCycleItem, List[torch.Tensor]]:
+ """torch.nn.Softmax metainfo generator
+ Returns:
+ Tuple[TrainCycleItem, TrainCycleItem, List[torch.Tensor]]: compute cost, memory cost and forward inputs
+ """
+ input_tensor = next(
+ filter(
+ lambda x:
+ (x.type == OperationDataType.ARG or x.type == OperationDataType.PARAM) and x.name != 'softmax_dim',
+ args)).data
+ output_tensor = next(filter(lambda x: x.type == OperationDataType.OUTPUT, args)).data
+ softmax_dim = next(filter(lambda x: x.name == 'softmax_dim', args)).data
+
+ # calculate cost
+
+ # calculate compute cost
+ fwd_compute_cost = flop_mapping[torch.ops.aten._softmax.default]([input_tensor], [output_tensor])
+ bwd_compute_cost = flop_mapping[torch.ops.aten._softmax_backward_data.default]([output_tensor], [input_tensor])
+
+ compute_cost = TrainCycleItem(fwd=fwd_compute_cost, bwd=bwd_compute_cost, total=fwd_compute_cost + bwd_compute_cost)
+
+ # calculate memory cost
+ # NOTE: currently in SPMD solver we always believe that there will be a new tensor created in forward
+ fwd_memory_cost = MemoryCost(activation=activation_size([input_tensor, output_tensor]),
+ parameter=0,
+ temp=0,
+ buffer=0)
+ bwd_memory_cost = MemoryCost(activation=activation_size(input_tensor),
+ parameter=0,
+ temp=activation_size(input_tensor),
+ buffer=0)
+
+ # total cost is the sum of forward and backward cost
+ total_cost = MemoryCost(activation=fwd_memory_cost.activation + bwd_memory_cost.activation,
+ parameter=fwd_memory_cost.parameter + bwd_memory_cost.parameter,
+ temp=fwd_memory_cost.temp + bwd_memory_cost.temp,
+ buffer=fwd_memory_cost.buffer + bwd_memory_cost.buffer)
+
+ memory_cost = TrainCycleItem(fwd=fwd_memory_cost, bwd=bwd_memory_cost, total=total_cost)
+
+ # store fwd_in, fwd_buffer, fwd_out
+ fwd_in = []
+ fwd_buffer = [torch.zeros_like(output_tensor, device='meta')]
+ fwd_out = [torch.zeros_like(output_tensor, device='meta')]
+
+ return compute_cost, memory_cost, fwd_in, fwd_buffer, fwd_out
|
{"golden_diff": "diff --git a/colossalai/auto_parallel/meta_profiler/meta_registry/activation.py b/colossalai/auto_parallel/meta_profiler/meta_registry/activation.py\n--- a/colossalai/auto_parallel/meta_profiler/meta_registry/activation.py\n+++ b/colossalai/auto_parallel/meta_profiler/meta_registry/activation.py\n@@ -72,3 +72,53 @@\n fwd_out = [torch.zeros_like(output_tensor, device='meta')]\n \n return compute_cost, memory_cost, fwd_in, fwd_buffer, fwd_out\n+\n+\n+@meta_register.register(torch.nn.Softmax)\n+@meta_register.register(torch.nn.functional.softmax)\n+def softmax_meta_info(*args, **kwargs) -> Tuple[TrainCycleItem, TrainCycleItem, List[torch.Tensor]]:\n+ \"\"\"torch.nn.Softmax metainfo generator\n+ Returns:\n+ Tuple[TrainCycleItem, TrainCycleItem, List[torch.Tensor]]: compute cost, memory cost and forward inputs\n+ \"\"\"\n+ input_tensor = next(\n+ filter(\n+ lambda x:\n+ (x.type == OperationDataType.ARG or x.type == OperationDataType.PARAM) and x.name != 'softmax_dim',\n+ args)).data\n+ output_tensor = next(filter(lambda x: x.type == OperationDataType.OUTPUT, args)).data\n+ softmax_dim = next(filter(lambda x: x.name == 'softmax_dim', args)).data\n+\n+ # calculate cost\n+\n+ # calculate compute cost\n+ fwd_compute_cost = flop_mapping[torch.ops.aten._softmax.default]([input_tensor], [output_tensor])\n+ bwd_compute_cost = flop_mapping[torch.ops.aten._softmax_backward_data.default]([output_tensor], [input_tensor])\n+\n+ compute_cost = TrainCycleItem(fwd=fwd_compute_cost, bwd=bwd_compute_cost, total=fwd_compute_cost + bwd_compute_cost)\n+\n+ # calculate memory cost\n+ # NOTE: currently in SPMD solver we always believe that there will be a new tensor created in forward\n+ fwd_memory_cost = MemoryCost(activation=activation_size([input_tensor, output_tensor]),\n+ parameter=0,\n+ temp=0,\n+ buffer=0)\n+ bwd_memory_cost = MemoryCost(activation=activation_size(input_tensor),\n+ parameter=0,\n+ temp=activation_size(input_tensor),\n+ buffer=0)\n+\n+ # total cost is the sum of forward and backward cost\n+ total_cost = MemoryCost(activation=fwd_memory_cost.activation + bwd_memory_cost.activation,\n+ parameter=fwd_memory_cost.parameter + bwd_memory_cost.parameter,\n+ temp=fwd_memory_cost.temp + bwd_memory_cost.temp,\n+ buffer=fwd_memory_cost.buffer + bwd_memory_cost.buffer)\n+\n+ memory_cost = TrainCycleItem(fwd=fwd_memory_cost, bwd=bwd_memory_cost, total=total_cost)\n+\n+ # store fwd_in, fwd_buffer, fwd_out\n+ fwd_in = []\n+ fwd_buffer = [torch.zeros_like(output_tensor, device='meta')]\n+ fwd_out = [torch.zeros_like(output_tensor, device='meta')]\n+\n+ return compute_cost, memory_cost, fwd_in, fwd_buffer, fwd_out\n", "issue": "[tensor] fix some unittests\n\n[tensor] fix some unittests\n\n[tensor] fix some unittests\n\n[FEATURE]: Patch meta information of `torch.nn.functional.softmax()`\nThis is a part of issue #2628, we will patch meta information of `torch.nn.functional.softmax()`\n", "before_files": [{"content": "from typing import List, Tuple\n\nimport torch\n\nfrom colossalai.auto_parallel.tensor_shard.sharding_strategy import MemoryCost, OperationDataType, TrainCycleItem\nfrom colossalai.fx.profiler.memory_utils import activation_size\nfrom colossalai.fx.profiler.opcount import flop_mapping\n\nfrom ..registry import meta_register\n\n__all__ = [\"relu_meta_info\"]\n\n\n@meta_register.register(torch.nn.ReLU)\ndef relu_meta_info(*args, **kwargs) -> Tuple[TrainCycleItem, TrainCycleItem, List[torch.Tensor]]:\n \"\"\"torch.nn.ReLU metainfo generator\n The aten graph of torch.nn.ReLU is\n 
graph():\n %input_2 : [#users=1] = placeholder[target=placeholder](default=)\n %relu_default : [#users=2] = call_function[target=torch.ops.aten.relu.default](args = (%input_2,), kwargs = {})\n %zeros_like_default : [#users=1] = call_function[target=torch.ops.aten.zeros_like.default](args = (%relu_default,), kwargs = {dtype: None, layout: None, device: None, pin_memory: None})\n %detach_default : [#users=1] = call_function[target=torch.ops.aten.detach.default](args = (%relu_default,), kwargs = {})\n %threshold_backward_default : [#users=1] = call_function[target=torch.ops.aten.threshold_backward.default](args = (%zeros_like_default, %detach_default, None), kwargs = {})\n %detach_default_1 : [#users=1] = call_function[target=torch.ops.aten.detach.default](args = (%threshold_backward_default,), kwargs = {})\n %detach_default_2 : [#users=0] = call_function[target=torch.ops.aten.detach.default](args = (%detach_default_1,), kwargs = {})\n\n Returns:\n Tuple[TrainCycleItem, TrainCycleItem, List[torch.Tensor]]: compute cost, memory cost and forward inputs\n \"\"\"\n\n input_tensor = args[0].data\n output_tensor = next(filter(lambda x: x.type == OperationDataType.OUTPUT, args)).data\n is_inplace = kwargs.get(\"inplace\", False)\n\n # construct input args for forward\n fwd_in_args = [input_tensor]\n\n # construct input args for backward\n bwd_in_args = [output_tensor]\n\n # calculate cost\n # the fwd op with compute cost is relu.default\n # the bwd op with compute cost is threshold_backward\n\n # calculate compute cost\n fwd_compute_cost = flop_mapping[torch.ops.aten.relu.default](fwd_in_args, (output_tensor,))\n bwd_compute_cost = flop_mapping[torch.ops.aten.threshold_backward.default](bwd_in_args, (input_tensor,))\n compute_cost = TrainCycleItem(fwd=fwd_compute_cost, bwd=bwd_compute_cost, total=fwd_compute_cost + bwd_compute_cost)\n\n # calculate memory cost\n # NOTE: the inplace ReLU don't have forward memory cost\n # NOTE: currently in SPMD solver we always believe that there will be a new tensor created in forward\n fwd_memory_cost = MemoryCost(\n activation=activation_size(input_tensor) if is_inplace else activation_size([output_tensor, input_tensor]),\n parameter=0,\n temp=0,\n buffer=0)\n\n bwd_memory_cost = MemoryCost(activation=activation_size(input_tensor), parameter=0, temp=0, buffer=0)\n\n # total cost is the sum of forward and backward cost\n total_cost = MemoryCost(activation=fwd_memory_cost.activation + bwd_memory_cost.activation,\n parameter=fwd_memory_cost.parameter + bwd_memory_cost.parameter)\n\n memory_cost = TrainCycleItem(fwd=fwd_memory_cost, bwd=bwd_memory_cost, total=total_cost)\n\n # store fwd_in, fwd_buffer, fwd_out\n # NOTE: It might seems a little bit weird here, we just want to align it with the older version\n # of MetaInfoProp. In the future we might modify this part to make it clearer.\n fwd_in = []\n fwd_buffer = [torch.zeros_like(output_tensor, device='meta')]\n fwd_out = [torch.zeros_like(output_tensor, device='meta')]\n\n return compute_cost, memory_cost, fwd_in, fwd_buffer, fwd_out\n", "path": "colossalai/auto_parallel/meta_profiler/meta_registry/activation.py"}]}
| 1,663 | 692 |
gh_patches_debug_37193 | rasdani/github-patches | git_diff | fidals__shopelectro-233 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Change 1C export algo for Tags
This task concerns only the tags file:
shopelectro/management/commands/_update_catalog/update_tags.py

On 27.11.18 an unpleasant incident happened.
A content manager uploaded several tags for garlands, but they disappeared after our export ran.
Right now the Tag export simply wipes everything in the database and re-creates it from 1C. We are changing this policy slightly.

Implement the following algorithm (a minimal sketch follows the list):
- every Tag that comes in from 1C is written to the database, overwriting what is there
- every Tag that exists in the database but is not in 1C is left untouched
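
A minimal sketch of the upsert-only flow, assuming Django's `update_or_create`; `upsert_tags` is an illustrative name, not the actual command code:

```python
from django.db import transaction

from shopelectro.models import Tag, TagGroup


@transaction.atomic
def upsert_tags(group_data):  # hypothetical helper, not the actual command code
    for group_uuid, data in group_data.items():
        group_defaults = {k: v for k, v in data.items() if k != 'tags'}
        # everything that arrived from 1C overwrites what is already in the database
        group, _ = TagGroup.objects.update_or_create(uuid=group_uuid, defaults=group_defaults)
        for tag_uuid, tag_data in data['tags'].items():
            Tag.objects.update_or_create(uuid=tag_uuid, defaults={**tag_data, 'group': group})
    # note: no delete step, so tags that exist only in the database are left untouched
```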
</issue>
<code>
[start of shopelectro/management/commands/_update_catalog/utils.py]
1 import glob
2 import logging
3 import os
4 import shutil
5 import subprocess
6 import time
7 from contextlib import contextmanager
8 from itertools import chain
9 from typing import Iterator, Dict
10 from uuid import UUID
11 from xml.etree import ElementTree
12
13 import requests
14 from django.conf import settings
15
16
17 logger = logging.getLogger(__name__)
18 Data = Dict[str, str]
19 NOT_SAVE_TEMPLATE = '{entity} with name="{name}" has no {field}. It\'ll not be' \
20 ' saved'
21
22
23 def is_correct_uuid(uuid_):
24 try:
25 val = UUID(uuid_)
26 except (ValueError, TypeError):
27 return False
28 return str(val) == uuid_
29
30
31 class XmlFile:
32
33 namespace = '{urn:1C.ru:commerceml_2}'
34
35 def __init__(self, fetch_callback, xml_path_pattern, xpath_queries,
36 extra_options=None):
37 self.fetch_callback = fetch_callback
38 self.xml_path_pattern = xml_path_pattern
39 self.xpath_queries = xpath_queries
40 self.extra_options = extra_options or {}
41
42 @property
43 def parsed_files(self):
44 """Get parsed xml files, that matched the path pattern."""
45 xml_files = glob.glob(os.path.join(
46 settings.ASSETS_DIR, self.xml_path_pattern
47 ))
48 assert xml_files, 'Files on path {} does not exist.'.format(
49 self.xml_path_pattern
50 )
51 return [ElementTree.parse(file) for file in xml_files]
52
53 @property
54 def xpaths(self):
55 """Get xpath queries for xml."""
56 return {
57 name: query.format(self.namespace)
58 for name, query in self.xpath_queries.items()
59 }
60
61 def get_data(self) -> Iterator:
62 """
63 Get data from xml files.
64
65 Example files with products names or prices.
66 """
67 return chain.from_iterable(
68 self.fetch_callback(file, self)
69 for file in self.parsed_files
70 )
71
72
73 @contextmanager
74 def collect_errors(error_types: tuple):
75 errors = []
76
77 @contextmanager
78 def collect():
79 try:
80 yield
81 except error_types as error:
82 errors.append(error)
83 yield collect
84 if errors:
85 raise errors[0]
86
87
88 @contextmanager
89 def download_catalog(destination):
90 """Download catalog's xml files and delete after handle them."""
91 wget_command = (
92 'wget -r -P {} ftp://{}:{}@{}/webdata'
93 ' 2>&1 | grep "время\|time\|Downloaded"'.format(
94 destination,
95 settings.FTP_USER,
96 settings.FTP_PASS,
97 settings.FTP_IP,
98 )
99 )
100
101 subprocess.run(wget_command, shell=True)
102 assert os.path.exists(os.path.join(
103 destination, settings.FTP_IP)), 'Files do not downloaded...'
104 logger.info('Download catalog - completed...')
105
106 try:
107 yield
108 finally:
109 # remove downloaded data
110 shutil.rmtree(os.path.join(destination, settings.FTP_IP))
111
112
113 def report(error):
114 report_url = getattr(settings, 'SLACK_REPORT_URL', None)
115 if report_url is not None:
116 requests.post(
117 url=report_url,
118 json={
119 'text': '*Не удалось обновить каталог Shopelectro.*\n'
120 '*Время*: {}\n'
121 '*Ошибка*: {}'.format(time.ctime(), error),
122 }
123 )
124
[end of shopelectro/management/commands/_update_catalog/utils.py]
[start of shopelectro/management/commands/_update_catalog/update_tags.py]
1 import logging
2 from copy import deepcopy
3 from itertools import chain
4 from typing import Iterator, Dict
5 from xml.etree.ElementTree import Element
6
7 from django.db import transaction
8
9 from shopelectro.management.commands._update_catalog.utils import (
10 XmlFile, is_correct_uuid, UUID, Data,
11 )
12 from shopelectro.models import Tag, TagGroup
13
14
15 logger = logging.getLogger(__name__)
16
17
18 def fetch_tags(root: Element, config: XmlFile):
19 def get_uuid_name_pair(
20 element: Element,
21 uuid_xpath: str,
22 name_xpath: str,
23 ):
24 uuid = element.find(uuid_xpath).text
25 name = element.find(name_xpath).text
26
27 return uuid, name
28
29 tag_groups = root.findall(config.xpaths['tag_groups'])
30 for group in tag_groups:
31 group_uuid, group_name = get_uuid_name_pair(
32 group,
33 config.xpaths['tag_group_uuid'],
34 config.xpaths['tag_group_name'],
35 )
36
37 tags = group.findall(config.xpaths['tags'])
38 tags_data = (
39 get_uuid_name_pair(
40 tag,
41 config.xpaths['tag_uuid'],
42 config.xpaths['tag_name'],
43 ) for tag in tags
44 )
45
46 yield group_uuid, {
47 'name': group_name,
48 'tags_data': tags_data,
49 }
50
51 tag_file = XmlFile(
52 fetch_callback=fetch_tags,
53 xml_path_pattern='**/webdata/**/properties/**/import*.xml',
54 xpath_queries={
55 'tag_groups': './/{}Свойства/',
56 'tag_group_uuid': '.{}Ид',
57 'tag_group_name': '.{}Наименование',
58 'tags': '.{}ВариантыЗначений/',
59 'tag_name': '.{}Значение',
60 'tag_uuid': '.{}ИдЗначения',
61 },
62 )
63
64
65 @transaction.atomic
66 def create_or_update(data: Dict[UUID, Data]):
67 group_data = deepcopy(data)
68
69 created_groups_count = 0
70 created_tags_count = 0
71
72 for group_uuid, data_ in group_data.items():
73 tags = data_.pop('tags')
74
75 group, group_created = TagGroup.objects.update_or_create(
76 uuid=group_uuid, defaults=data_
77 )
78
79 created_groups_count += int(group_created)
80
81 for tag_uuid, tag_data in tags.items():
82 _, tag_created = Tag.objects.update_or_create(
83 uuid=tag_uuid,
84 defaults={**tag_data, 'group': group}
85 )
86
87 created_tags_count += int(tag_created)
88
89 logger.info(f'{created_groups_count} tag groups were created.')
90 logger.info(f'{created_tags_count} tags were created.')
91
92
93 @transaction.atomic
94 def delete(group_data: Dict[UUID, Data]):
95 group_data = deepcopy(group_data)
96
97 group_uuids = group_data.keys()
98 tag_uuids = set(chain.from_iterable(
99 data['tags'].keys()
100 for data in group_data.values()
101 ))
102
103 if not (group_uuids and tag_uuids):
104 return
105
106 group_count, _ = TagGroup.objects.exclude(uuid__in=group_uuids).delete()
107 tag_count, _ = Tag.objects.exclude(uuid__in=tag_uuids).delete()
108
109 logger.info(f'{group_count} tag groups and {tag_count} tags were deleted.')
110
111
112 def prepare_data(group_data: Iterator) -> Dict[UUID, Data]:
113 def assembly_structure(group_uuid: str, group_data_: dict):
114 tags_data = group_data_.pop('tags_data', [])
115 tags = {
116 tag_uuid: {'name': tag_name}
117 for tag_uuid, tag_name in tags_data
118 if is_correct_uuid(tag_uuid)
119 }
120
121 return (
122 group_uuid, {
123 **group_data_,
124 'tags': tags
125 }
126 )
127
128 return dict(
129 assembly_structure(group_uuid, data)
130 for group_uuid, data in group_data
131 if is_correct_uuid(group_uuid)
132 )
133
134
135 def main(*args, **kwargs):
136 cleared_group_data = prepare_data(tag_file.get_data())
137 create_or_update(cleared_group_data)
138 delete(cleared_group_data)
139
[end of shopelectro/management/commands/_update_catalog/update_tags.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/shopelectro/management/commands/_update_catalog/update_tags.py b/shopelectro/management/commands/_update_catalog/update_tags.py
--- a/shopelectro/management/commands/_update_catalog/update_tags.py
+++ b/shopelectro/management/commands/_update_catalog/update_tags.py
@@ -1,13 +1,12 @@
import logging
from copy import deepcopy
-from itertools import chain
from typing import Iterator, Dict
from xml.etree.ElementTree import Element
from django.db import transaction
from shopelectro.management.commands._update_catalog.utils import (
- XmlFile, is_correct_uuid, UUID, Data,
+ Data, is_correct_uuid, UUID_TYPE, XmlFile
)
from shopelectro.models import Tag, TagGroup
@@ -63,7 +62,7 @@
@transaction.atomic
-def create_or_update(data: Dict[UUID, Data]):
+def create_or_update(data: Dict[UUID_TYPE, Data]):
group_data = deepcopy(data)
created_groups_count = 0
@@ -90,26 +89,7 @@
logger.info(f'{created_tags_count} tags were created.')
-@transaction.atomic
-def delete(group_data: Dict[UUID, Data]):
- group_data = deepcopy(group_data)
-
- group_uuids = group_data.keys()
- tag_uuids = set(chain.from_iterable(
- data['tags'].keys()
- for data in group_data.values()
- ))
-
- if not (group_uuids and tag_uuids):
- return
-
- group_count, _ = TagGroup.objects.exclude(uuid__in=group_uuids).delete()
- tag_count, _ = Tag.objects.exclude(uuid__in=tag_uuids).delete()
-
- logger.info(f'{group_count} tag groups and {tag_count} tags were deleted.')
-
-
-def prepare_data(group_data: Iterator) -> Dict[UUID, Data]:
+def prepare_data(group_data: Iterator) -> Dict[UUID_TYPE, Data]:
def assembly_structure(group_uuid: str, group_data_: dict):
tags_data = group_data_.pop('tags_data', [])
tags = {
@@ -135,4 +115,3 @@
def main(*args, **kwargs):
cleared_group_data = prepare_data(tag_file.get_data())
create_or_update(cleared_group_data)
- delete(cleared_group_data)
diff --git a/shopelectro/management/commands/_update_catalog/utils.py b/shopelectro/management/commands/_update_catalog/utils.py
--- a/shopelectro/management/commands/_update_catalog/utils.py
+++ b/shopelectro/management/commands/_update_catalog/utils.py
@@ -15,7 +15,8 @@
logger = logging.getLogger(__name__)
-Data = Dict[str, str]
+UUID_TYPE = str
+Data = Dict[str, Dict[str, dict]]
NOT_SAVE_TEMPLATE = '{entity} with name="{name}" has no {field}. It\'ll not be' \
' saved'
|
{"golden_diff": "diff --git a/shopelectro/management/commands/_update_catalog/update_tags.py b/shopelectro/management/commands/_update_catalog/update_tags.py\n--- a/shopelectro/management/commands/_update_catalog/update_tags.py\n+++ b/shopelectro/management/commands/_update_catalog/update_tags.py\n@@ -1,13 +1,12 @@\n import logging\n from copy import deepcopy\n-from itertools import chain\n from typing import Iterator, Dict\n from xml.etree.ElementTree import Element\n \n from django.db import transaction\n \n from shopelectro.management.commands._update_catalog.utils import (\n- XmlFile, is_correct_uuid, UUID, Data,\n+ Data, is_correct_uuid, UUID_TYPE, XmlFile\n )\n from shopelectro.models import Tag, TagGroup\n \n@@ -63,7 +62,7 @@\n \n \n @transaction.atomic\n-def create_or_update(data: Dict[UUID, Data]):\n+def create_or_update(data: Dict[UUID_TYPE, Data]):\n group_data = deepcopy(data)\n \n created_groups_count = 0\n@@ -90,26 +89,7 @@\n logger.info(f'{created_tags_count} tags were created.')\n \n \[email protected]\n-def delete(group_data: Dict[UUID, Data]):\n- group_data = deepcopy(group_data)\n-\n- group_uuids = group_data.keys()\n- tag_uuids = set(chain.from_iterable(\n- data['tags'].keys()\n- for data in group_data.values()\n- ))\n-\n- if not (group_uuids and tag_uuids):\n- return\n-\n- group_count, _ = TagGroup.objects.exclude(uuid__in=group_uuids).delete()\n- tag_count, _ = Tag.objects.exclude(uuid__in=tag_uuids).delete()\n-\n- logger.info(f'{group_count} tag groups and {tag_count} tags were deleted.')\n-\n-\n-def prepare_data(group_data: Iterator) -> Dict[UUID, Data]:\n+def prepare_data(group_data: Iterator) -> Dict[UUID_TYPE, Data]:\n def assembly_structure(group_uuid: str, group_data_: dict):\n tags_data = group_data_.pop('tags_data', [])\n tags = {\n@@ -135,4 +115,3 @@\n def main(*args, **kwargs):\n cleared_group_data = prepare_data(tag_file.get_data())\n create_or_update(cleared_group_data)\n- delete(cleared_group_data)\ndiff --git a/shopelectro/management/commands/_update_catalog/utils.py b/shopelectro/management/commands/_update_catalog/utils.py\n--- a/shopelectro/management/commands/_update_catalog/utils.py\n+++ b/shopelectro/management/commands/_update_catalog/utils.py\n@@ -15,7 +15,8 @@\n \n \n logger = logging.getLogger(__name__)\n-Data = Dict[str, str]\n+UUID_TYPE = str\n+Data = Dict[str, Dict[str, dict]]\n NOT_SAVE_TEMPLATE = '{entity} with name=\"{name}\" has no {field}. 
It\\'ll not be' \\\n ' saved'\n", "issue": "Change 1C export algo for Tags\n\u0412 \u044d\u0442\u043e\u0439 \u0437\u0430\u0434\u0430\u0447\u0435 \u0440\u0435\u0447\u044c \u0438\u0434\u0451\u0442 \u0442\u043e\u043b\u044c\u043a\u043e \u043e \u0444\u0430\u0439\u043b\u0435 \u0442\u0435\u0433\u043e\u0432:\r\nshopelectro/management/commands/_update_catalog/update_tags.py\r\n\r\n27.11.18 \u041f\u0440\u043e\u0438\u0437\u043e\u0448\u043b\u0430 \u043d\u0435\u043f\u0440\u0438\u044f\u0442\u043d\u0430\u044f \u0438\u0441\u0442\u043e\u0440\u0438\u044f.\r\n\u041a\u043e\u043d\u0442\u0435\u043d\u0442\u0449\u0438\u043a \u0432\u044b\u0433\u0440\u0443\u0437\u0438\u043b \u043d\u0435\u0441-\u043a\u043e \u0442\u0435\u0433\u043e\u0432 \u0434\u043b\u044f \u0433\u0438\u0440\u043b\u044f\u043d\u0434, \u043d\u043e \u043e\u043d\u0438 \u043f\u0440\u043e\u043f\u0430\u043b\u0438 \u043e\u0442 \u043d\u0430\u0448\u0435\u0439 \u0432\u044b\u0433\u0440\u0443\u0437\u043a\u0438.\r\n\u0421\u0435\u0439\u0447\u0430\u0441 \u0432\u044b\u0433\u0440\u0443\u0437\u043a\u0430 \u0422\u0435\u0433\u043e\u0432 \u043f\u0440\u043e\u0441\u0442\u043e \u0441\u043d\u043e\u0441\u0438\u0442 \u0432\u0441\u0451 \u0447\u0442\u043e \u0435\u0441\u0442\u044c \u0438 \u0437\u0430\u043d\u043e\u0432\u043e \u043d\u0430\u043a\u0430\u0442\u044b\u0432\u0430\u0435\u0442 \u0438\u0437 1\u0421. \u041c\u044b \u043d\u0435\u043c\u043d\u043e\u0433\u043e \u0438\u0437\u043c\u0435\u043d\u0438\u043c \u044d\u0442\u0443 \u043f\u043e\u043b\u0438\u0442\u0438\u043a\u0443.\r\n\r\n\u0420\u0435\u0430\u043b\u0438\u0437\u0443\u0439 \u0442\u0430\u043a\u043e\u0439 \u0430\u043b\u0433\u043e\u0440\u0438\u0442\u043c:\r\n- \u0432\u0441\u0435 \u0422\u0435\u0433\u0438, \u0447\u0442\u043e \u0437\u0430\u0448\u043b\u0438 \u0438\u0437 1\u0421, \u0437\u0430\u043b\u0438\u0432\u0430\u0435\u043c \u0432 \u0431\u0430\u0437\u0443 \u0441 \u043f\u0435\u0440\u0435\u0437\u0430\u043f\u0438\u0441\u044c\u044e\r\n- \u0432\u0441\u0435 \u0422\u0435\u0433\u0438, \u0447\u0442\u043e \u0435\u0441\u0442\u044c \u0432 \u0431\u0430\u0437\u0435, \u043d\u043e \u043d\u0435\u0442 \u0432 1\u0421, \u043d\u0435 \u0442\u0440\u043e\u0433\u0430\u0435\u043c\r\n\r\n\n", "before_files": [{"content": "import glob\nimport logging\nimport os\nimport shutil\nimport subprocess\nimport time\nfrom contextlib import contextmanager\nfrom itertools import chain\nfrom typing import Iterator, Dict\nfrom uuid import UUID\nfrom xml.etree import ElementTree\n\nimport requests\nfrom django.conf import settings\n\n\nlogger = logging.getLogger(__name__)\nData = Dict[str, str]\nNOT_SAVE_TEMPLATE = '{entity} with name=\"{name}\" has no {field}. 
It\\'ll not be' \\\n ' saved'\n\n\ndef is_correct_uuid(uuid_):\n try:\n val = UUID(uuid_)\n except (ValueError, TypeError):\n return False\n return str(val) == uuid_\n\n\nclass XmlFile:\n\n namespace = '{urn:1C.ru:commerceml_2}'\n\n def __init__(self, fetch_callback, xml_path_pattern, xpath_queries,\n extra_options=None):\n self.fetch_callback = fetch_callback\n self.xml_path_pattern = xml_path_pattern\n self.xpath_queries = xpath_queries\n self.extra_options = extra_options or {}\n\n @property\n def parsed_files(self):\n \"\"\"Get parsed xml files, that matched the path pattern.\"\"\"\n xml_files = glob.glob(os.path.join(\n settings.ASSETS_DIR, self.xml_path_pattern\n ))\n assert xml_files, 'Files on path {} does not exist.'.format(\n self.xml_path_pattern\n )\n return [ElementTree.parse(file) for file in xml_files]\n\n @property\n def xpaths(self):\n \"\"\"Get xpath queries for xml.\"\"\"\n return {\n name: query.format(self.namespace)\n for name, query in self.xpath_queries.items()\n }\n\n def get_data(self) -> Iterator:\n \"\"\"\n Get data from xml files.\n\n Example files with products names or prices.\n \"\"\"\n return chain.from_iterable(\n self.fetch_callback(file, self)\n for file in self.parsed_files\n )\n\n\n@contextmanager\ndef collect_errors(error_types: tuple):\n errors = []\n\n @contextmanager\n def collect():\n try:\n yield\n except error_types as error:\n errors.append(error)\n yield collect\n if errors:\n raise errors[0]\n\n\n@contextmanager\ndef download_catalog(destination):\n \"\"\"Download catalog's xml files and delete after handle them.\"\"\"\n wget_command = (\n 'wget -r -P {} ftp://{}:{}@{}/webdata'\n ' 2>&1 | grep \"\u0432\u0440\u0435\u043c\u044f\\|time\\|Downloaded\"'.format(\n destination,\n settings.FTP_USER,\n settings.FTP_PASS,\n settings.FTP_IP,\n )\n )\n\n subprocess.run(wget_command, shell=True)\n assert os.path.exists(os.path.join(\n destination, settings.FTP_IP)), 'Files do not downloaded...'\n logger.info('Download catalog - completed...')\n\n try:\n yield\n finally:\n # remove downloaded data\n shutil.rmtree(os.path.join(destination, settings.FTP_IP))\n\n\ndef report(error):\n report_url = getattr(settings, 'SLACK_REPORT_URL', None)\n if report_url is not None:\n requests.post(\n url=report_url,\n json={\n 'text': '*\u041d\u0435 \u0443\u0434\u0430\u043b\u043e\u0441\u044c \u043e\u0431\u043d\u043e\u0432\u0438\u0442\u044c \u043a\u0430\u0442\u0430\u043b\u043e\u0433 Shopelectro.*\\n'\n '*\u0412\u0440\u0435\u043c\u044f*: {}\\n'\n '*\u041e\u0448\u0438\u0431\u043a\u0430*: {}'.format(time.ctime(), error),\n }\n )\n", "path": "shopelectro/management/commands/_update_catalog/utils.py"}, {"content": "import logging\nfrom copy import deepcopy\nfrom itertools import chain\nfrom typing import Iterator, Dict\nfrom xml.etree.ElementTree import Element\n\nfrom django.db import transaction\n\nfrom shopelectro.management.commands._update_catalog.utils import (\n XmlFile, is_correct_uuid, UUID, Data,\n)\nfrom shopelectro.models import Tag, TagGroup\n\n\nlogger = logging.getLogger(__name__)\n\n\ndef fetch_tags(root: Element, config: XmlFile):\n def get_uuid_name_pair(\n element: Element,\n uuid_xpath: str,\n name_xpath: str,\n ):\n uuid = element.find(uuid_xpath).text\n name = element.find(name_xpath).text\n\n return uuid, name\n\n tag_groups = root.findall(config.xpaths['tag_groups'])\n for group in tag_groups:\n group_uuid, group_name = get_uuid_name_pair(\n group,\n config.xpaths['tag_group_uuid'],\n config.xpaths['tag_group_name'],\n )\n\n tags = 
group.findall(config.xpaths['tags'])\n tags_data = (\n get_uuid_name_pair(\n tag,\n config.xpaths['tag_uuid'],\n config.xpaths['tag_name'],\n ) for tag in tags\n )\n\n yield group_uuid, {\n 'name': group_name,\n 'tags_data': tags_data,\n }\n\ntag_file = XmlFile(\n fetch_callback=fetch_tags,\n xml_path_pattern='**/webdata/**/properties/**/import*.xml',\n xpath_queries={\n 'tag_groups': './/{}\u0421\u0432\u043e\u0439\u0441\u0442\u0432\u0430/',\n 'tag_group_uuid': '.{}\u0418\u0434',\n 'tag_group_name': '.{}\u041d\u0430\u0438\u043c\u0435\u043d\u043e\u0432\u0430\u043d\u0438\u0435',\n 'tags': '.{}\u0412\u0430\u0440\u0438\u0430\u043d\u0442\u044b\u0417\u043d\u0430\u0447\u0435\u043d\u0438\u0439/',\n 'tag_name': '.{}\u0417\u043d\u0430\u0447\u0435\u043d\u0438\u0435',\n 'tag_uuid': '.{}\u0418\u0434\u0417\u043d\u0430\u0447\u0435\u043d\u0438\u044f',\n },\n)\n\n\[email protected]\ndef create_or_update(data: Dict[UUID, Data]):\n group_data = deepcopy(data)\n\n created_groups_count = 0\n created_tags_count = 0\n\n for group_uuid, data_ in group_data.items():\n tags = data_.pop('tags')\n\n group, group_created = TagGroup.objects.update_or_create(\n uuid=group_uuid, defaults=data_\n )\n\n created_groups_count += int(group_created)\n\n for tag_uuid, tag_data in tags.items():\n _, tag_created = Tag.objects.update_or_create(\n uuid=tag_uuid,\n defaults={**tag_data, 'group': group}\n )\n\n created_tags_count += int(tag_created)\n\n logger.info(f'{created_groups_count} tag groups were created.')\n logger.info(f'{created_tags_count} tags were created.')\n\n\[email protected]\ndef delete(group_data: Dict[UUID, Data]):\n group_data = deepcopy(group_data)\n\n group_uuids = group_data.keys()\n tag_uuids = set(chain.from_iterable(\n data['tags'].keys()\n for data in group_data.values()\n ))\n\n if not (group_uuids and tag_uuids):\n return\n\n group_count, _ = TagGroup.objects.exclude(uuid__in=group_uuids).delete()\n tag_count, _ = Tag.objects.exclude(uuid__in=tag_uuids).delete()\n\n logger.info(f'{group_count} tag groups and {tag_count} tags were deleted.')\n\n\ndef prepare_data(group_data: Iterator) -> Dict[UUID, Data]:\n def assembly_structure(group_uuid: str, group_data_: dict):\n tags_data = group_data_.pop('tags_data', [])\n tags = {\n tag_uuid: {'name': tag_name}\n for tag_uuid, tag_name in tags_data\n if is_correct_uuid(tag_uuid)\n }\n\n return (\n group_uuid, {\n **group_data_,\n 'tags': tags\n }\n )\n\n return dict(\n assembly_structure(group_uuid, data)\n for group_uuid, data in group_data\n if is_correct_uuid(group_uuid)\n )\n\n\ndef main(*args, **kwargs):\n cleared_group_data = prepare_data(tag_file.get_data())\n create_or_update(cleared_group_data)\n delete(cleared_group_data)\n", "path": "shopelectro/management/commands/_update_catalog/update_tags.py"}]}
| 2,962 | 656 |
gh_patches_debug_2250
|
rasdani/github-patches
|
git_diff
|
mozilla__bugbug-3850
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Optimize the `bug_to_types()` function to avoid redundant execution of string lowering
The `bug["whiteboard"].lower()` statement is performed multiple times in the function. Instead, we could store the lowercase value in a variable (e.g., `bug_whiteboard`) and reuse it where needed.
Examples:
https://github.com/mozilla/bugbug/blob/0586c6a49732f03781daf43d8726b53e2049e3db/bugbug/models/bugtype.py#L46
https://github.com/mozilla/bugbug/blob/0586c6a49732f03781daf43d8726b53e2049e3db/bugbug/models/bugtype.py#L50
</issue>
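A minimal sketch of the refactoring the issue describes, i.e. computing the lowered whiteboard text once and reusing it; this is an illustration of the pattern only, not the repository's actual patch (which appears as the diff later in this entry):

```python
# Illustrative sketch of the hoisting pattern described in the issue
# (standalone toy version, not the repository's bug_to_types implementation).
def bug_to_types(bug: dict) -> list[str]:
    types = set()

    # Lower the whiteboard text once and reuse the cached value below.
    bug_whiteboard = bug["whiteboard"].lower()

    if "[power" in bug_whiteboard:
        types.add("power")

    if any(
        f"[{text}" in bug_whiteboard  # cached value, no repeated .lower()
        for text in ("client-bounty-form", "sec-survey")
    ):
        types.add("security")

    return list(types)
```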
<code>
[start of bugbug/models/bugtype.py]
1 # -*- coding: utf-8 -*-
2 # This Source Code Form is subject to the terms of the Mozilla Public
3 # License, v. 2.0. If a copy of the MPL was not distributed with this file,
4 # You can obtain one at http://mozilla.org/MPL/2.0/.
5
6 import logging
7 from typing import Iterable, Optional
8
9 import numpy as np
10 import xgboost
11 from sklearn.compose import ColumnTransformer
12 from sklearn.feature_extraction import DictVectorizer
13 from sklearn.multiclass import OneVsRestClassifier
14 from sklearn.pipeline import Pipeline
15
16 from bugbug import bug_features, bugzilla, feature_cleanup, utils
17 from bugbug.model import BugModel
18
19 logger = logging.getLogger(__name__)
20
21 KEYWORD_DICT = {
22 "sec-": "security",
23 "csectype-": "security",
24 "memory-": "memory",
25 "crash": "crash",
26 "crashreportid": "crash",
27 "perf": "performance",
28 "topperf": "performance",
29 "main-thread-io": "performance",
30 "power": "power",
31 }
32 TYPE_LIST = sorted(set(KEYWORD_DICT.values()))
33
34
35 def bug_to_types(
36 bug: bugzilla.BugDict, bug_map: Optional[dict[int, bugzilla.BugDict]] = None
37 ) -> list[str]:
38 types = set()
39
40 bug_whiteboard = bug["whiteboard"].lower()
41
42 if any(
43 f"{whiteboard_text}" in bug_whiteboard
44 for whiteboard_text in ("overhead", "memshrink")
45 ):
46 types.add("memory")
47
48 if "[power" in bug_whiteboard:
49 types.add("power")
50
51 if any(
52 f"[{whiteboard_text}" in bug_whiteboard
53 for whiteboard_text in (
54 "fxperf",
55 "fxperfsize",
56 "snappy",
57 "pdfjs-c-performance",
58 "pdfjs-performance",
59 "sp3",
60 )
61 ):
62 types.add("performance")
63
64 if any(
65 f"[{whiteboard_text}" in bug["whiteboard"].lower()
66 for whiteboard_text in ("client-bounty-form", "sec-survey")
67 ):
68 types.add("security")
69
70 if "cf_performance" in bug and bug["cf_performance"] not in ("---", "?"):
71 types.add("performance")
72
73 if "cf_crash_signature" in bug and bug["cf_crash_signature"] not in ("", "---"):
74 types.add("crash")
75
76 if bug_map is not None:
77 for bug_id in bug["blocks"]:
78 if bug_id not in bug_map:
79 continue
80
81 alias = bug_map[bug_id]["alias"]
82 if alias and alias.startswith("memshrink"):
83 types.add("memory")
84
85 for keyword_start, type in KEYWORD_DICT.items():
86 if any(keyword.startswith(keyword_start) for keyword in bug["keywords"]):
87 types.add(type)
88
89 return list(types)
90
91
92 class BugTypeModel(BugModel):
93 def __init__(self, lemmatization=False, historical=False):
94 BugModel.__init__(self, lemmatization)
95
96 self.calculate_importance = False
97
98 feature_extractors = [
99 bug_features.HasSTR(),
100 bug_features.Severity(),
101 # Ignore keywords that would make the ML completely skewed
102 # (we are going to use them as 100% rules in the evaluation phase).
103 bug_features.Keywords(set(KEYWORD_DICT.keys())),
104 bug_features.IsCoverityIssue(),
105 bug_features.HasCrashSignature(),
106 bug_features.HasURL(),
107 bug_features.HasW3CURL(),
108 bug_features.HasGithubURL(),
109 bug_features.Whiteboard(),
110 bug_features.Patches(),
111 bug_features.Landings(),
112 bug_features.BlockedBugsNumber(),
113 bug_features.EverAffected(),
114 bug_features.AffectedThenUnaffected(),
115 bug_features.Product(),
116 bug_features.Component(),
117 ]
118
119 cleanup_functions = [
120 feature_cleanup.url(),
121 feature_cleanup.fileref(),
122 feature_cleanup.synonyms(),
123 ]
124
125 self.extraction_pipeline = Pipeline(
126 [
127 (
128 "bug_extractor",
129 bug_features.BugExtractor(feature_extractors, cleanup_functions),
130 ),
131 (
132 "union",
133 ColumnTransformer(
134 [
135 ("data", DictVectorizer(), "data"),
136 ("title", self.text_vectorizer(min_df=0.001), "title"),
137 (
138 "first_comment",
139 self.text_vectorizer(min_df=0.001),
140 "first_comment",
141 ),
142 (
143 "comments",
144 self.text_vectorizer(min_df=0.001),
145 "comments",
146 ),
147 ]
148 ),
149 ),
150 ]
151 )
152
153 self.hyperparameter = {"n_jobs": utils.get_physical_cpu_count()}
154 self.clf = OneVsRestClassifier(xgboost.XGBClassifier(**self.hyperparameter))
155
156 def get_labels(self) -> tuple[dict[int, np.ndarray], list[str]]:
157 classes = {}
158
159 bug_map = {bug["id"]: bug for bug in bugzilla.get_bugs()}
160
161 for bug_data in bug_map.values():
162 target = np.zeros(len(TYPE_LIST))
163 for type_ in bug_to_types(bug_data, bug_map):
164 target[TYPE_LIST.index(type_)] = 1
165
166 classes[int(bug_data["id"])] = target
167
168 for type_ in TYPE_LIST:
169 logger.info(
170 "%d %s bugs",
171 sum(
172 1
173 for target in classes.values()
174 if target[TYPE_LIST.index(type_)] == 1
175 ),
176 type_,
177 )
178
179 return classes, TYPE_LIST
180
181 def get_feature_names(self):
182 return self.extraction_pipeline.named_steps["union"].get_feature_names_out()
183
184 def overwrite_classes(
185 self,
186 bugs: Iterable[bugzilla.BugDict],
187 classes: dict[int, np.ndarray],
188 probabilities: bool,
189 ):
190 for i, bug in enumerate(bugs):
191 for type_ in bug_to_types(bug):
192 if probabilities:
193 classes[i][TYPE_LIST.index(type_)] = 1.0
194 else:
195 classes[i][TYPE_LIST.index(type_)] = 1
196
197 return classes
198
[end of bugbug/models/bugtype.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/bugbug/models/bugtype.py b/bugbug/models/bugtype.py
--- a/bugbug/models/bugtype.py
+++ b/bugbug/models/bugtype.py
@@ -62,7 +62,7 @@
types.add("performance")
if any(
- f"[{whiteboard_text}" in bug["whiteboard"].lower()
+ f"[{whiteboard_text}" in bug_whiteboard
for whiteboard_text in ("client-bounty-form", "sec-survey")
):
types.add("security")
|
{"golden_diff": "diff --git a/bugbug/models/bugtype.py b/bugbug/models/bugtype.py\n--- a/bugbug/models/bugtype.py\n+++ b/bugbug/models/bugtype.py\n@@ -62,7 +62,7 @@\n types.add(\"performance\")\n \n if any(\n- f\"[{whiteboard_text}\" in bug[\"whiteboard\"].lower()\n+ f\"[{whiteboard_text}\" in bug_whiteboard\n for whiteboard_text in (\"client-bounty-form\", \"sec-survey\")\n ):\n types.add(\"security\")\n", "issue": "Optimize the `bug_to_types()` functions to avoid redundant the execution of string lowering\nThe `bug[\"whiteboard\"].lower()` statement is performed multiple times in the function. Instead, we could store the lowercase value in a variable (e.g., `bug_whiteboard`) and reuse it where needed.\r\n\r\nExamples:\r\n\r\nhttps://github.com/mozilla/bugbug/blob/0586c6a49732f03781daf43d8726b53e2049e3db/bugbug/models/bugtype.py#L46\r\n\r\nhttps://github.com/mozilla/bugbug/blob/0586c6a49732f03781daf43d8726b53e2049e3db/bugbug/models/bugtype.py#L50\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# This Source Code Form is subject to the terms of the Mozilla Public\n# License, v. 2.0. If a copy of the MPL was not distributed with this file,\n# You can obtain one at http://mozilla.org/MPL/2.0/.\n\nimport logging\nfrom typing import Iterable, Optional\n\nimport numpy as np\nimport xgboost\nfrom sklearn.compose import ColumnTransformer\nfrom sklearn.feature_extraction import DictVectorizer\nfrom sklearn.multiclass import OneVsRestClassifier\nfrom sklearn.pipeline import Pipeline\n\nfrom bugbug import bug_features, bugzilla, feature_cleanup, utils\nfrom bugbug.model import BugModel\n\nlogger = logging.getLogger(__name__)\n\nKEYWORD_DICT = {\n \"sec-\": \"security\",\n \"csectype-\": \"security\",\n \"memory-\": \"memory\",\n \"crash\": \"crash\",\n \"crashreportid\": \"crash\",\n \"perf\": \"performance\",\n \"topperf\": \"performance\",\n \"main-thread-io\": \"performance\",\n \"power\": \"power\",\n}\nTYPE_LIST = sorted(set(KEYWORD_DICT.values()))\n\n\ndef bug_to_types(\n bug: bugzilla.BugDict, bug_map: Optional[dict[int, bugzilla.BugDict]] = None\n) -> list[str]:\n types = set()\n\n bug_whiteboard = bug[\"whiteboard\"].lower()\n\n if any(\n f\"{whiteboard_text}\" in bug_whiteboard\n for whiteboard_text in (\"overhead\", \"memshrink\")\n ):\n types.add(\"memory\")\n\n if \"[power\" in bug_whiteboard:\n types.add(\"power\")\n\n if any(\n f\"[{whiteboard_text}\" in bug_whiteboard\n for whiteboard_text in (\n \"fxperf\",\n \"fxperfsize\",\n \"snappy\",\n \"pdfjs-c-performance\",\n \"pdfjs-performance\",\n \"sp3\",\n )\n ):\n types.add(\"performance\")\n\n if any(\n f\"[{whiteboard_text}\" in bug[\"whiteboard\"].lower()\n for whiteboard_text in (\"client-bounty-form\", \"sec-survey\")\n ):\n types.add(\"security\")\n\n if \"cf_performance\" in bug and bug[\"cf_performance\"] not in (\"---\", \"?\"):\n types.add(\"performance\")\n\n if \"cf_crash_signature\" in bug and bug[\"cf_crash_signature\"] not in (\"\", \"---\"):\n types.add(\"crash\")\n\n if bug_map is not None:\n for bug_id in bug[\"blocks\"]:\n if bug_id not in bug_map:\n continue\n\n alias = bug_map[bug_id][\"alias\"]\n if alias and alias.startswith(\"memshrink\"):\n types.add(\"memory\")\n\n for keyword_start, type in KEYWORD_DICT.items():\n if any(keyword.startswith(keyword_start) for keyword in bug[\"keywords\"]):\n types.add(type)\n\n return list(types)\n\n\nclass BugTypeModel(BugModel):\n def __init__(self, lemmatization=False, historical=False):\n BugModel.__init__(self, lemmatization)\n\n self.calculate_importance = 
False\n\n feature_extractors = [\n bug_features.HasSTR(),\n bug_features.Severity(),\n # Ignore keywords that would make the ML completely skewed\n # (we are going to use them as 100% rules in the evaluation phase).\n bug_features.Keywords(set(KEYWORD_DICT.keys())),\n bug_features.IsCoverityIssue(),\n bug_features.HasCrashSignature(),\n bug_features.HasURL(),\n bug_features.HasW3CURL(),\n bug_features.HasGithubURL(),\n bug_features.Whiteboard(),\n bug_features.Patches(),\n bug_features.Landings(),\n bug_features.BlockedBugsNumber(),\n bug_features.EverAffected(),\n bug_features.AffectedThenUnaffected(),\n bug_features.Product(),\n bug_features.Component(),\n ]\n\n cleanup_functions = [\n feature_cleanup.url(),\n feature_cleanup.fileref(),\n feature_cleanup.synonyms(),\n ]\n\n self.extraction_pipeline = Pipeline(\n [\n (\n \"bug_extractor\",\n bug_features.BugExtractor(feature_extractors, cleanup_functions),\n ),\n (\n \"union\",\n ColumnTransformer(\n [\n (\"data\", DictVectorizer(), \"data\"),\n (\"title\", self.text_vectorizer(min_df=0.001), \"title\"),\n (\n \"first_comment\",\n self.text_vectorizer(min_df=0.001),\n \"first_comment\",\n ),\n (\n \"comments\",\n self.text_vectorizer(min_df=0.001),\n \"comments\",\n ),\n ]\n ),\n ),\n ]\n )\n\n self.hyperparameter = {\"n_jobs\": utils.get_physical_cpu_count()}\n self.clf = OneVsRestClassifier(xgboost.XGBClassifier(**self.hyperparameter))\n\n def get_labels(self) -> tuple[dict[int, np.ndarray], list[str]]:\n classes = {}\n\n bug_map = {bug[\"id\"]: bug for bug in bugzilla.get_bugs()}\n\n for bug_data in bug_map.values():\n target = np.zeros(len(TYPE_LIST))\n for type_ in bug_to_types(bug_data, bug_map):\n target[TYPE_LIST.index(type_)] = 1\n\n classes[int(bug_data[\"id\"])] = target\n\n for type_ in TYPE_LIST:\n logger.info(\n \"%d %s bugs\",\n sum(\n 1\n for target in classes.values()\n if target[TYPE_LIST.index(type_)] == 1\n ),\n type_,\n )\n\n return classes, TYPE_LIST\n\n def get_feature_names(self):\n return self.extraction_pipeline.named_steps[\"union\"].get_feature_names_out()\n\n def overwrite_classes(\n self,\n bugs: Iterable[bugzilla.BugDict],\n classes: dict[int, np.ndarray],\n probabilities: bool,\n ):\n for i, bug in enumerate(bugs):\n for type_ in bug_to_types(bug):\n if probabilities:\n classes[i][TYPE_LIST.index(type_)] = 1.0\n else:\n classes[i][TYPE_LIST.index(type_)] = 1\n\n return classes\n", "path": "bugbug/models/bugtype.py"}]}
| 2,538 | 123 |
gh_patches_debug_4261
|
rasdani/github-patches
|
git_diff
|
Nitrate__Nitrate-406
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Fix call to DurationField.from_db_value() which will be removed in Django 3.0
Lots of such warnings are output.
```
src/tests/xmlrpc/test_testcaseplan.py::TestCasePlanGet::test_get_with_negative_case_id
/testenv/lib/python3.7/site-packages/django/db/models/sql/compiler.py:995: RemovedInDjango30Warning: Remove the context parameter from DurationField.from_db_value(). Support for it will be removed in Django 3.0.
RemovedInDjango30Warning,
```
</issue>
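The conventional forward-compatible fix is to stop declaring the deprecated `context` parameter explicitly and absorb whatever extra arguments Django passes; a sketch under that assumption (illustrative only, the entry's actual patch is the diff further down):

```python
# Sketch of a from_db_value() signature that works on Django 2.x and 3.x
# (illustrative only; mirrors the deprecation warning quoted in the issue).
import datetime

from django.db.models.fields import IntegerField


class DurationField(IntegerField):
    def from_db_value(self, value, *args, **kwargs):
        # Older Django passes (expression, connection, context); Django 3.0+
        # passes (expression, connection). *args/**kwargs accepts both forms.
        if value is None:
            return value
        return datetime.timedelta(seconds=value)
```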
<code>
[start of src/tcms/core/models/fields.py]
1 # -*- coding: utf-8 -*-
2 import datetime
3 import six
4
5 from django.core.exceptions import ValidationError
6 from django.db.models.fields import IntegerField
7 from django.db.models.fields import BooleanField
8
9 from tcms.core.forms.fields import DurationField as DurationFormField
10
11 try:
12 from pymysql.constants import FIELD_TYPE
13 except ImportError:
14 # Refer to tcms/__init__.py for details.
15 pass
16 else:
17 from django.db.backends.mysql.base import django_conversions
18 django_conversions.update({FIELD_TYPE.TIME: None})
19
20
21 class DurationField(IntegerField):
22 """Duration field for test run
23
24 Value is stored as number of seconds in database and presents in Nitrate in
25 timedelta type.
26
27 Value should also be able to be serialized to integer as seconds, and then
28 deserialized from value of seconds.
29 """
30
31 def to_python(self, value):
32 if isinstance(value, six.integer_types):
33 return datetime.timedelta(seconds=value)
34 elif isinstance(value, datetime.timedelta):
35 return value
36 else:
37 raise TypeError('Unable to convert %s to timedelta.' % value)
38
39 def from_db_value(self, value, expression, connection, context):
40 if value is None:
41 return value
42 return datetime.timedelta(seconds=value)
43
44 def get_db_prep_value(self, value, connection, prepared=True):
45 """convert datetime.timedelta to seconds.
46
47 1 day equal to 86400 seconds
48 """
49 if isinstance(value, datetime.timedelta):
50 return value.seconds + (86400 * value.days)
51 else:
52 value = super(DurationField, self).get_db_prep_value(
53 value, connection, prepared)
54 return value
55
56 def formfield(self, form_class=DurationFormField, **kwargs):
57 defaults = {'help_text': 'Enter duration in the format: DDHHMM'}
58 defaults.update(kwargs)
59 return form_class(**defaults)
60
61
62 class NitrateBooleanField(BooleanField):
63 """Custom boolean field to allow accepting arbitrary bool values"""
64
65 def to_python(self, value):
66 if value in (1, '1', 'true', 'True', True):
67 return True
68 if value in (0, '0', 'false', 'False', False):
69 return False
70 raise ValidationError(
71 '{} is not recognized as a bool value.'.format(value))
72
[end of src/tcms/core/models/fields.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/tcms/core/models/fields.py b/src/tcms/core/models/fields.py
--- a/src/tcms/core/models/fields.py
+++ b/src/tcms/core/models/fields.py
@@ -36,7 +36,7 @@
else:
raise TypeError('Unable to convert %s to timedelta.' % value)
- def from_db_value(self, value, expression, connection, context):
+ def from_db_value(self, value, *args, **kwargs):
if value is None:
return value
return datetime.timedelta(seconds=value)
|
{"golden_diff": "diff --git a/src/tcms/core/models/fields.py b/src/tcms/core/models/fields.py\n--- a/src/tcms/core/models/fields.py\n+++ b/src/tcms/core/models/fields.py\n@@ -36,7 +36,7 @@\n else:\n raise TypeError('Unable to convert %s to timedelta.' % value)\n \n- def from_db_value(self, value, expression, connection, context):\n+ def from_db_value(self, value, *args, **kwargs):\n if value is None:\n return value\n return datetime.timedelta(seconds=value)\n", "issue": "Fix call to DurationField.from_db_value() which will be removed in Django 3.0\nLots of such warning are output.\r\n\r\n```\r\nsrc/tests/xmlrpc/test_testcaseplan.py::TestCasePlanGet::test_get_with_negative_case_id\r\n /testenv/lib/python3.7/site-packages/django/db/models/sql/compiler.py:995: RemovedInDjango30Warning: Remove the context parameter from DurationField.from_db_value(). Support for it will be removed in Django 3.0.\r\n RemovedInDjango30Warning,\r\n```\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nimport datetime\nimport six\n\nfrom django.core.exceptions import ValidationError\nfrom django.db.models.fields import IntegerField\nfrom django.db.models.fields import BooleanField\n\nfrom tcms.core.forms.fields import DurationField as DurationFormField\n\ntry:\n from pymysql.constants import FIELD_TYPE\nexcept ImportError:\n # Refer to tcms/__init__.py for details.\n pass\nelse:\n from django.db.backends.mysql.base import django_conversions\n django_conversions.update({FIELD_TYPE.TIME: None})\n\n\nclass DurationField(IntegerField):\n \"\"\"Duration field for test run\n\n Value is stored as number of seconds in database and presents in Nitrate in\n timedelta type.\n\n Value should also be able to be serialized to integer as seconds, and then\n deserialized from value of seconds.\n \"\"\"\n\n def to_python(self, value):\n if isinstance(value, six.integer_types):\n return datetime.timedelta(seconds=value)\n elif isinstance(value, datetime.timedelta):\n return value\n else:\n raise TypeError('Unable to convert %s to timedelta.' % value)\n\n def from_db_value(self, value, expression, connection, context):\n if value is None:\n return value\n return datetime.timedelta(seconds=value)\n\n def get_db_prep_value(self, value, connection, prepared=True):\n \"\"\"convert datetime.timedelta to seconds.\n\n 1 day equal to 86400 seconds\n \"\"\"\n if isinstance(value, datetime.timedelta):\n return value.seconds + (86400 * value.days)\n else:\n value = super(DurationField, self).get_db_prep_value(\n value, connection, prepared)\n return value\n\n def formfield(self, form_class=DurationFormField, **kwargs):\n defaults = {'help_text': 'Enter duration in the format: DDHHMM'}\n defaults.update(kwargs)\n return form_class(**defaults)\n\n\nclass NitrateBooleanField(BooleanField):\n \"\"\"Custom boolean field to allow accepting arbitrary bool values\"\"\"\n\n def to_python(self, value):\n if value in (1, '1', 'true', 'True', True):\n return True\n if value in (0, '0', 'false', 'False', False):\n return False\n raise ValidationError(\n '{} is not recognized as a bool value.'.format(value))\n", "path": "src/tcms/core/models/fields.py"}]}
| 1,286 | 124 |
gh_patches_debug_34925
|
rasdani/github-patches
|
git_diff
|
mosaicml__composer-1174
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Consistency in docs [State, time, callbacks]
I am rewriting a callback that was working on composer 0.5, and it no longer works.
I see that there is a new `Time` class and that `state.epoch` is gone, but the docs still point to the old attribute, for example [here](https://docs.mosaicml.com/en/v0.7.1/trainer/logging.html#logging-additional-data). Also, `Logger` cannot be imported from the top-level `composer` package (only `from composer.loggers import Logger` works).
I am having a hard time creating a custom callback; maybe a tutorial with the new API would help?
The callback is for logging images to wandb:
```python
import wandb
from composer import Callback, State, Logger
class LogPredictions(Callback):
def __init__(self, num_samples=100, seed=1234):
super().__init__()
self.num_samples = num_samples
self.data = []
def eval_batch_end(self, state: State, logger: Logger):
"""Compute predictions per batch and stores them on self.data"""
if state.epoch == state.max_duration: #on last val epoch
if len(self.data) < self.num_samples:
n = self.num_samples
x, y = state.batch_pair
outputs = state.outputs.argmax(-1)
data = [[wandb.Image(x_i), y_i, y_pred] for x_i, y_i, y_pred in list(zip(x[:n], y[:n], outputs[:n]))]
self.data += data
def eval_end(self, state: State, logger: Logger):
"Create a wandb.Table and logs it"
columns = ['image', 'ground truth', 'prediction']
table = wandb.Table(columns=columns, data=self.data[:self.num_samples])
wandb.log({'sample_table':table}, step=int(state.timer.batch))
```
</issue>
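For orientation only, a hedged sketch of how the epoch check in the snippet above might look against the time-based API the issue mentions; the `state.timer.epoch` accessor and the `.value` attribute on `state.max_duration` are assumptions made by analogy with the snippet, not details confirmed by this entry:

```python
# Hypothetical adaptation of the callback's "last epoch" check for the Time API.
# state.timer.epoch and state.max_duration.value are assumed by analogy with the
# snippet above (which already uses state.timer.batch); verify against your version.
from composer.core import Callback, State
from composer.loggers import Logger


class LogPredictionsSketch(Callback):
    def eval_batch_end(self, state: State, logger: Logger) -> None:
        current_epoch = int(state.timer.epoch)     # Time value -> plain int
        last_epoch = state.max_duration.value - 1  # e.g. max_duration='2ep' -> 1
        if current_epoch == last_epoch:
            ...  # collect predictions exactly as in the original snippet
```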
<code>
[start of docs/source/doctest_cleanup.py]
1 # Copyright 2022 MosaicML Composer authors
2 # SPDX-License-Identifier: Apache-2.0
3
4 """Cleanup script that is executed at the end of each doctest."""
5
6 import os
7
8 # variables are defined in doctest_fixtures.py
9 # pyright: reportUndefinedVariable=none
10
11 # tmpdir and cwd were defined in doctest_fixtures.py
12
13 os.chdir(cwd)
14
15 tmpdir.cleanup()
16
[end of docs/source/doctest_cleanup.py]
[start of composer/__init__.py]
1 # Copyright 2022 MosaicML Composer authors
2 # SPDX-License-Identifier: Apache-2.0
3
4 """Composer."""
5
6 from composer import algorithms as algorithms
7 from composer import callbacks as callbacks
8 from composer import datasets as datasets
9 from composer import functional as functional
10 from composer import loggers as loggers
11 from composer import models as models
12 from composer import optim as optim
13 from composer import profiler as profiler
14 from composer import trainer as trainer
15 from composer import utils as utils
16 from composer.core import Algorithm as Algorithm
17 from composer.core import Callback as Callback
18 from composer.core import DataSpec as DataSpec
19 from composer.core import Engine as Engine
20 from composer.core import Evaluator as Evaluator
21 from composer.core import Event as Event
22 from composer.core import State as State
23 from composer.core import Time as Time
24 from composer.core import Timestamp as Timestamp
25 from composer.core import TimeUnit as TimeUnit
26 from composer.core import types as types
27 from composer.models import ComposerModel as ComposerModel
28 from composer.trainer import Trainer as Trainer
29
30 __version__ = '0.7.0'
31
[end of composer/__init__.py]
[start of docs/source/doctest_fixtures.py]
1 # Copyright 2022 MosaicML Composer authors
2 # SPDX-License-Identifier: Apache-2.0
3
4 # disabling general type issues because of monkeypatching
5 #yright: reportGeneralTypeIssues=none
6
7 """Fixtures available in doctests.
8
9 The script is run before any doctests are executed,
10 so all imports and variables are available in any doctest.
11 The output of this setup script does not show up in the documentation.
12 """
13 import os
14 import sys
15 import tempfile
16 from typing import Any
17 from typing import Callable as Callable
18
19 import numpy as np
20 import torch
21 import torch.optim
22 import torch.utils.data
23 from PIL import Image
24 from torch.optim.lr_scheduler import CosineAnnealingLR
25
26 import composer
27 import composer.loggers
28 import composer.loggers.object_store_logger
29 import composer.trainer
30 import composer.trainer.trainer
31 import composer.utils
32 import composer.utils.checkpoint
33 import composer.utils.file_helpers
34 from composer import Trainer
35 from composer.core import Algorithm as Algorithm
36 from composer.core import Callback as Callback
37 from composer.core import DataSpec as DataSpec
38 from composer.core import Engine as Engine
39 from composer.core import Evaluator as Evaluator
40 from composer.core import Event as Event
41 from composer.core import State as State
42 from composer.core import Time as Time
43 from composer.core import Timestamp as Timestamp
44 from composer.core import TimeUnit as TimeUnit
45 from composer.core import types as types
46 from composer.datasets.synthetic import SyntheticBatchPairDataset
47 from composer.loggers import InMemoryLogger as InMemoryLogger
48 from composer.loggers import Logger as Logger
49 from composer.loggers import LogLevel as LogLevel
50 from composer.loggers import ObjectStoreLogger
51 from composer.models import ComposerModel as ComposerModel
52 from composer.optim.scheduler import ConstantScheduler
53 from composer.utils import LibcloudObjectStore
54 from composer.utils import ensure_tuple as ensure_tuple
55
56 # Need to insert the repo root at the beginning of the path, since there may be other modules named `tests`
57 # Assuming that docs generation is running from the `docs` directory
58 _docs_dir = os.path.abspath('.')
59 _repo_root = os.path.dirname(_docs_dir)
60 if sys.path[0] != _repo_root:
61 sys.path.insert(0, _repo_root)
62
63 from tests.common import SimpleModel
64
65 # Change the cwd to be the tempfile, so we don't pollute the documentation source folder
66 tmpdir = tempfile.TemporaryDirectory()
67 cwd = os.path.abspath('.')
68 os.chdir(tmpdir.name)
69
70 num_channels = 3
71 num_classes = 10
72 data_shape = (num_channels, 5, 5)
73
74 Model = SimpleModel
75
76 model = SimpleModel(num_channels, num_classes)
77
78 optimizer = torch.optim.SGD(model.parameters(), lr=0.001)
79
80 scheduler = CosineAnnealingLR(optimizer, T_max=1)
81
82 dataset = SyntheticBatchPairDataset(
83 total_dataset_size=100,
84 data_shape=data_shape,
85 num_classes=num_classes,
86 num_unique_samples_to_create=10,
87 )
88
89 train_dataset = dataset
90 eval_dataset = dataset
91
92 batch_size = 10
93
94 train_dataloader = torch.utils.data.DataLoader(
95 train_dataset,
96 batch_size=batch_size,
97 num_workers=0,
98 pin_memory=False,
99 drop_last=True,
100 )
101
102 eval_dataloader = torch.utils.data.DataLoader(
103 eval_dataset,
104 batch_size=batch_size,
105 num_workers=0,
106 pin_memory=False,
107 drop_last=False,
108 )
109
110 state = State(
111 rank_zero_seed=0,
112 model=model,
113 run_name='run_name',
114 optimizers=optimizer,
115 grad_accum=1,
116 dataloader=train_dataloader,
117 dataloader_label='train',
118 max_duration='1ep',
119 precision='fp32',
120 )
121
122 logger = Logger(state)
123
124 engine = Engine(state, logger)
125
126 image = Image.fromarray(np.random.randint(0, 256, size=(32, 32, 3), dtype=np.uint8))
127
128 # error: "randn" is not a known member of module (reportGeneralTypeIssues)
129 X_example = torch.randn(batch_size, num_channels, 32, 32) # type: ignore
130 # error: "randn" is not a known member of module (reportGeneralTypeIssues)
131 logits = torch.randn(batch_size, num_classes) # type: ignore
132 # error: "randint" is not a known member of module (reportGeneralTypeIssues)
133 y_example = torch.randint(num_classes, (batch_size,)) # type: ignore
134
135
136 def loss_fun(output, target, reduction='none'):
137 """Dummy loss function."""
138 return torch.ones_like(target)
139
140
141 # Patch Trainer __init__ function to replace arguments while preserving type
142 _original_trainer_init = Trainer.__init__
143
144
145 def _new_trainer_init(self, fake_ellipses: None = None, **kwargs: Any):
146 if 'model' not in kwargs:
147 kwargs['model'] = model
148 if 'optimizers' not in kwargs:
149 kwargs['optimizers'] = torch.optim.SGD(kwargs['model'].parameters(), lr=0.01)
150 if 'schedulers' not in kwargs:
151 kwargs['schedulers'] = ConstantScheduler()
152 if 'max_duration' not in kwargs:
153 kwargs['max_duration'] = '1ep'
154 if 'train_dataloader' not in kwargs:
155 kwargs['train_dataloader'] = train_dataloader
156 if 'eval_dataloader' not in kwargs:
157 kwargs['eval_dataloader'] = eval_dataloader
158 if 'progress_bar' not in kwargs:
159 kwargs['progress_bar'] = False # hide tqdm logging
160 if 'log_to_console' not in kwargs:
161 kwargs['log_to_console'] = False # hide console logging
162 _original_trainer_init(self, **kwargs)
163
164
165 Trainer.__init__ = _new_trainer_init
166
167
168 # Do not attempt to validate cloud credentials
169 def _do_not_validate(*args, **kwargs) -> None:
170 pass
171
172
173 composer.loggers.object_store_logger._validate_credentials = _do_not_validate # type: ignore
174
175 # Patch ObjectStoreLogger __init__ function to replace arguments while preserving type
176 _original_objectStoreLogger_init = ObjectStoreLogger.__init__
177
178
179 def _new_objectStoreLogger_init(self, fake_ellipses: None = None, **kwargs: Any):
180 os.makedirs('./object_store', exist_ok=True)
181 kwargs.update(use_procs=False,
182 num_concurrent_uploads=1,
183 object_store_cls=LibcloudObjectStore,
184 object_store_kwargs={
185 'provider': 'local',
186 'container': '.',
187 'provider_kwargs': {
188 'key': os.path.abspath('./object_store'),
189 },
190 })
191 _original_objectStoreLogger_init(self, **kwargs)
192
193
194 ObjectStoreLogger.__init__ = _new_objectStoreLogger_init # type: ignore
195
196 # Patch ObjectStore __init__ function to replace arguments while preserving type
197 _original_libcloudObjectStore_init = LibcloudObjectStore.__init__
198
199
200 def _new_libcloudObjectStore_init(self, fake_ellipses: None = None, **kwargs: Any):
201 os.makedirs('./object_store', exist_ok=True)
202 kwargs.update(
203 provider='local',
204 container='.',
205 provider_kwargs={
206 'key': os.path.abspath('./object_store'),
207 },
208 )
209 _original_libcloudObjectStore_init(self, **kwargs)
210
211
212 LibcloudObjectStore.__init__ = _new_libcloudObjectStore_init # type: ignore
213
[end of docs/source/doctest_fixtures.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/composer/__init__.py b/composer/__init__.py
--- a/composer/__init__.py
+++ b/composer/__init__.py
@@ -3,28 +3,25 @@
"""Composer."""
-from composer import algorithms as algorithms
-from composer import callbacks as callbacks
-from composer import datasets as datasets
-from composer import functional as functional
-from composer import loggers as loggers
-from composer import models as models
-from composer import optim as optim
-from composer import profiler as profiler
-from composer import trainer as trainer
-from composer import utils as utils
-from composer.core import Algorithm as Algorithm
-from composer.core import Callback as Callback
-from composer.core import DataSpec as DataSpec
-from composer.core import Engine as Engine
-from composer.core import Evaluator as Evaluator
-from composer.core import Event as Event
-from composer.core import State as State
-from composer.core import Time as Time
-from composer.core import Timestamp as Timestamp
-from composer.core import TimeUnit as TimeUnit
-from composer.core import types as types
-from composer.models import ComposerModel as ComposerModel
-from composer.trainer import Trainer as Trainer
+from composer.core import Algorithm, Callback, DataSpec, Engine, Evaluator, Event, State, Time, Timestamp, TimeUnit
+from composer.loggers import Logger
+from composer.models import ComposerModel
+from composer.trainer import Trainer
+
+__all__ = [
+ 'Algorithm',
+ 'Callback',
+ 'DataSpec',
+ 'Engine',
+ 'Evaluator',
+ 'Event',
+ 'State',
+ 'Time',
+ 'Timestamp',
+ 'TimeUnit',
+ 'Logger',
+ 'ComposerModel',
+ 'Trainer',
+]
__version__ = '0.7.0'
diff --git a/docs/source/doctest_cleanup.py b/docs/source/doctest_cleanup.py
--- a/docs/source/doctest_cleanup.py
+++ b/docs/source/doctest_cleanup.py
@@ -4,6 +4,7 @@
"""Cleanup script that is executed at the end of each doctest."""
import os
+import shutil
# variables are defined in doctest_fixtures.py
# pyright: reportUndefinedVariable=none
@@ -12,4 +13,7 @@
os.chdir(cwd)
-tmpdir.cleanup()
+try:
+ shutil.rmtree(tmpdir)
+except OSError:
+ pass
diff --git a/docs/source/doctest_fixtures.py b/docs/source/doctest_fixtures.py
--- a/docs/source/doctest_fixtures.py
+++ b/docs/source/doctest_fixtures.py
@@ -63,9 +63,9 @@
from tests.common import SimpleModel
# Change the cwd to be the tempfile, so we don't pollute the documentation source folder
-tmpdir = tempfile.TemporaryDirectory()
+tmpdir = tempfile.mkdtemp()
cwd = os.path.abspath('.')
-os.chdir(tmpdir.name)
+os.chdir(tmpdir)
num_channels = 3
num_classes = 10
|
{"golden_diff": "diff --git a/composer/__init__.py b/composer/__init__.py\n--- a/composer/__init__.py\n+++ b/composer/__init__.py\n@@ -3,28 +3,25 @@\n \n \"\"\"Composer.\"\"\"\n \n-from composer import algorithms as algorithms\n-from composer import callbacks as callbacks\n-from composer import datasets as datasets\n-from composer import functional as functional\n-from composer import loggers as loggers\n-from composer import models as models\n-from composer import optim as optim\n-from composer import profiler as profiler\n-from composer import trainer as trainer\n-from composer import utils as utils\n-from composer.core import Algorithm as Algorithm\n-from composer.core import Callback as Callback\n-from composer.core import DataSpec as DataSpec\n-from composer.core import Engine as Engine\n-from composer.core import Evaluator as Evaluator\n-from composer.core import Event as Event\n-from composer.core import State as State\n-from composer.core import Time as Time\n-from composer.core import Timestamp as Timestamp\n-from composer.core import TimeUnit as TimeUnit\n-from composer.core import types as types\n-from composer.models import ComposerModel as ComposerModel\n-from composer.trainer import Trainer as Trainer\n+from composer.core import Algorithm, Callback, DataSpec, Engine, Evaluator, Event, State, Time, Timestamp, TimeUnit\n+from composer.loggers import Logger\n+from composer.models import ComposerModel\n+from composer.trainer import Trainer\n+\n+__all__ = [\n+ 'Algorithm',\n+ 'Callback',\n+ 'DataSpec',\n+ 'Engine',\n+ 'Evaluator',\n+ 'Event',\n+ 'State',\n+ 'Time',\n+ 'Timestamp',\n+ 'TimeUnit',\n+ 'Logger',\n+ 'ComposerModel',\n+ 'Trainer',\n+]\n \n __version__ = '0.7.0'\ndiff --git a/docs/source/doctest_cleanup.py b/docs/source/doctest_cleanup.py\n--- a/docs/source/doctest_cleanup.py\n+++ b/docs/source/doctest_cleanup.py\n@@ -4,6 +4,7 @@\n \"\"\"Cleanup script that is executed at the end of each doctest.\"\"\"\n \n import os\n+import shutil\n \n # variables are defined in doctest_fixtures.py\n # pyright: reportUndefinedVariable=none\n@@ -12,4 +13,7 @@\n \n os.chdir(cwd)\n \n-tmpdir.cleanup()\n+try:\n+ shutil.rmtree(tmpdir)\n+except OSError:\n+ pass\ndiff --git a/docs/source/doctest_fixtures.py b/docs/source/doctest_fixtures.py\n--- a/docs/source/doctest_fixtures.py\n+++ b/docs/source/doctest_fixtures.py\n@@ -63,9 +63,9 @@\n from tests.common import SimpleModel\n \n # Change the cwd to be the tempfile, so we don't pollute the documentation source folder\n-tmpdir = tempfile.TemporaryDirectory()\n+tmpdir = tempfile.mkdtemp()\n cwd = os.path.abspath('.')\n-os.chdir(tmpdir.name)\n+os.chdir(tmpdir)\n \n num_channels = 3\n num_classes = 10\n", "issue": "Consistency in docs [State, time, callbacks]\nI am re-writing a callback that was working on composer 0.5 and now it does not.\r\nI see that there is a new class `time` and that `state.epoch` is not there anymore, the problem is that the docs still point to that. 
Like [here](https://docs.mosaicml.com/en/v0.7.1/trainer/logging.html#logging-additional-data) or that `Logger` cannot be imported from `composer` (`from composer.loggers import Logger`)\r\nI am having a hard time creating a custom callback, maybe a tutorial with the new API?\r\n\r\nThe callback is for logging images to wandb:\r\n```python\r\nimport wandb\r\nfrom composer import Callback, State, Logger\r\n\r\nclass LogPredictions(Callback):\r\n def __init__(self, num_samples=100, seed=1234):\r\n super().__init__()\r\n self.num_samples = num_samples\r\n self.data = []\r\n \r\n def eval_batch_end(self, state: State, logger: Logger):\r\n \"\"\"Compute predictions per batch and stores them on self.data\"\"\"\r\n if state.epoch == state.max_duration: #on last val epoch\r\n if len(self.data) < self.num_samples:\r\n n = self.num_samples\r\n x, y = state.batch_pair\r\n outputs = state.outputs.argmax(-1)\r\n data = [[wandb.Image(x_i), y_i, y_pred] for x_i, y_i, y_pred in list(zip(x[:n], y[:n], outputs[:n]))]\r\n self.data += data\r\n \r\n def eval_end(self, state: State, logger: Logger):\r\n \"Create a wandb.Table and logs it\"\r\n columns = ['image', 'ground truth', 'prediction']\r\n table = wandb.Table(columns=columns, data=self.data[:self.num_samples])\r\n wandb.log({'sample_table':table}, step=int(state.timer.batch))\r\n```\r\n\n", "before_files": [{"content": "# Copyright 2022 MosaicML Composer authors\n# SPDX-License-Identifier: Apache-2.0\n\n\"\"\"Cleanup script that is executed at the end of each doctest.\"\"\"\n\nimport os\n\n# variables are defined in doctest_fixtures.py\n# pyright: reportUndefinedVariable=none\n\n# tmpdir and cwd were defined in doctest_fixtures.py\n\nos.chdir(cwd)\n\ntmpdir.cleanup()\n", "path": "docs/source/doctest_cleanup.py"}, {"content": "# Copyright 2022 MosaicML Composer authors\n# SPDX-License-Identifier: Apache-2.0\n\n\"\"\"Composer.\"\"\"\n\nfrom composer import algorithms as algorithms\nfrom composer import callbacks as callbacks\nfrom composer import datasets as datasets\nfrom composer import functional as functional\nfrom composer import loggers as loggers\nfrom composer import models as models\nfrom composer import optim as optim\nfrom composer import profiler as profiler\nfrom composer import trainer as trainer\nfrom composer import utils as utils\nfrom composer.core import Algorithm as Algorithm\nfrom composer.core import Callback as Callback\nfrom composer.core import DataSpec as DataSpec\nfrom composer.core import Engine as Engine\nfrom composer.core import Evaluator as Evaluator\nfrom composer.core import Event as Event\nfrom composer.core import State as State\nfrom composer.core import Time as Time\nfrom composer.core import Timestamp as Timestamp\nfrom composer.core import TimeUnit as TimeUnit\nfrom composer.core import types as types\nfrom composer.models import ComposerModel as ComposerModel\nfrom composer.trainer import Trainer as Trainer\n\n__version__ = '0.7.0'\n", "path": "composer/__init__.py"}, {"content": "# Copyright 2022 MosaicML Composer authors\n# SPDX-License-Identifier: Apache-2.0\n\n# disabling general type issues because of monkeypatching\n#yright: reportGeneralTypeIssues=none\n\n\"\"\"Fixtures available in doctests.\n\nThe script is run before any doctests are executed,\nso all imports and variables are available in any doctest.\nThe output of this setup script does not show up in the documentation.\n\"\"\"\nimport os\nimport sys\nimport tempfile\nfrom typing import Any\nfrom typing import Callable as Callable\n\nimport numpy as 
np\nimport torch\nimport torch.optim\nimport torch.utils.data\nfrom PIL import Image\nfrom torch.optim.lr_scheduler import CosineAnnealingLR\n\nimport composer\nimport composer.loggers\nimport composer.loggers.object_store_logger\nimport composer.trainer\nimport composer.trainer.trainer\nimport composer.utils\nimport composer.utils.checkpoint\nimport composer.utils.file_helpers\nfrom composer import Trainer\nfrom composer.core import Algorithm as Algorithm\nfrom composer.core import Callback as Callback\nfrom composer.core import DataSpec as DataSpec\nfrom composer.core import Engine as Engine\nfrom composer.core import Evaluator as Evaluator\nfrom composer.core import Event as Event\nfrom composer.core import State as State\nfrom composer.core import Time as Time\nfrom composer.core import Timestamp as Timestamp\nfrom composer.core import TimeUnit as TimeUnit\nfrom composer.core import types as types\nfrom composer.datasets.synthetic import SyntheticBatchPairDataset\nfrom composer.loggers import InMemoryLogger as InMemoryLogger\nfrom composer.loggers import Logger as Logger\nfrom composer.loggers import LogLevel as LogLevel\nfrom composer.loggers import ObjectStoreLogger\nfrom composer.models import ComposerModel as ComposerModel\nfrom composer.optim.scheduler import ConstantScheduler\nfrom composer.utils import LibcloudObjectStore\nfrom composer.utils import ensure_tuple as ensure_tuple\n\n# Need to insert the repo root at the beginning of the path, since there may be other modules named `tests`\n# Assuming that docs generation is running from the `docs` directory\n_docs_dir = os.path.abspath('.')\n_repo_root = os.path.dirname(_docs_dir)\nif sys.path[0] != _repo_root:\n sys.path.insert(0, _repo_root)\n\nfrom tests.common import SimpleModel\n\n# Change the cwd to be the tempfile, so we don't pollute the documentation source folder\ntmpdir = tempfile.TemporaryDirectory()\ncwd = os.path.abspath('.')\nos.chdir(tmpdir.name)\n\nnum_channels = 3\nnum_classes = 10\ndata_shape = (num_channels, 5, 5)\n\nModel = SimpleModel\n\nmodel = SimpleModel(num_channels, num_classes)\n\noptimizer = torch.optim.SGD(model.parameters(), lr=0.001)\n\nscheduler = CosineAnnealingLR(optimizer, T_max=1)\n\ndataset = SyntheticBatchPairDataset(\n total_dataset_size=100,\n data_shape=data_shape,\n num_classes=num_classes,\n num_unique_samples_to_create=10,\n)\n\ntrain_dataset = dataset\neval_dataset = dataset\n\nbatch_size = 10\n\ntrain_dataloader = torch.utils.data.DataLoader(\n train_dataset,\n batch_size=batch_size,\n num_workers=0,\n pin_memory=False,\n drop_last=True,\n)\n\neval_dataloader = torch.utils.data.DataLoader(\n eval_dataset,\n batch_size=batch_size,\n num_workers=0,\n pin_memory=False,\n drop_last=False,\n)\n\nstate = State(\n rank_zero_seed=0,\n model=model,\n run_name='run_name',\n optimizers=optimizer,\n grad_accum=1,\n dataloader=train_dataloader,\n dataloader_label='train',\n max_duration='1ep',\n precision='fp32',\n)\n\nlogger = Logger(state)\n\nengine = Engine(state, logger)\n\nimage = Image.fromarray(np.random.randint(0, 256, size=(32, 32, 3), dtype=np.uint8))\n\n# error: \"randn\" is not a known member of module (reportGeneralTypeIssues)\nX_example = torch.randn(batch_size, num_channels, 32, 32) # type: ignore\n# error: \"randn\" is not a known member of module (reportGeneralTypeIssues)\nlogits = torch.randn(batch_size, num_classes) # type: ignore\n# error: \"randint\" is not a known member of module (reportGeneralTypeIssues)\ny_example = torch.randint(num_classes, (batch_size,)) # type: 
ignore\n\n\ndef loss_fun(output, target, reduction='none'):\n \"\"\"Dummy loss function.\"\"\"\n return torch.ones_like(target)\n\n\n# Patch Trainer __init__ function to replace arguments while preserving type\n_original_trainer_init = Trainer.__init__\n\n\ndef _new_trainer_init(self, fake_ellipses: None = None, **kwargs: Any):\n if 'model' not in kwargs:\n kwargs['model'] = model\n if 'optimizers' not in kwargs:\n kwargs['optimizers'] = torch.optim.SGD(kwargs['model'].parameters(), lr=0.01)\n if 'schedulers' not in kwargs:\n kwargs['schedulers'] = ConstantScheduler()\n if 'max_duration' not in kwargs:\n kwargs['max_duration'] = '1ep'\n if 'train_dataloader' not in kwargs:\n kwargs['train_dataloader'] = train_dataloader\n if 'eval_dataloader' not in kwargs:\n kwargs['eval_dataloader'] = eval_dataloader\n if 'progress_bar' not in kwargs:\n kwargs['progress_bar'] = False # hide tqdm logging\n if 'log_to_console' not in kwargs:\n kwargs['log_to_console'] = False # hide console logging\n _original_trainer_init(self, **kwargs)\n\n\nTrainer.__init__ = _new_trainer_init\n\n\n# Do not attempt to validate cloud credentials\ndef _do_not_validate(*args, **kwargs) -> None:\n pass\n\n\ncomposer.loggers.object_store_logger._validate_credentials = _do_not_validate # type: ignore\n\n# Patch ObjectStoreLogger __init__ function to replace arguments while preserving type\n_original_objectStoreLogger_init = ObjectStoreLogger.__init__\n\n\ndef _new_objectStoreLogger_init(self, fake_ellipses: None = None, **kwargs: Any):\n os.makedirs('./object_store', exist_ok=True)\n kwargs.update(use_procs=False,\n num_concurrent_uploads=1,\n object_store_cls=LibcloudObjectStore,\n object_store_kwargs={\n 'provider': 'local',\n 'container': '.',\n 'provider_kwargs': {\n 'key': os.path.abspath('./object_store'),\n },\n })\n _original_objectStoreLogger_init(self, **kwargs)\n\n\nObjectStoreLogger.__init__ = _new_objectStoreLogger_init # type: ignore\n\n# Patch ObjectStore __init__ function to replace arguments while preserving type\n_original_libcloudObjectStore_init = LibcloudObjectStore.__init__\n\n\ndef _new_libcloudObjectStore_init(self, fake_ellipses: None = None, **kwargs: Any):\n os.makedirs('./object_store', exist_ok=True)\n kwargs.update(\n provider='local',\n container='.',\n provider_kwargs={\n 'key': os.path.abspath('./object_store'),\n },\n )\n _original_libcloudObjectStore_init(self, **kwargs)\n\n\nLibcloudObjectStore.__init__ = _new_libcloudObjectStore_init # type: ignore\n", "path": "docs/source/doctest_fixtures.py"}]}
| 3,505 | 637 |
gh_patches_debug_2642
|
rasdani/github-patches
|
git_diff
|
sunpy__sunpy-3676
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Removing astropy_helpers section in CONTRIBUTING.rst
<!-- These comments are hidden when you submit the issue so you do not need to remove them!
Please be sure to check out our contributing guidelines: https://github.com/sunpy/sunpy/blob/master/CONTRIBUTING.rst
Please be sure to check out our code of conduct:
https://github.com/sunpy/sunpy/blob/master/CODE_OF_CONDUCT.rst -->
<!-- Please have a search on our GitHub repository to see if a similar issue has already been posted.
If a similar issue is closed, have a quick look to see if you are satisfied by the resolution.
If not please go ahead and open an issue! -->
### Description
<!-- Provide a general description of the bug. -->
As of PR https://github.com/sunpy/sunpy/pull/3598, sunpy no longer needs `astropy_helpers`, and it has even been removed from the package.
I think the Astropy Helpers section should therefore be removed from the contribution guidelines as well.
</issue>
<code>
[start of sunpy/version.py]
1 # This file is for compatibility with astropy_helpers
2 version = 'unknown.dev'
3 try:
4 from importlib_metadata import version as _version, PackageNotFoundError
5 version = _version('sunpy')
6 except ImportError:
7 from pkg_resources import get_distribution, DistributionNotFound
8 try:
9 version = get_distribution("sunpy").version
10 except DistributionNotFound:
11 pass
12 except PackageNotFoundError:
13 pass
14
[end of sunpy/version.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/sunpy/version.py b/sunpy/version.py
deleted file mode 100644
--- a/sunpy/version.py
+++ /dev/null
@@ -1,13 +0,0 @@
-# This file is for compatibility with astropy_helpers
-version = 'unknown.dev'
-try:
- from importlib_metadata import version as _version, PackageNotFoundError
- version = _version('sunpy')
-except ImportError:
- from pkg_resources import get_distribution, DistributionNotFound
- try:
- version = get_distribution("sunpy").version
- except DistributionNotFound:
- pass
-except PackageNotFoundError:
- pass
|
{"golden_diff": "diff --git a/sunpy/version.py b/sunpy/version.py\ndeleted file mode 100644\n--- a/sunpy/version.py\n+++ /dev/null\n@@ -1,13 +0,0 @@\n-# This file is for compatibility with astropy_helpers\n-version = 'unknown.dev'\n-try:\n- from importlib_metadata import version as _version, PackageNotFoundError\n- version = _version('sunpy')\n-except ImportError:\n- from pkg_resources import get_distribution, DistributionNotFound\n- try:\n- version = get_distribution(\"sunpy\").version\n- except DistributionNotFound:\n- pass\n-except PackageNotFoundError:\n- pass\n", "issue": "Removing astropy_helpers section in CONTRIBUTING.rst\n<!-- This comments are hidden when you submit the issue so you do not need to remove them!\r\nPlease be sure to check out our contributing guidelines: https://github.com/sunpy/sunpy/blob/master/CONTRIBUTING.rst\r\nPlease be sure to check out our code of conduct:\r\nhttps://github.com/sunpy/sunpy/blob/master/CODE_OF_CONDUCT.rst -->\r\n\r\n<!-- Please have a search on our GitHub repository to see if a similar issue has already been posted.\r\nIf a similar issue is closed, have a quick look to see if you are satisfied by the resolution.\r\nIf not please go ahead and open an issue! -->\r\n\r\n### Description\r\n<!-- Provide a general description of the bug. -->\r\nAs of PR https://github.com/sunpy/sunpy/pull/3598, sunpy no longer needs `astropy_helpers`, and even it is removed from the package.\r\nI think there should not be a section of Astropy Helpers in contribution guidelines as well.\n", "before_files": [{"content": "# This file is for compatibility with astropy_helpers\nversion = 'unknown.dev'\ntry:\n from importlib_metadata import version as _version, PackageNotFoundError\n version = _version('sunpy')\nexcept ImportError:\n from pkg_resources import get_distribution, DistributionNotFound\n try:\n version = get_distribution(\"sunpy\").version\n except DistributionNotFound:\n pass\nexcept PackageNotFoundError:\n pass\n", "path": "sunpy/version.py"}]}
| 851 | 148 |
gh_patches_debug_6051
|
rasdani/github-patches
|
git_diff
|
coala__coala-3888
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
aspects/meta.py: Typo error
<!-- Hello! If you're filing a bug, please include every step so as to help us reproduce it on our machines. If you're unsure about how to file an issue, use the issue template. If you need any help regarding usage of coala, check out the documentation or hit us up on chat. You can ignore or delete this text, it is commented and won't appear when the issue is submitted or previewed.
Chat: https://coala.io/chat
Issue Template: https://github.com/coala/coala/blob/master/CONTRIBUTING.rst#filing-issues
Documentation: https://docs.coala.io
-->
Replace `int` -> `in` in `search for tastes int the sub-aspectclass`
difficulty/newcomer
</issue>
<code>
[start of coalib/bearlib/aspects/meta.py]
1 from inspect import getmembers, signature
2
3 from coala_utils.decorators import generate_repr
4
5 from .base import aspectbase
6 from .docs import Documentation
7 from .taste import Taste
8
9
10 class aspectclass(type):
11 """
12 Metaclass for aspectclasses.
13
14 Root aspectclass is :class:`coalib.bearlib.aspectclasses.Root`.
15 """
16 def __init__(cls, clsname, bases, clsattrs):
17 """
18 Initializes the ``.subaspects`` dict on new aspectclasses.
19 """
20 cls.subaspects = {}
21
22 @property
23 def tastes(cls):
24 """
25 Get a dictionary of all taste names mapped to their
26 :class:`coalib.bearlib.aspectclasses.Taste` instances.
27 """
28 if cls.parent:
29 return dict(cls.parent.tastes, **cls._tastes)
30
31 return dict(cls._tastes)
32
33 def subaspect(cls, subcls):
34 """
35 The sub-aspectclass decorator.
36
37 See :class:`coalib.bearlib.aspectclasses.Root` for description
38 and usage.
39 """
40 aspectname = subcls.__name__
41
42 docs = getattr(subcls, 'docs', None)
43 aspectdocs = Documentation(subcls.__doc__, **{
44 attr: getattr(docs, attr, '') for attr in
45 list(signature(Documentation).parameters.keys())[1:]})
46
47 # search for tastes int the sub-aspectclass
48 subtastes = {}
49 for name, member in getmembers(subcls):
50 if isinstance(member, Taste):
51 # tell the taste its own name
52 member.name = name
53 subtastes[name] = member
54
55 class Sub(subcls, aspectbase, metaclass=aspectclass):
56 __module__ = subcls.__module__
57
58 parent = cls
59
60 docs = aspectdocs
61 _tastes = subtastes
62
63 members = sorted(Sub.tastes)
64 if members:
65 Sub = generate_repr(*members)(Sub)
66
67 Sub.__name__ = aspectname
68 Sub.__qualname__ = '%s.%s' % (cls.__qualname__, aspectname)
69 cls.subaspects[aspectname] = Sub
70 setattr(cls, aspectname, Sub)
71 return Sub
72
73 def __repr__(cls):
74 return '<%s %s>' % (type(cls).__name__, repr(cls.__qualname__))
75
[end of coalib/bearlib/aspects/meta.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/coalib/bearlib/aspects/meta.py b/coalib/bearlib/aspects/meta.py
--- a/coalib/bearlib/aspects/meta.py
+++ b/coalib/bearlib/aspects/meta.py
@@ -44,7 +44,7 @@
attr: getattr(docs, attr, '') for attr in
list(signature(Documentation).parameters.keys())[1:]})
- # search for tastes int the sub-aspectclass
+ # search for tastes in the sub-aspectclass
subtastes = {}
for name, member in getmembers(subcls):
if isinstance(member, Taste):
|
{"golden_diff": "diff --git a/coalib/bearlib/aspects/meta.py b/coalib/bearlib/aspects/meta.py\n--- a/coalib/bearlib/aspects/meta.py\n+++ b/coalib/bearlib/aspects/meta.py\n@@ -44,7 +44,7 @@\n attr: getattr(docs, attr, '') for attr in\n list(signature(Documentation).parameters.keys())[1:]})\n \n- # search for tastes int the sub-aspectclass\n+ # search for tastes in the sub-aspectclass\n subtastes = {}\n for name, member in getmembers(subcls):\n if isinstance(member, Taste):\n", "issue": "aspects/meta.py: Typo error\n<!-- Hello! If you're filing a bug, please include every step so as to help us reproduce it on our machines. If you're unsure about how to file an issue, use the issue template. If you need any help regarding usage of coala, check out the documentation or hit us up on chat. You can ignore or delete this text, it is commented and won't appear when the issue is submitted or previewed.\r\n\r\nChat: https://coala.io/chat\r\nIssue Template: https://github.com/coala/coala/blob/master/CONTRIBUTING.rst#filing-issues\r\nDocumentation: https://docs.coala.io\r\n-->\r\nReplace `int` -> `in` in `search for tastes int the sub-aspectclass`\r\n\r\ndifficulty/newcomer\n", "before_files": [{"content": "from inspect import getmembers, signature\n\nfrom coala_utils.decorators import generate_repr\n\nfrom .base import aspectbase\nfrom .docs import Documentation\nfrom .taste import Taste\n\n\nclass aspectclass(type):\n \"\"\"\n Metaclass for aspectclasses.\n\n Root aspectclass is :class:`coalib.bearlib.aspectclasses.Root`.\n \"\"\"\n def __init__(cls, clsname, bases, clsattrs):\n \"\"\"\n Initializes the ``.subaspects`` dict on new aspectclasses.\n \"\"\"\n cls.subaspects = {}\n\n @property\n def tastes(cls):\n \"\"\"\n Get a dictionary of all taste names mapped to their\n :class:`coalib.bearlib.aspectclasses.Taste` instances.\n \"\"\"\n if cls.parent:\n return dict(cls.parent.tastes, **cls._tastes)\n\n return dict(cls._tastes)\n\n def subaspect(cls, subcls):\n \"\"\"\n The sub-aspectclass decorator.\n\n See :class:`coalib.bearlib.aspectclasses.Root` for description\n and usage.\n \"\"\"\n aspectname = subcls.__name__\n\n docs = getattr(subcls, 'docs', None)\n aspectdocs = Documentation(subcls.__doc__, **{\n attr: getattr(docs, attr, '') for attr in\n list(signature(Documentation).parameters.keys())[1:]})\n\n # search for tastes int the sub-aspectclass\n subtastes = {}\n for name, member in getmembers(subcls):\n if isinstance(member, Taste):\n # tell the taste its own name\n member.name = name\n subtastes[name] = member\n\n class Sub(subcls, aspectbase, metaclass=aspectclass):\n __module__ = subcls.__module__\n\n parent = cls\n\n docs = aspectdocs\n _tastes = subtastes\n\n members = sorted(Sub.tastes)\n if members:\n Sub = generate_repr(*members)(Sub)\n\n Sub.__name__ = aspectname\n Sub.__qualname__ = '%s.%s' % (cls.__qualname__, aspectname)\n cls.subaspects[aspectname] = Sub\n setattr(cls, aspectname, Sub)\n return Sub\n\n def __repr__(cls):\n return '<%s %s>' % (type(cls).__name__, repr(cls.__qualname__))\n", "path": "coalib/bearlib/aspects/meta.py"}]}
| 1,355 | 139 |
gh_patches_debug_19130
|
rasdani/github-patches
|
git_diff
|
DataDog__dd-trace-py-3840
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Starlette/Fastapi: endpoint duration includes the duration of background tasks
### Which version of dd-trace-py are you using?
ddtrace==0.55.4
### Which version of pip are you using?
21.2.4
### Which version of the libraries are you using?
fastapi==0.68.2
starlette==0.14.2
### How can we reproduce your problem?
This would be a minimal proof of concept `app.py`, run with `ddtrace-run uvicorn app:app`:
```
import asyncio
from ddtrace import tracer
from fastapi import FastAPI, BackgroundTasks
app = FastAPI()
async def some_background_task():
    with tracer.start_span("some_background_task", activate=True):
        tracer.context_provider.activate(None)
        await asyncio.sleep(10)


@app.get("/")
async def main(background_tasks: BackgroundTasks):
    background_tasks.add_task(some_background_task)
    return "ok"
```
### What is the result that you get?
The duration of `/` is reported to be 10s, while the browser immediately receives the response.
`some_background_task` is also reported with a duration of 10s.
### What is the result that you expected?
I would expect that the reported endpoint duration matches the time it took to get the response, and that the background task is reported separately. Please don't mind that `tracer.context_provider.activate(None)` might be redundant here; I added it to show what I have tried.
FastAPI's `add_task` actually comes from starlette https://www.starlette.io/background/
I can understand why the endpoint duration includes the background task; this is the definition of starlette's `Response.__call__`:
https://github.com/encode/starlette/blob/ada99beee530e7b841ce320bc6e66f6dbd9ad781/starlette/responses.py#L159
```
async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:
    await send(
        {
            "type": "http.response.start",
            "status": self.status_code,
            "headers": self.raw_headers,
        }
    )
    await send({"type": "http.response.body", "body": self.body})

    if self.background is not None:
        await self.background()
```
The response headers and body are sent, but the function itself is not finished until all background tasks have been processed.
I believe that this is not what users of ddtrace would expect: the background tasks are used to return a response early without waiting for background operations to finish; the reported endpoint duration should correspond to when the body was sent.
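Editor's note, not part of the original issue report: the fix recorded later in this entry wraps the ASGI `send` callable and finishes the request span once the final `http.response.body` message (one without `more_body`) has gone out. Below is a framework-free sketch of the same idea; the middleware and variable names are illustrative, not ddtrace or Starlette APIs.
```python
import time


class ResponseTimingMiddleware:
    """Illustrative only: time 'last byte sent' separately from 'handler returned'."""

    def __init__(self, app):
        self.app = app

    async def __call__(self, scope, receive, send):
        if scope["type"] != "http":
            return await self.app(scope, receive, send)

        start = time.monotonic()
        last_byte_at = None

        async def wrapped_send(message):
            nonlocal last_byte_at
            if message.get("type") == "http.response.body" and not message.get("more_body", False):
                last_byte_at = time.monotonic()
            await send(message)

        await self.app(scope, receive, wrapped_send)
        total = time.monotonic() - start
        to_last_byte = (last_byte_at - start) if last_byte_at is not None else total
        # `total` still includes background tasks run after the body was sent;
        # `to_last_byte` matches what the client actually experienced.
        print(f"to_last_byte={to_last_byte:.3f}s total={total:.3f}s")
```
Wrapping the reproduction app as `app = ResponseTimingMiddleware(app)` makes the two numbers diverge by roughly the 10 s sleep, which is exactly the gap this issue is about.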
</issue>
<code>
[start of ddtrace/contrib/asgi/middleware.py]
1 import sys
2 from typing import TYPE_CHECKING
3
4 import ddtrace
5 from ddtrace import config
6 from ddtrace.constants import ANALYTICS_SAMPLE_RATE_KEY
7 from ddtrace.ext import SpanTypes
8 from ddtrace.ext import http
9
10 from .. import trace_utils
11 from ...internal.compat import reraise
12 from ...internal.logger import get_logger
13 from .utils import guarantee_single_callable
14
15
16 if TYPE_CHECKING:
17 from typing import Any
18 from typing import Mapping
19 from typing import Optional
20
21 from ddtrace import Span
22
23
24 log = get_logger(__name__)
25
26 config._add(
27 "asgi",
28 dict(service_name=config._get_service(default="asgi"), request_span_name="asgi.request", distributed_tracing=True),
29 )
30
31 ASGI_VERSION = "asgi.version"
32 ASGI_SPEC_VERSION = "asgi.spec_version"
33
34
35 def bytes_to_str(str_or_bytes):
36 return str_or_bytes.decode() if isinstance(str_or_bytes, bytes) else str_or_bytes
37
38
39 def _extract_versions_from_scope(scope, integration_config):
40 tags = {}
41
42 http_version = scope.get("http_version")
43 if http_version:
44 tags[http.VERSION] = http_version
45
46 scope_asgi = scope.get("asgi")
47
48 if scope_asgi and "version" in scope_asgi:
49 tags[ASGI_VERSION] = scope_asgi["version"]
50
51 if scope_asgi and "spec_version" in scope_asgi:
52 tags[ASGI_SPEC_VERSION] = scope_asgi["spec_version"]
53
54 return tags
55
56
57 def _extract_headers(scope):
58 headers = scope.get("headers")
59 if headers:
60 # headers: (Iterable[[byte string, byte string]])
61 return dict((bytes_to_str(k), bytes_to_str(v)) for (k, v) in headers)
62 return {}
63
64
65 def _default_handle_exception_span(exc, span):
66 """Default handler for exception for span"""
67 span.set_tag(http.STATUS_CODE, 500)
68
69
70 def span_from_scope(scope):
71 # type: (Mapping[str, Any]) -> Optional[Span]
72 """Retrieve the top-level ASGI span from the scope."""
73 return scope.get("datadog", {}).get("request_spans", [None])[0]
74
75
76 class TraceMiddleware:
77 """
78 ASGI application middleware that traces the requests.
79 Args:
80 app: The ASGI application.
81 tracer: Custom tracer. Defaults to the global tracer.
82 """
83
84 def __init__(
85 self,
86 app,
87 tracer=None,
88 integration_config=config.asgi,
89 handle_exception_span=_default_handle_exception_span,
90 span_modifier=None,
91 ):
92 self.app = guarantee_single_callable(app)
93 self.tracer = tracer or ddtrace.tracer
94 self.integration_config = integration_config
95 self.handle_exception_span = handle_exception_span
96 self.span_modifier = span_modifier
97
98 async def __call__(self, scope, receive, send):
99 if scope["type"] != "http":
100 return await self.app(scope, receive, send)
101
102 try:
103 headers = _extract_headers(scope)
104 except Exception:
105 log.warning("failed to decode headers for distributed tracing", exc_info=True)
106 headers = {}
107 else:
108 trace_utils.activate_distributed_headers(
109 self.tracer, int_config=self.integration_config, request_headers=headers
110 )
111
112 resource = "{} {}".format(scope["method"], scope["path"])
113
114 span = self.tracer.trace(
115 name=self.integration_config.get("request_span_name", "asgi.request"),
116 service=trace_utils.int_service(None, self.integration_config),
117 resource=resource,
118 span_type=SpanTypes.WEB,
119 )
120
121 if "datadog" not in scope:
122 scope["datadog"] = {"request_spans": [span]}
123 else:
124 scope["datadog"]["request_spans"].append(span)
125
126 if self.span_modifier:
127 self.span_modifier(span, scope)
128
129 sample_rate = self.integration_config.get_analytics_sample_rate(use_global_config=True)
130 if sample_rate is not None:
131 span.set_tag(ANALYTICS_SAMPLE_RATE_KEY, sample_rate)
132
133 method = scope.get("method")
134 server = scope.get("server")
135 if server and len(server) == 2:
136 port = server[1]
137 server_host = server[0] + (":" + str(port) if port is not None and port != 80 else "")
138 full_path = scope.get("root_path", "") + scope.get("path", "")
139 url = scope.get("scheme", "http") + "://" + server_host + full_path
140 else:
141 url = None
142
143 if self.integration_config.trace_query_string:
144 query_string = scope.get("query_string")
145 if len(query_string) > 0:
146 query_string = bytes_to_str(query_string)
147 else:
148 query_string = None
149
150 trace_utils.set_http_meta(
151 span, self.integration_config, method=method, url=url, query=query_string, request_headers=headers
152 )
153
154 tags = _extract_versions_from_scope(scope, self.integration_config)
155 span.set_tags(tags)
156
157 async def wrapped_send(message):
158 if span and message.get("type") == "http.response.start" and "status" in message:
159 status_code = message["status"]
160 else:
161 status_code = None
162
163 if "headers" in message:
164 response_headers = message["headers"]
165 else:
166 response_headers = None
167
168 trace_utils.set_http_meta(
169 span, self.integration_config, status_code=status_code, response_headers=response_headers
170 )
171
172 return await send(message)
173
174 try:
175 return await self.app(scope, receive, wrapped_send)
176 except Exception as exc:
177 (exc_type, exc_val, exc_tb) = sys.exc_info()
178 span.set_exc_info(exc_type, exc_val, exc_tb)
179 self.handle_exception_span(exc, span)
180 reraise(exc_type, exc_val, exc_tb)
181 finally:
182 try:
183 del scope["datadog"]["request_span"]
184 except KeyError:
185 pass
186 span.finish()
187
[end of ddtrace/contrib/asgi/middleware.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/ddtrace/contrib/asgi/middleware.py b/ddtrace/contrib/asgi/middleware.py
--- a/ddtrace/contrib/asgi/middleware.py
+++ b/ddtrace/contrib/asgi/middleware.py
@@ -169,7 +169,13 @@
span, self.integration_config, status_code=status_code, response_headers=response_headers
)
- return await send(message)
+ try:
+ return await send(message)
+ finally:
+ # Per asgi spec, "more_body" is used if there is still data to send
+ # Close the span if "http.response.body" has no more data left to send in the response.
+ if message.get("type") == "http.response.body" and not message.get("more_body", False):
+ span.finish()
try:
return await self.app(scope, receive, wrapped_send)
@@ -183,4 +189,5 @@
del scope["datadog"]["request_span"]
except KeyError:
pass
+
span.finish()
|
{"golden_diff": "diff --git a/ddtrace/contrib/asgi/middleware.py b/ddtrace/contrib/asgi/middleware.py\n--- a/ddtrace/contrib/asgi/middleware.py\n+++ b/ddtrace/contrib/asgi/middleware.py\n@@ -169,7 +169,13 @@\n span, self.integration_config, status_code=status_code, response_headers=response_headers\n )\n \n- return await send(message)\n+ try:\n+ return await send(message)\n+ finally:\n+ # Per asgi spec, \"more_body\" is used if there is still data to send\n+ # Close the span if \"http.response.body\" has no more data left to send in the response.\n+ if message.get(\"type\") == \"http.response.body\" and not message.get(\"more_body\", False):\n+ span.finish()\n \n try:\n return await self.app(scope, receive, wrapped_send)\n@@ -183,4 +189,5 @@\n del scope[\"datadog\"][\"request_span\"]\n except KeyError:\n pass\n+\n span.finish()\n", "issue": "Starlette/Fastapi: endpoint duration includes the duration of background tasks\n### Which version of dd-trace-py are you using?\r\n\r\nddtrace==0.55.4\r\n\r\n### Which version of pip are you using?\r\n\r\n21.2.4\r\n\r\n\r\n### Which version of the libraries are you using?\r\n\r\nfastapi==0.68.2\r\nstarlette==0.14.2\r\n\r\n### How can we reproduce your problem?\r\n\r\nthis would be a minimal proof of concept `app.py`, running through `ddtrace-run uvicorn app:app`\r\n\r\n```\r\nimport asyncio\r\n\r\nfrom ddtrace import tracer\r\nfrom fastapi import FastAPI, BackgroundTasks\r\n\r\napp = FastAPI()\r\n\r\n\r\nasync def some_background_task():\r\n with tracer.start_span(\"some_background_task\", activate=True):\r\n tracer.context_provider.activate(None)\r\n await asyncio.sleep(10)\r\n\r\n\r\[email protected](\"/\")\r\nasync def main(background_tasks: BackgroundTasks):\r\n background_tasks.add_task(some_background_task)\r\n return \"ok\"\r\n\r\n```\r\n\r\n### What is the result that you get?\r\n\r\nThe duration of `/` is reported to be 10s, while the browser immediately receives the response.\r\n`some_background_task` is also reported with a duration of 10s.\r\n\r\n### What is the result that you expected?\r\n\r\nI would expect that the reported endpoint duration matches the time it took to get the response, and that the background task is reported separately. 
Please don't mind that `tracer.context_provider.activate(None)` might be redundant here, adding it here to show what I have tried.\r\n\r\nFastAPI's `add_task` actually comes from starlette https://www.starlette.io/background/\r\n\r\nI can understand why the endpoint duration includes the background task, this is the definition of starlette's `Response.__call__`:\r\n\r\nhttps://github.com/encode/starlette/blob/ada99beee530e7b841ce320bc6e66f6dbd9ad781/starlette/responses.py#L159\r\n```\r\n async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:\r\n await send(\r\n {\r\n \"type\": \"http.response.start\",\r\n \"status\": self.status_code,\r\n \"headers\": self.raw_headers,\r\n }\r\n )\r\n await send({\"type\": \"http.response.body\", \"body\": self.body})\r\n\r\n if self.background is not None:\r\n await self.background()\r\n```\r\n\r\nThe response header and body is sent, but the function itself is not finished until all background tasks have been processed.\r\n\r\nI believe that this is not what users of ddtrace would expect: the background tasks are used to return a response early without waiting for background operations to finish ; the reported endpoint duration should correspond to when the body was sent\r\n\n", "before_files": [{"content": "import sys\nfrom typing import TYPE_CHECKING\n\nimport ddtrace\nfrom ddtrace import config\nfrom ddtrace.constants import ANALYTICS_SAMPLE_RATE_KEY\nfrom ddtrace.ext import SpanTypes\nfrom ddtrace.ext import http\n\nfrom .. import trace_utils\nfrom ...internal.compat import reraise\nfrom ...internal.logger import get_logger\nfrom .utils import guarantee_single_callable\n\n\nif TYPE_CHECKING:\n from typing import Any\n from typing import Mapping\n from typing import Optional\n\n from ddtrace import Span\n\n\nlog = get_logger(__name__)\n\nconfig._add(\n \"asgi\",\n dict(service_name=config._get_service(default=\"asgi\"), request_span_name=\"asgi.request\", distributed_tracing=True),\n)\n\nASGI_VERSION = \"asgi.version\"\nASGI_SPEC_VERSION = \"asgi.spec_version\"\n\n\ndef bytes_to_str(str_or_bytes):\n return str_or_bytes.decode() if isinstance(str_or_bytes, bytes) else str_or_bytes\n\n\ndef _extract_versions_from_scope(scope, integration_config):\n tags = {}\n\n http_version = scope.get(\"http_version\")\n if http_version:\n tags[http.VERSION] = http_version\n\n scope_asgi = scope.get(\"asgi\")\n\n if scope_asgi and \"version\" in scope_asgi:\n tags[ASGI_VERSION] = scope_asgi[\"version\"]\n\n if scope_asgi and \"spec_version\" in scope_asgi:\n tags[ASGI_SPEC_VERSION] = scope_asgi[\"spec_version\"]\n\n return tags\n\n\ndef _extract_headers(scope):\n headers = scope.get(\"headers\")\n if headers:\n # headers: (Iterable[[byte string, byte string]])\n return dict((bytes_to_str(k), bytes_to_str(v)) for (k, v) in headers)\n return {}\n\n\ndef _default_handle_exception_span(exc, span):\n \"\"\"Default handler for exception for span\"\"\"\n span.set_tag(http.STATUS_CODE, 500)\n\n\ndef span_from_scope(scope):\n # type: (Mapping[str, Any]) -> Optional[Span]\n \"\"\"Retrieve the top-level ASGI span from the scope.\"\"\"\n return scope.get(\"datadog\", {}).get(\"request_spans\", [None])[0]\n\n\nclass TraceMiddleware:\n \"\"\"\n ASGI application middleware that traces the requests.\n Args:\n app: The ASGI application.\n tracer: Custom tracer. 
Defaults to the global tracer.\n \"\"\"\n\n def __init__(\n self,\n app,\n tracer=None,\n integration_config=config.asgi,\n handle_exception_span=_default_handle_exception_span,\n span_modifier=None,\n ):\n self.app = guarantee_single_callable(app)\n self.tracer = tracer or ddtrace.tracer\n self.integration_config = integration_config\n self.handle_exception_span = handle_exception_span\n self.span_modifier = span_modifier\n\n async def __call__(self, scope, receive, send):\n if scope[\"type\"] != \"http\":\n return await self.app(scope, receive, send)\n\n try:\n headers = _extract_headers(scope)\n except Exception:\n log.warning(\"failed to decode headers for distributed tracing\", exc_info=True)\n headers = {}\n else:\n trace_utils.activate_distributed_headers(\n self.tracer, int_config=self.integration_config, request_headers=headers\n )\n\n resource = \"{} {}\".format(scope[\"method\"], scope[\"path\"])\n\n span = self.tracer.trace(\n name=self.integration_config.get(\"request_span_name\", \"asgi.request\"),\n service=trace_utils.int_service(None, self.integration_config),\n resource=resource,\n span_type=SpanTypes.WEB,\n )\n\n if \"datadog\" not in scope:\n scope[\"datadog\"] = {\"request_spans\": [span]}\n else:\n scope[\"datadog\"][\"request_spans\"].append(span)\n\n if self.span_modifier:\n self.span_modifier(span, scope)\n\n sample_rate = self.integration_config.get_analytics_sample_rate(use_global_config=True)\n if sample_rate is not None:\n span.set_tag(ANALYTICS_SAMPLE_RATE_KEY, sample_rate)\n\n method = scope.get(\"method\")\n server = scope.get(\"server\")\n if server and len(server) == 2:\n port = server[1]\n server_host = server[0] + (\":\" + str(port) if port is not None and port != 80 else \"\")\n full_path = scope.get(\"root_path\", \"\") + scope.get(\"path\", \"\")\n url = scope.get(\"scheme\", \"http\") + \"://\" + server_host + full_path\n else:\n url = None\n\n if self.integration_config.trace_query_string:\n query_string = scope.get(\"query_string\")\n if len(query_string) > 0:\n query_string = bytes_to_str(query_string)\n else:\n query_string = None\n\n trace_utils.set_http_meta(\n span, self.integration_config, method=method, url=url, query=query_string, request_headers=headers\n )\n\n tags = _extract_versions_from_scope(scope, self.integration_config)\n span.set_tags(tags)\n\n async def wrapped_send(message):\n if span and message.get(\"type\") == \"http.response.start\" and \"status\" in message:\n status_code = message[\"status\"]\n else:\n status_code = None\n\n if \"headers\" in message:\n response_headers = message[\"headers\"]\n else:\n response_headers = None\n\n trace_utils.set_http_meta(\n span, self.integration_config, status_code=status_code, response_headers=response_headers\n )\n\n return await send(message)\n\n try:\n return await self.app(scope, receive, wrapped_send)\n except Exception as exc:\n (exc_type, exc_val, exc_tb) = sys.exc_info()\n span.set_exc_info(exc_type, exc_val, exc_tb)\n self.handle_exception_span(exc, span)\n reraise(exc_type, exc_val, exc_tb)\n finally:\n try:\n del scope[\"datadog\"][\"request_span\"]\n except KeyError:\n pass\n span.finish()\n", "path": "ddtrace/contrib/asgi/middleware.py"}]}
| 2,884 | 232 |
gh_patches_debug_17311
|
rasdani/github-patches
|
git_diff
|
vega__altair-2900
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ga: `DeprecationWarning`: SelectableGroups dict interface is deprecated. Use select.
I see the following deprecation warning in our tests:
```cmd
tests/vegalite/v5/tests/test_display.py::test_check_renderer_options
[51](https://github.com/altair-viz/altair/actions/runs/4210647283/jobs/7308470442#step:5:52)
/home/runner/work/altair/altair/altair/utils/plugin_registry.py:124:
DeprecationWarning: SelectableGroups dict interface is deprecated. Use select.
[52](https://github.com/altair-viz/altair/actions/runs/4210647283/jobs/7308470442#step:5:53)
more_exts = [ep.name for ep in entry_points().get(self.entry_point_group, [])]
```
It is around here in the code: https://github.com/altair-viz/altair/blob/master/altair/utils/plugin_registry.py#L124
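Editor's note, not part of the original issue: the warning comes from the dict-style `entry_points().get(...)` lookup, which newer `importlib.metadata` versions deprecate in favour of the selectable `entry_points(group=...)` call. A minimal version-guarded sketch of the replacement, mirroring the check used in the recorded fix (the helper name is illustrative):
```python
import sys
from importlib.metadata import entry_points


def iter_entry_points(group):
    """Yield entry points for `group` without triggering the DeprecationWarning."""
    if sys.version_info >= (3, 10):
        return entry_points(group=group)  # selectable interface
    return entry_points().get(group, [])  # older releases return a dict-like object


if __name__ == "__main__":
    print(sorted(ep.name for ep in iter_entry_points("console_scripts")))
```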
</issue>
<code>
[start of altair/utils/plugin_registry.py]
1 from typing import Any, Dict, List, Optional, Generic, TypeVar, cast
2 from types import TracebackType
3
4 try:
5 from importlib.metadata import entry_points
6 except ImportError:
7 from importlib_metadata import entry_points
8
9 from toolz import curry
10
11
12 PluginType = TypeVar("PluginType")
13
14
15 class NoSuchEntryPoint(Exception):
16 def __init__(self, group, name):
17 self.group = group
18 self.name = name
19
20 def __str__(self):
21 return f"No {self.name!r} entry point found in group {self.group!r}"
22
23
24 class PluginEnabler(object):
25 """Context manager for enabling plugins
26
27 This object lets you use enable() as a context manager to
28 temporarily enable a given plugin::
29
30 with plugins.enable('name'):
31 do_something() # 'name' plugin temporarily enabled
32 # plugins back to original state
33 """
34
35 def __init__(self, registry: "PluginRegistry", name: str, **options):
36 self.registry = registry # type: PluginRegistry
37 self.name = name # type: str
38 self.options = options # type: Dict[str, Any]
39 self.original_state = registry._get_state() # type: Dict[str, Any]
40 self.registry._enable(name, **options)
41
42 def __enter__(self) -> "PluginEnabler":
43 return self
44
45 def __exit__(self, typ: type, value: Exception, traceback: TracebackType) -> None:
46 self.registry._set_state(self.original_state)
47
48 def __repr__(self) -> str:
49 return "{}.enable({!r})".format(self.registry.__class__.__name__, self.name)
50
51
52 class PluginRegistry(Generic[PluginType]):
53 """A registry for plugins.
54
55 This is a plugin registry that allows plugins to be loaded/registered
56 in two ways:
57
58 1. Through an explicit call to ``.register(name, value)``.
59 2. By looking for other Python packages that are installed and provide
60 a setuptools entry point group.
61
62 When you create an instance of this class, provide the name of the
63 entry point group to use::
64
65 reg = PluginRegister('my_entrypoint_group')
66
67 """
68
69 # this is a mapping of name to error message to allow custom error messages
70 # in case an entrypoint is not found
71 entrypoint_err_messages = {} # type: Dict[str, str]
72
73 # global settings is a key-value mapping of settings that are stored globally
74 # in the registry rather than passed to the plugins
75 _global_settings = {} # type: Dict[str, Any]
76
77 def __init__(self, entry_point_group: str = "", plugin_type: type = object):
78 """Create a PluginRegistry for a named entry point group.
79
80 Parameters
81 ==========
82 entry_point_group: str
83 The name of the entry point group.
84 plugin_type: object
85 A type that will optionally be used for runtime type checking of
86 loaded plugins using isinstance.
87 """
88 self.entry_point_group = entry_point_group # type: str
89 self.plugin_type = plugin_type # type: Optional[type]
90 self._active = None # type: Optional[PluginType]
91 self._active_name = "" # type: str
92 self._plugins = {} # type: Dict[str, PluginType]
93 self._options = {} # type: Dict[str, Any]
94 self._global_settings = self.__class__._global_settings.copy() # type: dict
95
96 def register(self, name: str, value: Optional[PluginType]) -> Optional[PluginType]:
97 """Register a plugin by name and value.
98
99 This method is used for explicit registration of a plugin and shouldn't be
100 used to manage entry point managed plugins, which are auto-loaded.
101
102 Parameters
103 ==========
104 name: str
105 The name of the plugin.
106 value: PluginType or None
107 The actual plugin object to register or None to unregister that plugin.
108
109 Returns
110 =======
111 plugin: PluginType or None
112 The plugin that was registered or unregistered.
113 """
114 if value is None:
115 return self._plugins.pop(name, None)
116 else:
117 assert isinstance(value, self.plugin_type)
118 self._plugins[name] = value
119 return value
120
121 def names(self) -> List[str]:
122 """List the names of the registered and entry points plugins."""
123 exts = list(self._plugins.keys())
124 more_exts = [ep.name for ep in entry_points().get(self.entry_point_group, [])]
125 exts.extend(more_exts)
126 return sorted(set(exts))
127
128 def _get_state(self) -> Dict[str, Any]:
129 """Return a dictionary representing the current state of the registry"""
130 return {
131 "_active": self._active,
132 "_active_name": self._active_name,
133 "_plugins": self._plugins.copy(),
134 "_options": self._options.copy(),
135 "_global_settings": self._global_settings.copy(),
136 }
137
138 def _set_state(self, state: Dict[str, Any]) -> None:
139 """Reset the state of the registry"""
140 assert set(state.keys()) == {
141 "_active",
142 "_active_name",
143 "_plugins",
144 "_options",
145 "_global_settings",
146 }
147 for key, val in state.items():
148 setattr(self, key, val)
149
150 def _enable(self, name: str, **options) -> None:
151 if name not in self._plugins:
152 try:
153 (ep,) = [
154 ep
155 for ep in entry_points().get(self.entry_point_group, [])
156 if ep.name == name
157 ]
158 except ValueError:
159 if name in self.entrypoint_err_messages:
160 raise ValueError(self.entrypoint_err_messages[name])
161 else:
162 raise NoSuchEntryPoint(self.entry_point_group, name)
163 value = cast(PluginType, ep.load())
164 self.register(name, value)
165 self._active_name = name
166 self._active = self._plugins[name]
167 for key in set(options.keys()) & set(self._global_settings.keys()):
168 self._global_settings[key] = options.pop(key)
169 self._options = options
170
171 def enable(self, name: Optional[str] = None, **options) -> PluginEnabler:
172 """Enable a plugin by name.
173
174 This can be either called directly, or used as a context manager.
175
176 Parameters
177 ----------
178 name : string (optional)
179 The name of the plugin to enable. If not specified, then use the
180 current active name.
181 **options :
182 Any additional parameters will be passed to the plugin as keyword
183 arguments
184
185 Returns
186 -------
187 PluginEnabler:
188 An object that allows enable() to be used as a context manager
189 """
190 if name is None:
191 name = self.active
192 return PluginEnabler(self, name, **options)
193
194 @property
195 def active(self) -> str:
196 """Return the name of the currently active plugin"""
197 return self._active_name
198
199 @property
200 def options(self) -> Dict[str, Any]:
201 """Return the current options dictionary"""
202 return self._options
203
204 def get(self) -> Optional[PluginType]:
205 """Return the currently active plugin."""
206 if self._options:
207 return curry(self._active, **self._options)
208 else:
209 return self._active
210
211 def __repr__(self) -> str:
212 return "{}(active={!r}, registered={!r})" "".format(
213 self.__class__.__name__, self._active_name, list(self.names())
214 )
215
[end of altair/utils/plugin_registry.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/altair/utils/plugin_registry.py b/altair/utils/plugin_registry.py
--- a/altair/utils/plugin_registry.py
+++ b/altair/utils/plugin_registry.py
@@ -1,3 +1,4 @@
+import sys
from typing import Any, Dict, List, Optional, Generic, TypeVar, cast
from types import TracebackType
@@ -121,7 +122,11 @@
def names(self) -> List[str]:
"""List the names of the registered and entry points plugins."""
exts = list(self._plugins.keys())
- more_exts = [ep.name for ep in entry_points().get(self.entry_point_group, [])]
+ if sys.version_info.major == 3 and sys.version_info.minor < 10:
+ e_points = entry_points().get(self.entry_point_group, [])
+ else:
+ e_points = entry_points(group=self.entry_point_group)
+ more_exts = [ep.name for ep in e_points]
exts.extend(more_exts)
return sorted(set(exts))
|
{"golden_diff": "diff --git a/altair/utils/plugin_registry.py b/altair/utils/plugin_registry.py\n--- a/altair/utils/plugin_registry.py\n+++ b/altair/utils/plugin_registry.py\n@@ -1,3 +1,4 @@\n+import sys\n from typing import Any, Dict, List, Optional, Generic, TypeVar, cast\n from types import TracebackType\n \n@@ -121,7 +122,11 @@\n def names(self) -> List[str]:\n \"\"\"List the names of the registered and entry points plugins.\"\"\"\n exts = list(self._plugins.keys())\n- more_exts = [ep.name for ep in entry_points().get(self.entry_point_group, [])]\n+ if sys.version_info.major == 3 and sys.version_info.minor < 10:\n+ e_points = entry_points().get(self.entry_point_group, [])\n+ else:\n+ e_points = entry_points(group=self.entry_point_group)\n+ more_exts = [ep.name for ep in e_points]\n exts.extend(more_exts)\n return sorted(set(exts))\n", "issue": "ga: `DeprecationWarning`: SelectableGroups dict interface is deprecated. Use select.\nI see the following deprecation warning in our tests:\r\n```cmd\r\ntests/vegalite/v5/tests/test_display.py::test_check_renderer_options\r\n[51](https://github.com/altair-viz/altair/actions/runs/4210647283/jobs/7308470442#step:5:52)\r\n /home/runner/work/altair/altair/altair/utils/plugin_registry.py:124: \r\nDeprecationWarning: SelectableGroups dict interface is deprecated. Use select.\r\n\r\n[52](https://github.com/altair-viz/altair/actions/runs/4210647283/jobs/7308470442#step:5:53)\r\n\r\n more_exts = [ep.name for ep in entry_points().get(self.entry_point_group, [])]\r\n```\r\nIt is around here in the code: https://github.com/altair-viz/altair/blob/master/altair/utils/plugin_registry.py#L124\n", "before_files": [{"content": "from typing import Any, Dict, List, Optional, Generic, TypeVar, cast\nfrom types import TracebackType\n\ntry:\n from importlib.metadata import entry_points\nexcept ImportError:\n from importlib_metadata import entry_points\n\nfrom toolz import curry\n\n\nPluginType = TypeVar(\"PluginType\")\n\n\nclass NoSuchEntryPoint(Exception):\n def __init__(self, group, name):\n self.group = group\n self.name = name\n\n def __str__(self):\n return f\"No {self.name!r} entry point found in group {self.group!r}\"\n\n\nclass PluginEnabler(object):\n \"\"\"Context manager for enabling plugins\n\n This object lets you use enable() as a context manager to\n temporarily enable a given plugin::\n\n with plugins.enable('name'):\n do_something() # 'name' plugin temporarily enabled\n # plugins back to original state\n \"\"\"\n\n def __init__(self, registry: \"PluginRegistry\", name: str, **options):\n self.registry = registry # type: PluginRegistry\n self.name = name # type: str\n self.options = options # type: Dict[str, Any]\n self.original_state = registry._get_state() # type: Dict[str, Any]\n self.registry._enable(name, **options)\n\n def __enter__(self) -> \"PluginEnabler\":\n return self\n\n def __exit__(self, typ: type, value: Exception, traceback: TracebackType) -> None:\n self.registry._set_state(self.original_state)\n\n def __repr__(self) -> str:\n return \"{}.enable({!r})\".format(self.registry.__class__.__name__, self.name)\n\n\nclass PluginRegistry(Generic[PluginType]):\n \"\"\"A registry for plugins.\n\n This is a plugin registry that allows plugins to be loaded/registered\n in two ways:\n\n 1. Through an explicit call to ``.register(name, value)``.\n 2. 
By looking for other Python packages that are installed and provide\n a setuptools entry point group.\n\n When you create an instance of this class, provide the name of the\n entry point group to use::\n\n reg = PluginRegister('my_entrypoint_group')\n\n \"\"\"\n\n # this is a mapping of name to error message to allow custom error messages\n # in case an entrypoint is not found\n entrypoint_err_messages = {} # type: Dict[str, str]\n\n # global settings is a key-value mapping of settings that are stored globally\n # in the registry rather than passed to the plugins\n _global_settings = {} # type: Dict[str, Any]\n\n def __init__(self, entry_point_group: str = \"\", plugin_type: type = object):\n \"\"\"Create a PluginRegistry for a named entry point group.\n\n Parameters\n ==========\n entry_point_group: str\n The name of the entry point group.\n plugin_type: object\n A type that will optionally be used for runtime type checking of\n loaded plugins using isinstance.\n \"\"\"\n self.entry_point_group = entry_point_group # type: str\n self.plugin_type = plugin_type # type: Optional[type]\n self._active = None # type: Optional[PluginType]\n self._active_name = \"\" # type: str\n self._plugins = {} # type: Dict[str, PluginType]\n self._options = {} # type: Dict[str, Any]\n self._global_settings = self.__class__._global_settings.copy() # type: dict\n\n def register(self, name: str, value: Optional[PluginType]) -> Optional[PluginType]:\n \"\"\"Register a plugin by name and value.\n\n This method is used for explicit registration of a plugin and shouldn't be\n used to manage entry point managed plugins, which are auto-loaded.\n\n Parameters\n ==========\n name: str\n The name of the plugin.\n value: PluginType or None\n The actual plugin object to register or None to unregister that plugin.\n\n Returns\n =======\n plugin: PluginType or None\n The plugin that was registered or unregistered.\n \"\"\"\n if value is None:\n return self._plugins.pop(name, None)\n else:\n assert isinstance(value, self.plugin_type)\n self._plugins[name] = value\n return value\n\n def names(self) -> List[str]:\n \"\"\"List the names of the registered and entry points plugins.\"\"\"\n exts = list(self._plugins.keys())\n more_exts = [ep.name for ep in entry_points().get(self.entry_point_group, [])]\n exts.extend(more_exts)\n return sorted(set(exts))\n\n def _get_state(self) -> Dict[str, Any]:\n \"\"\"Return a dictionary representing the current state of the registry\"\"\"\n return {\n \"_active\": self._active,\n \"_active_name\": self._active_name,\n \"_plugins\": self._plugins.copy(),\n \"_options\": self._options.copy(),\n \"_global_settings\": self._global_settings.copy(),\n }\n\n def _set_state(self, state: Dict[str, Any]) -> None:\n \"\"\"Reset the state of the registry\"\"\"\n assert set(state.keys()) == {\n \"_active\",\n \"_active_name\",\n \"_plugins\",\n \"_options\",\n \"_global_settings\",\n }\n for key, val in state.items():\n setattr(self, key, val)\n\n def _enable(self, name: str, **options) -> None:\n if name not in self._plugins:\n try:\n (ep,) = [\n ep\n for ep in entry_points().get(self.entry_point_group, [])\n if ep.name == name\n ]\n except ValueError:\n if name in self.entrypoint_err_messages:\n raise ValueError(self.entrypoint_err_messages[name])\n else:\n raise NoSuchEntryPoint(self.entry_point_group, name)\n value = cast(PluginType, ep.load())\n self.register(name, value)\n self._active_name = name\n self._active = self._plugins[name]\n for key in set(options.keys()) & 
set(self._global_settings.keys()):\n self._global_settings[key] = options.pop(key)\n self._options = options\n\n def enable(self, name: Optional[str] = None, **options) -> PluginEnabler:\n \"\"\"Enable a plugin by name.\n\n This can be either called directly, or used as a context manager.\n\n Parameters\n ----------\n name : string (optional)\n The name of the plugin to enable. If not specified, then use the\n current active name.\n **options :\n Any additional parameters will be passed to the plugin as keyword\n arguments\n\n Returns\n -------\n PluginEnabler:\n An object that allows enable() to be used as a context manager\n \"\"\"\n if name is None:\n name = self.active\n return PluginEnabler(self, name, **options)\n\n @property\n def active(self) -> str:\n \"\"\"Return the name of the currently active plugin\"\"\"\n return self._active_name\n\n @property\n def options(self) -> Dict[str, Any]:\n \"\"\"Return the current options dictionary\"\"\"\n return self._options\n\n def get(self) -> Optional[PluginType]:\n \"\"\"Return the currently active plugin.\"\"\"\n if self._options:\n return curry(self._active, **self._options)\n else:\n return self._active\n\n def __repr__(self) -> str:\n return \"{}(active={!r}, registered={!r})\" \"\".format(\n self.__class__.__name__, self._active_name, list(self.names())\n )\n", "path": "altair/utils/plugin_registry.py"}]}
| 2,973 | 233 |
gh_patches_debug_39017
|
rasdani/github-patches
|
git_diff
|
pypa__pip-2464
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Interrupting a pip download with CTRL-C does not unhide the cursor
Version: pip 6.0.8
Pressing CTRL-C while pip is downloading a package (and displaying its progress bar) shows an "Operation cancelled by user" message and drops the user back to the shell prompt, but does not unhide the cursor that was hidden while displaying the progress bar.
Glancing at the code, it looks like the `progress` library already offers a `SigIntMixin` helper for handling this, and that pip's progress bar isn't using it. Maybe including this mixin in the appropriate place(s) is all that's needed?
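Editor's note, not part of the original issue: the fix recorded later in this entry does not reuse `SigIntMixin`; it saves the original SIGINT handler, restores terminal state on interrupt, and then delegates. A standalone sketch of that save/restore pattern using plain ANSI escape codes (the function name and demo are illustrative, not pip internals):
```python
import sys
import time
from signal import SIGINT, default_int_handler, getsignal, signal

HIDE_CURSOR = "\x1b[?25l"
SHOW_CURSOR = "\x1b[?25h"


def run_with_hidden_cursor(work):
    original = getsignal(SIGINT)
    if not callable(original):  # e.g. the handler was installed outside Python
        original = default_int_handler

    def handler(signum, frame):
        sys.stdout.write(SHOW_CURSOR)  # un-hide before interrupting
        sys.stdout.flush()
        original(signum, frame)

    sys.stdout.write(HIDE_CURSOR)
    sys.stdout.flush()
    signal(SIGINT, handler)
    try:
        return work()
    finally:
        sys.stdout.write(SHOW_CURSOR)
        sys.stdout.flush()
        signal(SIGINT, original)  # do not leave the temporary handler installed


if __name__ == "__main__":
    run_with_hidden_cursor(lambda: time.sleep(5))  # CTRL-C still brings the cursor back
```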
</issue>
<code>
[start of pip/utils/ui.py]
1 from __future__ import absolute_import
2 from __future__ import division
3
4 import itertools
5 import sys
6
7 from pip.compat import WINDOWS
8 from pip.utils import format_size
9 from pip.utils.logging import get_indentation
10 from pip._vendor import six
11 from pip._vendor.progress.bar import Bar, IncrementalBar
12 from pip._vendor.progress.helpers import WritelnMixin
13 from pip._vendor.progress.spinner import Spinner
14
15 try:
16 from pip._vendor import colorama
17 # Lots of different errors can come from this, including SystemError and
18 # ImportError.
19 except Exception:
20 colorama = None
21
22
23 def _select_progress_class(preferred, fallback):
24 encoding = getattr(preferred.file, "encoding", None)
25
26 # If we don't know what encoding this file is in, then we'll just assume
27 # that it doesn't support unicode and use the ASCII bar.
28 if not encoding:
29 return fallback
30
31 # Collect all of the possible characters we want to use with the preferred
32 # bar.
33 characters = [
34 getattr(preferred, "empty_fill", six.text_type()),
35 getattr(preferred, "fill", six.text_type()),
36 ]
37 characters += list(getattr(preferred, "phases", []))
38
39 # Try to decode the characters we're using for the bar using the encoding
40 # of the given file, if this works then we'll assume that we can use the
41 # fancier bar and if not we'll fall back to the plaintext bar.
42 try:
43 six.text_type().join(characters).encode(encoding)
44 except UnicodeEncodeError:
45 return fallback
46 else:
47 return preferred
48
49
50 _BaseBar = _select_progress_class(IncrementalBar, Bar)
51
52
53 class DownloadProgressMixin(object):
54
55 def __init__(self, *args, **kwargs):
56 super(DownloadProgressMixin, self).__init__(*args, **kwargs)
57 self.message = (" " * (get_indentation() + 2)) + self.message
58
59 @property
60 def downloaded(self):
61 return format_size(self.index)
62
63 @property
64 def download_speed(self):
65 # Avoid zero division errors...
66 if self.avg == 0.0:
67 return "..."
68 return format_size(1 / self.avg) + "/s"
69
70 @property
71 def pretty_eta(self):
72 if self.eta:
73 return "eta %s" % self.eta_td
74 return ""
75
76 def iter(self, it, n=1):
77 for x in it:
78 yield x
79 self.next(n)
80 self.finish()
81
82
83 class WindowsMixin(object):
84
85 def __init__(self, *args, **kwargs):
86 # The Windows terminal does not support the hide/show cursor ANSI codes
87 # even with colorama. So we'll ensure that hide_cursor is False on
88 # Windows.
89 # This call neds to go before the super() call, so that hide_cursor
90 # is set in time. The base progress bar class writes the "hide cursor"
91 # code to the terminal in its init, so if we don't set this soon
92 # enough, we get a "hide" with no corresponding "show"...
93 if WINDOWS and self.hide_cursor:
94 self.hide_cursor = False
95
96 super(WindowsMixin, self).__init__(*args, **kwargs)
97
98 # Check if we are running on Windows and we have the colorama module,
99 # if we do then wrap our file with it.
100 if WINDOWS and colorama:
101 self.file = colorama.AnsiToWin32(self.file)
102 # The progress code expects to be able to call self.file.isatty()
103 # but the colorama.AnsiToWin32() object doesn't have that, so we'll
104 # add it.
105 self.file.isatty = lambda: self.file.wrapped.isatty()
106 # The progress code expects to be able to call self.file.flush()
107 # but the colorama.AnsiToWin32() object doesn't have that, so we'll
108 # add it.
109 self.file.flush = lambda: self.file.wrapped.flush()
110
111
112 class DownloadProgressBar(WindowsMixin, DownloadProgressMixin, _BaseBar):
113
114 file = sys.stdout
115 message = "%(percent)d%%"
116 suffix = "%(downloaded)s %(download_speed)s %(pretty_eta)s"
117
118
119 class DownloadProgressSpinner(WindowsMixin, DownloadProgressMixin,
120 WritelnMixin, Spinner):
121
122 file = sys.stdout
123 suffix = "%(downloaded)s %(download_speed)s"
124
125 def next_phase(self):
126 if not hasattr(self, "_phaser"):
127 self._phaser = itertools.cycle(self.phases)
128 return next(self._phaser)
129
130 def update(self):
131 message = self.message % self
132 phase = self.next_phase()
133 suffix = self.suffix % self
134 line = ''.join([
135 message,
136 " " if message else "",
137 phase,
138 " " if suffix else "",
139 suffix,
140 ])
141
142 self.writeln(line)
143
[end of pip/utils/ui.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pip/utils/ui.py b/pip/utils/ui.py
--- a/pip/utils/ui.py
+++ b/pip/utils/ui.py
@@ -3,6 +3,7 @@
import itertools
import sys
+from signal import signal, SIGINT, default_int_handler
from pip.compat import WINDOWS
from pip.utils import format_size
@@ -50,6 +51,61 @@
_BaseBar = _select_progress_class(IncrementalBar, Bar)
+class InterruptibleMixin(object):
+ """
+ Helper to ensure that self.finish() gets called on keyboard interrupt.
+
+ This allows downloads to be interrupted without leaving temporary state
+ (like hidden cursors) behind.
+
+ This class is similar to the progress library's existing SigIntMixin
+ helper, but as of version 1.2, that helper has the following problems:
+
+ 1. It calls sys.exit().
+ 2. It discards the existing SIGINT handler completely.
+ 3. It leaves its own handler in place even after an uninterrupted finish,
+ which will have unexpected delayed effects if the user triggers an
+ unrelated keyboard interrupt some time after a progress-displaying
+ download has already completed, for example.
+ """
+
+ def __init__(self, *args, **kwargs):
+ """
+ Save the original SIGINT handler for later.
+ """
+ super(InterruptibleMixin, self).__init__(*args, **kwargs)
+
+ self.original_handler = signal(SIGINT, self.handle_sigint)
+
+ # If signal() returns None, the previous handler was not installed from
+ # Python, and we cannot restore it. This probably should not happen,
+ # but if it does, we must restore something sensible instead, at least.
+ # The least bad option should be Python's default SIGINT handler, which
+ # just raises KeyboardInterrupt.
+ if self.original_handler is None:
+ self.original_handler = default_int_handler
+
+ def finish(self):
+ """
+ Restore the original SIGINT handler after finishing.
+
+ This should happen regardless of whether the progress display finishes
+ normally, or gets interrupted.
+ """
+ super(InterruptibleMixin, self).finish()
+ signal(SIGINT, self.original_handler)
+
+ def handle_sigint(self, signum, frame):
+ """
+ Call self.finish() before delegating to the original SIGINT handler.
+
+ This handler should only be in place while the progress display is
+ active.
+ """
+ self.finish()
+ self.original_handler(signum, frame)
+
+
class DownloadProgressMixin(object):
def __init__(self, *args, **kwargs):
@@ -109,15 +165,16 @@
self.file.flush = lambda: self.file.wrapped.flush()
-class DownloadProgressBar(WindowsMixin, DownloadProgressMixin, _BaseBar):
+class DownloadProgressBar(WindowsMixin, InterruptibleMixin,
+ DownloadProgressMixin, _BaseBar):
file = sys.stdout
message = "%(percent)d%%"
suffix = "%(downloaded)s %(download_speed)s %(pretty_eta)s"
-class DownloadProgressSpinner(WindowsMixin, DownloadProgressMixin,
- WritelnMixin, Spinner):
+class DownloadProgressSpinner(WindowsMixin, InterruptibleMixin,
+ DownloadProgressMixin, WritelnMixin, Spinner):
file = sys.stdout
suffix = "%(downloaded)s %(download_speed)s"
|
{"golden_diff": "diff --git a/pip/utils/ui.py b/pip/utils/ui.py\n--- a/pip/utils/ui.py\n+++ b/pip/utils/ui.py\n@@ -3,6 +3,7 @@\n \n import itertools\n import sys\n+from signal import signal, SIGINT, default_int_handler\n \n from pip.compat import WINDOWS\n from pip.utils import format_size\n@@ -50,6 +51,61 @@\n _BaseBar = _select_progress_class(IncrementalBar, Bar)\n \n \n+class InterruptibleMixin(object):\n+ \"\"\"\n+ Helper to ensure that self.finish() gets called on keyboard interrupt.\n+\n+ This allows downloads to be interrupted without leaving temporary state\n+ (like hidden cursors) behind.\n+\n+ This class is similar to the progress library's existing SigIntMixin\n+ helper, but as of version 1.2, that helper has the following problems:\n+\n+ 1. It calls sys.exit().\n+ 2. It discards the existing SIGINT handler completely.\n+ 3. It leaves its own handler in place even after an uninterrupted finish,\n+ which will have unexpected delayed effects if the user triggers an\n+ unrelated keyboard interrupt some time after a progress-displaying\n+ download has already completed, for example.\n+ \"\"\"\n+\n+ def __init__(self, *args, **kwargs):\n+ \"\"\"\n+ Save the original SIGINT handler for later.\n+ \"\"\"\n+ super(InterruptibleMixin, self).__init__(*args, **kwargs)\n+\n+ self.original_handler = signal(SIGINT, self.handle_sigint)\n+\n+ # If signal() returns None, the previous handler was not installed from\n+ # Python, and we cannot restore it. This probably should not happen,\n+ # but if it does, we must restore something sensible instead, at least.\n+ # The least bad option should be Python's default SIGINT handler, which\n+ # just raises KeyboardInterrupt.\n+ if self.original_handler is None:\n+ self.original_handler = default_int_handler\n+\n+ def finish(self):\n+ \"\"\"\n+ Restore the original SIGINT handler after finishing.\n+\n+ This should happen regardless of whether the progress display finishes\n+ normally, or gets interrupted.\n+ \"\"\"\n+ super(InterruptibleMixin, self).finish()\n+ signal(SIGINT, self.original_handler)\n+\n+ def handle_sigint(self, signum, frame):\n+ \"\"\"\n+ Call self.finish() before delegating to the original SIGINT handler.\n+\n+ This handler should only be in place while the progress display is\n+ active.\n+ \"\"\"\n+ self.finish()\n+ self.original_handler(signum, frame)\n+\n+\n class DownloadProgressMixin(object):\n \n def __init__(self, *args, **kwargs):\n@@ -109,15 +165,16 @@\n self.file.flush = lambda: self.file.wrapped.flush()\n \n \n-class DownloadProgressBar(WindowsMixin, DownloadProgressMixin, _BaseBar):\n+class DownloadProgressBar(WindowsMixin, InterruptibleMixin,\n+ DownloadProgressMixin, _BaseBar):\n \n file = sys.stdout\n message = \"%(percent)d%%\"\n suffix = \"%(downloaded)s %(download_speed)s %(pretty_eta)s\"\n \n \n-class DownloadProgressSpinner(WindowsMixin, DownloadProgressMixin,\n- WritelnMixin, Spinner):\n+class DownloadProgressSpinner(WindowsMixin, InterruptibleMixin,\n+ DownloadProgressMixin, WritelnMixin, Spinner):\n \n file = sys.stdout\n suffix = \"%(downloaded)s %(download_speed)s\"\n", "issue": "Interrupting a pip download with CTRL-C does not unhide the cursor\nVersion: pep 6.0.8\n\nPressing CTRL-C while pip is downloading a package (and displaying its progress bar) shows an \"Operation cancelled by user\" message and drops the user back to the shell prompt, but does not unhide the cursor that was hidden while displaying the progress bar.\n\nGlancing at the code, it looks like the `progress` library already offers a 
`SigIntMixin` helper for handling this, and that pip's progress bar isn't using it. Maybe including this mixin in the appropriate place(s) is all that's needed?\n\n", "before_files": [{"content": "from __future__ import absolute_import\nfrom __future__ import division\n\nimport itertools\nimport sys\n\nfrom pip.compat import WINDOWS\nfrom pip.utils import format_size\nfrom pip.utils.logging import get_indentation\nfrom pip._vendor import six\nfrom pip._vendor.progress.bar import Bar, IncrementalBar\nfrom pip._vendor.progress.helpers import WritelnMixin\nfrom pip._vendor.progress.spinner import Spinner\n\ntry:\n from pip._vendor import colorama\n# Lots of different errors can come from this, including SystemError and\n# ImportError.\nexcept Exception:\n colorama = None\n\n\ndef _select_progress_class(preferred, fallback):\n encoding = getattr(preferred.file, \"encoding\", None)\n\n # If we don't know what encoding this file is in, then we'll just assume\n # that it doesn't support unicode and use the ASCII bar.\n if not encoding:\n return fallback\n\n # Collect all of the possible characters we want to use with the preferred\n # bar.\n characters = [\n getattr(preferred, \"empty_fill\", six.text_type()),\n getattr(preferred, \"fill\", six.text_type()),\n ]\n characters += list(getattr(preferred, \"phases\", []))\n\n # Try to decode the characters we're using for the bar using the encoding\n # of the given file, if this works then we'll assume that we can use the\n # fancier bar and if not we'll fall back to the plaintext bar.\n try:\n six.text_type().join(characters).encode(encoding)\n except UnicodeEncodeError:\n return fallback\n else:\n return preferred\n\n\n_BaseBar = _select_progress_class(IncrementalBar, Bar)\n\n\nclass DownloadProgressMixin(object):\n\n def __init__(self, *args, **kwargs):\n super(DownloadProgressMixin, self).__init__(*args, **kwargs)\n self.message = (\" \" * (get_indentation() + 2)) + self.message\n\n @property\n def downloaded(self):\n return format_size(self.index)\n\n @property\n def download_speed(self):\n # Avoid zero division errors...\n if self.avg == 0.0:\n return \"...\"\n return format_size(1 / self.avg) + \"/s\"\n\n @property\n def pretty_eta(self):\n if self.eta:\n return \"eta %s\" % self.eta_td\n return \"\"\n\n def iter(self, it, n=1):\n for x in it:\n yield x\n self.next(n)\n self.finish()\n\n\nclass WindowsMixin(object):\n\n def __init__(self, *args, **kwargs):\n # The Windows terminal does not support the hide/show cursor ANSI codes\n # even with colorama. So we'll ensure that hide_cursor is False on\n # Windows.\n # This call neds to go before the super() call, so that hide_cursor\n # is set in time. 
The base progress bar class writes the \"hide cursor\"\n # code to the terminal in its init, so if we don't set this soon\n # enough, we get a \"hide\" with no corresponding \"show\"...\n if WINDOWS and self.hide_cursor:\n self.hide_cursor = False\n\n super(WindowsMixin, self).__init__(*args, **kwargs)\n\n # Check if we are running on Windows and we have the colorama module,\n # if we do then wrap our file with it.\n if WINDOWS and colorama:\n self.file = colorama.AnsiToWin32(self.file)\n # The progress code expects to be able to call self.file.isatty()\n # but the colorama.AnsiToWin32() object doesn't have that, so we'll\n # add it.\n self.file.isatty = lambda: self.file.wrapped.isatty()\n # The progress code expects to be able to call self.file.flush()\n # but the colorama.AnsiToWin32() object doesn't have that, so we'll\n # add it.\n self.file.flush = lambda: self.file.wrapped.flush()\n\n\nclass DownloadProgressBar(WindowsMixin, DownloadProgressMixin, _BaseBar):\n\n file = sys.stdout\n message = \"%(percent)d%%\"\n suffix = \"%(downloaded)s %(download_speed)s %(pretty_eta)s\"\n\n\nclass DownloadProgressSpinner(WindowsMixin, DownloadProgressMixin,\n WritelnMixin, Spinner):\n\n file = sys.stdout\n suffix = \"%(downloaded)s %(download_speed)s\"\n\n def next_phase(self):\n if not hasattr(self, \"_phaser\"):\n self._phaser = itertools.cycle(self.phases)\n return next(self._phaser)\n\n def update(self):\n message = self.message % self\n phase = self.next_phase()\n suffix = self.suffix % self\n line = ''.join([\n message,\n \" \" if message else \"\",\n phase,\n \" \" if suffix else \"\",\n suffix,\n ])\n\n self.writeln(line)\n", "path": "pip/utils/ui.py"}]}
| 2,061 | 763 |
gh_patches_debug_17530 | rasdani/github-patches | git_diff | biopython__biopython-2513 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Remove use of Bio._py3k (Python 2 / 3 compatibility)
As of Biopython 1.76 (released December 2019), we are dropping Python 2 support and focusing on Python 3.6 or later. This means we no longer need our (internal) Python 2 vs 3 compatibility library ``Bio._py3k`` (which is a bit like the third party library ``six``).
This issue is suitable and specifically targeting first time contributors.
There are lots of cases:
```
$ grep _py3k Bio*/*.py Bio/*/*.py Bio/*/*/*.py
Bio/File.py:from Bio._py3k import basestring
Bio/MarkovModel.py: from Bio._py3k import StringIO
Bio/Seq.py:from Bio._py3k import range
Bio/Seq.py:from Bio._py3k import basestring
...
```
Example One
------------
Taking the first example, ``from Bio._py3k import basestring`` we see that this is defined under Python 3 as an alias of ``str``:
https://github.com/biopython/biopython/blob/biopython-176/Bio/_py3k/__init__.py#L56
```python
# Lots of our Python 2 code uses isinstance(x, basestring)
# which after 2to3 becomes isinstance(x, str)
basestring = str
unicode = str
```
Therefore the fix for ``Bio/File.py`` is to remove the ``from Bio._py3k import basestring`` line, and update where ``basestring`` was used to instead use ``str``, which in this case means editing one line:
```python
if isinstance(handleish, basestring):
```
with:
```python
if isinstance(handleish, str):
```
Example Two
------------
Taking the second example, ``Bio/MarkovModel.py`` has ``from Bio._py3k import StringIO`` which we find on Python 3 can just be replaced with ``from io import StringIO``
https://github.com/biopython/biopython/blob/biopython-176/Bio/_py3k/__init__.py#L130
Contributing
-----------
Could any newcomer wanting to work on this first comment on this issue saying which file(s) they are going to start with (e.g. ``Bio/File.py``, or ``BioSQL/``) to avoid duplication of effort.
(*Update: The consensus was to switch to be function or constant instead, since they generally require the same technique/fix each time*)
Then read https://github.com/biopython/biopython/blob/master/CONTRIBUTING.rst and setup ``flake8`` on your machine.
Then make a pull request making the necessary changes so that those files no longer import from ``Bio._py3k``. Once that's done, you could pick some more to work on.
Eventually there will be nothing using ``Bio._py3k`` and that code itself can be removed, and this issue closed.
</issue>
<code>
[start of Bio/_py3k/__init__.py]
1 # Copyright 2010-2018 by Peter Cock. All rights reserved.
2 #
3 # This file is part of the Biopython distribution and governed by your
4 # choice of the "Biopython License Agreement" or the "BSD 3-Clause License".
5 # Please see the LICENSE file that should have been included as part of this
6 # package.
7 """Python 3 compatibility tools (PRIVATE).
8
9 Once we drop support for Python 2, the whole of Bio._py3k will
10 go away.
11 """
12
13 # From the point of view of pep8 and flake8, there are lots of issues with
14 # this file. This line tells flake8 to ignore it for quality assurance:
15 # flake8: noqa
16
17 import sys
18
19 import codecs
20
21
22 def _bytes_bytearray_to_str(s):
23 """If s is bytes or bytearray, convert to a unicode string (PRIVATE)."""
24 if isinstance(s, (bytes, bytearray)):
25 return s.decode()
26 return s
27
28
29 import io
30 import locale
31
32 # Python 3.4 onwards, the standard library wrappers should work:
33 def _binary_to_string_handle(handle):
34 """Treat a binary (bytes) handle like a text (unicode) handle (PRIVATE)."""
35 try:
36 # If this is a network handle from urllib,
37 # the HTTP headers may tell us the encoding.
38 encoding = handle.headers.get_content_charset()
39 except AttributeError:
40 encoding = None
41 if encoding is None:
42 # The W3C recommendation is:
43 # When no explicit charset parameter is provided by the sender,
44 # media subtypes of the "text" type are defined to have a default
45 # charset value of "ISO-8859-1" when received via HTTP.
46 # "ISO-8859-1" is also known as 'latin-1'
47 # See the following for more detail:
48 # https://www.w3.org/Protocols/rfc2616/rfc2616-sec3.html#sec3.7.1
49 encoding = "latin-1"
50 wrapped = io.TextIOWrapper(io.BufferedReader(handle), encoding=encoding)
51 try:
52 # If wrapping an online handle, this is nice to have:
53 wrapped.url = handle.url
54 except AttributeError:
55 pass
56 return wrapped
57
58
59 # On Python 3 urllib, urllib2, and urlparse were merged:
60 from urllib.request import urlopen, Request, urlparse, urlcleanup
61 from urllib.parse import urlencode, quote
62 from urllib.error import URLError, HTTPError
63
[end of Bio/_py3k/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/Bio/_py3k/__init__.py b/Bio/_py3k/__init__.py
--- a/Bio/_py3k/__init__.py
+++ b/Bio/_py3k/__init__.py
@@ -14,20 +14,7 @@
# this file. This line tells flake8 to ignore it for quality assurance:
# flake8: noqa
-import sys
-
-import codecs
-
-
-def _bytes_bytearray_to_str(s):
- """If s is bytes or bytearray, convert to a unicode string (PRIVATE)."""
- if isinstance(s, (bytes, bytearray)):
- return s.decode()
- return s
-
-
import io
-import locale
# Python 3.4 onwards, the standard library wrappers should work:
def _binary_to_string_handle(handle):
@@ -54,9 +41,3 @@
except AttributeError:
pass
return wrapped
-
-
-# On Python 3 urllib, urllib2, and urlparse were merged:
-from urllib.request import urlopen, Request, urlparse, urlcleanup
-from urllib.parse import urlencode, quote
-from urllib.error import URLError, HTTPError
|
{"golden_diff": "diff --git a/Bio/_py3k/__init__.py b/Bio/_py3k/__init__.py\n--- a/Bio/_py3k/__init__.py\n+++ b/Bio/_py3k/__init__.py\n@@ -14,20 +14,7 @@\n # this file. This line tells flake8 to ignore it for quality assurance:\n # flake8: noqa\n \n-import sys\n-\n-import codecs\n-\n-\n-def _bytes_bytearray_to_str(s):\n- \"\"\"If s is bytes or bytearray, convert to a unicode string (PRIVATE).\"\"\"\n- if isinstance(s, (bytes, bytearray)):\n- return s.decode()\n- return s\n-\n-\n import io\n-import locale\n \n # Python 3.4 onwards, the standard library wrappers should work:\n def _binary_to_string_handle(handle):\n@@ -54,9 +41,3 @@\n except AttributeError:\n pass\n return wrapped\n-\n-\n-# On Python 3 urllib, urllib2, and urlparse were merged:\n-from urllib.request import urlopen, Request, urlparse, urlcleanup\n-from urllib.parse import urlencode, quote\n-from urllib.error import URLError, HTTPError\n", "issue": "Remove use of Bio._py3k (Python 2 / 3 compatibility)\nAs of Biopython 1.76 (released December 2019), we are dropping Python 2 support and focusing on Python 3.6 or later. This means we no longer need our (internal) Python 2 vs 3 compatibility library ``Bio._py3k`` (which is a bit like the third party library ``six``).\r\n\r\nThis issue is suitable and specifically targeting first time contributors.\r\n\r\nThere are lots of cases:\r\n\r\n```\r\n$ grep _py3k Bio*/*.py Bio/*/*.py Bio/*/*/*.py\r\nBio/File.py:from Bio._py3k import basestring\r\nBio/MarkovModel.py: from Bio._py3k import StringIO\r\nBio/Seq.py:from Bio._py3k import range\r\nBio/Seq.py:from Bio._py3k import basestring\r\n...\r\n```\r\n\r\nExample One\r\n------------\r\n\r\nTaking the first example, ``from Bio._py3k import basestring`` we see that this is defined under Python 3 as an alias of ``str``:\r\n\r\nhttps://github.com/biopython/biopython/blob/biopython-176/Bio/_py3k/__init__.py#L56\r\n\r\n```python\r\n # Lots of our Python 2 code uses isinstance(x, basestring)\r\n # which after 2to3 becomes isinstance(x, str)\r\n basestring = str\r\n unicode = str\r\n```\r\n\r\nTherefore the fix for ``Bio/File.py`` is to remove the ``from Bio._py3k import basestring`` line, and update where ``basestring`` was used to instead use ``str``, which in this case means editing one line:\r\n\r\n```python\r\nif isinstance(handleish, basestring):\r\n```\r\n\r\nwith:\r\n\r\n```python\r\nif isinstance(handleish, str):\r\n```\r\n\r\nExample Two\r\n------------\r\n\r\nTaking the second example, ``Bio/MarkovModel.py`` has ``from Bio._py3k import StringIO`` which we find on Python 3 can just be replaced with ``from io import StringIO``\r\n\r\nhttps://github.com/biopython/biopython/blob/biopython-176/Bio/_py3k/__init__.py#L130\r\n\r\nContributing\r\n-----------\r\n\r\nCould any newcomer wanting to work on this first comment on this issue saying which file(s) they are going to start with (e.g. ``Bio/File.py``, or ``BioSQL/``) to avoid duplication of effort.\r\n\r\n(*Update: The consensus was to switch to be function or constant instead, since they generally require the same technique/fix each time*)\r\n\r\nThen read https://github.com/biopython/biopython/blob/master/CONTRIBUTING.rst and setup ``flake8`` on your machine.\r\n\r\nThen make a pull request making the necessary changes so that those files no longer import from ``Bio._py3k``. 
Once that's done, you could pick some more to work on.\r\n\r\nEventually there will be nothing using ``Bio._py3k`` and that code itself can be removed, and this issue closed.\n", "before_files": [{"content": "# Copyright 2010-2018 by Peter Cock. All rights reserved.\n#\n# This file is part of the Biopython distribution and governed by your\n# choice of the \"Biopython License Agreement\" or the \"BSD 3-Clause License\".\n# Please see the LICENSE file that should have been included as part of this\n# package.\n\"\"\"Python 3 compatibility tools (PRIVATE).\n\nOnce we drop support for Python 2, the whole of Bio._py3k will\ngo away.\n\"\"\"\n\n# From the point of view of pep8 and flake8, there are lots of issues with\n# this file. This line tells flake8 to ignore it for quality assurance:\n# flake8: noqa\n\nimport sys\n\nimport codecs\n\n\ndef _bytes_bytearray_to_str(s):\n \"\"\"If s is bytes or bytearray, convert to a unicode string (PRIVATE).\"\"\"\n if isinstance(s, (bytes, bytearray)):\n return s.decode()\n return s\n\n\nimport io\nimport locale\n\n# Python 3.4 onwards, the standard library wrappers should work:\ndef _binary_to_string_handle(handle):\n \"\"\"Treat a binary (bytes) handle like a text (unicode) handle (PRIVATE).\"\"\"\n try:\n # If this is a network handle from urllib,\n # the HTTP headers may tell us the encoding.\n encoding = handle.headers.get_content_charset()\n except AttributeError:\n encoding = None\n if encoding is None:\n # The W3C recommendation is:\n # When no explicit charset parameter is provided by the sender,\n # media subtypes of the \"text\" type are defined to have a default\n # charset value of \"ISO-8859-1\" when received via HTTP.\n # \"ISO-8859-1\" is also known as 'latin-1'\n # See the following for more detail:\n # https://www.w3.org/Protocols/rfc2616/rfc2616-sec3.html#sec3.7.1\n encoding = \"latin-1\"\n wrapped = io.TextIOWrapper(io.BufferedReader(handle), encoding=encoding)\n try:\n # If wrapping an online handle, this is nice to have:\n wrapped.url = handle.url\n except AttributeError:\n pass\n return wrapped\n\n\n# On Python 3 urllib, urllib2, and urlparse were merged:\nfrom urllib.request import urlopen, Request, urlparse, urlcleanup\nfrom urllib.parse import urlencode, quote\nfrom urllib.error import URLError, HTTPError\n", "path": "Bio/_py3k/__init__.py"}]}
| 1,861 | 252 |
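For illustration, here is a minimal, self-contained Python sketch of the `basestring` → `str` migration pattern described in the biopython record above. The `read_handleish` helper and its file-reading logic are hypothetical stand-ins, not Biopython's actual `Bio.File` code; only the substitution of built-in `str` and `io.StringIO` for the `Bio._py3k` aliases reflects the fix itself.

```python
# Minimal sketch of the Py2 -> Py3 migration pattern from the record above.
# `read_handleish` is a hypothetical helper, not Biopython's real Bio.File code;
# it only illustrates replacing `basestring` / `Bio._py3k.StringIO` with the
# built-in `str` and `io.StringIO` on Python 3.
from io import StringIO  # replaces: from Bio._py3k import StringIO


def read_handleish(handleish):
    """Return text from either a filename (str) or an already-open handle."""
    # Python 2 code used: isinstance(handleish, basestring)
    if isinstance(handleish, str):
        with open(handleish) as handle:
            return handle.read()
    return handleish.read()


if __name__ == "__main__":
    # Works with an in-memory handle as well as a path given as a str.
    print(read_handleish(StringIO("ACGT\n")))
```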
gh_patches_debug_8524 | rasdani/github-patches | git_diff | ethereum__web3.py-864 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ValueError when using IPCProvider
* Version: 4.2.1
* Python: 2.7/3.4/3.5
* OS: osx/linux/win
### What was wrong?
When using IPCProvider sometimes ValueError is encountered if the response is incomplete. Looks like in this pull request https://github.com/ethereum/web3.py/pull/785/files the author forgot to add `except ValueError` to IPCProvider https://github.com/ethereum/web3.py/blob/master/web3/providers/ipc.py#L176
### How can it be fixed?
Add `except ValueError` to IPCProvider.make_request https://github.com/ethereum/web3.py/blob/master/web3/providers/ipc.py#L176
</issue>
<code>
[start of web3/utils/encoding.py]
1 # String encodings and numeric representations
2 import json
3 import re
4
5 from eth_utils import (
6 add_0x_prefix,
7 big_endian_to_int,
8 decode_hex,
9 encode_hex,
10 int_to_big_endian,
11 is_boolean,
12 is_bytes,
13 is_hex,
14 is_integer,
15 is_list_like,
16 remove_0x_prefix,
17 to_hex,
18 )
19
20 from web3.utils.abi import (
21 is_address_type,
22 is_array_type,
23 is_bool_type,
24 is_bytes_type,
25 is_int_type,
26 is_string_type,
27 is_uint_type,
28 size_of_type,
29 sub_type_of_array_type,
30 )
31 from web3.utils.toolz import (
32 curry,
33 )
34 from web3.utils.validation import (
35 assert_one_val,
36 validate_abi_type,
37 validate_abi_value,
38 )
39
40
41 def hex_encode_abi_type(abi_type, value, force_size=None):
42 """
43 Encodes value into a hex string in format of abi_type
44 """
45 validate_abi_type(abi_type)
46 validate_abi_value(abi_type, value)
47
48 data_size = force_size or size_of_type(abi_type)
49 if is_array_type(abi_type):
50 sub_type = sub_type_of_array_type(abi_type)
51 return "".join([remove_0x_prefix(hex_encode_abi_type(sub_type, v, 256)) for v in value])
52 elif is_bool_type(abi_type):
53 return to_hex_with_size(value, data_size)
54 elif is_uint_type(abi_type):
55 return to_hex_with_size(value, data_size)
56 elif is_int_type(abi_type):
57 return to_hex_twos_compliment(value, data_size)
58 elif is_address_type(abi_type):
59 return pad_hex(value, data_size)
60 elif is_bytes_type(abi_type):
61 if is_bytes(value):
62 return encode_hex(value)
63 else:
64 return value
65 elif is_string_type(abi_type):
66 return to_hex(text=value)
67 else:
68 raise ValueError(
69 "Unsupported ABI type: {0}".format(abi_type)
70 )
71
72
73 def to_hex_twos_compliment(value, bit_size):
74 """
75 Converts integer value to twos compliment hex representation with given bit_size
76 """
77 if value >= 0:
78 return to_hex_with_size(value, bit_size)
79
80 value = (1 << bit_size) + value
81 hex_value = hex(value)
82 hex_value = hex_value.rstrip("L")
83 return hex_value
84
85
86 def to_hex_with_size(value, bit_size):
87 """
88 Converts a value to hex with given bit_size:
89 """
90 return pad_hex(to_hex(value), bit_size)
91
92
93 def pad_hex(value, bit_size):
94 """
95 Pads a hex string up to the given bit_size
96 """
97 value = remove_0x_prefix(value)
98 return add_0x_prefix(value.zfill(int(bit_size / 4)))
99
100
101 def trim_hex(hexstr):
102 if hexstr.startswith('0x0'):
103 hexstr = re.sub('^0x0+', '0x', hexstr)
104 if hexstr == '0x':
105 hexstr = '0x0'
106 return hexstr
107
108
109 def to_int(value=None, hexstr=None, text=None):
110 """
111 Converts value to it's integer representation.
112
113 Values are converted this way:
114
115 * value:
116 * bytes: big-endian integer
117 * bool: True => 1, False => 0
118 * hexstr: interpret hex as integer
119 * text: interpret as string of digits, like '12' => 12
120 """
121 assert_one_val(value, hexstr=hexstr, text=text)
122
123 if hexstr is not None:
124 return int(hexstr, 16)
125 elif text is not None:
126 return int(text)
127 elif isinstance(value, bytes):
128 return big_endian_to_int(value)
129 elif isinstance(value, str):
130 raise TypeError("Pass in strings with keyword hexstr or text")
131 else:
132 return int(value)
133
134
135 @curry
136 def pad_bytes(fill_with, num_bytes, unpadded):
137 return unpadded.rjust(num_bytes, fill_with)
138
139
140 zpad_bytes = pad_bytes(b'\0')
141
142
143 def to_bytes(primitive=None, hexstr=None, text=None):
144 assert_one_val(primitive, hexstr=hexstr, text=text)
145
146 if is_boolean(primitive):
147 return b'\x01' if primitive else b'\x00'
148 elif isinstance(primitive, bytes):
149 return primitive
150 elif is_integer(primitive):
151 return to_bytes(hexstr=to_hex(primitive))
152 elif hexstr is not None:
153 if len(hexstr) % 2:
154 hexstr = '0x0' + remove_0x_prefix(hexstr)
155 return decode_hex(hexstr)
156 elif text is not None:
157 return text.encode('utf-8')
158 raise TypeError("expected an int in first arg, or keyword of hexstr or text")
159
160
161 def to_text(primitive=None, hexstr=None, text=None):
162 assert_one_val(primitive, hexstr=hexstr, text=text)
163
164 if hexstr is not None:
165 return to_bytes(hexstr=hexstr).decode('utf-8')
166 elif text is not None:
167 return text
168 elif isinstance(primitive, str):
169 return to_text(hexstr=primitive)
170 elif isinstance(primitive, bytes):
171 return primitive.decode('utf-8')
172 elif is_integer(primitive):
173 byte_encoding = int_to_big_endian(primitive)
174 return to_text(byte_encoding)
175 raise TypeError("Expected an int, bytes or hexstr.")
176
177
178 @curry
179 def text_if_str(to_type, text_or_primitive):
180 '''
181 Convert to a type, assuming that strings can be only unicode text (not a hexstr)
182
183 @param to_type is a function that takes the arguments (primitive, hexstr=hexstr, text=text),
184 eg~ to_bytes, to_text, to_hex, to_int, etc
185 @param hexstr_or_primitive in bytes, str, or int.
186 '''
187 if isinstance(text_or_primitive, str):
188 (primitive, text) = (None, text_or_primitive)
189 else:
190 (primitive, text) = (text_or_primitive, None)
191 return to_type(primitive, text=text)
192
193
194 @curry
195 def hexstr_if_str(to_type, hexstr_or_primitive):
196 '''
197 Convert to a type, assuming that strings can be only hexstr (not unicode text)
198
199 @param to_type is a function that takes the arguments (primitive, hexstr=hexstr, text=text),
200 eg~ to_bytes, to_text, to_hex, to_int, etc
201 @param text_or_primitive in bytes, str, or int.
202 '''
203 if isinstance(hexstr_or_primitive, str):
204 (primitive, hexstr) = (None, hexstr_or_primitive)
205 if remove_0x_prefix(hexstr) and not is_hex(hexstr):
206 raise ValueError(
207 "when sending a str, it must be a hex string. Got: {0!r}".format(
208 hexstr_or_primitive,
209 )
210 )
211 else:
212 (primitive, hexstr) = (hexstr_or_primitive, None)
213 return to_type(primitive, hexstr=hexstr)
214
215
216 class FriendlyJsonSerde:
217 '''
218 Friendly JSON serializer & deserializer
219
220 When encoding or decoding fails, this class collects
221 information on which fields failed, to show more
222 helpful information in the raised error messages.
223 '''
224 def _json_mapping_errors(self, mapping):
225 for key, val in mapping.items():
226 try:
227 self._friendly_json_encode(val)
228 except TypeError as exc:
229 yield "%r: because (%s)" % (key, exc)
230
231 def _json_list_errors(self, iterable):
232 for index, element in enumerate(iterable):
233 try:
234 self._friendly_json_encode(element)
235 except TypeError as exc:
236 yield "%d: because (%s)" % (index, exc)
237
238 def _friendly_json_encode(self, obj):
239 try:
240 encoded = json.dumps(obj)
241 return encoded
242 except TypeError as full_exception:
243 if hasattr(obj, 'items'):
244 item_errors = '; '.join(self._json_mapping_errors(obj))
245 raise TypeError("dict had unencodable value at keys: {{{}}}".format(item_errors))
246 elif is_list_like(obj):
247 element_errors = '; '.join(self._json_list_errors(obj))
248 raise TypeError("list had unencodable value at index: [{}]".format(element_errors))
249 else:
250 raise full_exception
251
252 def json_decode(self, json_str):
253 try:
254 decoded = json.loads(json_str)
255 return decoded
256 except json.decoder.JSONDecodeError as exc:
257 err_msg = 'Could not decode {} because of {}.'.format(repr(json_str), exc)
258 raise ValueError(err_msg)
259
260 def json_encode(self, obj):
261 try:
262 return self._friendly_json_encode(obj)
263 except TypeError as exc:
264 raise TypeError("Could not encode to JSON: {}".format(exc))
265
266
267 def to_4byte_hex(hex_or_str_or_bytes):
268 size_of_4bytes = 4 * 8
269 byte_str = hexstr_if_str(to_bytes, hex_or_str_or_bytes)
270 if len(byte_str) > 4:
271 raise ValueError(
272 'expected value of size 4 bytes. Got: %d bytes' % len(byte_str)
273 )
274 hex_str = encode_hex(byte_str)
275 return pad_hex(hex_str, size_of_4bytes)
276
[end of web3/utils/encoding.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/web3/utils/encoding.py b/web3/utils/encoding.py
--- a/web3/utils/encoding.py
+++ b/web3/utils/encoding.py
@@ -255,7 +255,9 @@
return decoded
except json.decoder.JSONDecodeError as exc:
err_msg = 'Could not decode {} because of {}.'.format(repr(json_str), exc)
- raise ValueError(err_msg)
+ # Calling code may rely on catching JSONDecodeError to recognize bad json
+ # so we have to re-raise the same type.
+ raise json.decoder.JSONDecodeError(err_msg, exc.doc, exc.pos)
def json_encode(self, obj):
try:
|
{"golden_diff": "diff --git a/web3/utils/encoding.py b/web3/utils/encoding.py\n--- a/web3/utils/encoding.py\n+++ b/web3/utils/encoding.py\n@@ -255,7 +255,9 @@\n return decoded\n except json.decoder.JSONDecodeError as exc:\n err_msg = 'Could not decode {} because of {}.'.format(repr(json_str), exc)\n- raise ValueError(err_msg)\n+ # Calling code may rely on catching JSONDecodeError to recognize bad json\n+ # so we have to re-raise the same type.\n+ raise json.decoder.JSONDecodeError(err_msg, exc.doc, exc.pos)\n \n def json_encode(self, obj):\n try:\n", "issue": "ValueError when using IPCProvider\n* Version: 4.2.1\r\n* Python: 2.7/3.4/3.5\r\n* OS: osx/linux/win\r\n\r\n\r\n### What was wrong?\r\n\r\nWhen using IPCProvider sometimes ValueError is encountered if the response is incomplete. Looks like in this pull request https://github.com/ethereum/web3.py/pull/785/files the author forgot to add `except ValueError` to IPCProvider https://github.com/ethereum/web3.py/blob/master/web3/providers/ipc.py#L176\r\n\r\n### How can it be fixed?\r\n\r\nAdd `except ValueError` to IPCProvider.make_request https://github.com/ethereum/web3.py/blob/master/web3/providers/ipc.py#L176\r\n\n", "before_files": [{"content": "# String encodings and numeric representations\nimport json\nimport re\n\nfrom eth_utils import (\n add_0x_prefix,\n big_endian_to_int,\n decode_hex,\n encode_hex,\n int_to_big_endian,\n is_boolean,\n is_bytes,\n is_hex,\n is_integer,\n is_list_like,\n remove_0x_prefix,\n to_hex,\n)\n\nfrom web3.utils.abi import (\n is_address_type,\n is_array_type,\n is_bool_type,\n is_bytes_type,\n is_int_type,\n is_string_type,\n is_uint_type,\n size_of_type,\n sub_type_of_array_type,\n)\nfrom web3.utils.toolz import (\n curry,\n)\nfrom web3.utils.validation import (\n assert_one_val,\n validate_abi_type,\n validate_abi_value,\n)\n\n\ndef hex_encode_abi_type(abi_type, value, force_size=None):\n \"\"\"\n Encodes value into a hex string in format of abi_type\n \"\"\"\n validate_abi_type(abi_type)\n validate_abi_value(abi_type, value)\n\n data_size = force_size or size_of_type(abi_type)\n if is_array_type(abi_type):\n sub_type = sub_type_of_array_type(abi_type)\n return \"\".join([remove_0x_prefix(hex_encode_abi_type(sub_type, v, 256)) for v in value])\n elif is_bool_type(abi_type):\n return to_hex_with_size(value, data_size)\n elif is_uint_type(abi_type):\n return to_hex_with_size(value, data_size)\n elif is_int_type(abi_type):\n return to_hex_twos_compliment(value, data_size)\n elif is_address_type(abi_type):\n return pad_hex(value, data_size)\n elif is_bytes_type(abi_type):\n if is_bytes(value):\n return encode_hex(value)\n else:\n return value\n elif is_string_type(abi_type):\n return to_hex(text=value)\n else:\n raise ValueError(\n \"Unsupported ABI type: {0}\".format(abi_type)\n )\n\n\ndef to_hex_twos_compliment(value, bit_size):\n \"\"\"\n Converts integer value to twos compliment hex representation with given bit_size\n \"\"\"\n if value >= 0:\n return to_hex_with_size(value, bit_size)\n\n value = (1 << bit_size) + value\n hex_value = hex(value)\n hex_value = hex_value.rstrip(\"L\")\n return hex_value\n\n\ndef to_hex_with_size(value, bit_size):\n \"\"\"\n Converts a value to hex with given bit_size:\n \"\"\"\n return pad_hex(to_hex(value), bit_size)\n\n\ndef pad_hex(value, bit_size):\n \"\"\"\n Pads a hex string up to the given bit_size\n \"\"\"\n value = remove_0x_prefix(value)\n return add_0x_prefix(value.zfill(int(bit_size / 4)))\n\n\ndef trim_hex(hexstr):\n if hexstr.startswith('0x0'):\n hexstr = 
re.sub('^0x0+', '0x', hexstr)\n if hexstr == '0x':\n hexstr = '0x0'\n return hexstr\n\n\ndef to_int(value=None, hexstr=None, text=None):\n \"\"\"\n Converts value to it's integer representation.\n\n Values are converted this way:\n\n * value:\n * bytes: big-endian integer\n * bool: True => 1, False => 0\n * hexstr: interpret hex as integer\n * text: interpret as string of digits, like '12' => 12\n \"\"\"\n assert_one_val(value, hexstr=hexstr, text=text)\n\n if hexstr is not None:\n return int(hexstr, 16)\n elif text is not None:\n return int(text)\n elif isinstance(value, bytes):\n return big_endian_to_int(value)\n elif isinstance(value, str):\n raise TypeError(\"Pass in strings with keyword hexstr or text\")\n else:\n return int(value)\n\n\n@curry\ndef pad_bytes(fill_with, num_bytes, unpadded):\n return unpadded.rjust(num_bytes, fill_with)\n\n\nzpad_bytes = pad_bytes(b'\\0')\n\n\ndef to_bytes(primitive=None, hexstr=None, text=None):\n assert_one_val(primitive, hexstr=hexstr, text=text)\n\n if is_boolean(primitive):\n return b'\\x01' if primitive else b'\\x00'\n elif isinstance(primitive, bytes):\n return primitive\n elif is_integer(primitive):\n return to_bytes(hexstr=to_hex(primitive))\n elif hexstr is not None:\n if len(hexstr) % 2:\n hexstr = '0x0' + remove_0x_prefix(hexstr)\n return decode_hex(hexstr)\n elif text is not None:\n return text.encode('utf-8')\n raise TypeError(\"expected an int in first arg, or keyword of hexstr or text\")\n\n\ndef to_text(primitive=None, hexstr=None, text=None):\n assert_one_val(primitive, hexstr=hexstr, text=text)\n\n if hexstr is not None:\n return to_bytes(hexstr=hexstr).decode('utf-8')\n elif text is not None:\n return text\n elif isinstance(primitive, str):\n return to_text(hexstr=primitive)\n elif isinstance(primitive, bytes):\n return primitive.decode('utf-8')\n elif is_integer(primitive):\n byte_encoding = int_to_big_endian(primitive)\n return to_text(byte_encoding)\n raise TypeError(\"Expected an int, bytes or hexstr.\")\n\n\n@curry\ndef text_if_str(to_type, text_or_primitive):\n '''\n Convert to a type, assuming that strings can be only unicode text (not a hexstr)\n\n @param to_type is a function that takes the arguments (primitive, hexstr=hexstr, text=text),\n eg~ to_bytes, to_text, to_hex, to_int, etc\n @param hexstr_or_primitive in bytes, str, or int.\n '''\n if isinstance(text_or_primitive, str):\n (primitive, text) = (None, text_or_primitive)\n else:\n (primitive, text) = (text_or_primitive, None)\n return to_type(primitive, text=text)\n\n\n@curry\ndef hexstr_if_str(to_type, hexstr_or_primitive):\n '''\n Convert to a type, assuming that strings can be only hexstr (not unicode text)\n\n @param to_type is a function that takes the arguments (primitive, hexstr=hexstr, text=text),\n eg~ to_bytes, to_text, to_hex, to_int, etc\n @param text_or_primitive in bytes, str, or int.\n '''\n if isinstance(hexstr_or_primitive, str):\n (primitive, hexstr) = (None, hexstr_or_primitive)\n if remove_0x_prefix(hexstr) and not is_hex(hexstr):\n raise ValueError(\n \"when sending a str, it must be a hex string. 
Got: {0!r}\".format(\n hexstr_or_primitive,\n )\n )\n else:\n (primitive, hexstr) = (hexstr_or_primitive, None)\n return to_type(primitive, hexstr=hexstr)\n\n\nclass FriendlyJsonSerde:\n '''\n Friendly JSON serializer & deserializer\n\n When encoding or decoding fails, this class collects\n information on which fields failed, to show more\n helpful information in the raised error messages.\n '''\n def _json_mapping_errors(self, mapping):\n for key, val in mapping.items():\n try:\n self._friendly_json_encode(val)\n except TypeError as exc:\n yield \"%r: because (%s)\" % (key, exc)\n\n def _json_list_errors(self, iterable):\n for index, element in enumerate(iterable):\n try:\n self._friendly_json_encode(element)\n except TypeError as exc:\n yield \"%d: because (%s)\" % (index, exc)\n\n def _friendly_json_encode(self, obj):\n try:\n encoded = json.dumps(obj)\n return encoded\n except TypeError as full_exception:\n if hasattr(obj, 'items'):\n item_errors = '; '.join(self._json_mapping_errors(obj))\n raise TypeError(\"dict had unencodable value at keys: {{{}}}\".format(item_errors))\n elif is_list_like(obj):\n element_errors = '; '.join(self._json_list_errors(obj))\n raise TypeError(\"list had unencodable value at index: [{}]\".format(element_errors))\n else:\n raise full_exception\n\n def json_decode(self, json_str):\n try:\n decoded = json.loads(json_str)\n return decoded\n except json.decoder.JSONDecodeError as exc:\n err_msg = 'Could not decode {} because of {}.'.format(repr(json_str), exc)\n raise ValueError(err_msg)\n\n def json_encode(self, obj):\n try:\n return self._friendly_json_encode(obj)\n except TypeError as exc:\n raise TypeError(\"Could not encode to JSON: {}\".format(exc))\n\n\ndef to_4byte_hex(hex_or_str_or_bytes):\n size_of_4bytes = 4 * 8\n byte_str = hexstr_if_str(to_bytes, hex_or_str_or_bytes)\n if len(byte_str) > 4:\n raise ValueError(\n 'expected value of size 4 bytes. Got: %d bytes' % len(byte_str)\n )\n hex_str = encode_hex(byte_str)\n return pad_hex(hex_str, size_of_4bytes)\n", "path": "web3/utils/encoding.py"}]}
| 3,476 | 152 |
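The fix shown in the web3.py golden diff above can be exercised in isolation. The sketch below assumes only the standard library and re-implements just the `json_decode` method as a plain function, so the original `FriendlyJsonSerde` class and the rest of web3's encoding module are omitted; the key point is re-raising `json.decoder.JSONDecodeError` (a `ValueError` subclass) so callers that catch either type keep working.

```python
# Standalone sketch of the fix pattern in the golden diff above: keep raising
# json.decoder.JSONDecodeError (so callers catching it still work) while adding
# a friendlier message. This mirrors FriendlyJsonSerde.json_decode but is not
# the full web3 class.
import json


def json_decode(json_str):
    try:
        return json.loads(json_str)
    except json.decoder.JSONDecodeError as exc:
        err_msg = "Could not decode {} because of {}.".format(repr(json_str), exc)
        # Re-raise the same exception type instead of a bare ValueError,
        # reusing the original document and position so callers can inspect them.
        raise json.decoder.JSONDecodeError(err_msg, exc.doc, exc.pos)


if __name__ == "__main__":
    try:
        json_decode("{not valid json")
    except json.decoder.JSONDecodeError as exc:
        print("caught:", exc)
```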
gh_patches_debug_9633 | rasdani/github-patches | git_diff | acl-org__acl-anthology-433 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Create shortened volume links
@mjpost https://aclweb.org/anthology/volumes/sem-2012-the-first-joint-conference-on-lexical-and-computational-semantics-volume-1-proceedings-of-the-main-conference-and-the-shared-task-and-volume-2-proceedings-of-the-sixth-international-workshop-on-semantic-evaluation-semeval-2012/
The URL of the PDF is:
https://www.aclweb.org/anthology/S12-1
For a paper:
https://aclweb.org/anthology/papers/S/S12/S12-1000/
(the title is the same, but the content not).
The PDF:
https://www.aclweb.org/anthology/S12-1000
It would be much easier to reference the first link if it was https://aclweb.org/anthology/papers/S/S12/S12-1/.
_Originally posted by @Aspie96 in https://github.com/acl-org/acl-anthology/issues/170#issuecomment-504191669_
</issue>
<code>
[start of bin/create_hugo_pages.py]
1 #!/usr/bin/env python3
2 # -*- coding: utf-8 -*-
3 #
4 # Copyright 2019 Marcel Bollmann <[email protected]>
5 #
6 # Licensed under the Apache License, Version 2.0 (the "License");
7 # you may not use this file except in compliance with the License.
8 # You may obtain a copy of the License at
9 #
10 # http://www.apache.org/licenses/LICENSE-2.0
11 #
12 # Unless required by applicable law or agreed to in writing, software
13 # distributed under the License is distributed on an "AS IS" BASIS,
14 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
15 # See the License for the specific language governing permissions and
16 # limitations under the License.
17
18 """Usage: create_hugo_pages.py [--dir=DIR] [-c] [--debug]
19
20 Creates page stubs for the full anthology based on the YAML data files.
21
22 This script can only be run after create_hugo_yaml.py!
23
24 Options:
25 --dir=DIR Hugo project directory. [default: {scriptdir}/../build/]
26 --debug Output debug-level log messages.
27 -c, --clean Delete existing files in target directory before generation.
28 -h, --help Display this helpful text.
29 """
30
31 from docopt import docopt
32 from glob import glob
33 from slugify import slugify
34 from tqdm import tqdm
35 import logging as log
36 import os
37 import shutil
38 import yaml
39
40 try:
41 from yaml import CLoader as Loader
42 except ImportError:
43 from yaml import Loader
44
45 from anthology.utils import SeverityTracker
46
47
48 def check_directory(cdir, clean=False):
49 if not os.path.isdir(cdir) and not os.path.exists(cdir):
50 os.mkdir(cdir)
51 return True
52 entries = os.listdir(cdir)
53 if "_index.md" in entries:
54 entries.remove("_index.md")
55 if entries and not clean:
56 log.critical("Directory already exists and has content files: {}".format(cdir))
57 log.info(
58 "Call this script with the -c/--clean flag to automatically DELETE existing files"
59 )
60 return False
61 for entry in entries:
62 entry = "{}/{}".format(cdir, entry)
63 if os.path.isdir(entry):
64 shutil.rmtree(entry)
65 else:
66 os.remove(entry)
67 return True
68
69
70 def create_papers(srcdir, clean=False):
71 """Creates page stubs for all papers in the Anthology."""
72 log.info("Creating stubs for papers...")
73 if not check_directory("{}/content/papers".format(srcdir), clean=clean):
74 return
75
76 # Go through all paper volumes
77 for yamlfile in tqdm(glob("{}/data/papers/*.yaml".format(srcdir))):
78 log.debug("Processing {}".format(yamlfile))
79 with open(yamlfile, "r") as f:
80 data = yaml.load(f, Loader=Loader)
81 # Create a paper stub for each entry in the volume
82 for anthology_id, entry in data.items():
83 paper_dir = "{}/content/papers/{}/{}".format(
84 srcdir, anthology_id[0], anthology_id[:3]
85 )
86 if not os.path.exists(paper_dir):
87 os.makedirs(paper_dir)
88 with open("{}/{}.md".format(paper_dir, anthology_id), "w") as f:
89 print("---", file=f)
90 yaml.dump(
91 {"anthology_id": anthology_id, "title": entry["title"]},
92 default_flow_style=False,
93 stream=f,
94 )
95 print("---", file=f)
96
97
98 def create_volumes(srcdir, clean=False):
99 """Creates page stubs for all proceedings volumes in the Anthology."""
100 log.info("Creating stubs for volumes...")
101 if not check_directory("{}/content/volumes".format(srcdir), clean=clean):
102 return
103
104 yamlfile = "{}/data/volumes.yaml".format(srcdir)
105 log.debug("Processing {}".format(yamlfile))
106 with open(yamlfile, "r") as f:
107 data = yaml.load(f, Loader=Loader)
108 # Create a paper stub for each proceedings volume
109 for anthology_id, entry in data.items():
110 with open("{}/content/volumes/{}.md".format(srcdir, anthology_id), "w") as f:
111 print("---", file=f)
112 yaml.dump(
113 {
114 "anthology_id": anthology_id,
115 "title": entry["title"],
116 "slug": slugify(entry["title"]),
117 },
118 default_flow_style=False,
119 stream=f,
120 )
121 print("---", file=f)
122
123 return data
124
125
126 def create_people(srcdir, clean=False):
127 """Creates page stubs for all authors/editors in the Anthology."""
128 log.info("Creating stubs for people...")
129 if not check_directory("{}/content/people".format(srcdir), clean=clean):
130 return
131
132 for yamlfile in tqdm(glob("{}/data/people/*.yaml".format(srcdir))):
133 log.debug("Processing {}".format(yamlfile))
134 with open(yamlfile, "r") as f:
135 data = yaml.load(f, Loader=Loader)
136 # Create a page stub for each person
137 for name, entry in data.items():
138 person_dir = "{}/content/people/{}".format(srcdir, name[0])
139 if not os.path.exists(person_dir):
140 os.makedirs(person_dir)
141 yaml_data = {
142 "name": name,
143 "title": entry["full"],
144 "lastname": entry["last"],
145 }
146 with open("{}/{}.md".format(person_dir, name), "w") as f:
147 print("---", file=f)
148 # "lastname" is dumped to allow sorting by it in Hugo
149 yaml.dump(yaml_data, default_flow_style=False, stream=f)
150 print("---", file=f)
151
152 return data
153
154
155 def create_venues_and_events(srcdir, clean=False):
156 """Creates page stubs for all venues and events in the Anthology."""
157 yamlfile = "{}/data/venues.yaml".format(srcdir)
158 log.debug("Processing {}".format(yamlfile))
159 with open(yamlfile, "r") as f:
160 data = yaml.load(f, Loader=Loader)
161
162 log.info("Creating stubs for venues...")
163 if not check_directory("{}/content/venues".format(srcdir), clean=clean):
164 return
165 # Create a paper stub for each venue (e.g. ACL)
166 for venue, venue_data in data.items():
167 venue_str = venue_data["slug"]
168 with open("{}/content/venues/{}.md".format(srcdir, venue_str), "w") as f:
169 print("---", file=f)
170 yaml_data = {"venue": venue, "title": venue_data["name"]}
171 if venue_data["is_toplevel"]:
172 main_letter = venue_data["main_letter"]
173 yaml_data["aliases"] = ["/papers/{}/".format(main_letter)]
174 yaml.dump(yaml_data, default_flow_style=False, stream=f)
175 print("---", file=f)
176
177 log.info("Creating stubs for events...")
178 if not check_directory("{}/content/events".format(srcdir), clean=clean):
179 return
180 # Create a paper stub for each event (= venue + year, e.g. ACL 2018)
181 for venue, venue_data in data.items():
182 venue_str = venue_data["slug"]
183 for year in venue_data["volumes_by_year"]:
184 with open(
185 "{}/content/events/{}-{}.md".format(srcdir, venue_str, year), "w"
186 ) as f:
187 print("---", file=f)
188 yaml_data = {
189 "venue": venue,
190 "year": year,
191 "title": "{} ({})".format(venue_data["name"], year),
192 }
193 if venue_data["is_toplevel"]:
194 main_letter = venue_data["main_letter"]
195 main_prefix = main_letter + year[-2:] # e.g., P05
196 yaml_data["aliases"] = [
197 "/papers/{}/{}/".format(main_letter, main_prefix)
198 ]
199 yaml.dump(yaml_data, default_flow_style=False, stream=f)
200 print("---", file=f)
201
202
203 def create_sigs(srcdir, clean=False):
204 """Creates page stubs for all SIGs in the Anthology."""
205 yamlfile = "{}/data/sigs.yaml".format(srcdir)
206 log.debug("Processing {}".format(yamlfile))
207 with open(yamlfile, "r") as f:
208 data = yaml.load(f, Loader=Loader)
209
210 log.info("Creating stubs for SIGs...")
211 if not check_directory("{}/content/sigs".format(srcdir), clean=clean):
212 return
213 # Create a paper stub for each SIGS (e.g. SIGMORPHON)
214 for sig, sig_data in data.items():
215 sig_str = sig_data["slug"]
216 with open("{}/content/sigs/{}.md".format(srcdir, sig_str), "w") as f:
217 print("---", file=f)
218 yaml.dump(
219 {
220 "acronym": sig,
221 "short_acronym": sig[3:] if sig.startswith("SIG") else sig,
222 "title": sig_data["name"],
223 },
224 default_flow_style=False,
225 stream=f,
226 )
227 print("---", file=f)
228
229
230 if __name__ == "__main__":
231 args = docopt(__doc__)
232 scriptdir = os.path.dirname(os.path.abspath(__file__))
233 if "{scriptdir}" in args["--dir"]:
234 args["--dir"] = args["--dir"].format(scriptdir=scriptdir)
235 dir_ = os.path.abspath(args["--dir"])
236
237 log_level = log.DEBUG if args["--debug"] else log.INFO
238 log.basicConfig(format="%(levelname)-8s %(message)s", level=log_level)
239 tracker = SeverityTracker()
240 log.getLogger().addHandler(tracker)
241
242 create_papers(dir_, clean=args["--clean"])
243 create_volumes(dir_, clean=args["--clean"])
244 create_people(dir_, clean=args["--clean"])
245 create_venues_and_events(dir_, clean=args["--clean"])
246 create_sigs(dir_, clean=args["--clean"])
247
248 if tracker.highest >= log.ERROR:
249 exit(1)
250
[end of bin/create_hugo_pages.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/bin/create_hugo_pages.py b/bin/create_hugo_pages.py
--- a/bin/create_hugo_pages.py
+++ b/bin/create_hugo_pages.py
@@ -113,7 +113,12 @@
{
"anthology_id": anthology_id,
"title": entry["title"],
- "slug": slugify(entry["title"]),
+ "aliases": [
+ slugify(entry["title"]),
+ "/papers/{}/{}/{}/".format(
+ anthology_id[0], anthology_id[:3], anthology_id
+ ),
+ ],
},
default_flow_style=False,
stream=f,
|
{"golden_diff": "diff --git a/bin/create_hugo_pages.py b/bin/create_hugo_pages.py\n--- a/bin/create_hugo_pages.py\n+++ b/bin/create_hugo_pages.py\n@@ -113,7 +113,12 @@\n {\n \"anthology_id\": anthology_id,\n \"title\": entry[\"title\"],\n- \"slug\": slugify(entry[\"title\"]),\n+ \"aliases\": [\n+ slugify(entry[\"title\"]),\n+ \"/papers/{}/{}/{}/\".format(\n+ anthology_id[0], anthology_id[:3], anthology_id\n+ ),\n+ ],\n },\n default_flow_style=False,\n stream=f,\n", "issue": "Create shortened volume links\n@mjpost https://aclweb.org/anthology/volumes/sem-2012-the-first-joint-conference-on-lexical-and-computational-semantics-volume-1-proceedings-of-the-main-conference-and-the-shared-task-and-volume-2-proceedings-of-the-sixth-international-workshop-on-semantic-evaluation-semeval-2012/\r\n\r\nThe URL of the PDF is:\r\nhttps://www.aclweb.org/anthology/S12-1\r\n\r\nFor a paper:\r\nhttps://aclweb.org/anthology/papers/S/S12/S12-1000/\r\n(the title is the same, but the content not).\r\n\r\nThe PDF:\r\nhttps://www.aclweb.org/anthology/S12-1000\r\n\r\nIt would be much easier to reference the first link if it was https://aclweb.org/anthology/papers/S/S12/S12-1/.\r\n\r\n_Originally posted by @Aspie96 in https://github.com/acl-org/acl-anthology/issues/170#issuecomment-504191669_\n", "before_files": [{"content": "#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n#\n# Copyright 2019 Marcel Bollmann <[email protected]>\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Usage: create_hugo_pages.py [--dir=DIR] [-c] [--debug]\n\nCreates page stubs for the full anthology based on the YAML data files.\n\nThis script can only be run after create_hugo_yaml.py!\n\nOptions:\n --dir=DIR Hugo project directory. 
[default: {scriptdir}/../build/]\n --debug Output debug-level log messages.\n -c, --clean Delete existing files in target directory before generation.\n -h, --help Display this helpful text.\n\"\"\"\n\nfrom docopt import docopt\nfrom glob import glob\nfrom slugify import slugify\nfrom tqdm import tqdm\nimport logging as log\nimport os\nimport shutil\nimport yaml\n\ntry:\n from yaml import CLoader as Loader\nexcept ImportError:\n from yaml import Loader\n\nfrom anthology.utils import SeverityTracker\n\n\ndef check_directory(cdir, clean=False):\n if not os.path.isdir(cdir) and not os.path.exists(cdir):\n os.mkdir(cdir)\n return True\n entries = os.listdir(cdir)\n if \"_index.md\" in entries:\n entries.remove(\"_index.md\")\n if entries and not clean:\n log.critical(\"Directory already exists and has content files: {}\".format(cdir))\n log.info(\n \"Call this script with the -c/--clean flag to automatically DELETE existing files\"\n )\n return False\n for entry in entries:\n entry = \"{}/{}\".format(cdir, entry)\n if os.path.isdir(entry):\n shutil.rmtree(entry)\n else:\n os.remove(entry)\n return True\n\n\ndef create_papers(srcdir, clean=False):\n \"\"\"Creates page stubs for all papers in the Anthology.\"\"\"\n log.info(\"Creating stubs for papers...\")\n if not check_directory(\"{}/content/papers\".format(srcdir), clean=clean):\n return\n\n # Go through all paper volumes\n for yamlfile in tqdm(glob(\"{}/data/papers/*.yaml\".format(srcdir))):\n log.debug(\"Processing {}\".format(yamlfile))\n with open(yamlfile, \"r\") as f:\n data = yaml.load(f, Loader=Loader)\n # Create a paper stub for each entry in the volume\n for anthology_id, entry in data.items():\n paper_dir = \"{}/content/papers/{}/{}\".format(\n srcdir, anthology_id[0], anthology_id[:3]\n )\n if not os.path.exists(paper_dir):\n os.makedirs(paper_dir)\n with open(\"{}/{}.md\".format(paper_dir, anthology_id), \"w\") as f:\n print(\"---\", file=f)\n yaml.dump(\n {\"anthology_id\": anthology_id, \"title\": entry[\"title\"]},\n default_flow_style=False,\n stream=f,\n )\n print(\"---\", file=f)\n\n\ndef create_volumes(srcdir, clean=False):\n \"\"\"Creates page stubs for all proceedings volumes in the Anthology.\"\"\"\n log.info(\"Creating stubs for volumes...\")\n if not check_directory(\"{}/content/volumes\".format(srcdir), clean=clean):\n return\n\n yamlfile = \"{}/data/volumes.yaml\".format(srcdir)\n log.debug(\"Processing {}\".format(yamlfile))\n with open(yamlfile, \"r\") as f:\n data = yaml.load(f, Loader=Loader)\n # Create a paper stub for each proceedings volume\n for anthology_id, entry in data.items():\n with open(\"{}/content/volumes/{}.md\".format(srcdir, anthology_id), \"w\") as f:\n print(\"---\", file=f)\n yaml.dump(\n {\n \"anthology_id\": anthology_id,\n \"title\": entry[\"title\"],\n \"slug\": slugify(entry[\"title\"]),\n },\n default_flow_style=False,\n stream=f,\n )\n print(\"---\", file=f)\n\n return data\n\n\ndef create_people(srcdir, clean=False):\n \"\"\"Creates page stubs for all authors/editors in the Anthology.\"\"\"\n log.info(\"Creating stubs for people...\")\n if not check_directory(\"{}/content/people\".format(srcdir), clean=clean):\n return\n\n for yamlfile in tqdm(glob(\"{}/data/people/*.yaml\".format(srcdir))):\n log.debug(\"Processing {}\".format(yamlfile))\n with open(yamlfile, \"r\") as f:\n data = yaml.load(f, Loader=Loader)\n # Create a page stub for each person\n for name, entry in data.items():\n person_dir = \"{}/content/people/{}\".format(srcdir, name[0])\n if not os.path.exists(person_dir):\n 
os.makedirs(person_dir)\n yaml_data = {\n \"name\": name,\n \"title\": entry[\"full\"],\n \"lastname\": entry[\"last\"],\n }\n with open(\"{}/{}.md\".format(person_dir, name), \"w\") as f:\n print(\"---\", file=f)\n # \"lastname\" is dumped to allow sorting by it in Hugo\n yaml.dump(yaml_data, default_flow_style=False, stream=f)\n print(\"---\", file=f)\n\n return data\n\n\ndef create_venues_and_events(srcdir, clean=False):\n \"\"\"Creates page stubs for all venues and events in the Anthology.\"\"\"\n yamlfile = \"{}/data/venues.yaml\".format(srcdir)\n log.debug(\"Processing {}\".format(yamlfile))\n with open(yamlfile, \"r\") as f:\n data = yaml.load(f, Loader=Loader)\n\n log.info(\"Creating stubs for venues...\")\n if not check_directory(\"{}/content/venues\".format(srcdir), clean=clean):\n return\n # Create a paper stub for each venue (e.g. ACL)\n for venue, venue_data in data.items():\n venue_str = venue_data[\"slug\"]\n with open(\"{}/content/venues/{}.md\".format(srcdir, venue_str), \"w\") as f:\n print(\"---\", file=f)\n yaml_data = {\"venue\": venue, \"title\": venue_data[\"name\"]}\n if venue_data[\"is_toplevel\"]:\n main_letter = venue_data[\"main_letter\"]\n yaml_data[\"aliases\"] = [\"/papers/{}/\".format(main_letter)]\n yaml.dump(yaml_data, default_flow_style=False, stream=f)\n print(\"---\", file=f)\n\n log.info(\"Creating stubs for events...\")\n if not check_directory(\"{}/content/events\".format(srcdir), clean=clean):\n return\n # Create a paper stub for each event (= venue + year, e.g. ACL 2018)\n for venue, venue_data in data.items():\n venue_str = venue_data[\"slug\"]\n for year in venue_data[\"volumes_by_year\"]:\n with open(\n \"{}/content/events/{}-{}.md\".format(srcdir, venue_str, year), \"w\"\n ) as f:\n print(\"---\", file=f)\n yaml_data = {\n \"venue\": venue,\n \"year\": year,\n \"title\": \"{} ({})\".format(venue_data[\"name\"], year),\n }\n if venue_data[\"is_toplevel\"]:\n main_letter = venue_data[\"main_letter\"]\n main_prefix = main_letter + year[-2:] # e.g., P05\n yaml_data[\"aliases\"] = [\n \"/papers/{}/{}/\".format(main_letter, main_prefix)\n ]\n yaml.dump(yaml_data, default_flow_style=False, stream=f)\n print(\"---\", file=f)\n\n\ndef create_sigs(srcdir, clean=False):\n \"\"\"Creates page stubs for all SIGs in the Anthology.\"\"\"\n yamlfile = \"{}/data/sigs.yaml\".format(srcdir)\n log.debug(\"Processing {}\".format(yamlfile))\n with open(yamlfile, \"r\") as f:\n data = yaml.load(f, Loader=Loader)\n\n log.info(\"Creating stubs for SIGs...\")\n if not check_directory(\"{}/content/sigs\".format(srcdir), clean=clean):\n return\n # Create a paper stub for each SIGS (e.g. 
SIGMORPHON)\n for sig, sig_data in data.items():\n sig_str = sig_data[\"slug\"]\n with open(\"{}/content/sigs/{}.md\".format(srcdir, sig_str), \"w\") as f:\n print(\"---\", file=f)\n yaml.dump(\n {\n \"acronym\": sig,\n \"short_acronym\": sig[3:] if sig.startswith(\"SIG\") else sig,\n \"title\": sig_data[\"name\"],\n },\n default_flow_style=False,\n stream=f,\n )\n print(\"---\", file=f)\n\n\nif __name__ == \"__main__\":\n args = docopt(__doc__)\n scriptdir = os.path.dirname(os.path.abspath(__file__))\n if \"{scriptdir}\" in args[\"--dir\"]:\n args[\"--dir\"] = args[\"--dir\"].format(scriptdir=scriptdir)\n dir_ = os.path.abspath(args[\"--dir\"])\n\n log_level = log.DEBUG if args[\"--debug\"] else log.INFO\n log.basicConfig(format=\"%(levelname)-8s %(message)s\", level=log_level)\n tracker = SeverityTracker()\n log.getLogger().addHandler(tracker)\n\n create_papers(dir_, clean=args[\"--clean\"])\n create_volumes(dir_, clean=args[\"--clean\"])\n create_people(dir_, clean=args[\"--clean\"])\n create_venues_and_events(dir_, clean=args[\"--clean\"])\n create_sigs(dir_, clean=args[\"--clean\"])\n\n if tracker.highest >= log.ERROR:\n exit(1)\n", "path": "bin/create_hugo_pages.py"}]}
| 3,607 | 139 |
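As a rough illustration of the acl-anthology change above, the sketch below writes a single volume stub carrying both aliases from the golden diff. `write_volume_stub` is a simplified, hypothetical stand-in for `create_volumes()` (no YAML data files, no directory handling) and assumes the third-party `pyyaml` and `python-slugify` packages already used by the original script.

```python
# Minimal sketch of the alias change in the golden diff above: each volume stub
# gets both the slugified-title alias and a short /papers/X/XYY/XYY-N/ alias.
# `write_volume_stub` is a simplified stand-in for create_volumes(), not the
# actual Anthology build code.
import io

import yaml
from slugify import slugify  # third-party: python-slugify


def write_volume_stub(f, anthology_id, title):
    print("---", file=f)
    yaml.dump(
        {
            "anthology_id": anthology_id,
            "title": title,
            "aliases": [
                slugify(title),
                "/papers/{}/{}/{}/".format(
                    anthology_id[0], anthology_id[:3], anthology_id
                ),
            ],
        },
        default_flow_style=False,
        stream=f,
    )
    print("---", file=f)


if __name__ == "__main__":
    buf = io.StringIO()
    write_volume_stub(buf, "S12-1", "Proceedings of *SEM 2012")
    print(buf.getvalue())
```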
gh_patches_debug_19359 | rasdani/github-patches | git_diff | fedora-infra__bodhi-4115 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add the possibility to query updates by releases in graphql
We currently have a getUpdates query in graphql that let us query updates using certain attributes, we should add the possibility to give a release name and get all the updates from a release.
For example ```query {getUpdates(releaseName: "F33") {alias}}```
For this we will most likely need to use a join query in the graphlq resolve function (https://github.com/fedora-infra/bodhi/blob/develop/bodhi/server/services/graphql.py#L132) to exploit the relationship between Updates and release.
Some hints https://stackoverflow.com/questions/8561470/sqlalchemy-filtering-by-relationship-attribute
</issue>
<code>
[start of bodhi/server/services/graphql.py]
1 # Copyright © 2020 Red Hat Inc., and others.
2 #
3 # This file is part of Bodhi.
4 #
5 # This program is free software; you can redistribute it and/or
6 # modify it under the terms of the GNU General Public License
7 # as published by the Free Software Foundation; either version 2
8 # of the License, or (at your option) any later version.
9 #
10 # This program is distributed in the hope that it will be useful,
11 # but WITHOUT ANY WARRANTY; without even the implied warranty of
12 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
13 # GNU General Public License for more details.
14 #
15 # You should have received a copy of the GNU General Public License
16 # along with this program; if not, write to the Free Software
17 # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
18 """Defines API endpoints related to GraphQL objects."""
19 import graphene
20 from cornice import Service
21 from webob_graphql import serve_graphql_request
22
23 from bodhi.server.config import config
24 from bodhi.server.graphql_schemas import Release, ReleaseModel, Update, UpdateModel
25
26 graphql = Service(name='graphql', path='/graphql', description='graphql service')
27
28
29 @graphql.get()
30 @graphql.post()
31 def graphql_get(request):
32 """
33 Perform a GET request.
34
35 Args:
36 request (pyramid.Request): The current request.
37 Returns:
38 The GraphQL response to the request.
39 """
40 context = {'session': request.session}
41 return serve_graphql_request(
42 request, schema, graphiql_enabled=config.get('graphiql_enabled'),
43 context_value=context)
44
45
46 class Query(graphene.ObjectType):
47 """Allow querying objects."""
48
49 allReleases = graphene.List(Release)
50 getReleases = graphene.Field(
51 lambda: graphene.List(Release), name=graphene.String(),
52 id_prefix=graphene.String(), composed_by_bodhi=graphene.Boolean(),
53 state=graphene.String())
54
55 getUpdates = graphene.Field(
56 lambda: graphene.List(Update), stable_karma=graphene.Int(),
57 stable_days=graphene.Int(), unstable_karma=graphene.Int(),
58 status=graphene.String(), request=graphene.String(),
59 pushed=graphene.Boolean(), critpath=graphene.Boolean(),
60 date_approved=graphene.String(), alias=graphene.String(),
61 user_id=graphene.Int())
62
63 def resolve_allReleases(self, info):
64 """Answer Queries by fetching data from the Schema."""
65 query = Release.get_query(info) # SQLAlchemy query
66 return query.all()
67
68 def resolve_getReleases(self, info, **args):
69 """Answer Release queries with a given argument."""
70 query = Release.get_query(info)
71
72 id_prefix = args.get("id_prefix")
73 if id_prefix is not None:
74 query = query.filter(ReleaseModel.id_prefix == id_prefix)
75
76 name = args.get("name")
77 if name is not None:
78 query = query.filter(ReleaseModel.name == name)
79
80 composed_by_bodhi = args.get("composed_by_bodhi")
81 if composed_by_bodhi is not None:
82 query = query.filter(ReleaseModel.composed_by_bodhi == composed_by_bodhi)
83
84 state = args.get("state")
85 if state is not None:
86 query = query.filter(ReleaseModel.state == state)
87
88 return query.all()
89
90 def resolve_getUpdates(self, info, **args):
91 """Answer Release queries with a given argument."""
92 query = Update.get_query(info)
93
94 stable_karma = args.get("stable_karma")
95 if stable_karma is not None:
96 query = query.filter(UpdateModel.stable_karma == stable_karma)
97
98 stable_days = args.get("stable_days")
99 if stable_days is not None:
100 query = query.filter(UpdateModel.stable_days == stable_days)
101
102 unstable_karma = args.get("unstable_karma")
103 if unstable_karma is not None:
104 query = query.filter(UpdateModel.unstable_karma == unstable_karma)
105
106 status = args.get("status")
107 if status is not None:
108 query = query.filter(UpdateModel.status == status)
109
110 request = args.get("request")
111 if request is not None:
112 query = query.filter(UpdateModel.request == request)
113
114 pushed = args.get("pushed")
115 if pushed is not None:
116 query = query.filter(UpdateModel.pushed == pushed)
117
118 critpath = args.get("critpath")
119 if critpath is not None:
120 query = query.filter(UpdateModel.critpath == critpath)
121
122 date_approved = args.get("date_approved")
123 if date_approved is not None:
124 query = query.filter(UpdateModel.date_approved == date_approved)
125
126 alias = args.get("alias")
127 if alias is not None:
128 query = query.filter(UpdateModel.alias == alias)
129
130 user_id = args.get("user_id")
131 if user_id is not None:
132 query = query.filter(UpdateModel.user_id == user_id)
133
134 return query.all()
135
136
137 schema = graphene.Schema(query=Query)
138
[end of bodhi/server/services/graphql.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/bodhi/server/services/graphql.py b/bodhi/server/services/graphql.py
--- a/bodhi/server/services/graphql.py
+++ b/bodhi/server/services/graphql.py
@@ -58,7 +58,7 @@
status=graphene.String(), request=graphene.String(),
pushed=graphene.Boolean(), critpath=graphene.Boolean(),
date_approved=graphene.String(), alias=graphene.String(),
- user_id=graphene.Int())
+ user_id=graphene.Int(), release_name=graphene.String())
def resolve_allReleases(self, info):
"""Answer Queries by fetching data from the Schema."""
@@ -131,6 +131,10 @@
if user_id is not None:
query = query.filter(UpdateModel.user_id == user_id)
+ release_name = args.get("release_name")
+ if release_name is not None:
+ query = query.join(UpdateModel.release).filter(ReleaseModel.name == release_name)
+
return query.all()
|
{"golden_diff": "diff --git a/bodhi/server/services/graphql.py b/bodhi/server/services/graphql.py\n--- a/bodhi/server/services/graphql.py\n+++ b/bodhi/server/services/graphql.py\n@@ -58,7 +58,7 @@\n status=graphene.String(), request=graphene.String(),\n pushed=graphene.Boolean(), critpath=graphene.Boolean(),\n date_approved=graphene.String(), alias=graphene.String(),\n- user_id=graphene.Int())\n+ user_id=graphene.Int(), release_name=graphene.String())\n \n def resolve_allReleases(self, info):\n \"\"\"Answer Queries by fetching data from the Schema.\"\"\"\n@@ -131,6 +131,10 @@\n if user_id is not None:\n query = query.filter(UpdateModel.user_id == user_id)\n \n+ release_name = args.get(\"release_name\")\n+ if release_name is not None:\n+ query = query.join(UpdateModel.release).filter(ReleaseModel.name == release_name)\n+\n return query.all()\n", "issue": "Add the possibility to query updates by releases in graphql\nWe currently have a getUpdates query in graphql that let us query updates using certain attributes, we should add the possibility to give a release name and get all the updates from a release.\r\n\r\nFor example ```query {getUpdates(releaseName: \"F33\") {alias}}```\r\n\r\nFor this we will most likely need to use a join query in the graphlq resolve function (https://github.com/fedora-infra/bodhi/blob/develop/bodhi/server/services/graphql.py#L132) to exploit the relationship between Updates and release.\r\n\r\nSome hints https://stackoverflow.com/questions/8561470/sqlalchemy-filtering-by-relationship-attribute\n", "before_files": [{"content": "# Copyright \u00a9 2020 Red Hat Inc., and others.\n#\n# This file is part of Bodhi.\n#\n# This program is free software; you can redistribute it and/or\n# modify it under the terms of the GNU General Public License\n# as published by the Free Software Foundation; either version 2\n# of the License, or (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with this program; if not, write to the Free Software\n# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.\n\"\"\"Defines API endpoints related to GraphQL objects.\"\"\"\nimport graphene\nfrom cornice import Service\nfrom webob_graphql import serve_graphql_request\n\nfrom bodhi.server.config import config\nfrom bodhi.server.graphql_schemas import Release, ReleaseModel, Update, UpdateModel\n\ngraphql = Service(name='graphql', path='/graphql', description='graphql service')\n\n\[email protected]()\[email protected]()\ndef graphql_get(request):\n \"\"\"\n Perform a GET request.\n\n Args:\n request (pyramid.Request): The current request.\n Returns:\n The GraphQL response to the request.\n \"\"\"\n context = {'session': request.session}\n return serve_graphql_request(\n request, schema, graphiql_enabled=config.get('graphiql_enabled'),\n context_value=context)\n\n\nclass Query(graphene.ObjectType):\n \"\"\"Allow querying objects.\"\"\"\n\n allReleases = graphene.List(Release)\n getReleases = graphene.Field(\n lambda: graphene.List(Release), name=graphene.String(),\n id_prefix=graphene.String(), composed_by_bodhi=graphene.Boolean(),\n state=graphene.String())\n\n getUpdates = graphene.Field(\n lambda: graphene.List(Update), stable_karma=graphene.Int(),\n stable_days=graphene.Int(), unstable_karma=graphene.Int(),\n status=graphene.String(), request=graphene.String(),\n pushed=graphene.Boolean(), critpath=graphene.Boolean(),\n date_approved=graphene.String(), alias=graphene.String(),\n user_id=graphene.Int())\n\n def resolve_allReleases(self, info):\n \"\"\"Answer Queries by fetching data from the Schema.\"\"\"\n query = Release.get_query(info) # SQLAlchemy query\n return query.all()\n\n def resolve_getReleases(self, info, **args):\n \"\"\"Answer Release queries with a given argument.\"\"\"\n query = Release.get_query(info)\n\n id_prefix = args.get(\"id_prefix\")\n if id_prefix is not None:\n query = query.filter(ReleaseModel.id_prefix == id_prefix)\n\n name = args.get(\"name\")\n if name is not None:\n query = query.filter(ReleaseModel.name == name)\n\n composed_by_bodhi = args.get(\"composed_by_bodhi\")\n if composed_by_bodhi is not None:\n query = query.filter(ReleaseModel.composed_by_bodhi == composed_by_bodhi)\n\n state = args.get(\"state\")\n if state is not None:\n query = query.filter(ReleaseModel.state == state)\n\n return query.all()\n\n def resolve_getUpdates(self, info, **args):\n \"\"\"Answer Release queries with a given argument.\"\"\"\n query = Update.get_query(info)\n\n stable_karma = args.get(\"stable_karma\")\n if stable_karma is not None:\n query = query.filter(UpdateModel.stable_karma == stable_karma)\n\n stable_days = args.get(\"stable_days\")\n if stable_days is not None:\n query = query.filter(UpdateModel.stable_days == stable_days)\n\n unstable_karma = args.get(\"unstable_karma\")\n if unstable_karma is not None:\n query = query.filter(UpdateModel.unstable_karma == unstable_karma)\n\n status = args.get(\"status\")\n if status is not None:\n query = query.filter(UpdateModel.status == status)\n\n request = args.get(\"request\")\n if request is not None:\n query = query.filter(UpdateModel.request == request)\n\n pushed = args.get(\"pushed\")\n if pushed is not None:\n query = query.filter(UpdateModel.pushed == pushed)\n\n critpath = args.get(\"critpath\")\n if critpath is not None:\n query = 
query.filter(UpdateModel.critpath == critpath)\n\n date_approved = args.get(\"date_approved\")\n if date_approved is not None:\n query = query.filter(UpdateModel.date_approved == date_approved)\n\n alias = args.get(\"alias\")\n if alias is not None:\n query = query.filter(UpdateModel.alias == alias)\n\n user_id = args.get(\"user_id\")\n if user_id is not None:\n query = query.filter(UpdateModel.user_id == user_id)\n\n return query.all()\n\n\nschema = graphene.Schema(query=Query)\n", "path": "bodhi/server/services/graphql.py"}]}
| 2,103 | 224 |
gh_patches_debug_6998
|
rasdani/github-patches
|
git_diff
|
microsoft__hi-ml-80
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Handle the "v" in version numbering
Our code in `setup.py` is triggered by new tags. `setuptools.setup` will reject tags that are not release versions, but we could do more to make that explicit by checking for the leading "v".
Also, when we tag releases as, say, "v0.1.1", the leading "v" is carried through `setuptools.setup`, so it becomes part of the pip test download:
> Successfully installed pip-21.2.4
> Collecting hi-ml==v0.1.0
> Downloading hi_ml-0.1.0-py3-none-any.whl (25 kB)
(from [here](https://github.com/microsoft/hi-ml/runs/3362573497?check_suite_focus=true#step:6:29))
This works, but it would be cleaner to submit the version number using the public version identifier format mandated in [PEP 440](https://www.python.org/dev/peps/pep-0440/#public-version-identifiers), i.e. without the leading "v"
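A minimal sketch of the tag normalization being asked for (illustrative only; the helper name, and the assumption that the version still comes from `GITHUB_REF`, are mine rather than the repository's):

```python
# Turn a Git tag ref such as "refs/tags/v0.1.1" into the PEP 440 public
# version identifier "0.1.1" before passing it to setuptools.setup().
TAG_PREFIX = "refs/tags/"

def version_from_ref(github_ref: str) -> str:
    """Return a PEP 440 version string for a tag ref, or '' for anything else."""
    if not github_ref.startswith(TAG_PREFIX):
        return ""
    tag = github_ref[len(TAG_PREFIX):]
    # Accept tags written either as "v0.1.1" or "0.1.1"; drop the leading "v".
    return tag[1:] if tag.startswith("v") else tag

assert version_from_ref("refs/tags/v0.1.1") == "0.1.1"
assert version_from_ref("refs/heads/main") == ""
```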
</issue>
<code>
[start of setup.py]
1 # ------------------------------------------------------------------------------------------
2 # Copyright (c) Microsoft Corporation. All rights reserved.
3 # Licensed under the MIT License (MIT). See LICENSE in the repo root for license information.
4 # ------------------------------------------------------------------------------------------
5
6 """A setuptools based setup module.
7
8 See:
9 https://packaging.python.org/guides/distributing-packages-using-setuptools/
10 """
11
12 import os
13 from math import floor
14 import pathlib
15 from random import random
16 from setuptools import setup, find_packages # type: ignore
17
18
19 here = pathlib.Path(__file__).parent.resolve()
20
21 # Get the long description from the README file
22 long_description = (here / 'README.md').read_text(encoding='utf-8')
23
24 version = ''
25
26 # If running from a GitHub Action then a standard set of environment variables will be
27 # populated (https://docs.github.com/en/actions/reference/environment-variables#default-environment-variables).
28 # In particular, GITHUB_REF is the branch or tag ref that triggered the workflow.
29 # If this was triggered by a tagged commit then GITHUB_REF will be: 'ref/tags/new_tag'.
30 # Extract this tag and use it as a version string
31 # See also:
32 # https://packaging.python.org/guides/publishing-package-distribution-releases-using-github-actions-ci-cd-workflows/
33 # https://github.com/pypa/gh-action-pypi-publish
34 GITHUB_REF_TAG_COMMIT = 'refs/tags/'
35
36 github_ref = os.getenv('GITHUB_REF')
37 if github_ref and github_ref.startswith(GITHUB_REF_TAG_COMMIT):
38 version = github_ref[len(GITHUB_REF_TAG_COMMIT):]
39
40 # Otherwise, if running from a GitHub Action, but not a tagged commit then GITHUB_RUN_NUMBER will be populated.
41 # Use this as a post release number. For example if GITHUB_RUN_NUMBER = 124 then the version string will be
42 # '0.1.2.post124'. Although this is discouraged, see:
43 # https://www.python.org/dev/peps/pep-0440/#post-releases
44 # it is necessary here to avoid duplicate packages in Test.PyPI.
45 if not version:
46 # TODO: Replace this with more principled package version management for the package wheels built during local test
47 # runs, one which circumvents AzureML's apparent package caching:
48 build_number = os.getenv('GITHUB_RUN_NUMBER')
49 if build_number:
50 version = '0.1.0.post' + build_number
51 else:
52 default_random_version_number = floor(random() * 10_000_000_000)
53 version = f'0.1.0.post{str(default_random_version_number)}'
54
55 (here / 'latest_version.txt').write_text(version)
56
57 # Read run_requirements.txt to get install_requires
58 install_requires = (here / 'run_requirements.txt').read_text().split("\n")
59 # Remove any whitespace and blank lines
60 install_requires = [line.strip() for line in install_requires if line.strip()]
61
62 description = 'Microsoft Health Intelligence package to elevate and monitor scripts to an AzureML workspace'
63
64 setup(
65 name='hi-ml',
66 version=version,
67 description=description,
68 long_description=long_description,
69 long_description_content_type='text/markdown',
70 url='https://github.com/microsoft/hi-ml',
71 author="Microsoft Research Cambridge InnerEye Team ",
72 author_email="[email protected]",
73 classifiers=[
74 'Development Status :: 3 - Alpha',
75 'Intended Audience :: Science/Research',
76 "Topic :: Scientific/Engineering :: Medical Science Apps.",
77 'License :: OSI Approved :: MIT License',
78 'Programming Language :: Python :: 3.7'
79 ],
80 keywords='InnerEye, HealthIntelligence, AzureML',
81 license='MIT License',
82 packages=find_packages(where="src"),
83 package_dir={"": "src"},
84 include_package_data=True,
85 install_requires=install_requires,
86 scripts=['src/health/azure/run_tensorboard.py']
87 )
88
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -31,7 +31,7 @@
# See also:
# https://packaging.python.org/guides/publishing-package-distribution-releases-using-github-actions-ci-cd-workflows/
# https://github.com/pypa/gh-action-pypi-publish
-GITHUB_REF_TAG_COMMIT = 'refs/tags/'
+GITHUB_REF_TAG_COMMIT = 'refs/tags/v'
github_ref = os.getenv('GITHUB_REF')
if github_ref and github_ref.startswith(GITHUB_REF_TAG_COMMIT):
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -31,7 +31,7 @@\n # See also:\n # https://packaging.python.org/guides/publishing-package-distribution-releases-using-github-actions-ci-cd-workflows/\n # https://github.com/pypa/gh-action-pypi-publish\n-GITHUB_REF_TAG_COMMIT = 'refs/tags/'\n+GITHUB_REF_TAG_COMMIT = 'refs/tags/v'\n \n github_ref = os.getenv('GITHUB_REF')\n if github_ref and github_ref.startswith(GITHUB_REF_TAG_COMMIT):\n", "issue": "Handle the \"v\" in version numbering \nOur code in `setup.py` will trigger with new tags. `setuptools.setup` will reject tags that are not release versions but we could do more to make that explicit by checking for the leading \"v\".\r\n\r\nAlso when we tag releases as, say, \"v0.1.1\" the leading \"v\" is carried through `setuptools.setup` so it becomes part of the pip test download\r\n\r\n> Successfully installed pip-21.2.4\r\n> Collecting hi-ml==v0.1.0\r\n> Downloading hi_ml-0.1.0-py3-none-any.whl (25 kB)\r\n\r\n(from [here](https://github.com/microsoft/hi-ml/runs/3362573497?check_suite_focus=true#step:6:29))\r\n\r\nThis works, but it would be cleaner to submit the version number using the public version identifier format mandated in [PEP 440](https://www.python.org/dev/peps/pep-0440/#public-version-identifiers), i.e. without the leading \"v\"\n", "before_files": [{"content": "# ------------------------------------------------------------------------------------------\n# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License (MIT). See LICENSE in the repo root for license information.\n# ------------------------------------------------------------------------------------------\n\n\"\"\"A setuptools based setup module.\n\nSee:\nhttps://packaging.python.org/guides/distributing-packages-using-setuptools/\n\"\"\"\n\nimport os\nfrom math import floor\nimport pathlib\nfrom random import random\nfrom setuptools import setup, find_packages # type: ignore\n\n\nhere = pathlib.Path(__file__).parent.resolve()\n\n# Get the long description from the README file\nlong_description = (here / 'README.md').read_text(encoding='utf-8')\n\nversion = ''\n\n# If running from a GitHub Action then a standard set of environment variables will be\n# populated (https://docs.github.com/en/actions/reference/environment-variables#default-environment-variables).\n# In particular, GITHUB_REF is the branch or tag ref that triggered the workflow.\n# If this was triggered by a tagged commit then GITHUB_REF will be: 'ref/tags/new_tag'.\n# Extract this tag and use it as a version string\n# See also:\n# https://packaging.python.org/guides/publishing-package-distribution-releases-using-github-actions-ci-cd-workflows/\n# https://github.com/pypa/gh-action-pypi-publish\nGITHUB_REF_TAG_COMMIT = 'refs/tags/'\n\ngithub_ref = os.getenv('GITHUB_REF')\nif github_ref and github_ref.startswith(GITHUB_REF_TAG_COMMIT):\n version = github_ref[len(GITHUB_REF_TAG_COMMIT):]\n\n# Otherwise, if running from a GitHub Action, but not a tagged commit then GITHUB_RUN_NUMBER will be populated.\n# Use this as a post release number. For example if GITHUB_RUN_NUMBER = 124 then the version string will be\n# '0.1.2.post124'. 
Although this is discouraged, see:\n# https://www.python.org/dev/peps/pep-0440/#post-releases\n# it is necessary here to avoid duplicate packages in Test.PyPI.\nif not version:\n # TODO: Replace this with more principled package version management for the package wheels built during local test\n # runs, one which circumvents AzureML's apparent package caching:\n build_number = os.getenv('GITHUB_RUN_NUMBER')\n if build_number:\n version = '0.1.0.post' + build_number\n else:\n default_random_version_number = floor(random() * 10_000_000_000)\n version = f'0.1.0.post{str(default_random_version_number)}'\n\n(here / 'latest_version.txt').write_text(version)\n\n# Read run_requirements.txt to get install_requires\ninstall_requires = (here / 'run_requirements.txt').read_text().split(\"\\n\")\n# Remove any whitespace and blank lines\ninstall_requires = [line.strip() for line in install_requires if line.strip()]\n\ndescription = 'Microsoft Health Intelligence package to elevate and monitor scripts to an AzureML workspace'\n\nsetup(\n name='hi-ml',\n version=version,\n description=description,\n long_description=long_description,\n long_description_content_type='text/markdown',\n url='https://github.com/microsoft/hi-ml',\n author=\"Microsoft Research Cambridge InnerEye Team \",\n author_email=\"[email protected]\",\n classifiers=[\n 'Development Status :: 3 - Alpha',\n 'Intended Audience :: Science/Research',\n \"Topic :: Scientific/Engineering :: Medical Science Apps.\",\n 'License :: OSI Approved :: MIT License',\n 'Programming Language :: Python :: 3.7'\n ],\n keywords='InnerEye, HealthIntelligence, AzureML',\n license='MIT License',\n packages=find_packages(where=\"src\"),\n package_dir={\"\": \"src\"},\n include_package_data=True,\n install_requires=install_requires,\n scripts=['src/health/azure/run_tensorboard.py']\n)\n", "path": "setup.py"}]}
| 1,782 | 125 |
gh_patches_debug_42542
|
rasdani/github-patches
|
git_diff
|
networkx__networkx-2532
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
maximal_independent_set does not work for DiGraph
Currently [maximal_independent_set](https://github.com/networkx/networkx/blob/d7d906e1d16ef331da0bc1d149953e7532155acc/networkx/algorithms/mis.py#L70) returns the wrong results for a `DiGraph` because it uses the `G.neighbors` method which returns only the successor nodes in a `DiGraph`. I believe the [all_neighbors](https://github.com/networkx/networkx/blob/13b373bf6938c077d1e61adc60a48cb910a75755/networkx/classes/function.py#L540) function should be used instead to make `maximal_independent_set` work correctly for both graph types.
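A quick way to see the difference (illustrative only; whether the right resolution is to switch to `all_neighbors` or to reject directed graphs entirely is up to the maintainers):

```python
# On a directed graph, G.neighbors() yields successors only, while
# networkx.all_neighbors() also yields predecessors.
import networkx as nx

G = nx.DiGraph([(1, 2), (3, 2)])      # edges 1->2 and 3->2
print(list(G.neighbors(2)))           # [] -- node 2 has no successors
print(list(nx.all_neighbors(G, 2)))   # [1, 3] -- predecessors are included too
```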
</issue>
<code>
[start of networkx/algorithms/mis.py]
1 # -*- coding: utf-8 -*-
2 # $Id: maximalIndependentSet.py 576 2011-03-01 05:50:34Z lleeoo $
3 """
4 Algorithm to find a maximal (not maximum) independent set.
5
6 """
7 # Leo Lopes <[email protected]>
8 # Aric Hagberg <[email protected]>
9 # Dan Schult <[email protected]>
10 # Pieter Swart <[email protected]>
11 # All rights reserved.
12 # BSD license.
13
14 __author__ = "\n".join(["Leo Lopes <[email protected]>",
15 "Loïc Séguin-C. <[email protected]>"])
16
17 __all__ = ['maximal_independent_set']
18
19 import random
20 import networkx as nx
21
22 def maximal_independent_set(G, nodes=None):
23 """Return a random maximal independent set guaranteed to contain
24 a given set of nodes.
25
26 An independent set is a set of nodes such that the subgraph
27 of G induced by these nodes contains no edges. A maximal
28 independent set is an independent set such that it is not possible
29 to add a new node and still get an independent set.
30
31 Parameters
32 ----------
33 G : NetworkX graph
34
35 nodes : list or iterable
36 Nodes that must be part of the independent set. This set of nodes
37 must be independent.
38
39 Returns
40 -------
41 indep_nodes : list
42 List of nodes that are part of a maximal independent set.
43
44 Raises
45 ------
46 NetworkXUnfeasible
47 If the nodes in the provided list are not part of the graph or
48 do not form an independent set, an exception is raised.
49
50 Examples
51 --------
52 >>> G = nx.path_graph(5)
53 >>> nx.maximal_independent_set(G) # doctest: +SKIP
54 [4, 0, 2]
55 >>> nx.maximal_independent_set(G, [1]) # doctest: +SKIP
56 [1, 3]
57
58 Notes
59 -----
60 This algorithm does not solve the maximum independent set problem.
61
62 """
63 if not nodes:
64 nodes = set([random.choice(list(G))])
65 else:
66 nodes = set(nodes)
67 if not nodes.issubset(G):
68 raise nx.NetworkXUnfeasible(
69 "%s is not a subset of the nodes of G" % nodes)
70 neighbors = set.union(*[set(G.neighbors(v)) for v in nodes])
71 if set.intersection(neighbors, nodes):
72 raise nx.NetworkXUnfeasible(
73 "%s is not an independent set of G" % nodes)
74 indep_nodes = list(nodes)
75 available_nodes = set(G.nodes()).difference(neighbors.union(nodes))
76 while available_nodes:
77 node = random.choice(list(available_nodes))
78 indep_nodes.append(node)
79 available_nodes.difference_update(list(G.neighbors(node)) + [node])
80 return indep_nodes
81
82
[end of networkx/algorithms/mis.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/networkx/algorithms/mis.py b/networkx/algorithms/mis.py
--- a/networkx/algorithms/mis.py
+++ b/networkx/algorithms/mis.py
@@ -1,24 +1,26 @@
# -*- coding: utf-8 -*-
# $Id: maximalIndependentSet.py 576 2011-03-01 05:50:34Z lleeoo $
-"""
-Algorithm to find a maximal (not maximum) independent set.
-
-"""
# Leo Lopes <[email protected]>
# Aric Hagberg <[email protected]>
# Dan Schult <[email protected]>
# Pieter Swart <[email protected]>
# All rights reserved.
# BSD license.
+#
+# Authors: Leo Lopes <[email protected]>
+# Loïc Séguin-C. <[email protected]>
+"""
+Algorithm to find a maximal (not maximum) independent set.
-__author__ = "\n".join(["Leo Lopes <[email protected]>",
- "Loïc Séguin-C. <[email protected]>"])
+"""
+import random
+import networkx as nx
+from networkx.utils import not_implemented_for
__all__ = ['maximal_independent_set']
-import random
-import networkx as nx
+@not_implemented_for('directed')
def maximal_independent_set(G, nodes=None):
"""Return a random maximal independent set guaranteed to contain
a given set of nodes.
@@ -27,10 +29,10 @@
of G induced by these nodes contains no edges. A maximal
independent set is an independent set such that it is not possible
to add a new node and still get an independent set.
-
+
Parameters
----------
- G : NetworkX graph
+ G : NetworkX graph
nodes : list or iterable
Nodes that must be part of the independent set. This set of nodes
@@ -38,7 +40,7 @@
Returns
-------
- indep_nodes : list
+ indep_nodes : list
List of nodes that are part of a maximal independent set.
Raises
@@ -47,6 +49,9 @@
If the nodes in the provided list are not part of the graph or
do not form an independent set, an exception is raised.
+ NetworkXNotImplemented
+ If `G` is directed.
+
Examples
--------
>>> G = nx.path_graph(5)
@@ -54,7 +59,7 @@
[4, 0, 2]
>>> nx.maximal_independent_set(G, [1]) # doctest: +SKIP
[1, 3]
-
+
Notes
-----
This algorithm does not solve the maximum independent set problem.
@@ -67,7 +72,7 @@
if not nodes.issubset(G):
raise nx.NetworkXUnfeasible(
"%s is not a subset of the nodes of G" % nodes)
- neighbors = set.union(*[set(G.neighbors(v)) for v in nodes])
+ neighbors = set.union(*[set(G.adj[v]) for v in nodes])
if set.intersection(neighbors, nodes):
raise nx.NetworkXUnfeasible(
"%s is not an independent set of G" % nodes)
@@ -76,6 +81,5 @@
while available_nodes:
node = random.choice(list(available_nodes))
indep_nodes.append(node)
- available_nodes.difference_update(list(G.neighbors(node)) + [node])
+ available_nodes.difference_update(list(G.adj[node]) + [node])
return indep_nodes
-
|
{"golden_diff": "diff --git a/networkx/algorithms/mis.py b/networkx/algorithms/mis.py\n--- a/networkx/algorithms/mis.py\n+++ b/networkx/algorithms/mis.py\n@@ -1,24 +1,26 @@\n # -*- coding: utf-8 -*-\n # $Id: maximalIndependentSet.py 576 2011-03-01 05:50:34Z lleeoo $\n-\"\"\"\n-Algorithm to find a maximal (not maximum) independent set.\n-\n-\"\"\"\n # Leo Lopes <[email protected]>\n # Aric Hagberg <[email protected]>\n # Dan Schult <[email protected]>\n # Pieter Swart <[email protected]>\n # All rights reserved.\n # BSD license.\n+#\n+# Authors: Leo Lopes <[email protected]>\n+# Lo\u00efc S\u00e9guin-C. <[email protected]>\n+\"\"\"\n+Algorithm to find a maximal (not maximum) independent set.\n \n-__author__ = \"\\n\".join([\"Leo Lopes <[email protected]>\",\n- \"Lo\u00efc S\u00e9guin-C. <[email protected]>\"])\n+\"\"\"\n+import random\n+import networkx as nx\n+from networkx.utils import not_implemented_for\n \n __all__ = ['maximal_independent_set']\n \n-import random\n-import networkx as nx\n \n+@not_implemented_for('directed')\n def maximal_independent_set(G, nodes=None):\n \"\"\"Return a random maximal independent set guaranteed to contain\n a given set of nodes.\n@@ -27,10 +29,10 @@\n of G induced by these nodes contains no edges. A maximal\n independent set is an independent set such that it is not possible\n to add a new node and still get an independent set.\n- \n+\n Parameters\n ----------\n- G : NetworkX graph \n+ G : NetworkX graph\n \n nodes : list or iterable\n Nodes that must be part of the independent set. This set of nodes\n@@ -38,7 +40,7 @@\n \n Returns\n -------\n- indep_nodes : list \n+ indep_nodes : list\n List of nodes that are part of a maximal independent set.\n \n Raises\n@@ -47,6 +49,9 @@\n If the nodes in the provided list are not part of the graph or\n do not form an independent set, an exception is raised.\n \n+ NetworkXNotImplemented\n+ If `G` is directed.\n+\n Examples\n --------\n >>> G = nx.path_graph(5)\n@@ -54,7 +59,7 @@\n [4, 0, 2]\n >>> nx.maximal_independent_set(G, [1]) # doctest: +SKIP\n [1, 3]\n- \n+\n Notes\n -----\n This algorithm does not solve the maximum independent set problem.\n@@ -67,7 +72,7 @@\n if not nodes.issubset(G):\n raise nx.NetworkXUnfeasible(\n \"%s is not a subset of the nodes of G\" % nodes)\n- neighbors = set.union(*[set(G.neighbors(v)) for v in nodes])\n+ neighbors = set.union(*[set(G.adj[v]) for v in nodes])\n if set.intersection(neighbors, nodes):\n raise nx.NetworkXUnfeasible(\n \"%s is not an independent set of G\" % nodes)\n@@ -76,6 +81,5 @@\n while available_nodes:\n node = random.choice(list(available_nodes))\n indep_nodes.append(node)\n- available_nodes.difference_update(list(G.neighbors(node)) + [node])\n+ available_nodes.difference_update(list(G.adj[node]) + [node])\n return indep_nodes\n-\n", "issue": "maximal_independent_set does not work for DiGraph\nCurrently [maximal_independent_set](https://github.com/networkx/networkx/blob/d7d906e1d16ef331da0bc1d149953e7532155acc/networkx/algorithms/mis.py#L70) returns the wrong results for a `DiGraph` because it uses the `G.neighbors` method which returns only the successor nodes in a `DiGraph`. 
I believe the [all_neighbors](https://github.com/networkx/networkx/blob/13b373bf6938c077d1e61adc60a48cb910a75755/networkx/classes/function.py#L540) function should be used instead to make `maximal_independent_set` work correctly for both graph types.\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# $Id: maximalIndependentSet.py 576 2011-03-01 05:50:34Z lleeoo $\n\"\"\"\nAlgorithm to find a maximal (not maximum) independent set.\n\n\"\"\"\n# Leo Lopes <[email protected]>\n# Aric Hagberg <[email protected]>\n# Dan Schult <[email protected]>\n# Pieter Swart <[email protected]>\n# All rights reserved.\n# BSD license.\n\n__author__ = \"\\n\".join([\"Leo Lopes <[email protected]>\",\n \"Lo\u00efc S\u00e9guin-C. <[email protected]>\"])\n\n__all__ = ['maximal_independent_set']\n\nimport random\nimport networkx as nx\n\ndef maximal_independent_set(G, nodes=None):\n \"\"\"Return a random maximal independent set guaranteed to contain\n a given set of nodes.\n\n An independent set is a set of nodes such that the subgraph\n of G induced by these nodes contains no edges. A maximal\n independent set is an independent set such that it is not possible\n to add a new node and still get an independent set.\n \n Parameters\n ----------\n G : NetworkX graph \n\n nodes : list or iterable\n Nodes that must be part of the independent set. This set of nodes\n must be independent.\n\n Returns\n -------\n indep_nodes : list \n List of nodes that are part of a maximal independent set.\n\n Raises\n ------\n NetworkXUnfeasible\n If the nodes in the provided list are not part of the graph or\n do not form an independent set, an exception is raised.\n\n Examples\n --------\n >>> G = nx.path_graph(5)\n >>> nx.maximal_independent_set(G) # doctest: +SKIP\n [4, 0, 2]\n >>> nx.maximal_independent_set(G, [1]) # doctest: +SKIP\n [1, 3]\n \n Notes\n -----\n This algorithm does not solve the maximum independent set problem.\n\n \"\"\"\n if not nodes:\n nodes = set([random.choice(list(G))])\n else:\n nodes = set(nodes)\n if not nodes.issubset(G):\n raise nx.NetworkXUnfeasible(\n \"%s is not a subset of the nodes of G\" % nodes)\n neighbors = set.union(*[set(G.neighbors(v)) for v in nodes])\n if set.intersection(neighbors, nodes):\n raise nx.NetworkXUnfeasible(\n \"%s is not an independent set of G\" % nodes)\n indep_nodes = list(nodes)\n available_nodes = set(G.nodes()).difference(neighbors.union(nodes))\n while available_nodes:\n node = random.choice(list(available_nodes))\n indep_nodes.append(node)\n available_nodes.difference_update(list(G.neighbors(node)) + [node])\n return indep_nodes\n\n", "path": "networkx/algorithms/mis.py"}]}
| 1,543 | 846 |
gh_patches_debug_27752
|
rasdani/github-patches
|
git_diff
|
pyload__pyload-52
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
FourChanOrg doesn't work
When I try to download a thread (e.g. http://boards.4chan.org/wg/res/5176429) nothing happens; only BasePlugin will be used.
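One plausible explanation (unverified) is that the crypter's `__pattern__` never matches board URLs of this form, which would explain why only BasePlugin picks the link up. A quick check with the pattern copied from the plugin source below:

```python
import re

# __pattern__ as currently defined in FourChanOrg.py
pattern = r"http://(www\.)?(img\.)?(zip\.)?4chan.org/\w+/(res/|imgboard\.html)"
url = "http://boards.4chan.org/wg/res/5176429"

print(re.search(pattern, url))  # None -- the "boards." subdomain is not covered
```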
</issue>
<code>
[start of module/plugins/crypter/FourChanOrg.py]
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3
4 import re
5
6 from module.plugins.Crypter import Crypter
7
8 class FourChanOrg(Crypter):
9 __name__ = "FourChanOrg"
10 __type__ = "container"
11 __pattern__ = r"http://(www\.)?(img\.)?(zip\.)?4chan.org/\w+/(res/|imgboard\.html)"
12 __version__ = "0.1"
13 __description__ = """4chan.org Thread Download Plugin"""
14 __author_name__ = ("Spoob")
15 __author_mail__ = ("[email protected]")
16
17 def __init__(self, parent):
18 Crypter.__init__(self, parent)
19 self.parent = parent
20 self.html = None
21
22 def file_exists(self):
23 """ returns True or False
24 """
25 return True
26
27 def proceed(self, url, location):
28 url = self.parent.url
29 html = self.req.load(url)
30 link_pattern = ""
31 temp_links = []
32 if "imagebord.html" in url:
33 link_pattern = '[<a href="(res/\d*\.html)">Reply</a>]'
34 temp_links = re.findall(link_pattern, html)
35 for link in re.findall(link_pattern, html):
36 temp_links.append(link)
37 else:
38 temp_links = re.findall('File : <a href="(http://(?:img\.)?(?:zip\.)?4chan\.org/\w{,3}/src/\d*\..{3})"', html)
39 self.links = temp_links
40
[end of module/plugins/crypter/FourChanOrg.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/module/plugins/crypter/FourChanOrg.py b/module/plugins/crypter/FourChanOrg.py
--- a/module/plugins/crypter/FourChanOrg.py
+++ b/module/plugins/crypter/FourChanOrg.py
@@ -6,34 +6,20 @@
from module.plugins.Crypter import Crypter
class FourChanOrg(Crypter):
+ # Based on 4chandl by Roland Beermann
+ # https://gist.github.com/enkore/3492599
__name__ = "FourChanOrg"
__type__ = "container"
- __pattern__ = r"http://(www\.)?(img\.)?(zip\.)?4chan.org/\w+/(res/|imgboard\.html)"
- __version__ = "0.1"
- __description__ = """4chan.org Thread Download Plugin"""
- __author_name__ = ("Spoob")
- __author_mail__ = ("[email protected]")
+ __version__ = "0.3"
+ __pattern__ = r"http://boards\.4chan.org/\w+/res/(\d+)"
+ __description__ = "Downloader for entire 4chan threads"
- def __init__(self, parent):
- Crypter.__init__(self, parent)
- self.parent = parent
- self.html = None
+ def decrypt(self, pyfile):
+ pagehtml = self.load(pyfile.url)
- def file_exists(self):
- """ returns True or False
- """
- return True
+ images = set(re.findall(r'(images\.4chan\.org/[^/]*/src/[^"<]*)', pagehtml))
+ urls = []
+ for image in images:
+ urls.append("http://" + image)
- def proceed(self, url, location):
- url = self.parent.url
- html = self.req.load(url)
- link_pattern = ""
- temp_links = []
- if "imagebord.html" in url:
- link_pattern = '[<a href="(res/\d*\.html)">Reply</a>]'
- temp_links = re.findall(link_pattern, html)
- for link in re.findall(link_pattern, html):
- temp_links.append(link)
- else:
- temp_links = re.findall('File : <a href="(http://(?:img\.)?(?:zip\.)?4chan\.org/\w{,3}/src/\d*\..{3})"', html)
- self.links = temp_links
+ self.core.files.addLinks(urls, self.pyfile.package().id)
|
{"golden_diff": "diff --git a/module/plugins/crypter/FourChanOrg.py b/module/plugins/crypter/FourChanOrg.py\n--- a/module/plugins/crypter/FourChanOrg.py\n+++ b/module/plugins/crypter/FourChanOrg.py\n@@ -6,34 +6,20 @@\n from module.plugins.Crypter import Crypter\n \n class FourChanOrg(Crypter):\n+ # Based on 4chandl by Roland Beermann\n+ # https://gist.github.com/enkore/3492599\n __name__ = \"FourChanOrg\"\n __type__ = \"container\"\n- __pattern__ = r\"http://(www\\.)?(img\\.)?(zip\\.)?4chan.org/\\w+/(res/|imgboard\\.html)\"\n- __version__ = \"0.1\"\n- __description__ = \"\"\"4chan.org Thread Download Plugin\"\"\"\n- __author_name__ = (\"Spoob\")\n- __author_mail__ = (\"[email protected]\")\n+ __version__ = \"0.3\"\n+ __pattern__ = r\"http://boards\\.4chan.org/\\w+/res/(\\d+)\"\n+ __description__ = \"Downloader for entire 4chan threads\"\n \n- def __init__(self, parent):\n- Crypter.__init__(self, parent)\n- self.parent = parent\n- self.html = None\n+ def decrypt(self, pyfile):\n+ pagehtml = self.load(pyfile.url)\n \n- def file_exists(self):\n- \"\"\" returns True or False\n- \"\"\"\n- return True\n+ images = set(re.findall(r'(images\\.4chan\\.org/[^/]*/src/[^\"<]*)', pagehtml))\n+ urls = []\n+ for image in images:\n+ urls.append(\"http://\" + image)\n \n- def proceed(self, url, location):\n- url = self.parent.url\n- html = self.req.load(url)\n- link_pattern = \"\"\n- temp_links = []\n- if \"imagebord.html\" in url:\n- link_pattern = '[<a href=\"(res/\\d*\\.html)\">Reply</a>]'\n- temp_links = re.findall(link_pattern, html)\n- for link in re.findall(link_pattern, html):\n- temp_links.append(link)\n- else:\n- temp_links = re.findall('File : <a href=\"(http://(?:img\\.)?(?:zip\\.)?4chan\\.org/\\w{,3}/src/\\d*\\..{3})\"', html)\n- self.links = temp_links\n+ self.core.files.addLinks(urls, self.pyfile.package().id)\n", "issue": "FourChanOrg don't work\nWhen i try to download a thread (e.g. http://boards.4chan.org/wg/res/5176429) nothing happens, only BasePlugin will be used,\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\nimport re\n\nfrom module.plugins.Crypter import Crypter\n\nclass FourChanOrg(Crypter):\n __name__ = \"FourChanOrg\"\n __type__ = \"container\"\n __pattern__ = r\"http://(www\\.)?(img\\.)?(zip\\.)?4chan.org/\\w+/(res/|imgboard\\.html)\"\n __version__ = \"0.1\"\n __description__ = \"\"\"4chan.org Thread Download Plugin\"\"\"\n __author_name__ = (\"Spoob\")\n __author_mail__ = (\"[email protected]\")\n\n def __init__(self, parent):\n Crypter.__init__(self, parent)\n self.parent = parent\n self.html = None\n\n def file_exists(self):\n \"\"\" returns True or False\n \"\"\"\n return True\n\n def proceed(self, url, location):\n url = self.parent.url\n html = self.req.load(url)\n link_pattern = \"\"\n temp_links = []\n if \"imagebord.html\" in url:\n link_pattern = '[<a href=\"(res/\\d*\\.html)\">Reply</a>]'\n temp_links = re.findall(link_pattern, html)\n for link in re.findall(link_pattern, html):\n temp_links.append(link)\n else:\n temp_links = re.findall('File : <a href=\"(http://(?:img\\.)?(?:zip\\.)?4chan\\.org/\\w{,3}/src/\\d*\\..{3})\"', html)\n self.links = temp_links\n", "path": "module/plugins/crypter/FourChanOrg.py"}]}
| 1,006 | 582 |
gh_patches_debug_30209
|
rasdani/github-patches
|
git_diff
|
pyodide__pyodide-77
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Update to Python 3.7
Python 3.7 is out.
</issue>
<code>
[start of tools/common.py]
1 from pathlib import Path
2
3
4 ROOTDIR = Path(__file__).parent.resolve()
5 HOSTPYTHON = ROOTDIR / '..' / 'cpython' / 'build' / '3.6.4' / 'host'
6 TARGETPYTHON = ROOTDIR / '..' / 'cpython' / 'installs' / 'python-3.6.4'
7 DEFAULTCFLAGS = ''
8 DEFAULTLDFLAGS = ' '.join([
9 '-O3',
10 '-s', "BINARYEN_METHOD='native-wasm'",
11 '-Werror',
12 '-s', 'EMULATED_FUNCTION_POINTERS=1',
13 '-s', 'EMULATE_FUNCTION_POINTER_CASTS=1',
14 '-s', 'SIDE_MODULE=1',
15 '-s', 'WASM=1',
16 '--memory-init-file', '0'
17 ])
18
19
20 def parse_package(package):
21 # Import yaml here because pywasmcross needs to run in the built native
22 # Python, which won't have PyYAML
23 import yaml
24 # TODO: Validate against a schema
25 with open(package) as fd:
26 return yaml.load(fd)
27
[end of tools/common.py]
[start of tools/buildpkg.py]
1 #!/usr/bin/env python3
2
3 """
4 Builds a Pyodide package.
5 """
6
7 import argparse
8 import hashlib
9 import os
10 from pathlib import Path
11 import shutil
12 import subprocess
13
14
15 import common
16
17
18 ROOTDIR = Path(__file__).parent.resolve()
19
20
21 def check_checksum(path, pkg):
22 """
23 Checks that a tarball matches the checksum in the package metadata.
24 """
25 checksum_keys = {'md5', 'sha256'}.intersection(pkg['source'])
26 if not checksum_keys:
27 return
28 elif len(checksum_keys) != 1:
29 raise ValueError('Only one checksum should be included in a package '
30 'setup; found {}.'.format(checksum_keys))
31 checksum_algorithm = checksum_keys.pop()
32 checksum = pkg['source'][checksum_algorithm]
33 CHUNK_SIZE = 1 << 16
34 h = getattr(hashlib, checksum_algorithm)()
35 with open(path, 'rb') as fd:
36 while True:
37 chunk = fd.read(CHUNK_SIZE)
38 h.update(chunk)
39 if len(chunk) < CHUNK_SIZE:
40 break
41 if h.hexdigest() != checksum:
42 raise ValueError("Invalid {} checksum".format(checksum_algorithm))
43
44
45 def download_and_extract(buildpath, packagedir, pkg, args):
46 tarballpath = buildpath / Path(pkg['source']['url']).name
47 if not tarballpath.is_file():
48 subprocess.run([
49 'wget', '-q', '-O', str(tarballpath), pkg['source']['url']
50 ], check=True)
51 check_checksum(tarballpath, pkg)
52 srcpath = buildpath / packagedir
53 if not srcpath.is_dir():
54 shutil.unpack_archive(str(tarballpath), str(buildpath))
55 return srcpath
56
57
58 def patch(path, srcpath, pkg, args):
59 if (srcpath / '.patched').is_file():
60 return
61
62 # Apply all of the patches
63 orig_dir = Path.cwd()
64 pkgdir = path.parent.resolve()
65 os.chdir(srcpath)
66 try:
67 for patch in pkg['source'].get('patches', []):
68 subprocess.run([
69 'patch', '-p1', '--binary', '-i', pkgdir / patch
70 ], check=True)
71 finally:
72 os.chdir(orig_dir)
73
74 # Add any extra files
75 for src, dst in pkg['source'].get('extras', []):
76 shutil.copyfile(pkgdir / src, srcpath / dst)
77
78 with open(srcpath / '.patched', 'wb') as fd:
79 fd.write(b'\n')
80
81
82 def get_libdir(srcpath, args):
83 # Get the name of the build/lib.XXX directory that distutils wrote its
84 # output to
85 slug = subprocess.check_output([
86 str(Path(args.host) / 'bin' / 'python3'),
87 '-c',
88 'import sysconfig, sys; '
89 'print("{}-{}.{}".format('
90 'sysconfig.get_platform(), '
91 'sys.version_info[0], '
92 'sys.version_info[1]))']).decode('ascii').strip()
93 purelib = srcpath / 'build' / 'lib'
94 if purelib.is_dir():
95 libdir = purelib
96 else:
97 libdir = srcpath / 'build' / ('lib.' + slug)
98 return libdir
99
100
101 def compile(path, srcpath, pkg, args):
102 if (srcpath / '.built').is_file():
103 return
104
105 orig_dir = Path.cwd()
106 os.chdir(srcpath)
107 try:
108 subprocess.run([
109 str(Path(args.host) / 'bin' / 'python3'),
110 str(ROOTDIR / 'pywasmcross'),
111 '--cflags',
112 args.cflags + ' ' +
113 pkg.get('build', {}).get('cflags', ''),
114 '--ldflags',
115 args.ldflags + ' ' +
116 pkg.get('build', {}).get('ldflags', ''),
117 '--host', args.host,
118 '--target', args.target], check=True)
119 finally:
120 os.chdir(orig_dir)
121
122 post = pkg.get('build', {}).get('post')
123 if post is not None:
124 libdir = get_libdir(srcpath, args)
125 pkgdir = path.parent.resolve()
126 env = {
127 'BUILD': libdir,
128 'PKGDIR': pkgdir
129 }
130 subprocess.run([
131 'bash', '-c', post], env=env, check=True)
132
133 with open(srcpath / '.built', 'wb') as fd:
134 fd.write(b'\n')
135
136
137 def package_files(buildpath, srcpath, pkg, args):
138 if (buildpath / '.packaged').is_file():
139 return
140
141 name = pkg['package']['name']
142 install_prefix = (srcpath / 'install').resolve()
143 subprocess.run([
144 'python',
145 Path(os.environ['EMSCRIPTEN']) / 'tools' / 'file_packager.py',
146 name + '.data',
147 '--preload',
148 '{}@/'.format(install_prefix),
149 '--js-output={}'.format(name + '.js'),
150 '--export-name=pyodide',
151 '--exclude', '*.wasm.pre',
152 '--exclude', '__pycache__',
153 '--use-preload-plugins'],
154 cwd=buildpath, check=True)
155 subprocess.run([
156 'uglifyjs',
157 buildpath / (name + '.js'),
158 '-o',
159 buildpath / (name + '.js')], check=True)
160
161 with open(buildpath / '.packaged', 'wb') as fd:
162 fd.write(b'\n')
163
164
165 def build_package(path, args):
166 pkg = common.parse_package(path)
167 packagedir = pkg['package']['name'] + '-' + pkg['package']['version']
168 dirpath = path.parent
169 orig_path = Path.cwd()
170 os.chdir(dirpath)
171 try:
172 buildpath = dirpath / 'build'
173 if not buildpath.is_dir():
174 os.makedirs(buildpath)
175 srcpath = download_and_extract(buildpath, packagedir, pkg, args)
176 patch(path, srcpath, pkg, args)
177 compile(path, srcpath, pkg, args)
178 package_files(buildpath, srcpath, pkg, args)
179 finally:
180 os.chdir(orig_path)
181
182
183 def parse_args():
184 parser = argparse.ArgumentParser('Build a pyodide package.')
185 parser.add_argument(
186 'package', type=str, nargs=1,
187 help="Path to meta.yaml package description")
188 parser.add_argument(
189 '--cflags', type=str, nargs='?', default=common.DEFAULTCFLAGS,
190 help='Extra compiling flags')
191 parser.add_argument(
192 '--ldflags', type=str, nargs='?', default=common.DEFAULTLDFLAGS,
193 help='Extra linking flags')
194 parser.add_argument(
195 '--host', type=str, nargs='?', default=common.HOSTPYTHON,
196 help='The path to the host Python installation')
197 parser.add_argument(
198 '--target', type=str, nargs='?', default=common.TARGETPYTHON,
199 help='The path to the target Python installation')
200 return parser.parse_args()
201
202
203 def main(args):
204 path = Path(args.package[0]).resolve()
205 build_package(path, args)
206
207
208 if __name__ == '__main__':
209 args = parse_args()
210 main(args)
211
[end of tools/buildpkg.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/tools/buildpkg.py b/tools/buildpkg.py
--- a/tools/buildpkg.py
+++ b/tools/buildpkg.py
@@ -79,25 +79,6 @@
fd.write(b'\n')
-def get_libdir(srcpath, args):
- # Get the name of the build/lib.XXX directory that distutils wrote its
- # output to
- slug = subprocess.check_output([
- str(Path(args.host) / 'bin' / 'python3'),
- '-c',
- 'import sysconfig, sys; '
- 'print("{}-{}.{}".format('
- 'sysconfig.get_platform(), '
- 'sys.version_info[0], '
- 'sys.version_info[1]))']).decode('ascii').strip()
- purelib = srcpath / 'build' / 'lib'
- if purelib.is_dir():
- libdir = purelib
- else:
- libdir = srcpath / 'build' / ('lib.' + slug)
- return libdir
-
-
def compile(path, srcpath, pkg, args):
if (srcpath / '.built').is_file():
return
@@ -121,10 +102,11 @@
post = pkg.get('build', {}).get('post')
if post is not None:
- libdir = get_libdir(srcpath, args)
+ site_packages_dir = (
+ srcpath / 'install' / 'lib' / 'python3.7' / 'site-packages')
pkgdir = path.parent.resolve()
env = {
- 'BUILD': libdir,
+ 'SITEPACKAGES': site_packages_dir,
'PKGDIR': pkgdir
}
subprocess.run([
diff --git a/tools/common.py b/tools/common.py
--- a/tools/common.py
+++ b/tools/common.py
@@ -2,8 +2,8 @@
ROOTDIR = Path(__file__).parent.resolve()
-HOSTPYTHON = ROOTDIR / '..' / 'cpython' / 'build' / '3.6.4' / 'host'
-TARGETPYTHON = ROOTDIR / '..' / 'cpython' / 'installs' / 'python-3.6.4'
+HOSTPYTHON = ROOTDIR / '..' / 'cpython' / 'build' / '3.7.0' / 'host'
+TARGETPYTHON = ROOTDIR / '..' / 'cpython' / 'installs' / 'python-3.7.0'
DEFAULTCFLAGS = ''
DEFAULTLDFLAGS = ' '.join([
'-O3',
|
{"golden_diff": "diff --git a/tools/buildpkg.py b/tools/buildpkg.py\n--- a/tools/buildpkg.py\n+++ b/tools/buildpkg.py\n@@ -79,25 +79,6 @@\n fd.write(b'\\n')\n \n \n-def get_libdir(srcpath, args):\n- # Get the name of the build/lib.XXX directory that distutils wrote its\n- # output to\n- slug = subprocess.check_output([\n- str(Path(args.host) / 'bin' / 'python3'),\n- '-c',\n- 'import sysconfig, sys; '\n- 'print(\"{}-{}.{}\".format('\n- 'sysconfig.get_platform(), '\n- 'sys.version_info[0], '\n- 'sys.version_info[1]))']).decode('ascii').strip()\n- purelib = srcpath / 'build' / 'lib'\n- if purelib.is_dir():\n- libdir = purelib\n- else:\n- libdir = srcpath / 'build' / ('lib.' + slug)\n- return libdir\n-\n-\n def compile(path, srcpath, pkg, args):\n if (srcpath / '.built').is_file():\n return\n@@ -121,10 +102,11 @@\n \n post = pkg.get('build', {}).get('post')\n if post is not None:\n- libdir = get_libdir(srcpath, args)\n+ site_packages_dir = (\n+ srcpath / 'install' / 'lib' / 'python3.7' / 'site-packages')\n pkgdir = path.parent.resolve()\n env = {\n- 'BUILD': libdir,\n+ 'SITEPACKAGES': site_packages_dir,\n 'PKGDIR': pkgdir\n }\n subprocess.run([\ndiff --git a/tools/common.py b/tools/common.py\n--- a/tools/common.py\n+++ b/tools/common.py\n@@ -2,8 +2,8 @@\n \n \n ROOTDIR = Path(__file__).parent.resolve()\n-HOSTPYTHON = ROOTDIR / '..' / 'cpython' / 'build' / '3.6.4' / 'host'\n-TARGETPYTHON = ROOTDIR / '..' / 'cpython' / 'installs' / 'python-3.6.4'\n+HOSTPYTHON = ROOTDIR / '..' / 'cpython' / 'build' / '3.7.0' / 'host'\n+TARGETPYTHON = ROOTDIR / '..' / 'cpython' / 'installs' / 'python-3.7.0'\n DEFAULTCFLAGS = ''\n DEFAULTLDFLAGS = ' '.join([\n '-O3',\n", "issue": "Update to Python 3.7\nPython 3.7 is out.\n", "before_files": [{"content": "from pathlib import Path\n\n\nROOTDIR = Path(__file__).parent.resolve()\nHOSTPYTHON = ROOTDIR / '..' / 'cpython' / 'build' / '3.6.4' / 'host'\nTARGETPYTHON = ROOTDIR / '..' 
/ 'cpython' / 'installs' / 'python-3.6.4'\nDEFAULTCFLAGS = ''\nDEFAULTLDFLAGS = ' '.join([\n '-O3',\n '-s', \"BINARYEN_METHOD='native-wasm'\",\n '-Werror',\n '-s', 'EMULATED_FUNCTION_POINTERS=1',\n '-s', 'EMULATE_FUNCTION_POINTER_CASTS=1',\n '-s', 'SIDE_MODULE=1',\n '-s', 'WASM=1',\n '--memory-init-file', '0'\n ])\n\n\ndef parse_package(package):\n # Import yaml here because pywasmcross needs to run in the built native\n # Python, which won't have PyYAML\n import yaml\n # TODO: Validate against a schema\n with open(package) as fd:\n return yaml.load(fd)\n", "path": "tools/common.py"}, {"content": "#!/usr/bin/env python3\n\n\"\"\"\nBuilds a Pyodide package.\n\"\"\"\n\nimport argparse\nimport hashlib\nimport os\nfrom pathlib import Path\nimport shutil\nimport subprocess\n\n\nimport common\n\n\nROOTDIR = Path(__file__).parent.resolve()\n\n\ndef check_checksum(path, pkg):\n \"\"\"\n Checks that a tarball matches the checksum in the package metadata.\n \"\"\"\n checksum_keys = {'md5', 'sha256'}.intersection(pkg['source'])\n if not checksum_keys:\n return\n elif len(checksum_keys) != 1:\n raise ValueError('Only one checksum should be included in a package '\n 'setup; found {}.'.format(checksum_keys))\n checksum_algorithm = checksum_keys.pop()\n checksum = pkg['source'][checksum_algorithm]\n CHUNK_SIZE = 1 << 16\n h = getattr(hashlib, checksum_algorithm)()\n with open(path, 'rb') as fd:\n while True:\n chunk = fd.read(CHUNK_SIZE)\n h.update(chunk)\n if len(chunk) < CHUNK_SIZE:\n break\n if h.hexdigest() != checksum:\n raise ValueError(\"Invalid {} checksum\".format(checksum_algorithm))\n\n\ndef download_and_extract(buildpath, packagedir, pkg, args):\n tarballpath = buildpath / Path(pkg['source']['url']).name\n if not tarballpath.is_file():\n subprocess.run([\n 'wget', '-q', '-O', str(tarballpath), pkg['source']['url']\n ], check=True)\n check_checksum(tarballpath, pkg)\n srcpath = buildpath / packagedir\n if not srcpath.is_dir():\n shutil.unpack_archive(str(tarballpath), str(buildpath))\n return srcpath\n\n\ndef patch(path, srcpath, pkg, args):\n if (srcpath / '.patched').is_file():\n return\n\n # Apply all of the patches\n orig_dir = Path.cwd()\n pkgdir = path.parent.resolve()\n os.chdir(srcpath)\n try:\n for patch in pkg['source'].get('patches', []):\n subprocess.run([\n 'patch', '-p1', '--binary', '-i', pkgdir / patch\n ], check=True)\n finally:\n os.chdir(orig_dir)\n\n # Add any extra files\n for src, dst in pkg['source'].get('extras', []):\n shutil.copyfile(pkgdir / src, srcpath / dst)\n\n with open(srcpath / '.patched', 'wb') as fd:\n fd.write(b'\\n')\n\n\ndef get_libdir(srcpath, args):\n # Get the name of the build/lib.XXX directory that distutils wrote its\n # output to\n slug = subprocess.check_output([\n str(Path(args.host) / 'bin' / 'python3'),\n '-c',\n 'import sysconfig, sys; '\n 'print(\"{}-{}.{}\".format('\n 'sysconfig.get_platform(), '\n 'sys.version_info[0], '\n 'sys.version_info[1]))']).decode('ascii').strip()\n purelib = srcpath / 'build' / 'lib'\n if purelib.is_dir():\n libdir = purelib\n else:\n libdir = srcpath / 'build' / ('lib.' 
+ slug)\n return libdir\n\n\ndef compile(path, srcpath, pkg, args):\n if (srcpath / '.built').is_file():\n return\n\n orig_dir = Path.cwd()\n os.chdir(srcpath)\n try:\n subprocess.run([\n str(Path(args.host) / 'bin' / 'python3'),\n str(ROOTDIR / 'pywasmcross'),\n '--cflags',\n args.cflags + ' ' +\n pkg.get('build', {}).get('cflags', ''),\n '--ldflags',\n args.ldflags + ' ' +\n pkg.get('build', {}).get('ldflags', ''),\n '--host', args.host,\n '--target', args.target], check=True)\n finally:\n os.chdir(orig_dir)\n\n post = pkg.get('build', {}).get('post')\n if post is not None:\n libdir = get_libdir(srcpath, args)\n pkgdir = path.parent.resolve()\n env = {\n 'BUILD': libdir,\n 'PKGDIR': pkgdir\n }\n subprocess.run([\n 'bash', '-c', post], env=env, check=True)\n\n with open(srcpath / '.built', 'wb') as fd:\n fd.write(b'\\n')\n\n\ndef package_files(buildpath, srcpath, pkg, args):\n if (buildpath / '.packaged').is_file():\n return\n\n name = pkg['package']['name']\n install_prefix = (srcpath / 'install').resolve()\n subprocess.run([\n 'python',\n Path(os.environ['EMSCRIPTEN']) / 'tools' / 'file_packager.py',\n name + '.data',\n '--preload',\n '{}@/'.format(install_prefix),\n '--js-output={}'.format(name + '.js'),\n '--export-name=pyodide',\n '--exclude', '*.wasm.pre',\n '--exclude', '__pycache__',\n '--use-preload-plugins'],\n cwd=buildpath, check=True)\n subprocess.run([\n 'uglifyjs',\n buildpath / (name + '.js'),\n '-o',\n buildpath / (name + '.js')], check=True)\n\n with open(buildpath / '.packaged', 'wb') as fd:\n fd.write(b'\\n')\n\n\ndef build_package(path, args):\n pkg = common.parse_package(path)\n packagedir = pkg['package']['name'] + '-' + pkg['package']['version']\n dirpath = path.parent\n orig_path = Path.cwd()\n os.chdir(dirpath)\n try:\n buildpath = dirpath / 'build'\n if not buildpath.is_dir():\n os.makedirs(buildpath)\n srcpath = download_and_extract(buildpath, packagedir, pkg, args)\n patch(path, srcpath, pkg, args)\n compile(path, srcpath, pkg, args)\n package_files(buildpath, srcpath, pkg, args)\n finally:\n os.chdir(orig_path)\n\n\ndef parse_args():\n parser = argparse.ArgumentParser('Build a pyodide package.')\n parser.add_argument(\n 'package', type=str, nargs=1,\n help=\"Path to meta.yaml package description\")\n parser.add_argument(\n '--cflags', type=str, nargs='?', default=common.DEFAULTCFLAGS,\n help='Extra compiling flags')\n parser.add_argument(\n '--ldflags', type=str, nargs='?', default=common.DEFAULTLDFLAGS,\n help='Extra linking flags')\n parser.add_argument(\n '--host', type=str, nargs='?', default=common.HOSTPYTHON,\n help='The path to the host Python installation')\n parser.add_argument(\n '--target', type=str, nargs='?', default=common.TARGETPYTHON,\n help='The path to the target Python installation')\n return parser.parse_args()\n\n\ndef main(args):\n path = Path(args.package[0]).resolve()\n build_package(path, args)\n\n\nif __name__ == '__main__':\n args = parse_args()\n main(args)\n", "path": "tools/buildpkg.py"}]}
| 2,921 | 564 |
gh_patches_debug_31362
|
rasdani/github-patches
|
git_diff
|
pytorch__ignite-69
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Start current_epoch + current_iteration from 1 instead of 0.
Also increment these counters *after* the `ITERATION_COMPLETE`/`EPOCH_COMPLETE` events.
See discussion in #59 for more details
</issue>
<code>
[start of ignite/engine.py]
1 import logging
2 from abc import ABCMeta, abstractmethod
3 from enum import Enum
4 import time
5
6 from ignite.history import History
7 from ignite._utils import _to_hours_mins_secs
8
9
10 class Events(Enum):
11 EPOCH_STARTED = "epoch_started"
12 EPOCH_COMPLETED = "epoch_completed"
13 STARTED = "started"
14 COMPLETED = "completed"
15 ITERATION_STARTED = "iteration_started"
16 ITERATION_COMPLETED = "iteration_completed"
17 EXCEPTION_RAISED = "exception_raised"
18
19
20 class Engine(object):
21 __metaclass__ = ABCMeta
22
23 """
24 Abstract Engine class that is the super class of the Trainer and Evaluator engines.
25
26 Parameters
27 ----------
28 process_function : callable
29 A function receiving the current training batch in each iteration, outputing data to be stored in the history
30
31 """
32 def __init__(self, process_function):
33 self._event_handlers = {}
34 self._logger = logging.getLogger(__name__ + "." + self.__class__.__name__)
35 self._logger.addHandler(logging.NullHandler())
36 self._process_function = process_function
37 self.current_iteration = 0
38 self.history = History()
39 self.should_terminate = False
40
41 if self._process_function is None:
42 raise ValueError("Engine must be given a processing function in order to run")
43
44 def add_event_handler(self, event_name, handler, *args, **kwargs):
45 """
46 Add an event handler to be executed when the specified event is fired
47
48 Parameters
49 ----------
50 event_name: enum
51 event from ignite.trainer.TrainingEvents to attach the
52 handler to
53 handler: Callable
54 the callable event handler that should be invoked
55 args:
56 optional args to be passed to `handler`
57 kwargs:
58 optional keyword args to be passed to `handler`
59
60 Returns
61 -------
62 None
63 """
64 if event_name not in Events.__members__.values():
65 self._logger.error("attempt to add event handler to an invalid event %s ", event_name)
66 raise ValueError("Event {} is not a valid event for this Engine".format(event_name))
67
68 if event_name not in self._event_handlers:
69 self._event_handlers[event_name] = []
70
71 self._event_handlers[event_name].append((handler, args, kwargs))
72 self._logger.debug("added handler for event % ", event_name)
73
74 def on(self, event_name, *args, **kwargs):
75 """
76 Decorator shortcut for add_event_handler
77
78 Parameters
79 ----------
80 event_name: enum
81 event to attach the handler to
82 args:
83 optional args to be passed to `handler`
84 kwargs:
85 optional keyword args to be passed to `handler`
86
87 Returns
88 -------
89 None
90 """
91 def decorator(f):
92 self.add_event_handler(event_name, f, *args, **kwargs)
93 return f
94 return decorator
95
96 def _fire_event(self, event_name):
97 if event_name in self._event_handlers.keys():
98 self._logger.debug("firing handlers for event %s ", event_name)
99 for func, args, kwargs in self._event_handlers[event_name]:
100 func(self, *args, **kwargs)
101
102 def terminate(self):
103 """
104 Sends terminate signal to the engine, so that it terminates after the current iteration
105 """
106 self._logger.info("Terminate signaled. Engine will stop after current iteration is finished")
107 self.should_terminate = True
108
109 def _run_once_on_dataset(self, dataset):
110 self.dataset = dataset
111 try:
112 start_time = time.time()
113 for batch in dataset:
114 self._fire_event(Events.ITERATION_STARTED)
115 step_result = self._process_function(batch)
116 if step_result is not None:
117 self.history.append(step_result)
118
119 self.current_iteration += 1
120 self._fire_event(Events.ITERATION_COMPLETED)
121 if self.should_terminate:
122 break
123 time_taken = time.time() - start_time
124 hours, mins, secs = _to_hours_mins_secs(time_taken)
125 return hours, mins, secs
126 except BaseException as e:
127 self._logger.error("Current run is terminating due to exception: %s", str(e))
128 self._fire_event(Events.EXCEPTION_RAISED)
129 raise e
130
131 @abstractmethod
132 def run(self, data, **kwargs):
133 """
134 Train the model, evaluate the validation set and update best parameters if the validation loss
135 improves.
136 In the event that the validation set is not run (or doesn't exist), the training loss is used
137 to update the best parameters.
138
139 Parameters
140 ----------
141 data : Iterable
142 Collection of batches allowing for the engine to iterate over(e.g., list or DataLoader)
143 **kwargs: optional
144 Any additional kwargs
145
146 Returns
147 -------
148 None
149 """
150 raise NotImplementedError("This method should be implemented by a subclass")
151
[end of ignite/engine.py]
[start of ignite/trainer.py]
1 from __future__ import division
2
3 import time
4 from collections import Iterable
5 from torch.autograd import Variable
6
7 from ignite.engine import Engine, Events
8 from ignite._utils import _to_hours_mins_secs, to_variable
9
10 __all__ = ["Trainer", "create_supervised_trainer"]
11
12
13 class Trainer(Engine):
14 """
15 Generic trainer class.
16
17 Training update and validation functions receive batches of data and return values which will
18 be stored in the `training_history` and `validation_history`. The trainer defines multiple
19 events in `TrainingEvents` for which the user can attach event handlers to. The events get
20 passed the trainer, so they can access the training/validation history
21
22
23 Parameters
24 ----------
25 training_update_function : callable
26 Update function receiving the current training batch in each iteration
27 """
28
29 def __init__(self, training_update_function):
30 super(Trainer, self).__init__(training_update_function)
31 self.current_epoch = 0
32 self.max_epochs = 0
33
34 def _train_one_epoch(self, training_data):
35 hours, mins, secs = self._run_once_on_dataset(training_data)
36 self._logger.info("Epoch[%s] Complete. Time taken: %02d:%02d:%02d", self.current_epoch, hours,
37 mins, secs)
38
39 def run(self, training_data, max_epochs=1):
40 """
41 Train the model, evaluate the validation set and update best parameters if the validation loss
42 improves.
43 In the event that the validation set is not run (or doesn't exist), the training loss is used
44 to update the best parameters.
45
46 Parameters
47 ----------
48 training_data : Iterable
49 Collection of training batches allowing repeated iteration (e.g., list or DataLoader)
50 max_epochs: int, optional
51 max epochs to train for [default=1]
52
53 Returns
54 -------
55 None
56 """
57 self.dataset = training_data
58 self.current_iteration = 0
59 self.current_epoch = 0
60
61 try:
62 self._logger.info("Training starting with max_epochs={}".format(max_epochs))
63
64 self.max_epochs = max_epochs
65
66 start_time = time.time()
67
68 self._fire_event(Events.STARTED)
69 while self.current_epoch < max_epochs and not self.should_terminate:
70 self._fire_event(Events.EPOCH_STARTED)
71 self._train_one_epoch(training_data)
72 if self.should_terminate:
73 break
74 self._fire_event(Events.EPOCH_COMPLETED)
75 self.current_epoch += 1
76
77 self._fire_event(Events.COMPLETED)
78 time_taken = time.time() - start_time
79 hours, mins, secs = _to_hours_mins_secs(time_taken)
80 self._logger.info("Training complete. Time taken %02d:%02d:%02d" % (hours, mins, secs))
81
82 except BaseException as e:
83 self._logger.error("Training is terminating due to exception: %s", str(e))
84 self._fire_event(Events.EXCEPTION_RAISED)
85 raise e
86
87
88 def create_supervised_trainer(model, optimizer, loss_fn, cuda=False):
89 """
90 Factory function for creating a trainer for supervised models
91
92 Args:
93 model (torch.nn.Module): the model to train
94 optimizer (torch.optim.Optimizer): the optimizer to use
95 loss_fn (torch.nn loss function): the loss function to use
96 cuda (bool, optional): whether or not to transfer batch to GPU (default: False)
97
98 Returns:
99 Trainer: a trainer instance with supervised update function
100 """
101
102 def _prepare_batch(batch):
103 x, y = batch
104 x = to_variable(x, cuda=cuda)
105 y = to_variable(y, cuda=cuda)
106 return x, y
107
108 def _update(batch):
109 model.train()
110 optimizer.zero_grad()
111 x, y = _prepare_batch(batch)
112 y_pred = model(x)
113 loss = loss_fn(y_pred, y)
114 loss.backward()
115 optimizer.step()
116 return loss.data.cpu()[0]
117
118 return Trainer(_update)
119
[end of ignite/trainer.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/ignite/engine.py b/ignite/engine.py
--- a/ignite/engine.py
+++ b/ignite/engine.py
@@ -111,15 +111,16 @@
try:
start_time = time.time()
for batch in dataset:
+ self.current_iteration += 1
self._fire_event(Events.ITERATION_STARTED)
step_result = self._process_function(batch)
if step_result is not None:
self.history.append(step_result)
- self.current_iteration += 1
self._fire_event(Events.ITERATION_COMPLETED)
if self.should_terminate:
break
+
time_taken = time.time() - start_time
hours, mins, secs = _to_hours_mins_secs(time_taken)
return hours, mins, secs
diff --git a/ignite/trainer.py b/ignite/trainer.py
--- a/ignite/trainer.py
+++ b/ignite/trainer.py
@@ -67,12 +67,12 @@
self._fire_event(Events.STARTED)
while self.current_epoch < max_epochs and not self.should_terminate:
+ self.current_epoch += 1
self._fire_event(Events.EPOCH_STARTED)
self._train_one_epoch(training_data)
if self.should_terminate:
break
self._fire_event(Events.EPOCH_COMPLETED)
- self.current_epoch += 1
self._fire_event(Events.COMPLETED)
time_taken = time.time() - start_time
|
{"golden_diff": "diff --git a/ignite/engine.py b/ignite/engine.py\n--- a/ignite/engine.py\n+++ b/ignite/engine.py\n@@ -111,15 +111,16 @@\n try:\n start_time = time.time()\n for batch in dataset:\n+ self.current_iteration += 1\n self._fire_event(Events.ITERATION_STARTED)\n step_result = self._process_function(batch)\n if step_result is not None:\n self.history.append(step_result)\n \n- self.current_iteration += 1\n self._fire_event(Events.ITERATION_COMPLETED)\n if self.should_terminate:\n break\n+\n time_taken = time.time() - start_time\n hours, mins, secs = _to_hours_mins_secs(time_taken)\n return hours, mins, secs\ndiff --git a/ignite/trainer.py b/ignite/trainer.py\n--- a/ignite/trainer.py\n+++ b/ignite/trainer.py\n@@ -67,12 +67,12 @@\n \n self._fire_event(Events.STARTED)\n while self.current_epoch < max_epochs and not self.should_terminate:\n+ self.current_epoch += 1\n self._fire_event(Events.EPOCH_STARTED)\n self._train_one_epoch(training_data)\n if self.should_terminate:\n break\n self._fire_event(Events.EPOCH_COMPLETED)\n- self.current_epoch += 1\n \n self._fire_event(Events.COMPLETED)\n time_taken = time.time() - start_time\n", "issue": "Start current_epoch + current_iteration from 1 instead of 0.\nAlso increment these counters *after* the `ITERATION_COMPLETE`/`EPOCH_COMPLETE` events.\r\n\r\nSee discussion in #59 for more details\n", "before_files": [{"content": "import logging\nfrom abc import ABCMeta, abstractmethod\nfrom enum import Enum\nimport time\n\nfrom ignite.history import History\nfrom ignite._utils import _to_hours_mins_secs\n\n\nclass Events(Enum):\n EPOCH_STARTED = \"epoch_started\"\n EPOCH_COMPLETED = \"epoch_completed\"\n STARTED = \"started\"\n COMPLETED = \"completed\"\n ITERATION_STARTED = \"iteration_started\"\n ITERATION_COMPLETED = \"iteration_completed\"\n EXCEPTION_RAISED = \"exception_raised\"\n\n\nclass Engine(object):\n __metaclass__ = ABCMeta\n\n \"\"\"\n Abstract Engine class that is the super class of the Trainer and Evaluator engines.\n\n Parameters\n ----------\n process_function : callable\n A function receiving the current training batch in each iteration, outputing data to be stored in the history\n\n \"\"\"\n def __init__(self, process_function):\n self._event_handlers = {}\n self._logger = logging.getLogger(__name__ + \".\" + self.__class__.__name__)\n self._logger.addHandler(logging.NullHandler())\n self._process_function = process_function\n self.current_iteration = 0\n self.history = History()\n self.should_terminate = False\n\n if self._process_function is None:\n raise ValueError(\"Engine must be given a processing function in order to run\")\n\n def add_event_handler(self, event_name, handler, *args, **kwargs):\n \"\"\"\n Add an event handler to be executed when the specified event is fired\n\n Parameters\n ----------\n event_name: enum\n event from ignite.trainer.TrainingEvents to attach the\n handler to\n handler: Callable\n the callable event handler that should be invoked\n args:\n optional args to be passed to `handler`\n kwargs:\n optional keyword args to be passed to `handler`\n\n Returns\n -------\n None\n \"\"\"\n if event_name not in Events.__members__.values():\n self._logger.error(\"attempt to add event handler to an invalid event %s \", event_name)\n raise ValueError(\"Event {} is not a valid event for this Engine\".format(event_name))\n\n if event_name not in self._event_handlers:\n self._event_handlers[event_name] = []\n\n self._event_handlers[event_name].append((handler, args, kwargs))\n self._logger.debug(\"added handler for 
event % \", event_name)\n\n def on(self, event_name, *args, **kwargs):\n \"\"\"\n Decorator shortcut for add_event_handler\n\n Parameters\n ----------\n event_name: enum\n event to attach the handler to\n args:\n optional args to be passed to `handler`\n kwargs:\n optional keyword args to be passed to `handler`\n\n Returns\n -------\n None\n \"\"\"\n def decorator(f):\n self.add_event_handler(event_name, f, *args, **kwargs)\n return f\n return decorator\n\n def _fire_event(self, event_name):\n if event_name in self._event_handlers.keys():\n self._logger.debug(\"firing handlers for event %s \", event_name)\n for func, args, kwargs in self._event_handlers[event_name]:\n func(self, *args, **kwargs)\n\n def terminate(self):\n \"\"\"\n Sends terminate signal to the engine, so that it terminates after the current iteration\n \"\"\"\n self._logger.info(\"Terminate signaled. Engine will stop after current iteration is finished\")\n self.should_terminate = True\n\n def _run_once_on_dataset(self, dataset):\n self.dataset = dataset\n try:\n start_time = time.time()\n for batch in dataset:\n self._fire_event(Events.ITERATION_STARTED)\n step_result = self._process_function(batch)\n if step_result is not None:\n self.history.append(step_result)\n\n self.current_iteration += 1\n self._fire_event(Events.ITERATION_COMPLETED)\n if self.should_terminate:\n break\n time_taken = time.time() - start_time\n hours, mins, secs = _to_hours_mins_secs(time_taken)\n return hours, mins, secs\n except BaseException as e:\n self._logger.error(\"Current run is terminating due to exception: %s\", str(e))\n self._fire_event(Events.EXCEPTION_RAISED)\n raise e\n\n @abstractmethod\n def run(self, data, **kwargs):\n \"\"\"\n Train the model, evaluate the validation set and update best parameters if the validation loss\n improves.\n In the event that the validation set is not run (or doesn't exist), the training loss is used\n to update the best parameters.\n\n Parameters\n ----------\n data : Iterable\n Collection of batches allowing for the engine to iterate over(e.g., list or DataLoader)\n **kwargs: optional\n Any additional kwargs\n\n Returns\n -------\n None\n \"\"\"\n raise NotImplementedError(\"This method should be implemented by a subclass\")\n", "path": "ignite/engine.py"}, {"content": "from __future__ import division\n\nimport time\nfrom collections import Iterable\nfrom torch.autograd import Variable\n\nfrom ignite.engine import Engine, Events\nfrom ignite._utils import _to_hours_mins_secs, to_variable\n\n__all__ = [\"Trainer\", \"create_supervised_trainer\"]\n\n\nclass Trainer(Engine):\n \"\"\"\n Generic trainer class.\n\n Training update and validation functions receive batches of data and return values which will\n be stored in the `training_history` and `validation_history`. The trainer defines multiple\n events in `TrainingEvents` for which the user can attach event handlers to. The events get\n passed the trainer, so they can access the training/validation history\n\n\n Parameters\n ----------\n training_update_function : callable\n Update function receiving the current training batch in each iteration\n \"\"\"\n\n def __init__(self, training_update_function):\n super(Trainer, self).__init__(training_update_function)\n self.current_epoch = 0\n self.max_epochs = 0\n\n def _train_one_epoch(self, training_data):\n hours, mins, secs = self._run_once_on_dataset(training_data)\n self._logger.info(\"Epoch[%s] Complete. 
Time taken: %02d:%02d:%02d\", self.current_epoch, hours,\n mins, secs)\n\n def run(self, training_data, max_epochs=1):\n \"\"\"\n Train the model, evaluate the validation set and update best parameters if the validation loss\n improves.\n In the event that the validation set is not run (or doesn't exist), the training loss is used\n to update the best parameters.\n\n Parameters\n ----------\n training_data : Iterable\n Collection of training batches allowing repeated iteration (e.g., list or DataLoader)\n max_epochs: int, optional\n max epochs to train for [default=1]\n\n Returns\n -------\n None\n \"\"\"\n self.dataset = training_data\n self.current_iteration = 0\n self.current_epoch = 0\n\n try:\n self._logger.info(\"Training starting with max_epochs={}\".format(max_epochs))\n\n self.max_epochs = max_epochs\n\n start_time = time.time()\n\n self._fire_event(Events.STARTED)\n while self.current_epoch < max_epochs and not self.should_terminate:\n self._fire_event(Events.EPOCH_STARTED)\n self._train_one_epoch(training_data)\n if self.should_terminate:\n break\n self._fire_event(Events.EPOCH_COMPLETED)\n self.current_epoch += 1\n\n self._fire_event(Events.COMPLETED)\n time_taken = time.time() - start_time\n hours, mins, secs = _to_hours_mins_secs(time_taken)\n self._logger.info(\"Training complete. Time taken %02d:%02d:%02d\" % (hours, mins, secs))\n\n except BaseException as e:\n self._logger.error(\"Training is terminating due to exception: %s\", str(e))\n self._fire_event(Events.EXCEPTION_RAISED)\n raise e\n\n\ndef create_supervised_trainer(model, optimizer, loss_fn, cuda=False):\n \"\"\"\n Factory function for creating a trainer for supervised models\n\n Args:\n model (torch.nn.Module): the model to train\n optimizer (torch.optim.Optimizer): the optimizer to use\n loss_fn (torch.nn loss function): the loss function to use\n cuda (bool, optional): whether or not to transfer batch to GPU (default: False)\n\n Returns:\n Trainer: a trainer instance with supervised update function\n \"\"\"\n\n def _prepare_batch(batch):\n x, y = batch\n x = to_variable(x, cuda=cuda)\n y = to_variable(y, cuda=cuda)\n return x, y\n\n def _update(batch):\n model.train()\n optimizer.zero_grad()\n x, y = _prepare_batch(batch)\n y_pred = model(x)\n loss = loss_fn(y_pred, y)\n loss.backward()\n optimizer.step()\n return loss.data.cpu()[0]\n\n return Trainer(_update)\n", "path": "ignite/trainer.py"}]}
| 3,111 | 326 |
gh_patches_debug_11575
|
rasdani/github-patches
|
git_diff
|
mindsdb__lightwood-968
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Improve runtime of `LightGBMArray` for long-horizon forecasting
Two main approaches:
- Disable optuna hyperparam search past some threshold.
- Opt for a recursive strategy instead of direct (i.e. same regressor trained for all timesteps v/s one for each step).
</issue>
<code>
[start of lightwood/mixer/lightgbm_array.py]
1 from copy import deepcopy
2 from typing import Dict, List, Union
3
4 import numpy as np
5 import pandas as pd
6
7 from lightwood.helpers.log import log
8 from lightwood.mixer.helpers.ts import _apply_stl_on_training, _stl_transform, _stl_inverse_transform
9 from lightwood.encoder.base import BaseEncoder
10 from lightwood.mixer.base import BaseMixer
11 from lightwood.mixer.lightgbm import LightGBM
12 from lightwood.api.types import PredictionArguments, TimeseriesSettings
13 from lightwood.data.encoded_ds import EncodedDs, ConcatedEncodedDs
14
15
16 class LightGBMArray(BaseMixer):
17 """LightGBM-based model, intended for usage in time series tasks."""
18 models: List[LightGBM]
19 submodel_stop_after: float
20 target: str
21 supports_proba: bool
22 ts_analysis: Dict
23 tss: TimeseriesSettings
24
25 def __init__(
26 self,
27 stop_after: float,
28 target: str,
29 dtype_dict: Dict[str, str],
30 input_cols: List[str],
31 fit_on_dev: bool,
32 target_encoder: BaseEncoder,
33 ts_analysis: Dict[str, object],
34 use_stl: bool,
35 tss: TimeseriesSettings
36 ):
37 super().__init__(stop_after)
38 self.tss = tss
39 self.horizon = tss.horizon
40 self.submodel_stop_after = stop_after / self.horizon
41 self.target = target
42 self.offset_pred_cols = [f'{self.target}_timestep_{i}' for i in range(1, self.horizon)]
43 if set(input_cols) != {self.tss.order_by}:
44 input_cols.remove(self.tss.order_by)
45 for col in self.offset_pred_cols:
46 dtype_dict[col] = dtype_dict[self.target]
47 self.models = [LightGBM(self.submodel_stop_after,
48 target_col,
49 dtype_dict,
50 input_cols,
51 False, # fit_on_dev,
52 True, # use_optuna
53 target_encoder)
54 for _, target_col in zip(range(self.horizon), [target] + self.offset_pred_cols)]
55 self.ts_analysis = ts_analysis
56 self.supports_proba = False
57 self.use_stl = False
58 self.stable = True
59
60 def _fit(self, train_data: EncodedDs, dev_data: EncodedDs, submodel_method='fit') -> None:
61 original_train = deepcopy(train_data.data_frame)
62 original_dev = deepcopy(dev_data.data_frame)
63
64 if self.use_stl and self.ts_analysis.get('stl_transforms', False):
65 _apply_stl_on_training(train_data, dev_data, self.target, self.tss, self.ts_analysis)
66
67 for timestep in range(self.horizon):
68 getattr(self.models[timestep], submodel_method)(train_data, dev_data)
69
70 # restore dfs
71 train_data.data_frame = original_train
72 dev_data.data_frame = original_dev
73
74 def fit(self, train_data: EncodedDs, dev_data: EncodedDs) -> None:
75 log.info('Started fitting LGBM models for array prediction')
76 self._fit(train_data, dev_data, submodel_method='fit')
77
78 def partial_fit(self, train_data: EncodedDs, dev_data: EncodedDs) -> None:
79 log.info('Updating array of LGBM models...')
80 self._fit(train_data, dev_data, submodel_method='partial_fit')
81
82 def __call__(self, ds: Union[EncodedDs, ConcatedEncodedDs],
83 args: PredictionArguments = PredictionArguments()) -> pd.DataFrame:
84 if args.predict_proba:
85 log.warning('This model does not output probability estimates')
86
87 original_df = deepcopy(ds.data_frame)
88 length = sum(ds.encoded_ds_lenghts) if isinstance(ds, ConcatedEncodedDs) else len(ds)
89 ydf = pd.DataFrame(0, # zero-filled
90 index=np.arange(length),
91 columns=[f'prediction_{i}' for i in range(self.horizon)])
92
93 if self.use_stl and self.ts_analysis.get('stl_transforms', False):
94 ds.data_frame = _stl_transform(ydf, ds, self.target, self.tss, self.ts_analysis)
95
96 for timestep in range(self.horizon):
97 ydf[f'prediction_{timestep}'] = self.models[timestep](ds, args)['prediction'].values
98
99 if self.use_stl and self.ts_analysis.get('stl_transforms', False):
100 ydf = _stl_inverse_transform(ydf, ds, self.tss, self.ts_analysis)
101
102 if self.models[0].positive_domain:
103 ydf = ydf.clip(0)
104
105 ydf['prediction'] = ydf.values.tolist()
106 ds.data_frame = original_df
107 return ydf[['prediction']]
108
[end of lightwood/mixer/lightgbm_array.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/lightwood/mixer/lightgbm_array.py b/lightwood/mixer/lightgbm_array.py
--- a/lightwood/mixer/lightgbm_array.py
+++ b/lightwood/mixer/lightgbm_array.py
@@ -49,7 +49,7 @@
dtype_dict,
input_cols,
False, # fit_on_dev,
- True, # use_optuna
+ True if tss.horizon < 10 else False, # use_optuna
target_encoder)
for _, target_col in zip(range(self.horizon), [target] + self.offset_pred_cols)]
self.ts_analysis = ts_analysis
|
{"golden_diff": "diff --git a/lightwood/mixer/lightgbm_array.py b/lightwood/mixer/lightgbm_array.py\n--- a/lightwood/mixer/lightgbm_array.py\n+++ b/lightwood/mixer/lightgbm_array.py\n@@ -49,7 +49,7 @@\n dtype_dict,\n input_cols,\n False, # fit_on_dev,\n- True, # use_optuna\n+ True if tss.horizon < 10 else False, # use_optuna\n target_encoder)\n for _, target_col in zip(range(self.horizon), [target] + self.offset_pred_cols)]\n self.ts_analysis = ts_analysis\n", "issue": "Improve runtime of `LightGBMArray` for long-horizon forecasting\nTwo main approaches:\r\n\r\n- Disable optuna hyperparam search past some threshold.\r\n- Opt for a recursive strategy instead of direct (i.e. same regressor trained for all timesteps v/s one for each step).\n", "before_files": [{"content": "from copy import deepcopy\nfrom typing import Dict, List, Union\n\nimport numpy as np\nimport pandas as pd\n\nfrom lightwood.helpers.log import log\nfrom lightwood.mixer.helpers.ts import _apply_stl_on_training, _stl_transform, _stl_inverse_transform\nfrom lightwood.encoder.base import BaseEncoder\nfrom lightwood.mixer.base import BaseMixer\nfrom lightwood.mixer.lightgbm import LightGBM\nfrom lightwood.api.types import PredictionArguments, TimeseriesSettings\nfrom lightwood.data.encoded_ds import EncodedDs, ConcatedEncodedDs\n\n\nclass LightGBMArray(BaseMixer):\n \"\"\"LightGBM-based model, intended for usage in time series tasks.\"\"\"\n models: List[LightGBM]\n submodel_stop_after: float\n target: str\n supports_proba: bool\n ts_analysis: Dict\n tss: TimeseriesSettings\n\n def __init__(\n self,\n stop_after: float,\n target: str,\n dtype_dict: Dict[str, str],\n input_cols: List[str],\n fit_on_dev: bool,\n target_encoder: BaseEncoder,\n ts_analysis: Dict[str, object],\n use_stl: bool,\n tss: TimeseriesSettings\n ):\n super().__init__(stop_after)\n self.tss = tss\n self.horizon = tss.horizon\n self.submodel_stop_after = stop_after / self.horizon\n self.target = target\n self.offset_pred_cols = [f'{self.target}_timestep_{i}' for i in range(1, self.horizon)]\n if set(input_cols) != {self.tss.order_by}:\n input_cols.remove(self.tss.order_by)\n for col in self.offset_pred_cols:\n dtype_dict[col] = dtype_dict[self.target]\n self.models = [LightGBM(self.submodel_stop_after,\n target_col,\n dtype_dict,\n input_cols,\n False, # fit_on_dev,\n True, # use_optuna\n target_encoder)\n for _, target_col in zip(range(self.horizon), [target] + self.offset_pred_cols)]\n self.ts_analysis = ts_analysis\n self.supports_proba = False\n self.use_stl = False\n self.stable = True\n\n def _fit(self, train_data: EncodedDs, dev_data: EncodedDs, submodel_method='fit') -> None:\n original_train = deepcopy(train_data.data_frame)\n original_dev = deepcopy(dev_data.data_frame)\n\n if self.use_stl and self.ts_analysis.get('stl_transforms', False):\n _apply_stl_on_training(train_data, dev_data, self.target, self.tss, self.ts_analysis)\n\n for timestep in range(self.horizon):\n getattr(self.models[timestep], submodel_method)(train_data, dev_data)\n\n # restore dfs\n train_data.data_frame = original_train\n dev_data.data_frame = original_dev\n\n def fit(self, train_data: EncodedDs, dev_data: EncodedDs) -> None:\n log.info('Started fitting LGBM models for array prediction')\n self._fit(train_data, dev_data, submodel_method='fit')\n\n def partial_fit(self, train_data: EncodedDs, dev_data: EncodedDs) -> None:\n log.info('Updating array of LGBM models...')\n self._fit(train_data, dev_data, submodel_method='partial_fit')\n\n def __call__(self, ds: Union[EncodedDs, 
ConcatedEncodedDs],\n args: PredictionArguments = PredictionArguments()) -> pd.DataFrame:\n if args.predict_proba:\n log.warning('This model does not output probability estimates')\n\n original_df = deepcopy(ds.data_frame)\n length = sum(ds.encoded_ds_lenghts) if isinstance(ds, ConcatedEncodedDs) else len(ds)\n ydf = pd.DataFrame(0, # zero-filled\n index=np.arange(length),\n columns=[f'prediction_{i}' for i in range(self.horizon)])\n\n if self.use_stl and self.ts_analysis.get('stl_transforms', False):\n ds.data_frame = _stl_transform(ydf, ds, self.target, self.tss, self.ts_analysis)\n\n for timestep in range(self.horizon):\n ydf[f'prediction_{timestep}'] = self.models[timestep](ds, args)['prediction'].values\n\n if self.use_stl and self.ts_analysis.get('stl_transforms', False):\n ydf = _stl_inverse_transform(ydf, ds, self.tss, self.ts_analysis)\n\n if self.models[0].positive_domain:\n ydf = ydf.clip(0)\n\n ydf['prediction'] = ydf.values.tolist()\n ds.data_frame = original_df\n return ydf[['prediction']]\n", "path": "lightwood/mixer/lightgbm_array.py"}]}
| 1,856 | 143 |
gh_patches_debug_16279
|
rasdani/github-patches
|
git_diff
|
scikit-image__scikit-image-1367
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Invalid deprecation of canny, perhaps others
```
$ python -c "from skimage import filters as F; F.canny(0)"
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/stefan/src/scikit-image/skimage/_shared/utils.py", line 46, in __call__
msg = 'Call to deprecated function ``%s``.' % func.__name__
AttributeError: 'int' object has no attribute '__name__'
```
</issue>
<code>
[start of skimage/filters/__init__.py]
1 from .lpi_filter import inverse, wiener, LPIFilter2D
2 from ._gaussian import gaussian_filter
3 from .edges import (sobel, hsobel, vsobel, sobel_h, sobel_v,
4 scharr, hscharr, vscharr, scharr_h, scharr_v,
5 prewitt, hprewitt, vprewitt, prewitt_h, prewitt_v,
6 roberts, roberts_positive_diagonal,
7 roberts_negative_diagonal, roberts_pos_diag,
8 roberts_neg_diag)
9 from ._rank_order import rank_order
10 from ._gabor import gabor_kernel, gabor_filter
11 from .thresholding import (threshold_adaptive, threshold_otsu, threshold_yen,
12 threshold_isodata)
13 from . import rank
14 from .rank import median
15
16 from .._shared.utils import deprecated
17 from .. import restoration
18 denoise_bilateral = deprecated('skimage.restoration.denoise_bilateral')\
19 (restoration.denoise_bilateral)
20 denoise_tv_bregman = deprecated('skimage.restoration.denoise_tv_bregman')\
21 (restoration.denoise_tv_bregman)
22 denoise_tv_chambolle = deprecated('skimage.restoration.denoise_tv_chambolle')\
23 (restoration.denoise_tv_chambolle)
24
25 # Backward compatibility v<0.11
26 @deprecated
27 def canny(*args, **kwargs):
28 # Hack to avoid circular import
29 from ..feature._canny import canny as canny_
30 return canny_(*args, **kwargs)
31
32
33 __all__ = ['inverse',
34 'wiener',
35 'LPIFilter2D',
36 'gaussian_filter',
37 'median',
38 'canny',
39 'sobel',
40 'hsobel',
41 'vsobel',
42 'sobel_h',
43 'sobel_v',
44 'scharr',
45 'hscharr',
46 'vscharr',
47 'scharr_h',
48 'scharr_v',
49 'prewitt',
50 'hprewitt',
51 'vprewitt',
52 'prewitt_h',
53 'prewitt_v',
54 'roberts',
55 'roberts_positive_diagonal',
56 'roberts_negative_diagonal',
57 'roberts_pos_diag',
58 'roberts_neg_diag',
59 'denoise_tv_chambolle',
60 'denoise_bilateral',
61 'denoise_tv_bregman',
62 'rank_order',
63 'gabor_kernel',
64 'gabor_filter',
65 'threshold_adaptive',
66 'threshold_otsu',
67 'threshold_yen',
68 'threshold_isodata',
69 'rank']
70
[end of skimage/filters/__init__.py]
[start of skimage/filter/__init__.py]
1 from .._shared.utils import skimage_deprecation
2 from warnings import warn
3
4 global _import_warned
5
6 warn(skimage_deprecation('The `skimage.filter` module has been renamed '
7 'to `skimage.filters`. This placeholder module '
8 'will be removed in v0.13.'))
9 _import_warned = True
10
11 del warn
12 del skimage_deprecation
13
14 from ..filters.lpi_filter import inverse, wiener, LPIFilter2D
15 from ..filters._gaussian import gaussian_filter
16 from ..filters.edges import (sobel, hsobel, vsobel, sobel_h, sobel_v,
17 scharr, hscharr, vscharr, scharr_h, scharr_v,
18 prewitt, hprewitt, vprewitt, prewitt_h, prewitt_v,
19 roberts, roberts_positive_diagonal,
20 roberts_negative_diagonal, roberts_pos_diag,
21 roberts_neg_diag)
22 from ..filters._rank_order import rank_order
23 from ..filters._gabor import gabor_kernel, gabor_filter
24 from ..filters.thresholding import (threshold_adaptive, threshold_otsu, threshold_yen,
25 threshold_isodata)
26 from ..filters import rank
27 from ..filters.rank import median
28
29 from skimage._shared.utils import deprecated
30 from skimage import restoration
31 denoise_bilateral = deprecated('skimage.restoration.denoise_bilateral')\
32 (restoration.denoise_bilateral)
33 denoise_tv_bregman = deprecated('skimage.restoration.denoise_tv_bregman')\
34 (restoration.denoise_tv_bregman)
35 denoise_tv_chambolle = deprecated('skimage.restoration.denoise_tv_chambolle')\
36 (restoration.denoise_tv_chambolle)
37
38 # Backward compatibility v<0.11
39 @deprecated
40 def canny(*args, **kwargs):
41 # Hack to avoid circular import
42 from skimage.feature._canny import canny as canny_
43 return canny_(*args, **kwargs)
44
45
46 __all__ = ['inverse',
47 'wiener',
48 'LPIFilter2D',
49 'gaussian_filter',
50 'median',
51 'canny',
52 'sobel',
53 'hsobel',
54 'vsobel',
55 'sobel_h',
56 'sobel_v',
57 'scharr',
58 'hscharr',
59 'vscharr',
60 'scharr_h',
61 'scharr_v',
62 'prewitt',
63 'hprewitt',
64 'vprewitt',
65 'prewitt_h',
66 'prewitt_v',
67 'roberts',
68 'roberts_positive_diagonal',
69 'roberts_negative_diagonal',
70 'roberts_pos_diag',
71 'roberts_neg_diag',
72 'denoise_tv_chambolle',
73 'denoise_bilateral',
74 'denoise_tv_bregman',
75 'rank_order',
76 'gabor_kernel',
77 'gabor_filter',
78 'threshold_adaptive',
79 'threshold_otsu',
80 'threshold_yen',
81 'threshold_isodata',
82 'rank']
83
[end of skimage/filter/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/skimage/filter/__init__.py b/skimage/filter/__init__.py
--- a/skimage/filter/__init__.py
+++ b/skimage/filter/__init__.py
@@ -36,7 +36,7 @@
(restoration.denoise_tv_chambolle)
# Backward compatibility v<0.11
-@deprecated
+@deprecated('skimage.feature.canny')
def canny(*args, **kwargs):
# Hack to avoid circular import
from skimage.feature._canny import canny as canny_
diff --git a/skimage/filters/__init__.py b/skimage/filters/__init__.py
--- a/skimage/filters/__init__.py
+++ b/skimage/filters/__init__.py
@@ -23,7 +23,7 @@
(restoration.denoise_tv_chambolle)
# Backward compatibility v<0.11
-@deprecated
+@deprecated('skimage.feature.canny')
def canny(*args, **kwargs):
# Hack to avoid circular import
from ..feature._canny import canny as canny_
|
{"golden_diff": "diff --git a/skimage/filter/__init__.py b/skimage/filter/__init__.py\n--- a/skimage/filter/__init__.py\n+++ b/skimage/filter/__init__.py\n@@ -36,7 +36,7 @@\n (restoration.denoise_tv_chambolle)\n \n # Backward compatibility v<0.11\n-@deprecated\n+@deprecated('skimage.feature.canny')\n def canny(*args, **kwargs):\n # Hack to avoid circular import\n from skimage.feature._canny import canny as canny_\ndiff --git a/skimage/filters/__init__.py b/skimage/filters/__init__.py\n--- a/skimage/filters/__init__.py\n+++ b/skimage/filters/__init__.py\n@@ -23,7 +23,7 @@\n (restoration.denoise_tv_chambolle)\n \n # Backward compatibility v<0.11\n-@deprecated\n+@deprecated('skimage.feature.canny')\n def canny(*args, **kwargs):\n # Hack to avoid circular import\n from ..feature._canny import canny as canny_\n", "issue": "Invalid deprecation of canny, perhaps others\n```\n$ python -c \"from skimage import filters as F; F.canny(0)\"\nTraceback (most recent call last):\n File \"<string>\", line 1, in <module>\n File \"/home/stefan/src/scikit-image/skimage/_shared/utils.py\", line 46, in __call__\n msg = 'Call to deprecated function ``%s``.' % func.__name__\nAttributeError: 'int' object has no attribute '__name__'\n```\n\n", "before_files": [{"content": "from .lpi_filter import inverse, wiener, LPIFilter2D\nfrom ._gaussian import gaussian_filter\nfrom .edges import (sobel, hsobel, vsobel, sobel_h, sobel_v,\n scharr, hscharr, vscharr, scharr_h, scharr_v,\n prewitt, hprewitt, vprewitt, prewitt_h, prewitt_v,\n roberts, roberts_positive_diagonal,\n roberts_negative_diagonal, roberts_pos_diag,\n roberts_neg_diag)\nfrom ._rank_order import rank_order\nfrom ._gabor import gabor_kernel, gabor_filter\nfrom .thresholding import (threshold_adaptive, threshold_otsu, threshold_yen,\n threshold_isodata)\nfrom . import rank\nfrom .rank import median\n\nfrom .._shared.utils import deprecated\nfrom .. import restoration\ndenoise_bilateral = deprecated('skimage.restoration.denoise_bilateral')\\\n (restoration.denoise_bilateral)\ndenoise_tv_bregman = deprecated('skimage.restoration.denoise_tv_bregman')\\\n (restoration.denoise_tv_bregman)\ndenoise_tv_chambolle = deprecated('skimage.restoration.denoise_tv_chambolle')\\\n (restoration.denoise_tv_chambolle)\n\n# Backward compatibility v<0.11\n@deprecated\ndef canny(*args, **kwargs):\n # Hack to avoid circular import\n from ..feature._canny import canny as canny_\n return canny_(*args, **kwargs)\n\n\n__all__ = ['inverse',\n 'wiener',\n 'LPIFilter2D',\n 'gaussian_filter',\n 'median',\n 'canny',\n 'sobel',\n 'hsobel',\n 'vsobel',\n 'sobel_h',\n 'sobel_v',\n 'scharr',\n 'hscharr',\n 'vscharr',\n 'scharr_h',\n 'scharr_v',\n 'prewitt',\n 'hprewitt',\n 'vprewitt',\n 'prewitt_h',\n 'prewitt_v',\n 'roberts',\n 'roberts_positive_diagonal',\n 'roberts_negative_diagonal',\n 'roberts_pos_diag',\n 'roberts_neg_diag',\n 'denoise_tv_chambolle',\n 'denoise_bilateral',\n 'denoise_tv_bregman',\n 'rank_order',\n 'gabor_kernel',\n 'gabor_filter',\n 'threshold_adaptive',\n 'threshold_otsu',\n 'threshold_yen',\n 'threshold_isodata',\n 'rank']\n", "path": "skimage/filters/__init__.py"}, {"content": "from .._shared.utils import skimage_deprecation\nfrom warnings import warn\n\nglobal _import_warned\n\nwarn(skimage_deprecation('The `skimage.filter` module has been renamed '\n 'to `skimage.filters`. 
This placeholder module '\n 'will be removed in v0.13.'))\n_import_warned = True\n\ndel warn\ndel skimage_deprecation\n\nfrom ..filters.lpi_filter import inverse, wiener, LPIFilter2D\nfrom ..filters._gaussian import gaussian_filter\nfrom ..filters.edges import (sobel, hsobel, vsobel, sobel_h, sobel_v,\n scharr, hscharr, vscharr, scharr_h, scharr_v,\n prewitt, hprewitt, vprewitt, prewitt_h, prewitt_v,\n roberts, roberts_positive_diagonal,\n roberts_negative_diagonal, roberts_pos_diag,\n roberts_neg_diag)\nfrom ..filters._rank_order import rank_order\nfrom ..filters._gabor import gabor_kernel, gabor_filter\nfrom ..filters.thresholding import (threshold_adaptive, threshold_otsu, threshold_yen,\n threshold_isodata)\nfrom ..filters import rank\nfrom ..filters.rank import median\n\nfrom skimage._shared.utils import deprecated\nfrom skimage import restoration\ndenoise_bilateral = deprecated('skimage.restoration.denoise_bilateral')\\\n (restoration.denoise_bilateral)\ndenoise_tv_bregman = deprecated('skimage.restoration.denoise_tv_bregman')\\\n (restoration.denoise_tv_bregman)\ndenoise_tv_chambolle = deprecated('skimage.restoration.denoise_tv_chambolle')\\\n (restoration.denoise_tv_chambolle)\n\n# Backward compatibility v<0.11\n@deprecated\ndef canny(*args, **kwargs):\n # Hack to avoid circular import\n from skimage.feature._canny import canny as canny_\n return canny_(*args, **kwargs)\n\n\n__all__ = ['inverse',\n 'wiener',\n 'LPIFilter2D',\n 'gaussian_filter',\n 'median',\n 'canny',\n 'sobel',\n 'hsobel',\n 'vsobel',\n 'sobel_h',\n 'sobel_v',\n 'scharr',\n 'hscharr',\n 'vscharr',\n 'scharr_h',\n 'scharr_v',\n 'prewitt',\n 'hprewitt',\n 'vprewitt',\n 'prewitt_h',\n 'prewitt_v',\n 'roberts',\n 'roberts_positive_diagonal',\n 'roberts_negative_diagonal',\n 'roberts_pos_diag',\n 'roberts_neg_diag',\n 'denoise_tv_chambolle',\n 'denoise_bilateral',\n 'denoise_tv_bregman',\n 'rank_order',\n 'gabor_kernel',\n 'gabor_filter',\n 'threshold_adaptive',\n 'threshold_otsu',\n 'threshold_yen',\n 'threshold_isodata',\n 'rank']\n", "path": "skimage/filter/__init__.py"}]}
| 2,215 | 254 |
gh_patches_debug_23827
|
rasdani/github-patches
|
git_diff
|
scoutapp__scout_apm_python-709
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Celery exceptions cause error in capture_stacktrace
The latest version of celery, 5.2.1, raises some exceptions in an unexpected way.
For this particular case, `tb` was a string of value:
```
'Traceback (most recent call last):\n File "/app/.heroku/python/lib/python3.9/site-packages/billiard/pool.py", line 366, in workloop\n put((READY, (job, i, result, inqW_fd)))\n File "/app/.heroku/python/lib/python3.9/site-packages/billiard/queues.py", line 366, in put\n self.send_payload(ForkingPickler.dumps(obj))\n File "/app/.heroku/python/lib/python3.9/site-packages/billiard/reduction.py", line 56, in dumps\n cls(buf, protocol).dump(obj)\nbilliard.pool.MaybeEncodingError: Error sending res...'
```
Stacktrace of error
```
AttributeError: 'str' object has no attribute 'tb_frame'
File "celery/utils/dispatch/signal.py", line 276, in send
response = receiver(signal=self, sender=sender, **named)
File "scout_apm/celery.py", line 114, in task_failure_callback
ErrorMonitor.send(
File "scout_apm/core/error.py", line 69, in send
for frame in capture_stacktrace(traceback)
File "scout_apm/core/backtrace.py", line 132, in capture_stacktrace
return list(reversed(list(itertools.islice(walker, LIMIT))))
File "scout_apm/core/backtrace.py", line 75, in stacktrace_walker
for frame, lineno in traceback.walk_tb(tb):
File "traceback.py", line 312, in walk_tb
yield tb.tb_frame, tb.tb_lineno
```
</issue>
<code>
[start of src/scout_apm/celery.py]
1 # coding=utf-8
2 from __future__ import absolute_import, division, print_function, unicode_literals
3
4 import datetime as dt
5 import logging
6
7 from celery.signals import before_task_publish, task_failure, task_postrun, task_prerun
8
9 try:
10 import django
11
12 if django.VERSION < (3, 1):
13 from django.views.debug import get_safe_settings
14 else:
15 from django.views.debug import SafeExceptionReporterFilter
16
17 def get_safe_settings():
18 return SafeExceptionReporterFilter().get_safe_settings()
19
20
21 except ImportError:
22 # Django not installed
23 get_safe_settings = None
24
25 import scout_apm.core
26 from scout_apm.compat import datetime_to_timestamp
27 from scout_apm.core.config import scout_config
28 from scout_apm.core.error import ErrorMonitor
29 from scout_apm.core.tracked_request import TrackedRequest
30
31 logger = logging.getLogger(__name__)
32
33
34 def before_task_publish_callback(headers=None, properties=None, **kwargs):
35 if "scout_task_start" not in headers:
36 headers["scout_task_start"] = datetime_to_timestamp(dt.datetime.utcnow())
37
38
39 def task_prerun_callback(task=None, **kwargs):
40 tracked_request = TrackedRequest.instance()
41 tracked_request.is_real_request = True
42
43 start = getattr(task.request, "scout_task_start", None)
44 if start is not None:
45 now = datetime_to_timestamp(dt.datetime.utcnow())
46 try:
47 queue_time = now - start
48 except TypeError:
49 pass
50 else:
51 tracked_request.tag("queue_time", queue_time)
52
53 task_id = getattr(task.request, "id", None)
54 if task_id:
55 tracked_request.tag("task_id", task_id)
56 parent_task_id = getattr(task.request, "parent_id", None)
57 if parent_task_id:
58 tracked_request.tag("parent_task_id", parent_task_id)
59
60 delivery_info = task.request.delivery_info
61 tracked_request.tag("is_eager", delivery_info.get("is_eager", False))
62 tracked_request.tag("exchange", delivery_info.get("exchange", "unknown"))
63 tracked_request.tag("priority", delivery_info.get("priority", "unknown"))
64 tracked_request.tag("routing_key", delivery_info.get("routing_key", "unknown"))
65 tracked_request.tag("queue", delivery_info.get("queue", "unknown"))
66
67 tracked_request.start_span(operation=("Job/" + task.name))
68
69
70 def task_postrun_callback(task=None, **kwargs):
71 tracked_request = TrackedRequest.instance()
72 tracked_request.stop_span()
73
74
75 def task_failure_callback(
76 sender,
77 task_id=None,
78 exception=None,
79 args=None,
80 kwargs=None,
81 traceback=None,
82 **remaining
83 ):
84 tracked_request = TrackedRequest.instance()
85 tracked_request.tag("error", "true")
86
87 custom_controller = sender.name
88 custom_params = {
89 "celery": {
90 "task_id": task_id,
91 "args": args,
92 "kwargs": kwargs,
93 }
94 }
95
96 # Look up the django settings if populated.
97 environment = None
98 if get_safe_settings:
99 try:
100 environment = get_safe_settings()
101 except django.core.exceptions.ImproperlyConfigured as exc:
102 # Django not setup correctly
103 logger.debug(
104 "Celery integration does not have django configured properly: %r", exc
105 )
106 pass
107 except Exception as exc:
108 logger.debug(
109 "Celery task_failure callback exception: %r", exc, exc_info=exc
110 )
111 pass
112
113 exc_info = (exception.__class__, exception, traceback)
114 ErrorMonitor.send(
115 exc_info,
116 environment=environment,
117 custom_params=custom_params,
118 custom_controller=custom_controller,
119 )
120
121
122 def install(app=None):
123 if app is not None:
124 copy_configuration(app)
125
126 installed = scout_apm.core.install()
127 if not installed:
128 return
129
130 before_task_publish.connect(before_task_publish_callback)
131 task_prerun.connect(task_prerun_callback)
132 task_failure.connect(task_failure_callback)
133 task_postrun.connect(task_postrun_callback)
134
135
136 def copy_configuration(app):
137 prefix = "scout_"
138 prefix_len = len(prefix)
139
140 to_set = {}
141 for key, value in app.conf.items():
142 key_lower = key.lower()
143 if key_lower.startswith(prefix) and len(key_lower) > prefix_len:
144 scout_key = key_lower[prefix_len:]
145 to_set[scout_key] = value
146
147 scout_config.set(**to_set)
148
149
150 def uninstall():
151 before_task_publish.disconnect(before_task_publish_callback)
152 task_prerun.disconnect(task_prerun_callback)
153 task_postrun.disconnect(task_postrun_callback)
154 task_failure.disconnect(task_failure_callback)
155
[end of src/scout_apm/celery.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/scout_apm/celery.py b/src/scout_apm/celery.py
--- a/src/scout_apm/celery.py
+++ b/src/scout_apm/celery.py
@@ -23,7 +23,7 @@
get_safe_settings = None
import scout_apm.core
-from scout_apm.compat import datetime_to_timestamp
+from scout_apm.compat import datetime_to_timestamp, string_type
from scout_apm.core.config import scout_config
from scout_apm.core.error import ErrorMonitor
from scout_apm.core.tracked_request import TrackedRequest
@@ -79,6 +79,7 @@
args=None,
kwargs=None,
traceback=None,
+ einfo=None,
**remaining
):
tracked_request = TrackedRequest.instance()
@@ -110,6 +111,12 @@
)
pass
+ # Celery occassionally will send the traceback as a string rather
+ # than a Stack trace object as the docs indicate. In that case,
+ # fall back to the billiard ExceptionInfo instance
+ traceback = (
+ traceback if traceback and not isinstance(traceback, string_type) else einfo.tb
+ )
exc_info = (exception.__class__, exception, traceback)
ErrorMonitor.send(
exc_info,
|
{"golden_diff": "diff --git a/src/scout_apm/celery.py b/src/scout_apm/celery.py\n--- a/src/scout_apm/celery.py\n+++ b/src/scout_apm/celery.py\n@@ -23,7 +23,7 @@\n get_safe_settings = None\n \n import scout_apm.core\n-from scout_apm.compat import datetime_to_timestamp\n+from scout_apm.compat import datetime_to_timestamp, string_type\n from scout_apm.core.config import scout_config\n from scout_apm.core.error import ErrorMonitor\n from scout_apm.core.tracked_request import TrackedRequest\n@@ -79,6 +79,7 @@\n args=None,\n kwargs=None,\n traceback=None,\n+ einfo=None,\n **remaining\n ):\n tracked_request = TrackedRequest.instance()\n@@ -110,6 +111,12 @@\n )\n pass\n \n+ # Celery occassionally will send the traceback as a string rather\n+ # than a Stack trace object as the docs indicate. In that case,\n+ # fall back to the billiard ExceptionInfo instance\n+ traceback = (\n+ traceback if traceback and not isinstance(traceback, string_type) else einfo.tb\n+ )\n exc_info = (exception.__class__, exception, traceback)\n ErrorMonitor.send(\n exc_info,\n", "issue": "Celery exceptions cause error in capture_stacktrace\nThe latest version of celery, 5.2.1, raises some exceptions in an unexpected way.\r\n\r\nFor this particular case, `tb` was a string of value:\r\n\r\n```\r\n'Traceback (most recent call last):\\n File \"/app/.heroku/python/lib/python3.9/site-packages/billiard/pool.py\", line 366, in workloop\\n put((READY, (job, i, result, inqW_fd)))\\n File \"/app/.heroku/python/lib/python3.9/site-packages/billiard/queues.py\", line 366, in put\\n self.send_payload(ForkingPickler.dumps(obj))\\n File \"/app/.heroku/python/lib/python3.9/site-packages/billiard/reduction.py\", line 56, in dumps\\n cls(buf, protocol).dump(obj)\\nbilliard.pool.MaybeEncodingError: Error sending res...'\r\n```\r\n\r\nStacktrace of error\r\n```\r\nAttributeError: 'str' object has no attribute 'tb_frame'\r\n File \"celery/utils/dispatch/signal.py\", line 276, in send\r\n response = receiver(signal=self, sender=sender, **named)\r\n File \"scout_apm/celery.py\", line 114, in task_failure_callback\r\n ErrorMonitor.send(\r\n File \"scout_apm/core/error.py\", line 69, in send\r\n for frame in capture_stacktrace(traceback)\r\n File \"scout_apm/core/backtrace.py\", line 132, in capture_stacktrace\r\n return list(reversed(list(itertools.islice(walker, LIMIT))))\r\n File \"scout_apm/core/backtrace.py\", line 75, in stacktrace_walker\r\n for frame, lineno in traceback.walk_tb(tb):\r\n File \"traceback.py\", line 312, in walk_tb\r\n yield tb.tb_frame, tb.tb_lineno\r\n```\n", "before_files": [{"content": "# coding=utf-8\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport datetime as dt\nimport logging\n\nfrom celery.signals import before_task_publish, task_failure, task_postrun, task_prerun\n\ntry:\n import django\n\n if django.VERSION < (3, 1):\n from django.views.debug import get_safe_settings\n else:\n from django.views.debug import SafeExceptionReporterFilter\n\n def get_safe_settings():\n return SafeExceptionReporterFilter().get_safe_settings()\n\n\nexcept ImportError:\n # Django not installed\n get_safe_settings = None\n\nimport scout_apm.core\nfrom scout_apm.compat import datetime_to_timestamp\nfrom scout_apm.core.config import scout_config\nfrom scout_apm.core.error import ErrorMonitor\nfrom scout_apm.core.tracked_request import TrackedRequest\n\nlogger = logging.getLogger(__name__)\n\n\ndef before_task_publish_callback(headers=None, properties=None, **kwargs):\n if \"scout_task_start\" not 
in headers:\n headers[\"scout_task_start\"] = datetime_to_timestamp(dt.datetime.utcnow())\n\n\ndef task_prerun_callback(task=None, **kwargs):\n tracked_request = TrackedRequest.instance()\n tracked_request.is_real_request = True\n\n start = getattr(task.request, \"scout_task_start\", None)\n if start is not None:\n now = datetime_to_timestamp(dt.datetime.utcnow())\n try:\n queue_time = now - start\n except TypeError:\n pass\n else:\n tracked_request.tag(\"queue_time\", queue_time)\n\n task_id = getattr(task.request, \"id\", None)\n if task_id:\n tracked_request.tag(\"task_id\", task_id)\n parent_task_id = getattr(task.request, \"parent_id\", None)\n if parent_task_id:\n tracked_request.tag(\"parent_task_id\", parent_task_id)\n\n delivery_info = task.request.delivery_info\n tracked_request.tag(\"is_eager\", delivery_info.get(\"is_eager\", False))\n tracked_request.tag(\"exchange\", delivery_info.get(\"exchange\", \"unknown\"))\n tracked_request.tag(\"priority\", delivery_info.get(\"priority\", \"unknown\"))\n tracked_request.tag(\"routing_key\", delivery_info.get(\"routing_key\", \"unknown\"))\n tracked_request.tag(\"queue\", delivery_info.get(\"queue\", \"unknown\"))\n\n tracked_request.start_span(operation=(\"Job/\" + task.name))\n\n\ndef task_postrun_callback(task=None, **kwargs):\n tracked_request = TrackedRequest.instance()\n tracked_request.stop_span()\n\n\ndef task_failure_callback(\n sender,\n task_id=None,\n exception=None,\n args=None,\n kwargs=None,\n traceback=None,\n **remaining\n):\n tracked_request = TrackedRequest.instance()\n tracked_request.tag(\"error\", \"true\")\n\n custom_controller = sender.name\n custom_params = {\n \"celery\": {\n \"task_id\": task_id,\n \"args\": args,\n \"kwargs\": kwargs,\n }\n }\n\n # Look up the django settings if populated.\n environment = None\n if get_safe_settings:\n try:\n environment = get_safe_settings()\n except django.core.exceptions.ImproperlyConfigured as exc:\n # Django not setup correctly\n logger.debug(\n \"Celery integration does not have django configured properly: %r\", exc\n )\n pass\n except Exception as exc:\n logger.debug(\n \"Celery task_failure callback exception: %r\", exc, exc_info=exc\n )\n pass\n\n exc_info = (exception.__class__, exception, traceback)\n ErrorMonitor.send(\n exc_info,\n environment=environment,\n custom_params=custom_params,\n custom_controller=custom_controller,\n )\n\n\ndef install(app=None):\n if app is not None:\n copy_configuration(app)\n\n installed = scout_apm.core.install()\n if not installed:\n return\n\n before_task_publish.connect(before_task_publish_callback)\n task_prerun.connect(task_prerun_callback)\n task_failure.connect(task_failure_callback)\n task_postrun.connect(task_postrun_callback)\n\n\ndef copy_configuration(app):\n prefix = \"scout_\"\n prefix_len = len(prefix)\n\n to_set = {}\n for key, value in app.conf.items():\n key_lower = key.lower()\n if key_lower.startswith(prefix) and len(key_lower) > prefix_len:\n scout_key = key_lower[prefix_len:]\n to_set[scout_key] = value\n\n scout_config.set(**to_set)\n\n\ndef uninstall():\n before_task_publish.disconnect(before_task_publish_callback)\n task_prerun.disconnect(task_prerun_callback)\n task_postrun.disconnect(task_postrun_callback)\n task_failure.disconnect(task_failure_callback)\n", "path": "src/scout_apm/celery.py"}]}
| 2,327 | 291 |
gh_patches_debug_21243
|
rasdani/github-patches
|
git_diff
|
pypa__pip-9207
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Revert "Redact auth from URL in UpdatingDefaultsHelpFormatter"
Reverts pypa/pip#9163
</issue>
<code>
[start of src/pip/_internal/cli/parser.py]
1 """Base option parser setup"""
2
3 # The following comment should be removed at some point in the future.
4 # mypy: disallow-untyped-defs=False
5
6 from __future__ import absolute_import
7
8 import logging
9 import optparse
10 import sys
11 import textwrap
12 from distutils.util import strtobool
13
14 from pip._vendor.contextlib2 import suppress
15 from pip._vendor.six import string_types
16
17 from pip._internal.cli.status_codes import UNKNOWN_ERROR
18 from pip._internal.configuration import Configuration, ConfigurationError
19 from pip._internal.utils.compat import get_terminal_size
20 from pip._internal.utils.misc import redact_auth_from_url
21
22 logger = logging.getLogger(__name__)
23
24
25 class PrettyHelpFormatter(optparse.IndentedHelpFormatter):
26 """A prettier/less verbose help formatter for optparse."""
27
28 def __init__(self, *args, **kwargs):
29 # help position must be aligned with __init__.parseopts.description
30 kwargs['max_help_position'] = 30
31 kwargs['indent_increment'] = 1
32 kwargs['width'] = get_terminal_size()[0] - 2
33 optparse.IndentedHelpFormatter.__init__(self, *args, **kwargs)
34
35 def format_option_strings(self, option):
36 return self._format_option_strings(option)
37
38 def _format_option_strings(self, option, mvarfmt=' <{}>', optsep=', '):
39 """
40 Return a comma-separated list of option strings and metavars.
41
42 :param option: tuple of (short opt, long opt), e.g: ('-f', '--format')
43 :param mvarfmt: metavar format string
44 :param optsep: separator
45 """
46 opts = []
47
48 if option._short_opts:
49 opts.append(option._short_opts[0])
50 if option._long_opts:
51 opts.append(option._long_opts[0])
52 if len(opts) > 1:
53 opts.insert(1, optsep)
54
55 if option.takes_value():
56 metavar = option.metavar or option.dest.lower()
57 opts.append(mvarfmt.format(metavar.lower()))
58
59 return ''.join(opts)
60
61 def format_heading(self, heading):
62 if heading == 'Options':
63 return ''
64 return heading + ':\n'
65
66 def format_usage(self, usage):
67 """
68 Ensure there is only one newline between usage and the first heading
69 if there is no description.
70 """
71 msg = '\nUsage: {}\n'.format(
72 self.indent_lines(textwrap.dedent(usage), " "))
73 return msg
74
75 def format_description(self, description):
76 # leave full control over description to us
77 if description:
78 if hasattr(self.parser, 'main'):
79 label = 'Commands'
80 else:
81 label = 'Description'
82 # some doc strings have initial newlines, some don't
83 description = description.lstrip('\n')
84 # some doc strings have final newlines and spaces, some don't
85 description = description.rstrip()
86 # dedent, then reindent
87 description = self.indent_lines(textwrap.dedent(description), " ")
88 description = '{}:\n{}\n'.format(label, description)
89 return description
90 else:
91 return ''
92
93 def format_epilog(self, epilog):
94 # leave full control over epilog to us
95 if epilog:
96 return epilog
97 else:
98 return ''
99
100 def indent_lines(self, text, indent):
101 new_lines = [indent + line for line in text.split('\n')]
102 return "\n".join(new_lines)
103
104
105 class UpdatingDefaultsHelpFormatter(PrettyHelpFormatter):
106 """Custom help formatter for use in ConfigOptionParser.
107
108 This is updates the defaults before expanding them, allowing
109 them to show up correctly in the help listing.
110
111 Also redact auth from url type options
112 """
113
114 def expand_default(self, option):
115 default_value = None
116 if self.parser is not None:
117 self.parser._update_defaults(self.parser.defaults)
118 default_value = self.parser.defaults.get(option.dest)
119 help_text = optparse.IndentedHelpFormatter.expand_default(self, option)
120
121 if default_value and option.metavar == 'URL':
122 help_text = help_text.replace(
123 default_value, redact_auth_from_url(default_value))
124
125 return help_text
126
127
128 class CustomOptionParser(optparse.OptionParser):
129
130 def insert_option_group(self, idx, *args, **kwargs):
131 """Insert an OptionGroup at a given position."""
132 group = self.add_option_group(*args, **kwargs)
133
134 self.option_groups.pop()
135 self.option_groups.insert(idx, group)
136
137 return group
138
139 @property
140 def option_list_all(self):
141 """Get a list of all options, including those in option groups."""
142 res = self.option_list[:]
143 for i in self.option_groups:
144 res.extend(i.option_list)
145
146 return res
147
148
149 class ConfigOptionParser(CustomOptionParser):
150 """Custom option parser which updates its defaults by checking the
151 configuration files and environmental variables"""
152
153 def __init__(self, *args, **kwargs):
154 self.name = kwargs.pop('name')
155
156 isolated = kwargs.pop("isolated", False)
157 self.config = Configuration(isolated)
158
159 assert self.name
160 optparse.OptionParser.__init__(self, *args, **kwargs)
161
162 def check_default(self, option, key, val):
163 try:
164 return option.check_value(key, val)
165 except optparse.OptionValueError as exc:
166 print("An error occurred during configuration: {}".format(exc))
167 sys.exit(3)
168
169 def _get_ordered_configuration_items(self):
170 # Configuration gives keys in an unordered manner. Order them.
171 override_order = ["global", self.name, ":env:"]
172
173 # Pool the options into different groups
174 section_items = {name: [] for name in override_order}
175 for section_key, val in self.config.items():
176 # ignore empty values
177 if not val:
178 logger.debug(
179 "Ignoring configuration key '%s' as it's value is empty.",
180 section_key
181 )
182 continue
183
184 section, key = section_key.split(".", 1)
185 if section in override_order:
186 section_items[section].append((key, val))
187
188 # Yield each group in their override order
189 for section in override_order:
190 for key, val in section_items[section]:
191 yield key, val
192
193 def _update_defaults(self, defaults):
194 """Updates the given defaults with values from the config files and
195 the environ. Does a little special handling for certain types of
196 options (lists)."""
197
198 # Accumulate complex default state.
199 self.values = optparse.Values(self.defaults)
200 late_eval = set()
201 # Then set the options with those values
202 for key, val in self._get_ordered_configuration_items():
203 # '--' because configuration supports only long names
204 option = self.get_option('--' + key)
205
206 # Ignore options not present in this parser. E.g. non-globals put
207 # in [global] by users that want them to apply to all applicable
208 # commands.
209 if option is None:
210 continue
211
212 if option.action in ('store_true', 'store_false'):
213 try:
214 val = strtobool(val)
215 except ValueError:
216 self.error(
217 '{} is not a valid value for {} option, ' # noqa
218 'please specify a boolean value like yes/no, '
219 'true/false or 1/0 instead.'.format(val, key)
220 )
221 elif option.action == 'count':
222 with suppress(ValueError):
223 val = strtobool(val)
224 with suppress(ValueError):
225 val = int(val)
226 if not isinstance(val, int) or val < 0:
227 self.error(
228 '{} is not a valid value for {} option, ' # noqa
229 'please instead specify either a non-negative integer '
230 'or a boolean value like yes/no or false/true '
231 'which is equivalent to 1/0.'.format(val, key)
232 )
233 elif option.action == 'append':
234 val = val.split()
235 val = [self.check_default(option, key, v) for v in val]
236 elif option.action == 'callback':
237 late_eval.add(option.dest)
238 opt_str = option.get_opt_string()
239 val = option.convert_value(opt_str, val)
240 # From take_action
241 args = option.callback_args or ()
242 kwargs = option.callback_kwargs or {}
243 option.callback(option, opt_str, val, self, *args, **kwargs)
244 else:
245 val = self.check_default(option, key, val)
246
247 defaults[option.dest] = val
248
249 for key in late_eval:
250 defaults[key] = getattr(self.values, key)
251 self.values = None
252 return defaults
253
254 def get_default_values(self):
255 """Overriding to make updating the defaults after instantiation of
256 the option parser possible, _update_defaults() does the dirty work."""
257 if not self.process_default_values:
258 # Old, pre-Optik 1.5 behaviour.
259 return optparse.Values(self.defaults)
260
261 # Load the configuration, or error out in case of an error
262 try:
263 self.config.load()
264 except ConfigurationError as err:
265 self.exit(UNKNOWN_ERROR, str(err))
266
267 defaults = self._update_defaults(self.defaults.copy()) # ours
268 for option in self._get_all_options():
269 default = defaults.get(option.dest)
270 if isinstance(default, string_types):
271 opt_str = option.get_opt_string()
272 defaults[option.dest] = option.check_value(opt_str, default)
273 return optparse.Values(defaults)
274
275 def error(self, msg):
276 self.print_usage(sys.stderr)
277 self.exit(UNKNOWN_ERROR, "{}\n".format(msg))
278
[end of src/pip/_internal/cli/parser.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/pip/_internal/cli/parser.py b/src/pip/_internal/cli/parser.py
--- a/src/pip/_internal/cli/parser.py
+++ b/src/pip/_internal/cli/parser.py
@@ -112,15 +112,23 @@
"""
def expand_default(self, option):
- default_value = None
+ default_values = None
if self.parser is not None:
self.parser._update_defaults(self.parser.defaults)
- default_value = self.parser.defaults.get(option.dest)
+ default_values = self.parser.defaults.get(option.dest)
help_text = optparse.IndentedHelpFormatter.expand_default(self, option)
- if default_value and option.metavar == 'URL':
- help_text = help_text.replace(
- default_value, redact_auth_from_url(default_value))
+ if default_values and option.metavar == 'URL':
+ if isinstance(default_values, string_types):
+ default_values = [default_values]
+
+ # If its not a list, we should abort and just return the help text
+ if not isinstance(default_values, list):
+ default_values = []
+
+ for val in default_values:
+ help_text = help_text.replace(
+ val, redact_auth_from_url(val))
return help_text
|
{"golden_diff": "diff --git a/src/pip/_internal/cli/parser.py b/src/pip/_internal/cli/parser.py\n--- a/src/pip/_internal/cli/parser.py\n+++ b/src/pip/_internal/cli/parser.py\n@@ -112,15 +112,23 @@\n \"\"\"\n \n def expand_default(self, option):\n- default_value = None\n+ default_values = None\n if self.parser is not None:\n self.parser._update_defaults(self.parser.defaults)\n- default_value = self.parser.defaults.get(option.dest)\n+ default_values = self.parser.defaults.get(option.dest)\n help_text = optparse.IndentedHelpFormatter.expand_default(self, option)\n \n- if default_value and option.metavar == 'URL':\n- help_text = help_text.replace(\n- default_value, redact_auth_from_url(default_value))\n+ if default_values and option.metavar == 'URL':\n+ if isinstance(default_values, string_types):\n+ default_values = [default_values]\n+\n+ # If its not a list, we should abort and just return the help text\n+ if not isinstance(default_values, list):\n+ default_values = []\n+\n+ for val in default_values:\n+ help_text = help_text.replace(\n+ val, redact_auth_from_url(val))\n \n return help_text\n", "issue": "Revert \"Redact auth from URL in UpdatingDefaultsHelpFormatter\"\nReverts pypa/pip#9163\n", "before_files": [{"content": "\"\"\"Base option parser setup\"\"\"\n\n# The following comment should be removed at some point in the future.\n# mypy: disallow-untyped-defs=False\n\nfrom __future__ import absolute_import\n\nimport logging\nimport optparse\nimport sys\nimport textwrap\nfrom distutils.util import strtobool\n\nfrom pip._vendor.contextlib2 import suppress\nfrom pip._vendor.six import string_types\n\nfrom pip._internal.cli.status_codes import UNKNOWN_ERROR\nfrom pip._internal.configuration import Configuration, ConfigurationError\nfrom pip._internal.utils.compat import get_terminal_size\nfrom pip._internal.utils.misc import redact_auth_from_url\n\nlogger = logging.getLogger(__name__)\n\n\nclass PrettyHelpFormatter(optparse.IndentedHelpFormatter):\n \"\"\"A prettier/less verbose help formatter for optparse.\"\"\"\n\n def __init__(self, *args, **kwargs):\n # help position must be aligned with __init__.parseopts.description\n kwargs['max_help_position'] = 30\n kwargs['indent_increment'] = 1\n kwargs['width'] = get_terminal_size()[0] - 2\n optparse.IndentedHelpFormatter.__init__(self, *args, **kwargs)\n\n def format_option_strings(self, option):\n return self._format_option_strings(option)\n\n def _format_option_strings(self, option, mvarfmt=' <{}>', optsep=', '):\n \"\"\"\n Return a comma-separated list of option strings and metavars.\n\n :param option: tuple of (short opt, long opt), e.g: ('-f', '--format')\n :param mvarfmt: metavar format string\n :param optsep: separator\n \"\"\"\n opts = []\n\n if option._short_opts:\n opts.append(option._short_opts[0])\n if option._long_opts:\n opts.append(option._long_opts[0])\n if len(opts) > 1:\n opts.insert(1, optsep)\n\n if option.takes_value():\n metavar = option.metavar or option.dest.lower()\n opts.append(mvarfmt.format(metavar.lower()))\n\n return ''.join(opts)\n\n def format_heading(self, heading):\n if heading == 'Options':\n return ''\n return heading + ':\\n'\n\n def format_usage(self, usage):\n \"\"\"\n Ensure there is only one newline between usage and the first heading\n if there is no description.\n \"\"\"\n msg = '\\nUsage: {}\\n'.format(\n self.indent_lines(textwrap.dedent(usage), \" \"))\n return msg\n\n def format_description(self, description):\n # leave full control over description to us\n if description:\n if hasattr(self.parser, 
'main'):\n label = 'Commands'\n else:\n label = 'Description'\n # some doc strings have initial newlines, some don't\n description = description.lstrip('\\n')\n # some doc strings have final newlines and spaces, some don't\n description = description.rstrip()\n # dedent, then reindent\n description = self.indent_lines(textwrap.dedent(description), \" \")\n description = '{}:\\n{}\\n'.format(label, description)\n return description\n else:\n return ''\n\n def format_epilog(self, epilog):\n # leave full control over epilog to us\n if epilog:\n return epilog\n else:\n return ''\n\n def indent_lines(self, text, indent):\n new_lines = [indent + line for line in text.split('\\n')]\n return \"\\n\".join(new_lines)\n\n\nclass UpdatingDefaultsHelpFormatter(PrettyHelpFormatter):\n \"\"\"Custom help formatter for use in ConfigOptionParser.\n\n This is updates the defaults before expanding them, allowing\n them to show up correctly in the help listing.\n\n Also redact auth from url type options\n \"\"\"\n\n def expand_default(self, option):\n default_value = None\n if self.parser is not None:\n self.parser._update_defaults(self.parser.defaults)\n default_value = self.parser.defaults.get(option.dest)\n help_text = optparse.IndentedHelpFormatter.expand_default(self, option)\n\n if default_value and option.metavar == 'URL':\n help_text = help_text.replace(\n default_value, redact_auth_from_url(default_value))\n\n return help_text\n\n\nclass CustomOptionParser(optparse.OptionParser):\n\n def insert_option_group(self, idx, *args, **kwargs):\n \"\"\"Insert an OptionGroup at a given position.\"\"\"\n group = self.add_option_group(*args, **kwargs)\n\n self.option_groups.pop()\n self.option_groups.insert(idx, group)\n\n return group\n\n @property\n def option_list_all(self):\n \"\"\"Get a list of all options, including those in option groups.\"\"\"\n res = self.option_list[:]\n for i in self.option_groups:\n res.extend(i.option_list)\n\n return res\n\n\nclass ConfigOptionParser(CustomOptionParser):\n \"\"\"Custom option parser which updates its defaults by checking the\n configuration files and environmental variables\"\"\"\n\n def __init__(self, *args, **kwargs):\n self.name = kwargs.pop('name')\n\n isolated = kwargs.pop(\"isolated\", False)\n self.config = Configuration(isolated)\n\n assert self.name\n optparse.OptionParser.__init__(self, *args, **kwargs)\n\n def check_default(self, option, key, val):\n try:\n return option.check_value(key, val)\n except optparse.OptionValueError as exc:\n print(\"An error occurred during configuration: {}\".format(exc))\n sys.exit(3)\n\n def _get_ordered_configuration_items(self):\n # Configuration gives keys in an unordered manner. Order them.\n override_order = [\"global\", self.name, \":env:\"]\n\n # Pool the options into different groups\n section_items = {name: [] for name in override_order}\n for section_key, val in self.config.items():\n # ignore empty values\n if not val:\n logger.debug(\n \"Ignoring configuration key '%s' as it's value is empty.\",\n section_key\n )\n continue\n\n section, key = section_key.split(\".\", 1)\n if section in override_order:\n section_items[section].append((key, val))\n\n # Yield each group in their override order\n for section in override_order:\n for key, val in section_items[section]:\n yield key, val\n\n def _update_defaults(self, defaults):\n \"\"\"Updates the given defaults with values from the config files and\n the environ. 
Does a little special handling for certain types of\n options (lists).\"\"\"\n\n # Accumulate complex default state.\n self.values = optparse.Values(self.defaults)\n late_eval = set()\n # Then set the options with those values\n for key, val in self._get_ordered_configuration_items():\n # '--' because configuration supports only long names\n option = self.get_option('--' + key)\n\n # Ignore options not present in this parser. E.g. non-globals put\n # in [global] by users that want them to apply to all applicable\n # commands.\n if option is None:\n continue\n\n if option.action in ('store_true', 'store_false'):\n try:\n val = strtobool(val)\n except ValueError:\n self.error(\n '{} is not a valid value for {} option, ' # noqa\n 'please specify a boolean value like yes/no, '\n 'true/false or 1/0 instead.'.format(val, key)\n )\n elif option.action == 'count':\n with suppress(ValueError):\n val = strtobool(val)\n with suppress(ValueError):\n val = int(val)\n if not isinstance(val, int) or val < 0:\n self.error(\n '{} is not a valid value for {} option, ' # noqa\n 'please instead specify either a non-negative integer '\n 'or a boolean value like yes/no or false/true '\n 'which is equivalent to 1/0.'.format(val, key)\n )\n elif option.action == 'append':\n val = val.split()\n val = [self.check_default(option, key, v) for v in val]\n elif option.action == 'callback':\n late_eval.add(option.dest)\n opt_str = option.get_opt_string()\n val = option.convert_value(opt_str, val)\n # From take_action\n args = option.callback_args or ()\n kwargs = option.callback_kwargs or {}\n option.callback(option, opt_str, val, self, *args, **kwargs)\n else:\n val = self.check_default(option, key, val)\n\n defaults[option.dest] = val\n\n for key in late_eval:\n defaults[key] = getattr(self.values, key)\n self.values = None\n return defaults\n\n def get_default_values(self):\n \"\"\"Overriding to make updating the defaults after instantiation of\n the option parser possible, _update_defaults() does the dirty work.\"\"\"\n if not self.process_default_values:\n # Old, pre-Optik 1.5 behaviour.\n return optparse.Values(self.defaults)\n\n # Load the configuration, or error out in case of an error\n try:\n self.config.load()\n except ConfigurationError as err:\n self.exit(UNKNOWN_ERROR, str(err))\n\n defaults = self._update_defaults(self.defaults.copy()) # ours\n for option in self._get_all_options():\n default = defaults.get(option.dest)\n if isinstance(default, string_types):\n opt_str = option.get_opt_string()\n defaults[option.dest] = option.check_value(opt_str, default)\n return optparse.Values(defaults)\n\n def error(self, msg):\n self.print_usage(sys.stderr)\n self.exit(UNKNOWN_ERROR, \"{}\\n\".format(msg))\n", "path": "src/pip/_internal/cli/parser.py"}]}
| 3,389 | 282 |
gh_patches_debug_14153
|
rasdani/github-patches
|
git_diff
|
mitmproxy__mitmproxy-797
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Mitmweb can't run
When I run mitmweb,then open 121.0.0.1:8081, the error happened.
```
Traceback (most recent call last):
File "/Users/venv.mitmproxy/lib/python2.7/site-packages/tornado/ioloop.py", line 1020, in _run
return self.callback()
File "/Users/mitmproxy/libmproxy/web/__init__.py", line 141, in tick
flow.FlowMaster.tick(self, self.masterq, timeout=0)
File "/Users/mitmproxy/libmproxy/flow.py", line 812, in tick
return super(FlowMaster, self).tick(q, timeout)
File "/Users/mitmproxy/libmproxy/controller.py", line 108, in tick
self.handle(*msg)
File "/Users/mitmproxy/libmproxy/controller.py", line 129, in handle
m(obj)
File "/Users/mitmproxy/libmproxy/web/__init__.py", line 167, in handle_request
super(WebMaster, self).handle_request(f)
File "/Users/mitmproxy/libmproxy/flow.py", line 990, in handle_request
self.state.add_flow(f)
File "/Users/mitmproxy/libmproxy/flow.py", line 561, in add_flow
self.flows._add(f)
File "/Users/mitmproxy/libmproxy/flow.py", line 469, in _add
view._add(f)
File "/Users/mitmproxy/libmproxy/web/__init__.py", line 23, in _add
data=f.get_state(short=True)
File "/Users/mitmproxy/libmproxy/models/flow.py", line 88, in get_state
d = super(Flow, self).get_state(short)
File "/Users/mitmproxy/libmproxy/stateobject.py", line 32, in get_state
state[attr] = val.get_state(short)
File "/Users/mitmproxy/libmproxy/models/connections.py", line 138, in get_state
address={"address": self.address(),
TypeError: 'NoneType' object is not callable
ERROR:tornado.application:Exception in callback <bound method WebMaster.tick of <libmproxy.web.WebMaster object at 0x10cfe5a90>>
```
Mitmweb can't run
When I run mitmweb,then open 121.0.0.1:8081, the error happened.
```
Traceback (most recent call last):
File "/Users/venv.mitmproxy/lib/python2.7/site-packages/tornado/ioloop.py", line 1020, in _run
return self.callback()
File "/Users/mitmproxy/libmproxy/web/__init__.py", line 141, in tick
flow.FlowMaster.tick(self, self.masterq, timeout=0)
File "/Users/mitmproxy/libmproxy/flow.py", line 812, in tick
return super(FlowMaster, self).tick(q, timeout)
File "/Users/mitmproxy/libmproxy/controller.py", line 108, in tick
self.handle(*msg)
File "/Users/mitmproxy/libmproxy/controller.py", line 129, in handle
m(obj)
File "/Users/mitmproxy/libmproxy/web/__init__.py", line 167, in handle_request
super(WebMaster, self).handle_request(f)
File "/Users/mitmproxy/libmproxy/flow.py", line 990, in handle_request
self.state.add_flow(f)
File "/Users/mitmproxy/libmproxy/flow.py", line 561, in add_flow
self.flows._add(f)
File "/Users/mitmproxy/libmproxy/flow.py", line 469, in _add
view._add(f)
File "/Users/mitmproxy/libmproxy/web/__init__.py", line 23, in _add
data=f.get_state(short=True)
File "/Users/mitmproxy/libmproxy/models/flow.py", line 88, in get_state
d = super(Flow, self).get_state(short)
File "/Users/mitmproxy/libmproxy/stateobject.py", line 32, in get_state
state[attr] = val.get_state(short)
File "/Users/mitmproxy/libmproxy/models/connections.py", line 138, in get_state
address={"address": self.address(),
TypeError: 'NoneType' object is not callable
ERROR:tornado.application:Exception in callback <bound method WebMaster.tick of <libmproxy.web.WebMaster object at 0x10cfe5a90>>
```
</issue>
<code>
[start of libmproxy/models/connections.py]
1 from __future__ import (absolute_import, print_function, division)
2
3 import copy
4 import os
5
6 from netlib import tcp, certutils
7 from .. import stateobject, utils
8
9
10 class ClientConnection(tcp.BaseHandler, stateobject.StateObject):
11 def __init__(self, client_connection, address, server):
12 # Eventually, this object is restored from state. We don't have a
13 # connection then.
14 if client_connection:
15 super(ClientConnection, self).__init__(client_connection, address, server)
16 else:
17 self.connection = None
18 self.server = None
19 self.wfile = None
20 self.rfile = None
21 self.address = None
22 self.clientcert = None
23 self.ssl_established = None
24
25 self.timestamp_start = utils.timestamp()
26 self.timestamp_end = None
27 self.timestamp_ssl_setup = None
28 self.protocol = None
29
30 def __nonzero__(self):
31 return bool(self.connection) and not self.finished
32
33 def __repr__(self):
34 return "<ClientConnection: {ssl}{host}:{port}>".format(
35 ssl="[ssl] " if self.ssl_established else "",
36 host=self.address.host,
37 port=self.address.port
38 )
39
40 @property
41 def tls_established(self):
42 return self.ssl_established
43
44 _stateobject_attributes = dict(
45 ssl_established=bool,
46 timestamp_start=float,
47 timestamp_end=float,
48 timestamp_ssl_setup=float
49 )
50
51 def get_state(self, short=False):
52 d = super(ClientConnection, self).get_state(short)
53 d.update(
54 address={
55 "address": self.address(),
56 "use_ipv6": self.address.use_ipv6},
57 clientcert=self.cert.to_pem() if self.clientcert else None)
58 return d
59
60 def load_state(self, state):
61 super(ClientConnection, self).load_state(state)
62 self.address = tcp.Address(
63 **state["address"]) if state["address"] else None
64 self.clientcert = certutils.SSLCert.from_pem(
65 state["clientcert"]) if state["clientcert"] else None
66
67 def copy(self):
68 return copy.copy(self)
69
70 def send(self, message):
71 if isinstance(message, list):
72 message = b''.join(message)
73 self.wfile.write(message)
74 self.wfile.flush()
75
76 @classmethod
77 def from_state(cls, state):
78 f = cls(None, tuple(), None)
79 f.load_state(state)
80 return f
81
82 def convert_to_ssl(self, *args, **kwargs):
83 super(ClientConnection, self).convert_to_ssl(*args, **kwargs)
84 self.timestamp_ssl_setup = utils.timestamp()
85
86 def finish(self):
87 super(ClientConnection, self).finish()
88 self.timestamp_end = utils.timestamp()
89
90
91 class ServerConnection(tcp.TCPClient, stateobject.StateObject):
92 def __init__(self, address):
93 tcp.TCPClient.__init__(self, address)
94
95 self.via = None
96 self.timestamp_start = None
97 self.timestamp_end = None
98 self.timestamp_tcp_setup = None
99 self.timestamp_ssl_setup = None
100 self.protocol = None
101
102 def __nonzero__(self):
103 return bool(self.connection) and not self.finished
104
105 def __repr__(self):
106 if self.ssl_established and self.sni:
107 ssl = "[ssl: {0}] ".format(self.sni)
108 elif self.ssl_established:
109 ssl = "[ssl] "
110 else:
111 ssl = ""
112 return "<ServerConnection: {ssl}{host}:{port}>".format(
113 ssl=ssl,
114 host=self.address.host,
115 port=self.address.port
116 )
117
118 @property
119 def tls_established(self):
120 return self.ssl_established
121
122 _stateobject_attributes = dict(
123 timestamp_start=float,
124 timestamp_end=float,
125 timestamp_tcp_setup=float,
126 timestamp_ssl_setup=float,
127 address=tcp.Address,
128 source_address=tcp.Address,
129 cert=certutils.SSLCert,
130 ssl_established=bool,
131 sni=str
132 )
133 _stateobject_long_attributes = {"cert"}
134
135 def get_state(self, short=False):
136 d = super(ServerConnection, self).get_state(short)
137 d.update(
138 address={"address": self.address(),
139 "use_ipv6": self.address.use_ipv6},
140 source_address=({"address": self.source_address(),
141 "use_ipv6": self.source_address.use_ipv6} if self.source_address else None),
142 cert=self.cert.to_pem() if self.cert else None
143 )
144 return d
145
146 def load_state(self, state):
147 super(ServerConnection, self).load_state(state)
148
149 self.address = tcp.Address(
150 **state["address"]) if state["address"] else None
151 self.source_address = tcp.Address(
152 **state["source_address"]) if state["source_address"] else None
153 self.cert = certutils.SSLCert.from_pem(
154 state["cert"]) if state["cert"] else None
155
156 @classmethod
157 def from_state(cls, state):
158 f = cls(tuple())
159 f.load_state(state)
160 return f
161
162 def copy(self):
163 return copy.copy(self)
164
165 def connect(self):
166 self.timestamp_start = utils.timestamp()
167 tcp.TCPClient.connect(self)
168 self.timestamp_tcp_setup = utils.timestamp()
169
170 def send(self, message):
171 if isinstance(message, list):
172 message = b''.join(message)
173 self.wfile.write(message)
174 self.wfile.flush()
175
176 def establish_ssl(self, clientcerts, sni, **kwargs):
177 clientcert = None
178 if clientcerts:
179 path = os.path.join(
180 clientcerts,
181 self.address.host.encode("idna")) + ".pem"
182 if os.path.exists(path):
183 clientcert = path
184
185 self.convert_to_ssl(cert=clientcert, sni=sni, **kwargs)
186 self.sni = sni
187 self.timestamp_ssl_setup = utils.timestamp()
188
189 def finish(self):
190 tcp.TCPClient.finish(self)
191 self.timestamp_end = utils.timestamp()
192
193
194 ServerConnection._stateobject_attributes["via"] = ServerConnection
195
[end of libmproxy/models/connections.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/libmproxy/models/connections.py b/libmproxy/models/connections.py
--- a/libmproxy/models/connections.py
+++ b/libmproxy/models/connections.py
@@ -135,8 +135,8 @@
def get_state(self, short=False):
d = super(ServerConnection, self).get_state(short)
d.update(
- address={"address": self.address(),
- "use_ipv6": self.address.use_ipv6},
+ address=({"address": self.address(),
+ "use_ipv6": self.address.use_ipv6} if self.address else {}),
source_address=({"address": self.source_address(),
"use_ipv6": self.source_address.use_ipv6} if self.source_address else None),
cert=self.cert.to_pem() if self.cert else None
|
{"golden_diff": "diff --git a/libmproxy/models/connections.py b/libmproxy/models/connections.py\n--- a/libmproxy/models/connections.py\n+++ b/libmproxy/models/connections.py\n@@ -135,8 +135,8 @@\n def get_state(self, short=False):\n d = super(ServerConnection, self).get_state(short)\n d.update(\n- address={\"address\": self.address(),\n- \"use_ipv6\": self.address.use_ipv6},\n+ address=({\"address\": self.address(),\n+ \"use_ipv6\": self.address.use_ipv6} if self.address else {}), \n source_address=({\"address\": self.source_address(),\n \"use_ipv6\": self.source_address.use_ipv6} if self.source_address else None),\n cert=self.cert.to_pem() if self.cert else None\n", "issue": "Mitmweb can't run\nWhen I run mitmweb,then open 121.0.0.1:8081, the error happened.\n\n```\nTraceback (most recent call last):\n File \"/Users/venv.mitmproxy/lib/python2.7/site-packages/tornado/ioloop.py\", line 1020, in _run\n return self.callback()\n File \"/Users/mitmproxy/libmproxy/web/__init__.py\", line 141, in tick\n flow.FlowMaster.tick(self, self.masterq, timeout=0)\n File \"/Users/mitmproxy/libmproxy/flow.py\", line 812, in tick\n return super(FlowMaster, self).tick(q, timeout)\n File \"/Users/mitmproxy/libmproxy/controller.py\", line 108, in tick\n self.handle(*msg)\n File \"/Users/mitmproxy/libmproxy/controller.py\", line 129, in handle\n m(obj)\n File \"/Users/mitmproxy/libmproxy/web/__init__.py\", line 167, in handle_request\n super(WebMaster, self).handle_request(f)\n File \"/Users/mitmproxy/libmproxy/flow.py\", line 990, in handle_request\n self.state.add_flow(f)\n File \"/Users/mitmproxy/libmproxy/flow.py\", line 561, in add_flow\n self.flows._add(f)\n File \"/Users/mitmproxy/libmproxy/flow.py\", line 469, in _add\n view._add(f)\n File \"/Users/mitmproxy/libmproxy/web/__init__.py\", line 23, in _add\n data=f.get_state(short=True)\n File \"/Users/mitmproxy/libmproxy/models/flow.py\", line 88, in get_state\n d = super(Flow, self).get_state(short)\n File \"/Users/mitmproxy/libmproxy/stateobject.py\", line 32, in get_state\n state[attr] = val.get_state(short)\n File \"/Users/mitmproxy/libmproxy/models/connections.py\", line 138, in get_state\n address={\"address\": self.address(),\nTypeError: 'NoneType' object is not callable\nERROR:tornado.application:Exception in callback <bound method WebMaster.tick of <libmproxy.web.WebMaster object at 0x10cfe5a90>>\n```\n\nMitmweb can't run\nWhen I run mitmweb,then open 121.0.0.1:8081, the error happened.\n\n```\nTraceback (most recent call last):\n File \"/Users/venv.mitmproxy/lib/python2.7/site-packages/tornado/ioloop.py\", line 1020, in _run\n return self.callback()\n File \"/Users/mitmproxy/libmproxy/web/__init__.py\", line 141, in tick\n flow.FlowMaster.tick(self, self.masterq, timeout=0)\n File \"/Users/mitmproxy/libmproxy/flow.py\", line 812, in tick\n return super(FlowMaster, self).tick(q, timeout)\n File \"/Users/mitmproxy/libmproxy/controller.py\", line 108, in tick\n self.handle(*msg)\n File \"/Users/mitmproxy/libmproxy/controller.py\", line 129, in handle\n m(obj)\n File \"/Users/mitmproxy/libmproxy/web/__init__.py\", line 167, in handle_request\n super(WebMaster, self).handle_request(f)\n File \"/Users/mitmproxy/libmproxy/flow.py\", line 990, in handle_request\n self.state.add_flow(f)\n File \"/Users/mitmproxy/libmproxy/flow.py\", line 561, in add_flow\n self.flows._add(f)\n File \"/Users/mitmproxy/libmproxy/flow.py\", line 469, in _add\n view._add(f)\n File \"/Users/mitmproxy/libmproxy/web/__init__.py\", line 23, in _add\n 
data=f.get_state(short=True)\n File \"/Users/mitmproxy/libmproxy/models/flow.py\", line 88, in get_state\n d = super(Flow, self).get_state(short)\n File \"/Users/mitmproxy/libmproxy/stateobject.py\", line 32, in get_state\n state[attr] = val.get_state(short)\n File \"/Users/mitmproxy/libmproxy/models/connections.py\", line 138, in get_state\n address={\"address\": self.address(),\nTypeError: 'NoneType' object is not callable\nERROR:tornado.application:Exception in callback <bound method WebMaster.tick of <libmproxy.web.WebMaster object at 0x10cfe5a90>>\n```\n\n", "before_files": [{"content": "from __future__ import (absolute_import, print_function, division)\n\nimport copy\nimport os\n\nfrom netlib import tcp, certutils\nfrom .. import stateobject, utils\n\n\nclass ClientConnection(tcp.BaseHandler, stateobject.StateObject):\n def __init__(self, client_connection, address, server):\n # Eventually, this object is restored from state. We don't have a\n # connection then.\n if client_connection:\n super(ClientConnection, self).__init__(client_connection, address, server)\n else:\n self.connection = None\n self.server = None\n self.wfile = None\n self.rfile = None\n self.address = None\n self.clientcert = None\n self.ssl_established = None\n\n self.timestamp_start = utils.timestamp()\n self.timestamp_end = None\n self.timestamp_ssl_setup = None\n self.protocol = None\n\n def __nonzero__(self):\n return bool(self.connection) and not self.finished\n\n def __repr__(self):\n return \"<ClientConnection: {ssl}{host}:{port}>\".format(\n ssl=\"[ssl] \" if self.ssl_established else \"\",\n host=self.address.host,\n port=self.address.port\n )\n\n @property\n def tls_established(self):\n return self.ssl_established\n\n _stateobject_attributes = dict(\n ssl_established=bool,\n timestamp_start=float,\n timestamp_end=float,\n timestamp_ssl_setup=float\n )\n\n def get_state(self, short=False):\n d = super(ClientConnection, self).get_state(short)\n d.update(\n address={\n \"address\": self.address(),\n \"use_ipv6\": self.address.use_ipv6},\n clientcert=self.cert.to_pem() if self.clientcert else None)\n return d\n\n def load_state(self, state):\n super(ClientConnection, self).load_state(state)\n self.address = tcp.Address(\n **state[\"address\"]) if state[\"address\"] else None\n self.clientcert = certutils.SSLCert.from_pem(\n state[\"clientcert\"]) if state[\"clientcert\"] else None\n\n def copy(self):\n return copy.copy(self)\n\n def send(self, message):\n if isinstance(message, list):\n message = b''.join(message)\n self.wfile.write(message)\n self.wfile.flush()\n\n @classmethod\n def from_state(cls, state):\n f = cls(None, tuple(), None)\n f.load_state(state)\n return f\n\n def convert_to_ssl(self, *args, **kwargs):\n super(ClientConnection, self).convert_to_ssl(*args, **kwargs)\n self.timestamp_ssl_setup = utils.timestamp()\n\n def finish(self):\n super(ClientConnection, self).finish()\n self.timestamp_end = utils.timestamp()\n\n\nclass ServerConnection(tcp.TCPClient, stateobject.StateObject):\n def __init__(self, address):\n tcp.TCPClient.__init__(self, address)\n\n self.via = None\n self.timestamp_start = None\n self.timestamp_end = None\n self.timestamp_tcp_setup = None\n self.timestamp_ssl_setup = None\n self.protocol = None\n\n def __nonzero__(self):\n return bool(self.connection) and not self.finished\n\n def __repr__(self):\n if self.ssl_established and self.sni:\n ssl = \"[ssl: {0}] \".format(self.sni)\n elif self.ssl_established:\n ssl = \"[ssl] \"\n else:\n ssl = \"\"\n return 
\"<ServerConnection: {ssl}{host}:{port}>\".format(\n ssl=ssl,\n host=self.address.host,\n port=self.address.port\n )\n\n @property\n def tls_established(self):\n return self.ssl_established\n\n _stateobject_attributes = dict(\n timestamp_start=float,\n timestamp_end=float,\n timestamp_tcp_setup=float,\n timestamp_ssl_setup=float,\n address=tcp.Address,\n source_address=tcp.Address,\n cert=certutils.SSLCert,\n ssl_established=bool,\n sni=str\n )\n _stateobject_long_attributes = {\"cert\"}\n\n def get_state(self, short=False):\n d = super(ServerConnection, self).get_state(short)\n d.update(\n address={\"address\": self.address(),\n \"use_ipv6\": self.address.use_ipv6},\n source_address=({\"address\": self.source_address(),\n \"use_ipv6\": self.source_address.use_ipv6} if self.source_address else None),\n cert=self.cert.to_pem() if self.cert else None\n )\n return d\n\n def load_state(self, state):\n super(ServerConnection, self).load_state(state)\n\n self.address = tcp.Address(\n **state[\"address\"]) if state[\"address\"] else None\n self.source_address = tcp.Address(\n **state[\"source_address\"]) if state[\"source_address\"] else None\n self.cert = certutils.SSLCert.from_pem(\n state[\"cert\"]) if state[\"cert\"] else None\n\n @classmethod\n def from_state(cls, state):\n f = cls(tuple())\n f.load_state(state)\n return f\n\n def copy(self):\n return copy.copy(self)\n\n def connect(self):\n self.timestamp_start = utils.timestamp()\n tcp.TCPClient.connect(self)\n self.timestamp_tcp_setup = utils.timestamp()\n\n def send(self, message):\n if isinstance(message, list):\n message = b''.join(message)\n self.wfile.write(message)\n self.wfile.flush()\n\n def establish_ssl(self, clientcerts, sni, **kwargs):\n clientcert = None\n if clientcerts:\n path = os.path.join(\n clientcerts,\n self.address.host.encode(\"idna\")) + \".pem\"\n if os.path.exists(path):\n clientcert = path\n\n self.convert_to_ssl(cert=clientcert, sni=sni, **kwargs)\n self.sni = sni\n self.timestamp_ssl_setup = utils.timestamp()\n\n def finish(self):\n tcp.TCPClient.finish(self)\n self.timestamp_end = utils.timestamp()\n\n\nServerConnection._stateobject_attributes[\"via\"] = ServerConnection\n", "path": "libmproxy/models/connections.py"}]}
| 3,409 | 177 |
gh_patches_debug_17763
|
rasdani/github-patches
|
git_diff
|
Mailu__Mailu-2794
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
dovecot: support for zstd/lz4 compression is not compiled in
## Environment & Versions
### Environment
- [x] docker-compose
- [ ] kubernetes
- [ ] docker swarm
### Versions
1.9
## Description
When `COMPRESSION` in `mailu.env` is set to `zstd` or `lz4` and a new mail arrives, the imap container logs something like this:
```
Jan 04 23:32:46 lmtp([email protected])<32616><kRbzEw7L1GFofwAADGH2HQ>: Error: zlib_save: Support not compiled in for handler: zstd
Jan 04 23:32:46 indexer-worker([email protected])<32619><D5ZaIcXUkKDAqMsD:Z6zPGQ7L1GFrfwAADGH2HQ>: Error: zlib_save: Support not compiled in for handler: zstd
```
```
Jan 04 23:43:03 imap([email protected])<178><2CwwXMnUwMXAqMsE>: Error: zlib_save: Support not compiled in for handler: lz4
Jan 04 23:43:03 indexer-worker([email protected])<180><B4qhJXfN1GGvAAAAflj66Q:/8frM3fN1GG0AAAAflj66Q>: Error: zlib_save: Support not compiled in for handler: lz4
```
As far as I can tell the Dovecot package from Alpine is probably not compiled with zstd/lz4 support, but I'm not sure since the build log of the last Alpine package returns 404 :roll_eyes:
This is __not__ breaking anything, mails will be received, but they won't be compressed.
## Replication Steps
1. in `mailu.env` set `COMPRESSION=zstd` or `COMPRESSION=lz4`
2. `docker-compose down && docker-compose up`
3. send yourself a mail
4. watch the logs of the imap container
## Expected behaviour
With `COMPRESSION=zstd`/`COMPRESSION=lz4` set, mails should be compressed with zstd/lz4 and then stored.
## Workaround
Set `COMPRESSION=gz` if you need compressed mail storage.
</issue>
<code>
[start of core/postfix/start.py]
1 #!/usr/bin/env python3
2
3 import os
4 import glob
5 import shutil
6 import multiprocessing
7 import sys
8 import re
9
10 from podop import run_server
11 from socrate import system, conf
12
13 system.set_env(log_filters=[
14 r'the Postfix mail system is running\: \d+$',
15 r'(dis)?connect from localhost\[(\:\:1|127\.0\.0\.1)\]( quit=1 commands=1)?$',
16 r'haproxy read\: short protocol header\: QUIT$',
17 r'discarding EHLO keywords\: PIPELINING$',
18 ], log_file=os.environ.get('POSTFIX_LOG_FILE'))
19
20 os.system("flock -n /queue/pid/master.pid rm /queue/pid/master.pid")
21
22 def start_podop():
23 system.drop_privs_to('postfix')
24 os.makedirs('/dev/shm/postfix',mode=0o700, exist_ok=True)
25 url = "http://" + os.environ["ADMIN_ADDRESS"] + "/internal/postfix/"
26 # TODO: Remove verbosity setting from Podop?
27 run_server(0, "postfix", "/tmp/podop.socket", [
28 ("transport", "url", url + "transport/§"),
29 ("alias", "url", url + "alias/§"),
30 ("dane", "url", url + "dane/§"),
31 ("domain", "url", url + "domain/§"),
32 ("mailbox", "url", url + "mailbox/§"),
33 ("recipientmap", "url", url + "recipient/map/§"),
34 ("sendermap", "url", url + "sender/map/§"),
35 ("senderlogin", "url", url + "sender/login/§"),
36 ("senderrate", "url", url + "sender/rate/§")
37 ])
38
39 def start_mta_sts_daemon():
40 os.chmod("/root/", 0o755) # read access to /root/.netrc required
41 system.drop_privs_to('postfix')
42 from postfix_mta_sts_resolver import daemon
43 daemon.main()
44
45 def is_valid_postconf_line(line):
46 return not line.startswith("#") \
47 and not line == ''
48
49 # Actual startup script
50 os.environ['DEFER_ON_TLS_ERROR'] = os.environ['DEFER_ON_TLS_ERROR'] if 'DEFER_ON_TLS_ERROR' in os.environ else 'True'
51
52 # Postfix requires IPv6 addresses to be wrapped in square brackets
53 if 'RELAYNETS' in os.environ:
54 os.environ["RELAYNETS"] = re.sub(r'([0-9a-fA-F]+:[0-9a-fA-F:]+)/', '[\\1]/', os.environ["RELAYNETS"])
55
56 for postfix_file in glob.glob("/conf/*.cf"):
57 conf.jinja(postfix_file, os.environ, os.path.join("/etc/postfix", os.path.basename(postfix_file)))
58
59 if os.path.exists("/overrides/postfix.cf"):
60 for line in open("/overrides/postfix.cf").read().strip().split("\n"):
61 if is_valid_postconf_line(line):
62 os.system('postconf -e "{}"'.format(line))
63
64 if os.path.exists("/overrides/postfix.master"):
65 for line in open("/overrides/postfix.master").read().strip().split("\n"):
66 if is_valid_postconf_line(line):
67 os.system('postconf -Me "{}"'.format(line))
68
69 for map_file in glob.glob("/overrides/*.map"):
70 destination = os.path.join("/etc/postfix", os.path.basename(map_file))
71 shutil.copyfile(map_file, destination)
72 os.system("postmap {}".format(destination))
73 os.remove(destination)
74
75 if os.path.exists("/overrides/mta-sts-daemon.yml"):
76 shutil.copyfile("/overrides/mta-sts-daemon.yml", "/etc/mta-sts-daemon.yml")
77 else:
78 conf.jinja("/conf/mta-sts-daemon.yml", os.environ, "/etc/mta-sts-daemon.yml")
79
80 for policy in ['tls_policy', 'transport']:
81 if not os.path.exists(f'/etc/postfix/{policy}.map.lmdb'):
82 open(f'/etc/postfix/{policy}.map', 'a').close()
83 os.system(f'postmap /etc/postfix/{policy}.map')
84
85 if "RELAYUSER" in os.environ:
86 path = "/etc/postfix/sasl_passwd"
87 conf.jinja("/conf/sasl_passwd", os.environ, path)
88 os.system("postmap {}".format(path))
89
90 # Configure logrotate and start crond
91 if os.environ.get('POSTFIX_LOG_FILE'):
92 conf.jinja("/conf/logrotate.conf", os.environ, "/etc/logrotate.d/postfix.conf")
93 os.system("/usr/sbin/crond")
94 if os.path.exists("/overrides/logrotate.conf"):
95 shutil.copyfile("/overrides/logrotate.conf", "/etc/logrotate.d/postfix.conf")
96
97 # Run Podop and Postfix
98 multiprocessing.Process(target=start_podop).start()
99 multiprocessing.Process(target=start_mta_sts_daemon).start()
100 os.system("/usr/libexec/postfix/post-install meta_directory=/etc/postfix create-missing")
101 # Before starting postfix, we need to check permissions on /queue
102 # in the event that postfix,postdrop id have changed
103 os.system("postfix set-permissions")
104 os.system("postfix start-fg")
105
[end of core/postfix/start.py]
[start of core/dovecot/start.py]
1 #!/usr/bin/env python3
2
3 import os
4 import glob
5 import multiprocessing
6 import logging as log
7 import sys
8
9 from podop import run_server
10 from socrate import system, conf
11
12 system.set_env(log_filters=r'waitpid\(\) returned unknown PID \d+$')
13
14 def start_podop():
15 system.drop_privs_to('mail')
16 url = "http://" + os.environ["ADMIN_ADDRESS"] + "/internal/dovecot/§"
17 run_server(0, "dovecot", "/tmp/podop.socket", [
18 ("quota", "url", url ),
19 ("auth", "url", url),
20 ("sieve", "url", url),
21 ])
22
23 # Actual startup script
24 for dovecot_file in glob.glob("/conf/*.conf"):
25 conf.jinja(dovecot_file, os.environ, os.path.join("/etc/dovecot", os.path.basename(dovecot_file)))
26
27 os.makedirs("/conf/bin", exist_ok=True)
28 for script_file in glob.glob("/conf/*.script"):
29 out_file = os.path.join("/conf/bin/", os.path.basename(script_file).replace('.script',''))
30 conf.jinja(script_file, os.environ, out_file)
31 os.chmod(out_file, 0o555)
32
33 # Run Podop, then postfix
34 os.system("chown mail:mail /mail")
35 os.system("chown -R mail:mail /var/lib/dovecot /conf")
36
37 multiprocessing.Process(target=start_podop).start()
38 os.system("dovecot -c /etc/dovecot/dovecot.conf -F")
39
[end of core/dovecot/start.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/core/dovecot/start.py b/core/dovecot/start.py
--- a/core/dovecot/start.py
+++ b/core/dovecot/start.py
@@ -9,7 +9,7 @@
from podop import run_server
from socrate import system, conf
-system.set_env(log_filters=r'waitpid\(\) returned unknown PID \d+$')
+system.set_env(log_filters=r'Error\: SSL context initialization failed, disabling SSL\: Can\'t load SSL certificate \(ssl_cert setting\)\: The certificate is empty$')
def start_podop():
system.drop_privs_to('mail')
diff --git a/core/postfix/start.py b/core/postfix/start.py
--- a/core/postfix/start.py
+++ b/core/postfix/start.py
@@ -11,7 +11,6 @@
from socrate import system, conf
system.set_env(log_filters=[
- r'the Postfix mail system is running\: \d+$',
r'(dis)?connect from localhost\[(\:\:1|127\.0\.0\.1)\]( quit=1 commands=1)?$',
r'haproxy read\: short protocol header\: QUIT$',
r'discarding EHLO keywords\: PIPELINING$',
|
{"golden_diff": "diff --git a/core/dovecot/start.py b/core/dovecot/start.py\n--- a/core/dovecot/start.py\n+++ b/core/dovecot/start.py\n@@ -9,7 +9,7 @@\n from podop import run_server\n from socrate import system, conf\n \n-system.set_env(log_filters=r'waitpid\\(\\) returned unknown PID \\d+$')\n+system.set_env(log_filters=r'Error\\: SSL context initialization failed, disabling SSL\\: Can\\'t load SSL certificate \\(ssl_cert setting\\)\\: The certificate is empty$')\n \n def start_podop():\n system.drop_privs_to('mail')\ndiff --git a/core/postfix/start.py b/core/postfix/start.py\n--- a/core/postfix/start.py\n+++ b/core/postfix/start.py\n@@ -11,7 +11,6 @@\n from socrate import system, conf\n \n system.set_env(log_filters=[\n- r'the Postfix mail system is running\\: \\d+$',\n r'(dis)?connect from localhost\\[(\\:\\:1|127\\.0\\.0\\.1)\\]( quit=1 commands=1)?$',\n r'haproxy read\\: short protocol header\\: QUIT$',\n r'discarding EHLO keywords\\: PIPELINING$',\n", "issue": "dovecot: support for zstd/lz4 compression is not compiled in\n## Environment & Versions\r\n### Environment\r\n - [x] docker-compose\r\n - [ ] kubernetes\r\n - [ ] docker swarm\r\n\r\n### Versions\r\n1.9\r\n\r\n## Description\r\nWhen `COMPRESSION` in `mailu.env` is set to `zstd` or `lz4` and a new mail arrives, the imap container logs something like this:\r\n```\r\nJan 04 23:32:46 lmtp([email protected])<32616><kRbzEw7L1GFofwAADGH2HQ>: Error: zlib_save: Support not compiled in for handler: zstd\r\nJan 04 23:32:46 indexer-worker([email protected])<32619><D5ZaIcXUkKDAqMsD:Z6zPGQ7L1GFrfwAADGH2HQ>: Error: zlib_save: Support not compiled in for handler: zstd\r\n```\r\n\r\n```\r\nJan 04 23:43:03 imap([email protected])<178><2CwwXMnUwMXAqMsE>: Error: zlib_save: Support not compiled in for handler: lz4\r\nJan 04 23:43:03 indexer-worker([email protected])<180><B4qhJXfN1GGvAAAAflj66Q:/8frM3fN1GG0AAAAflj66Q>: Error: zlib_save: Support not compiled in for handler: lz4\r\n```\r\n\r\nAs far as I can tell the Dovecot package from Alpine is probably not compiled with zstd/lz4 support, but I'm not sure since the build log of the last Alpine package returns 404 :roll_eyes:\r\n\r\nThis is __not__ breaking anything, mails will be received, but they won't be compressed.\r\n\r\n## Replication Steps\r\n1. in `mailu.env` set `COMPRESSION=zstd` or `COMPRESSION=lz4`\r\n2. `docker-compose down && docker-compose up`\r\n3. send yourself a mail\r\n4. 
watch the logs of the imap container\r\n\r\n## Expected behaviour\r\nWith `COMPRESSION=zstd`/`COMPRESSION=lz4` set, mails should be compressed with zstd/lz4 and then stored.\r\n\r\n## Workaround\r\nSet `COMPRESSION=gz` if you need compressed mail storage.\n", "before_files": [{"content": "#!/usr/bin/env python3\n\nimport os\nimport glob\nimport shutil\nimport multiprocessing\nimport sys\nimport re\n\nfrom podop import run_server\nfrom socrate import system, conf\n\nsystem.set_env(log_filters=[\n r'the Postfix mail system is running\\: \\d+$',\n r'(dis)?connect from localhost\\[(\\:\\:1|127\\.0\\.0\\.1)\\]( quit=1 commands=1)?$',\n r'haproxy read\\: short protocol header\\: QUIT$',\n r'discarding EHLO keywords\\: PIPELINING$',\n ], log_file=os.environ.get('POSTFIX_LOG_FILE'))\n\nos.system(\"flock -n /queue/pid/master.pid rm /queue/pid/master.pid\")\n\ndef start_podop():\n system.drop_privs_to('postfix')\n os.makedirs('/dev/shm/postfix',mode=0o700, exist_ok=True)\n url = \"http://\" + os.environ[\"ADMIN_ADDRESS\"] + \"/internal/postfix/\"\n # TODO: Remove verbosity setting from Podop?\n run_server(0, \"postfix\", \"/tmp/podop.socket\", [\n (\"transport\", \"url\", url + \"transport/\u00a7\"),\n (\"alias\", \"url\", url + \"alias/\u00a7\"),\n (\"dane\", \"url\", url + \"dane/\u00a7\"),\n (\"domain\", \"url\", url + \"domain/\u00a7\"),\n (\"mailbox\", \"url\", url + \"mailbox/\u00a7\"),\n (\"recipientmap\", \"url\", url + \"recipient/map/\u00a7\"),\n (\"sendermap\", \"url\", url + \"sender/map/\u00a7\"),\n (\"senderlogin\", \"url\", url + \"sender/login/\u00a7\"),\n (\"senderrate\", \"url\", url + \"sender/rate/\u00a7\")\n ])\n\ndef start_mta_sts_daemon():\n os.chmod(\"/root/\", 0o755) # read access to /root/.netrc required\n system.drop_privs_to('postfix')\n from postfix_mta_sts_resolver import daemon\n daemon.main()\n\ndef is_valid_postconf_line(line):\n return not line.startswith(\"#\") \\\n and not line == ''\n\n# Actual startup script\nos.environ['DEFER_ON_TLS_ERROR'] = os.environ['DEFER_ON_TLS_ERROR'] if 'DEFER_ON_TLS_ERROR' in os.environ else 'True'\n\n# Postfix requires IPv6 addresses to be wrapped in square brackets\nif 'RELAYNETS' in os.environ:\n os.environ[\"RELAYNETS\"] = re.sub(r'([0-9a-fA-F]+:[0-9a-fA-F:]+)/', '[\\\\1]/', os.environ[\"RELAYNETS\"])\n\nfor postfix_file in glob.glob(\"/conf/*.cf\"):\n conf.jinja(postfix_file, os.environ, os.path.join(\"/etc/postfix\", os.path.basename(postfix_file)))\n\nif os.path.exists(\"/overrides/postfix.cf\"):\n for line in open(\"/overrides/postfix.cf\").read().strip().split(\"\\n\"):\n if is_valid_postconf_line(line):\n os.system('postconf -e \"{}\"'.format(line))\n\nif os.path.exists(\"/overrides/postfix.master\"):\n for line in open(\"/overrides/postfix.master\").read().strip().split(\"\\n\"):\n if is_valid_postconf_line(line):\n os.system('postconf -Me \"{}\"'.format(line))\n\nfor map_file in glob.glob(\"/overrides/*.map\"):\n destination = os.path.join(\"/etc/postfix\", os.path.basename(map_file))\n shutil.copyfile(map_file, destination)\n os.system(\"postmap {}\".format(destination))\n os.remove(destination)\n\nif os.path.exists(\"/overrides/mta-sts-daemon.yml\"):\n shutil.copyfile(\"/overrides/mta-sts-daemon.yml\", \"/etc/mta-sts-daemon.yml\")\nelse:\n conf.jinja(\"/conf/mta-sts-daemon.yml\", os.environ, \"/etc/mta-sts-daemon.yml\")\n\nfor policy in ['tls_policy', 'transport']:\n if not os.path.exists(f'/etc/postfix/{policy}.map.lmdb'):\n open(f'/etc/postfix/{policy}.map', 'a').close()\n os.system(f'postmap 
/etc/postfix/{policy}.map')\n\nif \"RELAYUSER\" in os.environ:\n path = \"/etc/postfix/sasl_passwd\"\n conf.jinja(\"/conf/sasl_passwd\", os.environ, path)\n os.system(\"postmap {}\".format(path))\n\n# Configure logrotate and start crond\nif os.environ.get('POSTFIX_LOG_FILE'):\n conf.jinja(\"/conf/logrotate.conf\", os.environ, \"/etc/logrotate.d/postfix.conf\")\n os.system(\"/usr/sbin/crond\")\n if os.path.exists(\"/overrides/logrotate.conf\"):\n shutil.copyfile(\"/overrides/logrotate.conf\", \"/etc/logrotate.d/postfix.conf\")\n\n# Run Podop and Postfix\nmultiprocessing.Process(target=start_podop).start()\nmultiprocessing.Process(target=start_mta_sts_daemon).start()\nos.system(\"/usr/libexec/postfix/post-install meta_directory=/etc/postfix create-missing\")\n# Before starting postfix, we need to check permissions on /queue\n# in the event that postfix,postdrop id have changed\nos.system(\"postfix set-permissions\")\nos.system(\"postfix start-fg\")\n", "path": "core/postfix/start.py"}, {"content": "#!/usr/bin/env python3\n\nimport os\nimport glob\nimport multiprocessing\nimport logging as log\nimport sys\n\nfrom podop import run_server\nfrom socrate import system, conf\n\nsystem.set_env(log_filters=r'waitpid\\(\\) returned unknown PID \\d+$')\n\ndef start_podop():\n system.drop_privs_to('mail')\n url = \"http://\" + os.environ[\"ADMIN_ADDRESS\"] + \"/internal/dovecot/\u00a7\"\n run_server(0, \"dovecot\", \"/tmp/podop.socket\", [\n\t\t(\"quota\", \"url\", url ),\n\t\t(\"auth\", \"url\", url),\n\t\t(\"sieve\", \"url\", url),\n ])\n\n# Actual startup script\nfor dovecot_file in glob.glob(\"/conf/*.conf\"):\n conf.jinja(dovecot_file, os.environ, os.path.join(\"/etc/dovecot\", os.path.basename(dovecot_file)))\n\nos.makedirs(\"/conf/bin\", exist_ok=True)\nfor script_file in glob.glob(\"/conf/*.script\"):\n out_file = os.path.join(\"/conf/bin/\", os.path.basename(script_file).replace('.script',''))\n conf.jinja(script_file, os.environ, out_file)\n os.chmod(out_file, 0o555)\n\n# Run Podop, then postfix\nos.system(\"chown mail:mail /mail\")\nos.system(\"chown -R mail:mail /var/lib/dovecot /conf\")\n\nmultiprocessing.Process(target=start_podop).start()\nos.system(\"dovecot -c /etc/dovecot/dovecot.conf -F\")\n", "path": "core/dovecot/start.py"}]}
| 2,857 | 272 |
gh_patches_debug_56612
|
rasdani/github-patches
|
git_diff
|
spacetelescope__jwql-677
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Update Bokeh to latest version
I remember there was some reason that we were holding off on upgrading Bokeh from 1.3.4. However, Bokeh is now up to version 2.2.1 I believe. We should look into upgrading the version used for JWQL in order to take advantage of new features and so that we minimize the number of plots created under 1.3.4 which may need to be tweaked to work under the new version.
For example, one difference I ran into today was that the keyword "legend", which is used in 1.3.4 to denote the string printed in the legend for a particular element, has been changed to "legend_label" in version 2.2.1.
</issue>
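As a concrete sketch of the rename described above (assuming Bokeh 1.4 or newer; the figure, data, and label are placeholders rather than JWQL code):

```python
from bokeh.plotting import figure

p = figure(title="example")

# Bokeh 1.3.4 accepted:    p.line(x, y, legend="flux")
# Bokeh >= 1.4 expects the legend text via ``legend_label`` instead:
p.line([1, 2, 3], [4, 5, 6], legend_label="flux")
```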
<code>
[start of setup.py]
1 import numpy as np
2 from setuptools import setup
3 from setuptools import find_packages
4
5 VERSION = '0.24.0'
6
7 AUTHORS = 'Matthew Bourque, Lauren Chambers, Misty Cracraft, Mike Engesser, Mees Fix, Joe Filippazzo, Bryan Hilbert, '
8 AUTHORS += 'Graham Kanarek, Teagan King, Catherine Martlin, Maria Pena-Guerrero, Johannes Sahlmann, Ben Sunnquist'
9
10 DESCRIPTION = 'The James Webb Space Telescope Quicklook Project'
11
12 DEPENDENCY_LINKS = ['git+https://github.com/spacetelescope/jwst_reffiles#egg=jwst_reffiles']
13
14 REQUIRES = [
15 'asdf>=2.3.3',
16 'astropy>=3.2.1',
17 'astroquery>=0.3.9',
18 'authlib',
19 'bokeh>=1.0,<1.4',
20 'codecov',
21 'crds',
22 'cryptography',
23 'django',
24 'flake8',
25 'inflection',
26 'ipython',
27 'jinja2',
28 'jsonschema',
29 'jwedb>=0.0.3',
30 'jwst',
31 'matplotlib',
32 'nodejs',
33 'numpy',
34 'numpydoc',
35 'pandas',
36 'psycopg2',
37 'pysiaf',
38 'pytest',
39 'pytest-cov',
40 'scipy',
41 'sphinx',
42 'sqlalchemy',
43 'stsci_rtd_theme',
44 'twine',
45 'wtforms'
46 ]
47
48 setup(
49 name='jwql',
50 version=VERSION,
51 description=DESCRIPTION,
52 url='https://github.com/spacetelescope/jwql.git',
53 author=AUTHORS,
54 author_email='[email protected]',
55 license='BSD',
56 keywords=['astronomy', 'python'],
57 classifiers=['Programming Language :: Python'],
58 packages=find_packages(),
59 install_requires=REQUIRES,
60 dependency_links=DEPENDENCY_LINKS,
61 include_package_data=True,
62 include_dirs=[np.get_include()],
63 )
64
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -16,7 +16,7 @@
'astropy>=3.2.1',
'astroquery>=0.3.9',
'authlib',
- 'bokeh>=1.0,<1.4',
+ 'bokeh',
'codecov',
'crds',
'cryptography',
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -16,7 +16,7 @@\n 'astropy>=3.2.1',\n 'astroquery>=0.3.9',\n 'authlib',\n- 'bokeh>=1.0,<1.4',\n+ 'bokeh',\n 'codecov',\n 'crds',\n 'cryptography',\n", "issue": "Update Bokeh to latest version\nI remember there was some reason that we were holding off on upgrading Bokeh from 1.3.4. However, Bokeh is now up to version 2.2.1 I believe. We should look into upgrading the version used for JWQL in order to take advantage of new features and so that we minimize the number of plots created under 1.3.4 which may need to be tweaked to work under the new version.\r\n\r\nFor example, one difference I ran into today was that the keyword \"legend\", which is used in 1.3.4 to denote the string printed in the legend for a particular element, has been changed to \"legend_label\" in version 2.2.1.\n", "before_files": [{"content": "import numpy as np\nfrom setuptools import setup\nfrom setuptools import find_packages\n\nVERSION = '0.24.0'\n\nAUTHORS = 'Matthew Bourque, Lauren Chambers, Misty Cracraft, Mike Engesser, Mees Fix, Joe Filippazzo, Bryan Hilbert, '\nAUTHORS += 'Graham Kanarek, Teagan King, Catherine Martlin, Maria Pena-Guerrero, Johannes Sahlmann, Ben Sunnquist'\n\nDESCRIPTION = 'The James Webb Space Telescope Quicklook Project'\n\nDEPENDENCY_LINKS = ['git+https://github.com/spacetelescope/jwst_reffiles#egg=jwst_reffiles']\n\nREQUIRES = [\n 'asdf>=2.3.3',\n 'astropy>=3.2.1',\n 'astroquery>=0.3.9',\n 'authlib',\n 'bokeh>=1.0,<1.4',\n 'codecov',\n 'crds',\n 'cryptography',\n 'django',\n 'flake8',\n 'inflection',\n 'ipython',\n 'jinja2',\n 'jsonschema',\n 'jwedb>=0.0.3',\n 'jwst',\n 'matplotlib',\n 'nodejs',\n 'numpy',\n 'numpydoc',\n 'pandas',\n 'psycopg2',\n 'pysiaf',\n 'pytest',\n 'pytest-cov',\n 'scipy',\n 'sphinx',\n 'sqlalchemy',\n 'stsci_rtd_theme',\n 'twine',\n 'wtforms'\n]\n\nsetup(\n name='jwql',\n version=VERSION,\n description=DESCRIPTION,\n url='https://github.com/spacetelescope/jwql.git',\n author=AUTHORS,\n author_email='[email protected]',\n license='BSD',\n keywords=['astronomy', 'python'],\n classifiers=['Programming Language :: Python'],\n packages=find_packages(),\n install_requires=REQUIRES,\n dependency_links=DEPENDENCY_LINKS,\n include_package_data=True,\n include_dirs=[np.get_include()],\n)\n", "path": "setup.py"}]}
| 1,254 | 94 |
gh_patches_debug_826
|
rasdani/github-patches
|
git_diff
|
internetarchive__openlibrary-4557
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
/openlibrary/openlibrary/templates/lists/widget.html: error in processing template: TypeError: Object of type Nothing is not JSON serializable (falling back to default template)
Patron is reporting the following error:
`/openlibrary/openlibrary/templates/lists/widget.html: error in processing template: TypeError: Object of type Nothing is not JSON serializable (falling back to default template)`
### Evidence / Screenshot (if possible)
### Relevant url?
https://openlibrary.org/works/OL24171550W/Kelebihan_Amalan_Bulan_Rejab_Sya%E2%80%99ban_Ramadhan
### Steps to Reproduce
<!-- What steps caused you to find the bug? -->
1. Go to ... https://openlibrary.org/works/OL24171550W/Kelebihan_Amalan_Bulan_Rejab_Sya%E2%80%99ban_Ramadhan
2. Do ... view error.
<!-- What actually happened after these steps? What did you expect to happen? -->
* Actual:
* Expected:
### Details
- **Logged in (Y/N)?**
- **Browser type/version?**
- **Operating system?**
- **Environment (prod/dev/local)?** prod
<!-- If not sure, put prod -->
### Proposal & Constraints
<!-- What is the proposed solution / implementation? Is there a precedent of this approach succeeding elsewhere? -->
### Related files
<!-- Files related to this issue; this is super useful for new contributors who might want to help! If you're not sure, leave this blank; a maintainer will add them. -->
### Stakeholders
<!-- @ tag stakeholders of this bug -->
</issue>
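A minimal sketch of the failure mode and of the `d or {}` guard used in the fix; the `Nothing` class below is a hypothetical stand-in for the falsy placeholder object that reaches the template, not web.py's actual implementation:

```python
import json


class Nothing:
    """Hypothetical stand-in for a falsy placeholder value."""

    def __bool__(self):
        return False


d = Nothing()

try:
    json.dumps(d)
except TypeError as exc:
    print(exc)  # Object of type Nothing is not JSON serializable

print(json.dumps(d or {}))  # "{}" -- falsy input falls back to an empty object
```

Note that under this guard any falsy value (including `[]`, `0`, or `""`) would also serialize as `{}`.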
<code>
[start of openlibrary/core/helpers.py]
1 """Generic helper functions to use in the templates and the webapp.
2 """
3 import web
4 from datetime import datetime
5 import re
6
7 import six
8 from six.moves.urllib.parse import urlsplit
9
10 if six.PY2: # See #4525 json.dump(indent) MUST be an int on PY2
11 import simplejson as json
12 else:
13 import json
14
15 import babel
16 import babel.core
17 import babel.dates
18 import babel.numbers
19
20 try:
21 import genshi
22 import genshi.filters
23 except ImportError:
24 genshi = None
25
26 try:
27 from bs4 import BeautifulSoup
28 except ImportError:
29 BeautifulSoup = None
30
31 from infogami import config
32
33 # handy utility to parse ISO date strings
34 from infogami.infobase.utils import parse_datetime
35 from infogami.utils.view import safeint
36
37 # TODO: i18n should be moved to core or infogami
38 from openlibrary.i18n import gettext as _ # noqa: F401
39
40 __all__ = [
41 "sanitize",
42 "json_encode",
43 "safesort",
44 "days_since", "datestr", "format_date",
45 "sprintf", "cond", "commify", "truncate", "datetimestr_utc",
46 "urlsafe", "texsafe",
47 "percentage", "affiliate_id", "bookreader_host",
48 "private_collections", "private_collection_in",
49
50 # functions imported from elsewhere
51 "parse_datetime", "safeint"
52 ]
53 __docformat__ = "restructuredtext en"
54
55 def sanitize(html, encoding='utf8'):
56 """Removes unsafe tags and attributes from html and adds
57 ``rel="nofollow"`` attribute to all external links.
58 Using encoding=None if passing unicode strings e.g. for Python 3.
59 encoding="utf8" matches default format for earlier versions of Genshi
60 https://genshi.readthedocs.io/en/latest/upgrade/#upgrading-from-genshi-0-6-x-to-the-development-version
61 """
62
63 # Can't sanitize unless genshi module is available
64 if genshi is None:
65 return html
66
67 def get_nofollow(name, event):
68 attrs = event[1][1]
69 href = attrs.get('href', '')
70
71 if href:
72 # add rel=nofollow to all absolute links
73 _, host, _, _, _ = urlsplit(href)
74 if host:
75 return 'nofollow'
76
77 try:
78 html = genshi.HTML(html, encoding=encoding)
79
80 # except (genshi.ParseError, UnicodeDecodeError, UnicodeError) as e:
81 # don't catch Unicode errors so we can tell if we're getting bytes
82 except genshi.ParseError:
83 if BeautifulSoup:
84 # Bad html. Tidy it up using BeautifulSoup
85 html = str(BeautifulSoup(html, "lxml"))
86 try:
87 html = genshi.HTML(html)
88 except Exception:
89 # Failed to sanitize.
90 # We can't do any better than returning the original HTML, without sanitizing.
91 return html
92 else:
93 raise
94
95 stream = html \
96 | genshi.filters.HTMLSanitizer() \
97 | genshi.filters.Transformer("//a").attr("rel", get_nofollow)
98 return stream.render()
99
100
101 def json_encode(d, **kw):
102 """Same as json.dumps.
103 """
104 return json.dumps(d, **kw)
105
106
107 def safesort(iterable, key=None, reverse=False):
108 """Sorts heterogeneous of objects without raising errors.
109
110 Sorting heterogeneous objects sometimes causes error. For example,
111 datetime and Nones don't go well together. This function takes special
112 care to make that work.
113 """
114 key = key or (lambda x: x)
115 def safekey(x):
116 k = key(x)
117 return (k.__class__.__name__, k)
118 return sorted(iterable, key=safekey, reverse=reverse)
119
120
121 def days_since(then, now=None):
122 delta = then - (now or datetime.now())
123 return abs(delta.days)
124
125
126 def datestr(then, now=None, lang=None, relative=True):
127 """Internationalized version of web.datestr."""
128 lang = lang or web.ctx.get('lang') or "en"
129 if relative:
130 if now is None:
131 now = datetime.now()
132 delta = then - now
133 if abs(delta.days) < 4: # Threshold from web.py
134 return babel.dates.format_timedelta(delta,
135 add_direction=True,
136 locale=_get_babel_locale(lang))
137 return format_date(then, lang=lang)
138
139
140 def datetimestr_utc(then):
141 return then.strftime("%Y-%m-%dT%H:%M:%SZ")
142
143 def format_date(date, lang=None):
144 lang = lang or web.ctx.get('lang') or "en"
145 locale = _get_babel_locale(lang)
146 return babel.dates.format_date(date, format="long", locale=locale)
147
148 def _get_babel_locale(lang):
149 try:
150 return babel.Locale(lang)
151 except babel.core.UnknownLocaleError:
152 return babel.Locale("en")
153
154
155 def sprintf(s, *a, **kw):
156 """Handy utility for string replacements.
157
158 >>> sprintf('hello %s', 'python')
159 'hello python'
160 >>> sprintf('hello %(name)s', name='python')
161 'hello python'
162 """
163 args = kw or a
164 if args:
165 return s % args
166 else:
167 return s
168
169
170 def cond(pred, true_value, false_value=""):
171 """Lisp style cond function.
172
173 Hanly to use instead of if-else expression.
174 """
175 if pred:
176 return true_value
177 else:
178 return false_value
179
180
181 def commify(number, lang=None):
182 """localized version of web.commify"""
183 try:
184 lang = lang or web.ctx.get("lang") or "en"
185 return babel.numbers.format_number(int(number), lang)
186 except:
187 return six.text_type(number)
188
189
190 def truncate(text, limit):
191 """Truncate text and add ellipses if it longer than specified limit."""
192 if not text:
193 return ''
194 if len(text) <= limit:
195 return text
196 return text[:limit] + "..."
197
198
199 def urlsafe(path):
200 """Replaces the unsafe chars from path with underscores.
201 """
202 return _get_safepath_re().sub('_', path).strip('_')[:100]
203
204 @web.memoize
205 def _get_safepath_re():
206 """Make regular expression that matches all unsafe chars."""
207 # unsafe chars according to RFC 2396
208 reserved = ";/?:@&=+$,"
209 delims = '<>#%"'
210 unwise = "{}|\\^[]`"
211 space = ' \n\r'
212
213 unsafe = reserved + delims + unwise + space
214 pattern = '[%s]+' % "".join(re.escape(c) for c in unsafe)
215 return re.compile(pattern)
216
217
218 def get_coverstore_url():
219 """Returns the base url of coverstore by looking at the config."""
220 return config.get('coverstore_url', 'https://covers.openlibrary.org').rstrip('/')
221
222
223 _texsafe_map = {
224 '"': r'\textquotedbl{}',
225 '#': r'\#',
226 '$': r'\$',
227 '%': r'\%',
228 '&': r'\&',
229 '<': r'\textless{}',
230 '>': r'\textgreater{}',
231 '\\': r'\textbackslash{}',
232 '^': r'\^{}',
233 '_': r'\_{}',
234 '{': r'\{',
235 '}': r'\}',
236 '|': r'\textbar{}',
237 '~': r'\~{}',
238 }
239
240 _texsafe_re = None
241
242 def texsafe(text):
243 """Escapes the special characters in the given text for using it in tex type setting.
244
245 Tex (or Latex) uses some characters in the ascii character range for
246 special notations. These characters must be escaped when occur in the
247 regular text. This function escapes those special characters.
248
249 The list of special characters and the latex command to typeset them can
250 be found in `The Comprehensive LaTeX Symbol List`_.
251
252 .. _The Comprehensive LaTeX Symbol List: http://www.ctan.org/tex-archive/info/symbols/comprehensive/symbols-a4.pdf
253 """
254 global _texsafe_re
255 if _texsafe_re is None:
256 pattern = "[%s]" % re.escape("".join(list(_texsafe_map)))
257 _texsafe_re = re.compile(pattern)
258
259 return _texsafe_re.sub(lambda m: _texsafe_map[m.group(0)], text)
260
261 def percentage(value, total):
262 """Computes percentage.
263
264 >>> percentage(1, 10)
265 10.0
266 >>> percentage(0, 0)
267 0.0
268 """
269 return (value * 100.0) / total if total else 0.0
270
271 def uniq(values, key=None):
272 """Returns the unique entries from the given values in the original order.
273
274 The value of the optional `key` parameter should be a function that takes
275 a single argument and returns a key to test the uniqueness.
276 """
277 key = key or (lambda x: x)
278 s = set()
279 result = []
280 for v in values:
281 k = key(v)
282 if k not in s:
283 s.add(k)
284 result.append(v)
285 return result
286
287 def affiliate_id(affiliate):
288 return config.get('affiliate_ids', {}).get(affiliate, '')
289
290 def bookreader_host():
291 return config.get('bookreader_host', '')
292
293 def private_collections():
294 """Collections which are lendable but should not be linked from OL
295 TODO: Remove when we can handle institutional books"""
296 return ['georgetown-university-law-library-rr']
297
298 def private_collection_in(collections):
299 return any(x in private_collections() for x in collections)
300
301 def _get_helpers():
302 _globals = globals()
303 return web.storage((k, _globals[k]) for k in __all__)
304
305
306 ## This must be at the end of this module
307 helpers = _get_helpers()
308
[end of openlibrary/core/helpers.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/openlibrary/core/helpers.py b/openlibrary/core/helpers.py
--- a/openlibrary/core/helpers.py
+++ b/openlibrary/core/helpers.py
@@ -101,7 +101,7 @@
def json_encode(d, **kw):
"""Same as json.dumps.
"""
- return json.dumps(d, **kw)
+ return json.dumps(d or {}, **kw)
def safesort(iterable, key=None, reverse=False):
|
{"golden_diff": "diff --git a/openlibrary/core/helpers.py b/openlibrary/core/helpers.py\n--- a/openlibrary/core/helpers.py\n+++ b/openlibrary/core/helpers.py\n@@ -101,7 +101,7 @@\n def json_encode(d, **kw):\n \"\"\"Same as json.dumps.\n \"\"\"\n- return json.dumps(d, **kw)\n+ return json.dumps(d or {}, **kw)\n \n \n def safesort(iterable, key=None, reverse=False):\n", "issue": "/openlibrary/openlibrary/templates/lists/widget.html: error in processing template: TypeError: Object of type Nothing is not JSON serializable (falling back to default template)\nPatron is reporting the following error:\r\n`/openlibrary/openlibrary/templates/lists/widget.html: error in processing template: TypeError: Object of type Nothing is not JSON serializable (falling back to default template)`\r\n\r\n### Evidence / Screenshot (if possible)\r\n\r\n### Relevant url?\r\nhttps://openlibrary.org/works/OL24171550W/Kelebihan_Amalan_Bulan_Rejab_Sya%E2%80%99ban_Ramadhan\r\n\r\n### Steps to Reproduce\r\n<!-- What steps caused you to find the bug? -->\r\n1. Go to ... https://openlibrary.org/works/OL24171550W/Kelebihan_Amalan_Bulan_Rejab_Sya%E2%80%99ban_Ramadhan\r\n2. Do ... view error.\r\n\r\n<!-- What actually happened after these steps? What did you expect to happen? -->\r\n* Actual: \r\n* Expected: \r\n\r\n### Details\r\n\r\n- **Logged in (Y/N)?** \r\n- **Browser type/version?** \r\n- **Operating system?** \r\n- **Environment (prod/dev/local)?** prod\r\n<!-- If not sure, put prod -->\r\n\r\n### Proposal & Constraints\r\n<!-- What is the proposed solution / implementation? Is there a precedent of this approach succeeding elsewhere? -->\r\n\r\n### Related files\r\n<!-- Files related to this issue; this is super useful for new contributors who might want to help! If you're not sure, leave this blank; a maintainer will add them. -->\r\n\r\n### Stakeholders\r\n<!-- @ tag stakeholders of this bug -->\r\n\n", "before_files": [{"content": "\"\"\"Generic helper functions to use in the templates and the webapp.\n\"\"\"\nimport web\nfrom datetime import datetime\nimport re\n\nimport six\nfrom six.moves.urllib.parse import urlsplit\n\nif six.PY2: # See #4525 json.dump(indent) MUST be an int on PY2\n import simplejson as json\nelse:\n import json\n\nimport babel\nimport babel.core\nimport babel.dates\nimport babel.numbers\n\ntry:\n import genshi\n import genshi.filters\nexcept ImportError:\n genshi = None\n\ntry:\n from bs4 import BeautifulSoup\nexcept ImportError:\n BeautifulSoup = None\n\nfrom infogami import config\n\n# handy utility to parse ISO date strings\nfrom infogami.infobase.utils import parse_datetime\nfrom infogami.utils.view import safeint\n\n# TODO: i18n should be moved to core or infogami\nfrom openlibrary.i18n import gettext as _ # noqa: F401\n\n__all__ = [\n \"sanitize\",\n \"json_encode\",\n \"safesort\",\n \"days_since\", \"datestr\", \"format_date\",\n \"sprintf\", \"cond\", \"commify\", \"truncate\", \"datetimestr_utc\",\n \"urlsafe\", \"texsafe\",\n \"percentage\", \"affiliate_id\", \"bookreader_host\",\n \"private_collections\", \"private_collection_in\",\n\n # functions imported from elsewhere\n \"parse_datetime\", \"safeint\"\n]\n__docformat__ = \"restructuredtext en\"\n\ndef sanitize(html, encoding='utf8'):\n \"\"\"Removes unsafe tags and attributes from html and adds\n ``rel=\"nofollow\"`` attribute to all external links.\n Using encoding=None if passing unicode strings e.g. 
for Python 3.\n encoding=\"utf8\" matches default format for earlier versions of Genshi\n https://genshi.readthedocs.io/en/latest/upgrade/#upgrading-from-genshi-0-6-x-to-the-development-version\n \"\"\"\n\n # Can't sanitize unless genshi module is available\n if genshi is None:\n return html\n\n def get_nofollow(name, event):\n attrs = event[1][1]\n href = attrs.get('href', '')\n\n if href:\n # add rel=nofollow to all absolute links\n _, host, _, _, _ = urlsplit(href)\n if host:\n return 'nofollow'\n\n try:\n html = genshi.HTML(html, encoding=encoding)\n\n # except (genshi.ParseError, UnicodeDecodeError, UnicodeError) as e:\n # don't catch Unicode errors so we can tell if we're getting bytes\n except genshi.ParseError:\n if BeautifulSoup:\n # Bad html. Tidy it up using BeautifulSoup\n html = str(BeautifulSoup(html, \"lxml\"))\n try:\n html = genshi.HTML(html)\n except Exception:\n # Failed to sanitize.\n # We can't do any better than returning the original HTML, without sanitizing.\n return html\n else:\n raise\n\n stream = html \\\n | genshi.filters.HTMLSanitizer() \\\n | genshi.filters.Transformer(\"//a\").attr(\"rel\", get_nofollow)\n return stream.render()\n\n\ndef json_encode(d, **kw):\n \"\"\"Same as json.dumps.\n \"\"\"\n return json.dumps(d, **kw)\n\n\ndef safesort(iterable, key=None, reverse=False):\n \"\"\"Sorts heterogeneous of objects without raising errors.\n\n Sorting heterogeneous objects sometimes causes error. For example,\n datetime and Nones don't go well together. This function takes special\n care to make that work.\n \"\"\"\n key = key or (lambda x: x)\n def safekey(x):\n k = key(x)\n return (k.__class__.__name__, k)\n return sorted(iterable, key=safekey, reverse=reverse)\n\n\ndef days_since(then, now=None):\n delta = then - (now or datetime.now())\n return abs(delta.days)\n\n\ndef datestr(then, now=None, lang=None, relative=True):\n \"\"\"Internationalized version of web.datestr.\"\"\"\n lang = lang or web.ctx.get('lang') or \"en\"\n if relative:\n if now is None:\n now = datetime.now()\n delta = then - now\n if abs(delta.days) < 4: # Threshold from web.py\n return babel.dates.format_timedelta(delta,\n add_direction=True,\n locale=_get_babel_locale(lang))\n return format_date(then, lang=lang)\n\n\ndef datetimestr_utc(then):\n return then.strftime(\"%Y-%m-%dT%H:%M:%SZ\")\n\ndef format_date(date, lang=None):\n lang = lang or web.ctx.get('lang') or \"en\"\n locale = _get_babel_locale(lang)\n return babel.dates.format_date(date, format=\"long\", locale=locale)\n\ndef _get_babel_locale(lang):\n try:\n return babel.Locale(lang)\n except babel.core.UnknownLocaleError:\n return babel.Locale(\"en\")\n\n\ndef sprintf(s, *a, **kw):\n \"\"\"Handy utility for string replacements.\n\n >>> sprintf('hello %s', 'python')\n 'hello python'\n >>> sprintf('hello %(name)s', name='python')\n 'hello python'\n \"\"\"\n args = kw or a\n if args:\n return s % args\n else:\n return s\n\n\ndef cond(pred, true_value, false_value=\"\"):\n \"\"\"Lisp style cond function.\n\n Hanly to use instead of if-else expression.\n \"\"\"\n if pred:\n return true_value\n else:\n return false_value\n\n\ndef commify(number, lang=None):\n \"\"\"localized version of web.commify\"\"\"\n try:\n lang = lang or web.ctx.get(\"lang\") or \"en\"\n return babel.numbers.format_number(int(number), lang)\n except:\n return six.text_type(number)\n\n\ndef truncate(text, limit):\n \"\"\"Truncate text and add ellipses if it longer than specified limit.\"\"\"\n if not text:\n return ''\n if len(text) <= limit:\n return text\n 
return text[:limit] + \"...\"\n\n\ndef urlsafe(path):\n \"\"\"Replaces the unsafe chars from path with underscores.\n \"\"\"\n return _get_safepath_re().sub('_', path).strip('_')[:100]\n\[email protected]\ndef _get_safepath_re():\n \"\"\"Make regular expression that matches all unsafe chars.\"\"\"\n # unsafe chars according to RFC 2396\n reserved = \";/?:@&=+$,\"\n delims = '<>#%\"'\n unwise = \"{}|\\\\^[]`\"\n space = ' \\n\\r'\n\n unsafe = reserved + delims + unwise + space\n pattern = '[%s]+' % \"\".join(re.escape(c) for c in unsafe)\n return re.compile(pattern)\n\n\ndef get_coverstore_url():\n \"\"\"Returns the base url of coverstore by looking at the config.\"\"\"\n return config.get('coverstore_url', 'https://covers.openlibrary.org').rstrip('/')\n\n\n_texsafe_map = {\n '\"': r'\\textquotedbl{}',\n '#': r'\\#',\n '$': r'\\$',\n '%': r'\\%',\n '&': r'\\&',\n '<': r'\\textless{}',\n '>': r'\\textgreater{}',\n '\\\\': r'\\textbackslash{}',\n '^': r'\\^{}',\n '_': r'\\_{}',\n '{': r'\\{',\n '}': r'\\}',\n '|': r'\\textbar{}',\n '~': r'\\~{}',\n}\n\n_texsafe_re = None\n\ndef texsafe(text):\n \"\"\"Escapes the special characters in the given text for using it in tex type setting.\n\n Tex (or Latex) uses some characters in the ascii character range for\n special notations. These characters must be escaped when occur in the\n regular text. This function escapes those special characters.\n\n The list of special characters and the latex command to typeset them can\n be found in `The Comprehensive LaTeX Symbol List`_.\n\n .. _The Comprehensive LaTeX Symbol List: http://www.ctan.org/tex-archive/info/symbols/comprehensive/symbols-a4.pdf\n \"\"\"\n global _texsafe_re\n if _texsafe_re is None:\n pattern = \"[%s]\" % re.escape(\"\".join(list(_texsafe_map)))\n _texsafe_re = re.compile(pattern)\n\n return _texsafe_re.sub(lambda m: _texsafe_map[m.group(0)], text)\n\ndef percentage(value, total):\n \"\"\"Computes percentage.\n\n >>> percentage(1, 10)\n 10.0\n >>> percentage(0, 0)\n 0.0\n \"\"\"\n return (value * 100.0) / total if total else 0.0\n\ndef uniq(values, key=None):\n \"\"\"Returns the unique entries from the given values in the original order.\n\n The value of the optional `key` parameter should be a function that takes\n a single argument and returns a key to test the uniqueness.\n \"\"\"\n key = key or (lambda x: x)\n s = set()\n result = []\n for v in values:\n k = key(v)\n if k not in s:\n s.add(k)\n result.append(v)\n return result\n\ndef affiliate_id(affiliate):\n return config.get('affiliate_ids', {}).get(affiliate, '')\n\ndef bookreader_host():\n return config.get('bookreader_host', '')\n\ndef private_collections():\n \"\"\"Collections which are lendable but should not be linked from OL\n TODO: Remove when we can handle institutional books\"\"\"\n return ['georgetown-university-law-library-rr']\n\ndef private_collection_in(collections):\n return any(x in private_collections() for x in collections)\n\ndef _get_helpers():\n _globals = globals()\n return web.storage((k, _globals[k]) for k in __all__)\n\n\n## This must be at the end of this module\nhelpers = _get_helpers()\n", "path": "openlibrary/core/helpers.py"}]}
| 3,915 | 98 |
gh_patches_debug_23985
|
rasdani/github-patches
|
git_diff
|
pyinstaller__pyinstaller-6539
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
pyi_rth_pkgutil.py hook might not work under MacOS in onefile mode
## Description of the issue
I'm using `iter_modules` in our cross-platform OpenSource Project [Specter-Desktop](https://github.com/cryptoadvance/specter-desktop) ([spec-file](https://github.com/cryptoadvance/specter-desktop/blob/master/pyinstaller/specterd.spec)).
For that, the [pyi_rth_pkgutil.py-hook](https://github.com/pyinstaller/pyinstaller/blob/v4.8/PyInstaller/hooks/rthooks/pyi_rth_pkgutil.py) is quite useful and solved my issue on the linux-platform. However, i have an issue with the MacOS-build at runtime:
```
$ ./dist/specterd server --config=DevelopmentConfig
Initializing HWI...
[2022-01-24 11:49:07,827] INFO in server: Configuration: cryptoadvance.specter.config.DevelopmentConfig
[2022-01-24 11:49:07,828] INFO in specter_migrator: Initiated MigDataManager(/Users/thomasmuller/.specter_dev/migration_data.json events:3 execs:2 )
Traceback (most recent call last):
File "specterd.py", line 4, in <module>
entry_point()
File "click/core.py", line 829, in __call__
File "click/core.py", line 782, in main
File "click/core.py", line 1259, in invoke
File "click/core.py", line 1066, in invoke
File "click/core.py", line 610, in invoke
File "cryptoadvance/specter/cli/cli_server.py", line 114, in server
File "cryptoadvance/specter/server.py", line 104, in init_app
File "cryptoadvance/specter/util/specter_migrator.py", line 125, in execute_migrations
File "cryptoadvance/specter/util/specter_migrator.py", line 107, in plan_migration
File "cryptoadvance/specter/util/specter_migrator.py", line 161, in get_migration_classes
File "PyInstaller/hooks/rthooks/pyi_rth_pkgutil.py", line 59, in _pyi_pkgutil_iter_modules
AssertionError
[20771] Failed to execute script 'specterd' due to unhandled exception!
(.env) thomasmuller @ Thomass-Air.fritz.box ➜ pyinstaller git:(service_swan) ✗
```
So the relevant code is this:
```
SYS_PREFIX = sys._MEIPASS + os.path.sep
SYS_PREFIXLEN = len(SYS_PREFIX)
# Only single path is supported, and it must start with sys._MEIPASS.
pkg_path = os.path.normpath(path[0])
assert pkg_path.startswith(SYS_PREFIX)
```
So i inserted some debugging code and figured out that:
```
SYS_PREFIX: /var/folders/bd/k__jfmz112sb1_dyz4rxwfzh0000gn/T/
pkg_path: /private/var/folders/bd/k__jfmz112sb1_dyz4rxwfzh0000gn/T/_MEIBqFxGW/cryptoadvance/specter/util/migrations
```
So in combination with the comment above, it seems that MacOS is using "multiple path" and so MacOS is not supported here?
On the other hand, this [link](https://difyel.com/apple/macos/macos-directory-structure/#Other_directories) is claiming that `private/var` is only a symlink to `/var`. So I added this and it seems to have solved it:
```
if pkg_path.startswith("/private"):
    pkg_path = pkg_path[8:]
assert pkg_path.startswith(SYS_PREFIX)
```
Is this a valid fix? I might make a PR if someone can give me some guidance on this.
### Context information (for bug reports)
* Output of `pyinstaller --version`: ```4.8```
* Version of Python: <!-- e.g. 3.9.7 -->
* Platform: OS X
* How you installed Python: brew
* Did you also try this on another platform? Does it work there? --> Yes, works on Linux
</issue>
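A small sketch of the symlink pitfall, using the system temp directory as a stand-in for `sys._MEIPASS` (the contrast only shows up on macOS, where `/var` is a symlink to `/private/var`):

```python
import os
import tempfile

meipass = tempfile.gettempdir()  # stand-in for sys._MEIPASS
pkg_path = os.path.realpath(os.path.join(meipass, "mypkg"))

# A plain prefix check breaks once the search path has been fully resolved ...
print(pkg_path.startswith(meipass + os.path.sep))
# ... while resolving both sides with os.path.realpath() keeps the check stable.
print(pkg_path.startswith(os.path.realpath(meipass) + os.path.sep))
```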
<code>
[start of PyInstaller/hooks/rthooks/pyi_rth_pkgutil.py]
1 #-----------------------------------------------------------------------------
2 # Copyright (c) 2021, PyInstaller Development Team.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 #
7 # The full license is in the file COPYING.txt, distributed with this software.
8 #
9 # SPDX-License-Identifier: Apache-2.0
10 #-----------------------------------------------------------------------------
11 #
12 # This rthook overrides pkgutil.iter_modules with custom implementation that uses PyInstaller's FrozenImporter to list
13 # sub-modules embedded in the PYZ archive. The non-embedded modules (binary extensions, or .pyc modules in noarchive
14 # build) are handled by original pkgutil iter_modules implementation (and consequently, python's FileFinder).
15 #
16 # The preferred way of adding support for iter_modules would be adding non-standard iter_modules() method to
17 # FrozenImporter itself. However, that seems to work only for path entry finders (for use with sys.path_hooks), while
18 # PyInstaller's FrozenImporter is registered as meta path finders (for use with sys.meta_path). Turning FrozenImporter
19 # into path entry finder, would seemingly require the latter to support on-filesystem resources (e.g., extension
20 # modules) in addition to PYZ-embedded ones.
21 #
22 # Therefore, we instead opt for overriding pkgutil.iter_modules with custom implementation that augments the output of
23 # original implementation with contents of PYZ archive from FrozenImporter's TOC.
24
25 import os
26 import pkgutil
27 import sys
28
29 from pyimod03_importers import FrozenImporter
30
31 _orig_pkgutil_iter_modules = pkgutil.iter_modules
32
33
34 def _pyi_pkgutil_iter_modules(path=None, prefix=''):
35 # Use original implementation to discover on-filesystem modules (binary extensions in regular builds, or both binary
36 # extensions and compiled pyc modules in noarchive debug builds).
37 yield from _orig_pkgutil_iter_modules(path, prefix)
38
39 # Find the instance of PyInstaller's FrozenImporter.
40 for importer in pkgutil.iter_importers():
41 if isinstance(importer, FrozenImporter):
42 break
43 else:
44 return
45
46 if path is None:
47 # Search for all top-level packages/modules. These will have no dots in their entry names.
48 for entry in importer.toc:
49 if entry.count('.') != 0:
50 continue
51 is_pkg = importer.is_package(entry)
52 yield pkgutil.ModuleInfo(importer, prefix + entry, is_pkg)
53 else:
54 # Declare SYS_PREFIX locally, to avoid clash with eponymous global symbol from pyi_rth_pkgutil hook.
55 SYS_PREFIX = sys._MEIPASS + os.path.sep
56 SYS_PREFIXLEN = len(SYS_PREFIX)
57
58 for pkg_path in path:
59 pkg_path = os.path.normpath(pkg_path)
60 if not pkg_path.startswith(SYS_PREFIX):
61 # if the path does not start with sys._MEIPASS then it cannot be a bundled package.
62 continue
63 # Construct package prefix from path...
64 pkg_prefix = pkg_path[SYS_PREFIXLEN:]
65 pkg_prefix = pkg_prefix.replace(os.path.sep, '.')
66 # ... and ensure it ends with a dot (so we can directly filter out the package itself).
67 if not pkg_prefix.endswith('.'):
68 pkg_prefix += '.'
69 pkg_prefix_len = len(pkg_prefix)
70
71 for entry in importer.toc:
72 if not entry.startswith(pkg_prefix):
73 continue
74 name = entry[pkg_prefix_len:]
75 if name.count('.') != 0:
76 continue
77 is_pkg = importer.is_package(entry)
78 yield pkgutil.ModuleInfo(importer, prefix + name, is_pkg)
79
80
81 pkgutil.iter_modules = _pyi_pkgutil_iter_modules
82
[end of PyInstaller/hooks/rthooks/pyi_rth_pkgutil.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/PyInstaller/hooks/rthooks/pyi_rth_pkgutil.py b/PyInstaller/hooks/rthooks/pyi_rth_pkgutil.py
--- a/PyInstaller/hooks/rthooks/pyi_rth_pkgutil.py
+++ b/PyInstaller/hooks/rthooks/pyi_rth_pkgutil.py
@@ -52,11 +52,16 @@
yield pkgutil.ModuleInfo(importer, prefix + entry, is_pkg)
else:
# Declare SYS_PREFIX locally, to avoid clash with eponymous global symbol from pyi_rth_pkgutil hook.
- SYS_PREFIX = sys._MEIPASS + os.path.sep
+ #
+ # Use os.path.realpath() to fully resolve any symbolic links in sys._MEIPASS, in order to avoid path mis-matches
+ # when the given search paths also contain symbolic links and are already fully resolved. See #6537 for an
+ # example of such a problem with onefile build on macOS, where the temporary directory is placed under /var,
+ # which is actually a symbolic link to /private/var.
+ SYS_PREFIX = os.path.realpath(sys._MEIPASS) + os.path.sep
SYS_PREFIXLEN = len(SYS_PREFIX)
for pkg_path in path:
- pkg_path = os.path.normpath(pkg_path)
+ pkg_path = os.path.realpath(pkg_path) # Fully resolve the given path, in case it contains symbolic links.
if not pkg_path.startswith(SYS_PREFIX):
# if the path does not start with sys._MEIPASS then it cannot be a bundled package.
continue
|
{"golden_diff": "diff --git a/PyInstaller/hooks/rthooks/pyi_rth_pkgutil.py b/PyInstaller/hooks/rthooks/pyi_rth_pkgutil.py\n--- a/PyInstaller/hooks/rthooks/pyi_rth_pkgutil.py\n+++ b/PyInstaller/hooks/rthooks/pyi_rth_pkgutil.py\n@@ -52,11 +52,16 @@\n yield pkgutil.ModuleInfo(importer, prefix + entry, is_pkg)\n else:\n # Declare SYS_PREFIX locally, to avoid clash with eponymous global symbol from pyi_rth_pkgutil hook.\n- SYS_PREFIX = sys._MEIPASS + os.path.sep\n+ #\n+ # Use os.path.realpath() to fully resolve any symbolic links in sys._MEIPASS, in order to avoid path mis-matches\n+ # when the given search paths also contain symbolic links and are already fully resolved. See #6537 for an\n+ # example of such a problem with onefile build on macOS, where the temporary directory is placed under /var,\n+ # which is actually a symbolic link to /private/var.\n+ SYS_PREFIX = os.path.realpath(sys._MEIPASS) + os.path.sep\n SYS_PREFIXLEN = len(SYS_PREFIX)\n \n for pkg_path in path:\n- pkg_path = os.path.normpath(pkg_path)\n+ pkg_path = os.path.realpath(pkg_path) # Fully resolve the given path, in case it contains symbolic links.\n if not pkg_path.startswith(SYS_PREFIX):\n # if the path does not start with sys._MEIPASS then it cannot be a bundled package.\n continue\n", "issue": "pyi_rth_pkgutil.py hook might not work under MacOS in onefile mode\n## Description of the issue\r\nI'm using `iter_modules` in our cross-platform OpenSource Project [Specter-Desktop](https://github.com/cryptoadvance/specter-desktop) ([spec-file](https://github.com/cryptoadvance/specter-desktop/blob/master/pyinstaller/specterd.spec)).\r\n\r\nFor that, the [pyi_rth_pkgutil.py-hook](https://github.com/pyinstaller/pyinstaller/blob/v4.8/PyInstaller/hooks/rthooks/pyi_rth_pkgutil.py) is quite useful and solved my issue on the linux-platform. 
However, i have an issue with the MacOS-build at runtime:\r\n```\r\n$ ./dist/specterd server --config=DevelopmentConfig\r\nInitializing HWI...\r\n[2022-01-24 11:49:07,827] INFO in server: Configuration: cryptoadvance.specter.config.DevelopmentConfig\r\n[2022-01-24 11:49:07,828] INFO in specter_migrator: Initiated MigDataManager(/Users/thomasmuller/.specter_dev/migration_data.json events:3 execs:2 )\r\nTraceback (most recent call last):\r\n File \"specterd.py\", line 4, in <module>\r\n entry_point()\r\n File \"click/core.py\", line 829, in __call__\r\n File \"click/core.py\", line 782, in main\r\n File \"click/core.py\", line 1259, in invoke\r\n File \"click/core.py\", line 1066, in invoke\r\n File \"click/core.py\", line 610, in invoke\r\n File \"cryptoadvance/specter/cli/cli_server.py\", line 114, in server\r\n File \"cryptoadvance/specter/server.py\", line 104, in init_app\r\n File \"cryptoadvance/specter/util/specter_migrator.py\", line 125, in execute_migrations\r\n File \"cryptoadvance/specter/util/specter_migrator.py\", line 107, in plan_migration\r\n File \"cryptoadvance/specter/util/specter_migrator.py\", line 161, in get_migration_classes\r\n File \"PyInstaller/hooks/rthooks/pyi_rth_pkgutil.py\", line 59, in _pyi_pkgutil_iter_modules\r\nAssertionError\r\n[20771] Failed to execute script 'specterd' due to unhandled exception!\r\n(.env) thomasmuller @ Thomass-Air.fritz.box \u279c pyinstaller git:(service_swan) \u2717\r\n```\r\n\r\nSo the relevant code is this:\r\n```\r\n SYS_PREFIX = sys._MEIPASS + os.path.sep\r\n SYS_PREFIXLEN = len(SYS_PREFIX)\r\n # Only single path is supported, and it must start with sys._MEIPASS.\r\n pkg_path = os.path.normpath(path[0])\r\n assert pkg_path.startswith(SYS_PREFIX)\r\n```\r\nSo i inserted some debugging code and figured out that:\r\n\r\n```\r\nSYS_PREFIX: /var/folders/bd/k__jfmz112sb1_dyz4rxwfzh0000gn/T/\r\npkg_path: /private/var/folders/bd/k__jfmz112sb1_dyz4rxwfzh0000gn/T/_MEIBqFxGW/cryptoadvance/specter/util/migrations\r\n```\r\nSo in combination with the comment above, it seems that MacOS is using \"multiple path\" and so MacOS is not supported here?\r\nOn the other hand, this [link](https://difyel.com/apple/macos/macos-directory-structure/#Other_directories) is claiming that `private/var` is only a symlink to `/var`. So i added this and this seem to have solved it?\r\n```\r\n if pkg_path.startswith(\"/private\"):\r\n pkg_path = pkg_path[8:]\r\n assert pkg_path.startswith(SYS_PREFIX)\r\n```\r\nIs this a valid fix? I might make a PR if someone can give me some guidance on this.\r\n\r\n### Context information (for bug reports)\r\n\r\n* Output of `pyinstaller --version`: ```4.8```\r\n* Version of Python: <!-- e.g. 3.9.7 -->\r\n* Platform: OS X\r\n* How you installed Python: brew\r\n* Did you also try this on another platform? Does it work there? --> Yes, works on Linux\r\n\r\n\n", "before_files": [{"content": "#-----------------------------------------------------------------------------\n# Copyright (c) 2021, PyInstaller Development Team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n#\n# The full license is in the file COPYING.txt, distributed with this software.\n#\n# SPDX-License-Identifier: Apache-2.0\n#-----------------------------------------------------------------------------\n#\n# This rthook overrides pkgutil.iter_modules with custom implementation that uses PyInstaller's FrozenImporter to list\n# sub-modules embedded in the PYZ archive. 
The non-embedded modules (binary extensions, or .pyc modules in noarchive\n# build) are handled by original pkgutil iter_modules implementation (and consequently, python's FileFinder).\n#\n# The preferred way of adding support for iter_modules would be adding non-standard iter_modules() method to\n# FrozenImporter itself. However, that seems to work only for path entry finders (for use with sys.path_hooks), while\n# PyInstaller's FrozenImporter is registered as meta path finders (for use with sys.meta_path). Turning FrozenImporter\n# into path entry finder, would seemingly require the latter to support on-filesystem resources (e.g., extension\n# modules) in addition to PYZ-embedded ones.\n#\n# Therefore, we instead opt for overriding pkgutil.iter_modules with custom implementation that augments the output of\n# original implementation with contents of PYZ archive from FrozenImporter's TOC.\n\nimport os\nimport pkgutil\nimport sys\n\nfrom pyimod03_importers import FrozenImporter\n\n_orig_pkgutil_iter_modules = pkgutil.iter_modules\n\n\ndef _pyi_pkgutil_iter_modules(path=None, prefix=''):\n # Use original implementation to discover on-filesystem modules (binary extensions in regular builds, or both binary\n # extensions and compiled pyc modules in noarchive debug builds).\n yield from _orig_pkgutil_iter_modules(path, prefix)\n\n # Find the instance of PyInstaller's FrozenImporter.\n for importer in pkgutil.iter_importers():\n if isinstance(importer, FrozenImporter):\n break\n else:\n return\n\n if path is None:\n # Search for all top-level packages/modules. These will have no dots in their entry names.\n for entry in importer.toc:\n if entry.count('.') != 0:\n continue\n is_pkg = importer.is_package(entry)\n yield pkgutil.ModuleInfo(importer, prefix + entry, is_pkg)\n else:\n # Declare SYS_PREFIX locally, to avoid clash with eponymous global symbol from pyi_rth_pkgutil hook.\n SYS_PREFIX = sys._MEIPASS + os.path.sep\n SYS_PREFIXLEN = len(SYS_PREFIX)\n\n for pkg_path in path:\n pkg_path = os.path.normpath(pkg_path)\n if not pkg_path.startswith(SYS_PREFIX):\n # if the path does not start with sys._MEIPASS then it cannot be a bundled package.\n continue\n # Construct package prefix from path...\n pkg_prefix = pkg_path[SYS_PREFIXLEN:]\n pkg_prefix = pkg_prefix.replace(os.path.sep, '.')\n # ... and ensure it ends with a dot (so we can directly filter out the package itself).\n if not pkg_prefix.endswith('.'):\n pkg_prefix += '.'\n pkg_prefix_len = len(pkg_prefix)\n\n for entry in importer.toc:\n if not entry.startswith(pkg_prefix):\n continue\n name = entry[pkg_prefix_len:]\n if name.count('.') != 0:\n continue\n is_pkg = importer.is_package(entry)\n yield pkgutil.ModuleInfo(importer, prefix + name, is_pkg)\n\n\npkgutil.iter_modules = _pyi_pkgutil_iter_modules\n", "path": "PyInstaller/hooks/rthooks/pyi_rth_pkgutil.py"}]}
| 2,475 | 349 |
gh_patches_debug_26224
|
rasdani/github-patches
|
git_diff
|
mirumee__ariadne-24
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`add_resolve_functions_to_schema` should support Scalars parse_value and parse_literal
Currently Ariadne Scalar support is limited to serializing Python types to JSON before returning them to the client, but we also want to support using custom scalars for input.
Our `add_resolve_functions_to_scalar` utility could support the following use-cases:
Code below results in one-way only scalar:
- `type_defs = {'Scalar': {'serialize': callable}}`
And this code results in two-way scalar:
- `type_defs = {'Scalar': {'serialize': callable, 'parse_value': callable, 'parse_literal': callable}}` - explicit syntax for two-directional scalar.
</issue>
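A hypothetical resolvers map for a two-directional `Date` scalar, illustrating the second shape (the scalar name and helper functions are examples, not part of Ariadne's API):

```python
from datetime import date


def serialize_date(value):
    return value.isoformat()


def parse_date_value(value):
    return date.fromisoformat(value)


def parse_date_literal(ast_node):
    # For a string literal, graphql-core exposes the raw text as ``ast_node.value``.
    return date.fromisoformat(ast_node.value)


resolvers = {
    "Date": {
        "serialize": serialize_date,
        "parse_value": parse_date_value,
        "parse_literal": parse_date_literal,
    }
}
```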
<code>
[start of ariadne/resolvers.py]
1 from graphql import GraphQLObjectType, GraphQLScalarType, GraphQLSchema
2 from graphql.execution.base import ResolveInfo
3
4
5 def resolve_parent_field(parent, name: str):
6 if isinstance(parent, dict):
7 return parent.get(name)
8 return getattr(parent, name, None)
9
10
11 def default_resolver(parent, info: ResolveInfo):
12 return resolve_parent_field(parent, info.field_name)
13
14
15 def resolve_to(name: str):
16 def resolver(parent, *_):
17 return resolve_parent_field(parent, name)
18
19 return resolver
20
21
22 def add_resolve_functions_to_schema(schema: GraphQLSchema, resolvers: dict):
23 for type_name, type_object in schema.get_type_map().items():
24 if isinstance(type_object, GraphQLObjectType):
25 add_resolve_functions_to_object(type_name, type_object, resolvers)
26 if isinstance(type_object, GraphQLScalarType):
27 add_resolve_function_to_scalar(type_name, type_object, resolvers)
28
29
30 def add_resolve_functions_to_object(name: str, obj: GraphQLObjectType, resolvers: dict):
31 type_resolver = resolvers.get(name, {})
32 for field_name, field_object in obj.fields.items():
33 field_resolver = type_resolver.get(field_name, default_resolver)
34 field_object.resolver = field_resolver
35
36
37 def add_resolve_function_to_scalar(name: str, obj: GraphQLObjectType, resolvers: dict):
38 serializer = resolvers.get(name, obj.serialize)
39 obj.serialize = serializer
40
[end of ariadne/resolvers.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/ariadne/resolvers.py b/ariadne/resolvers.py
--- a/ariadne/resolvers.py
+++ b/ariadne/resolvers.py
@@ -24,16 +24,24 @@
if isinstance(type_object, GraphQLObjectType):
add_resolve_functions_to_object(type_name, type_object, resolvers)
if isinstance(type_object, GraphQLScalarType):
- add_resolve_function_to_scalar(type_name, type_object, resolvers)
+ add_resolve_functions_to_scalar(type_name, type_object, resolvers)
def add_resolve_functions_to_object(name: str, obj: GraphQLObjectType, resolvers: dict):
- type_resolver = resolvers.get(name, {})
+ type_resolvers = resolvers.get(name, {})
for field_name, field_object in obj.fields.items():
- field_resolver = type_resolver.get(field_name, default_resolver)
+ field_resolver = type_resolvers.get(field_name, default_resolver)
field_object.resolver = field_resolver
-def add_resolve_function_to_scalar(name: str, obj: GraphQLObjectType, resolvers: dict):
- serializer = resolvers.get(name, obj.serialize)
- obj.serialize = serializer
+def add_resolve_functions_to_scalar(name: str, obj: GraphQLObjectType, resolvers: dict):
+ scalar_resolvers = resolvers.get(name, {})
+
+ serialize = scalar_resolvers.get("serialize", obj.serialize)
+ obj.serialize = serialize
+
+ parse_literal = scalar_resolvers.get("parse_literal", obj.parse_literal)
+ obj.parse_literal = parse_literal
+
+ parse_value = scalar_resolvers.get("parse_value", obj.parse_value)
+ obj.parse_value = parse_value
|
{"golden_diff": "diff --git a/ariadne/resolvers.py b/ariadne/resolvers.py\n--- a/ariadne/resolvers.py\n+++ b/ariadne/resolvers.py\n@@ -24,16 +24,24 @@\n if isinstance(type_object, GraphQLObjectType):\n add_resolve_functions_to_object(type_name, type_object, resolvers)\n if isinstance(type_object, GraphQLScalarType):\n- add_resolve_function_to_scalar(type_name, type_object, resolvers)\n+ add_resolve_functions_to_scalar(type_name, type_object, resolvers)\n \n \n def add_resolve_functions_to_object(name: str, obj: GraphQLObjectType, resolvers: dict):\n- type_resolver = resolvers.get(name, {})\n+ type_resolvers = resolvers.get(name, {})\n for field_name, field_object in obj.fields.items():\n- field_resolver = type_resolver.get(field_name, default_resolver)\n+ field_resolver = type_resolvers.get(field_name, default_resolver)\n field_object.resolver = field_resolver\n \n \n-def add_resolve_function_to_scalar(name: str, obj: GraphQLObjectType, resolvers: dict):\n- serializer = resolvers.get(name, obj.serialize)\n- obj.serialize = serializer\n+def add_resolve_functions_to_scalar(name: str, obj: GraphQLObjectType, resolvers: dict):\n+ scalar_resolvers = resolvers.get(name, {})\n+\n+ serialize = scalar_resolvers.get(\"serialize\", obj.serialize)\n+ obj.serialize = serialize\n+\n+ parse_literal = scalar_resolvers.get(\"parse_literal\", obj.parse_literal)\n+ obj.parse_literal = parse_literal\n+\n+ parse_value = scalar_resolvers.get(\"parse_value\", obj.parse_value)\n+ obj.parse_value = parse_value\n", "issue": "`add_resolve_functions_to_schema` should support Scalars parse_value and parse_literal\nCurrently Ariadne Scalar support is limited to serializing python types to JSON before returning them to client, but we also want to support using custom scalars for input.\r\n\r\nOur `add_resolve_functions_to_scalar` utility could support following use-cases:\r\n\r\nCode below results in one-way only scalar:\r\n\r\n- `type_defs = {'Scalar': {'serialize': callable}}`\r\n\r\nAnd this code results in two-way scalar:\r\n\r\n- `type_defs = {'Scalar': {'serialize': callable, 'parse_value': callable, 'parse_literal': callable}}` - explicit syntax for two-directional scalar.\r\n\n", "before_files": [{"content": "from graphql import GraphQLObjectType, GraphQLScalarType, GraphQLSchema\nfrom graphql.execution.base import ResolveInfo\n\n\ndef resolve_parent_field(parent, name: str):\n if isinstance(parent, dict):\n return parent.get(name)\n return getattr(parent, name, None)\n\n\ndef default_resolver(parent, info: ResolveInfo):\n return resolve_parent_field(parent, info.field_name)\n\n\ndef resolve_to(name: str):\n def resolver(parent, *_):\n return resolve_parent_field(parent, name)\n\n return resolver\n\n\ndef add_resolve_functions_to_schema(schema: GraphQLSchema, resolvers: dict):\n for type_name, type_object in schema.get_type_map().items():\n if isinstance(type_object, GraphQLObjectType):\n add_resolve_functions_to_object(type_name, type_object, resolvers)\n if isinstance(type_object, GraphQLScalarType):\n add_resolve_function_to_scalar(type_name, type_object, resolvers)\n\n\ndef add_resolve_functions_to_object(name: str, obj: GraphQLObjectType, resolvers: dict):\n type_resolver = resolvers.get(name, {})\n for field_name, field_object in obj.fields.items():\n field_resolver = type_resolver.get(field_name, default_resolver)\n field_object.resolver = field_resolver\n\n\ndef add_resolve_function_to_scalar(name: str, obj: GraphQLObjectType, resolvers: dict):\n serializer = resolvers.get(name, obj.serialize)\n 
obj.serialize = serializer\n", "path": "ariadne/resolvers.py"}]}
| 1,039 | 363 |
gh_patches_debug_25909
|
rasdani/github-patches
|
git_diff
|
pypi__warehouse-13144
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
OIDC: Create an audience route
Creating this as a reminder for myself.
The OIDC minting routes should include an `/_/oidc/audience` endpoint that just returns the OIDC audience expected by any particular Warehouse instance.
xref #12465
</issue>
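One possible shape for such a view, sketched in the same decorator style as the minting view below; the route name and settings key are assumptions, not Warehouse's actual implementation:

```python
from pyramid.view import view_config


@view_config(
    route_name="oidc.audience",  # hypothetical route name
    require_methods=["GET"],
    renderer="json",
    require_csrf=False,
    has_translations=False,
)
def oidc_audience(request):
    # Hypothetical settings key holding the audience this instance expects.
    return {"audience": request.registry.settings["warehouse.oidc.audience"]}
```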
<code>
[start of warehouse/oidc/views.py]
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
10 # See the License for the specific language governing permissions and
11 # limitations under the License.
12
13 import time
14
15 from pydantic import BaseModel, StrictStr, ValidationError
16 from pyramid.view import view_config
17 from sqlalchemy import func
18
19 from warehouse.admin.flags import AdminFlagValue
20 from warehouse.email import send_pending_oidc_publisher_invalidated_email
21 from warehouse.events.tags import EventTag
22 from warehouse.macaroons import caveats
23 from warehouse.macaroons.interfaces import IMacaroonService
24 from warehouse.oidc.interfaces import IOIDCPublisherService
25 from warehouse.oidc.models import PendingOIDCPublisher
26 from warehouse.packaging.interfaces import IProjectService
27 from warehouse.packaging.models import ProjectFactory
28 from warehouse.rate_limiting.interfaces import IRateLimiter
29
30
31 class TokenPayload(BaseModel):
32 token: StrictStr
33
34
35 def _ratelimiters(request):
36 return {
37 "user.oidc": request.find_service(
38 IRateLimiter, name="user_oidc.publisher.register"
39 ),
40 "ip.oidc": request.find_service(
41 IRateLimiter, name="ip_oidc.publisher.register"
42 ),
43 }
44
45
46 @view_config(
47 route_name="oidc.mint_token",
48 require_methods=["POST"],
49 renderer="json",
50 require_csrf=False,
51 has_translations=False,
52 )
53 def mint_token_from_oidc(request):
54 def _invalid(errors):
55 request.response.status = 422
56 return {"message": "Token request failed", "errors": errors}
57
58 oidc_enabled = request.registry.settings[
59 "warehouse.oidc.enabled"
60 ] and not request.flags.enabled(AdminFlagValue.DISALLOW_OIDC)
61 if not oidc_enabled:
62 return _invalid(
63 errors=[
64 {
65 "code": "not-enabled",
66 "description": "OIDC functionality not enabled",
67 }
68 ]
69 )
70
71 try:
72 payload = TokenPayload.parse_raw(request.body)
73 unverified_jwt = payload.token
74 except ValidationError as exc:
75 return _invalid(errors=[{"code": "invalid-payload", "description": str(exc)}])
76
77 # For the time being, GitHub is our only OIDC publisher.
78 # In the future, this should locate the correct service based on an
79 # identifier in the request body.
80 oidc_service = request.find_service(IOIDCPublisherService, name="github")
81 claims = oidc_service.verify_jwt_signature(unverified_jwt)
82 if not claims:
83 return _invalid(
84 errors=[
85 {"code": "invalid-token", "description": "malformed or invalid token"}
86 ]
87 )
88
89 # First, try to find a pending publisher.
90 pending_publisher = oidc_service.find_publisher(claims, pending=True)
91 if pending_publisher is not None:
92 factory = ProjectFactory(request)
93
94 # If the project already exists, this pending publisher is no longer
95 # valid and needs to be removed.
96 # NOTE: This is mostly a sanity check, since we dispose of invalidated
97 # pending publishers below.
98 if pending_publisher.project_name in factory:
99 request.db.delete(pending_publisher)
100 return _invalid(
101 errors=[
102 {
103 "code": "invalid-pending-publisher",
104 "description": "valid token, but project already exists",
105 }
106 ]
107 )
108
109 # Create the new project, and reify the pending publisher against it.
110 project_service = request.find_service(IProjectService)
111 new_project = project_service.create_project(
112 pending_publisher.project_name,
113 pending_publisher.added_by,
114 ratelimited=False,
115 )
116 oidc_service.reify_pending_publisher(pending_publisher, new_project)
117
118 # Successfully converting a pending publisher into a normal publisher
119 # is a positive signal, so we reset the associated ratelimits.
120 ratelimiters = _ratelimiters(request)
121 ratelimiters["user.oidc"].clear(pending_publisher.added_by.id)
122 ratelimiters["ip.oidc"].clear(request.remote_addr)
123
124 # There might be other pending publishers for the same project name,
125 # which we've now invalidated by creating the project. These would
126 # be disposed of on use, but we explicitly dispose of them here while
127 # also sending emails to their owners.
128 stale_pending_publishers = (
129 request.db.query(PendingOIDCPublisher)
130 .filter(
131 func.normalize_pep426_name(PendingOIDCPublisher.project_name)
132 == func.normalize_pep426_name(pending_publisher.project_name)
133 )
134 .all()
135 )
136 for stale_publisher in stale_pending_publishers:
137 send_pending_oidc_publisher_invalidated_email(
138 request,
139 stale_publisher.added_by,
140 project_name=stale_publisher.project_name,
141 )
142 request.db.delete(stale_publisher)
143
144 # We either don't have a pending OIDC publisher, or we *did*
145 # have one and we've just converted it. Either way, look for a full publisher
146 # to actually do the macaroon minting with.
147 publisher = oidc_service.find_publisher(claims, pending=False)
148 if not publisher:
149 return _invalid(
150 errors=[
151 {
152 "code": "invalid-publisher",
153 "description": "valid token, but no corresponding publisher",
154 }
155 ]
156 )
157
158 # At this point, we've verified that the given JWT is valid for the given
159 # project. All we need to do is mint a new token.
160 # NOTE: For OIDC-minted API tokens, the Macaroon's description string
161 # is purely an implementation detail and is not displayed to the user.
162 macaroon_service = request.find_service(IMacaroonService, context=None)
163 not_before = int(time.time())
164 expires_at = not_before + 900
165 serialized, dm = macaroon_service.create_macaroon(
166 request.domain,
167 f"OpenID token: {publisher.publisher_url} ({not_before})",
168 [
169 caveats.OIDCPublisher(oidc_publisher_id=str(publisher.id)),
170 caveats.ProjectID(project_ids=[str(p.id) for p in publisher.projects]),
171 caveats.Expiration(expires_at=expires_at, not_before=not_before),
172 ],
173 oidc_publisher_id=publisher.id,
174 )
175 for project in publisher.projects:
176 project.record_event(
177 tag=EventTag.Project.ShortLivedAPITokenAdded,
178 ip_address=request.remote_addr,
179 additional={
180 "expires": expires_at,
181 "publisher_name": publisher.publisher_name,
182 "publisher_url": publisher.publisher_url,
183 },
184 )
185 return {"success": True, "token": serialized}
186
[end of warehouse/oidc/views.py]
[start of warehouse/oidc/__init__.py]
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
10 # See the License for the specific language governing permissions and
11 # limitations under the License.
12
13 from warehouse.oidc.interfaces import IOIDCPublisherService
14 from warehouse.oidc.services import OIDCPublisherServiceFactory
15 from warehouse.oidc.utils import GITHUB_OIDC_ISSUER_URL
16
17
18 def includeme(config):
19 oidc_publisher_service_class = config.maybe_dotted(
20 config.registry.settings["oidc.backend"]
21 )
22
23 config.register_service_factory(
24 OIDCPublisherServiceFactory(
25 publisher="github",
26 issuer_url=GITHUB_OIDC_ISSUER_URL,
27 service_class=oidc_publisher_service_class,
28 ),
29 IOIDCPublisherService,
30 name="github",
31 )
32
33 # During deployments, we separate auth routes into their own subdomain
34 # to simplify caching exclusion.
35 auth = config.get_settings().get("auth.domain")
36
37 config.add_route("oidc.mint_token", "/_/oidc/github/mint-token", domain=auth)
38
[end of warehouse/oidc/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/warehouse/oidc/__init__.py b/warehouse/oidc/__init__.py
--- a/warehouse/oidc/__init__.py
+++ b/warehouse/oidc/__init__.py
@@ -34,4 +34,5 @@
# to simplify caching exclusion.
auth = config.get_settings().get("auth.domain")
- config.add_route("oidc.mint_token", "/_/oidc/github/mint-token", domain=auth)
+ config.add_route("oidc.audience", "/_/oidc/audience", domain=auth)
+ config.add_route("oidc.github.mint_token", "/_/oidc/github/mint-token", domain=auth)
diff --git a/warehouse/oidc/views.py b/warehouse/oidc/views.py
--- a/warehouse/oidc/views.py
+++ b/warehouse/oidc/views.py
@@ -13,6 +13,7 @@
import time
from pydantic import BaseModel, StrictStr, ValidationError
+from pyramid.response import Response
from pyramid.view import view_config
from sqlalchemy import func
@@ -44,7 +45,26 @@
@view_config(
- route_name="oidc.mint_token",
+ route_name="oidc.audience",
+ require_methods=["GET"],
+ renderer="json",
+ require_csrf=False,
+ has_translations=False,
+)
+def oidc_audience(request):
+ oidc_enabled = request.registry.settings[
+ "warehouse.oidc.enabled"
+ ] and not request.flags.enabled(AdminFlagValue.DISALLOW_OIDC)
+
+ if not oidc_enabled:
+ return Response(status=403, json={"message": "OIDC functionality not enabled"})
+
+ audience = request.registry.settings["warehouse.oidc.audience"]
+ return {"audience": audience}
+
+
+@view_config(
+ route_name="oidc.github.mint_token",
require_methods=["POST"],
renderer="json",
require_csrf=False,
|
{"golden_diff": "diff --git a/warehouse/oidc/__init__.py b/warehouse/oidc/__init__.py\n--- a/warehouse/oidc/__init__.py\n+++ b/warehouse/oidc/__init__.py\n@@ -34,4 +34,5 @@\n # to simplify caching exclusion.\n auth = config.get_settings().get(\"auth.domain\")\n \n- config.add_route(\"oidc.mint_token\", \"/_/oidc/github/mint-token\", domain=auth)\n+ config.add_route(\"oidc.audience\", \"/_/oidc/audience\", domain=auth)\n+ config.add_route(\"oidc.github.mint_token\", \"/_/oidc/github/mint-token\", domain=auth)\ndiff --git a/warehouse/oidc/views.py b/warehouse/oidc/views.py\n--- a/warehouse/oidc/views.py\n+++ b/warehouse/oidc/views.py\n@@ -13,6 +13,7 @@\n import time\n \n from pydantic import BaseModel, StrictStr, ValidationError\n+from pyramid.response import Response\n from pyramid.view import view_config\n from sqlalchemy import func\n \n@@ -44,7 +45,26 @@\n \n \n @view_config(\n- route_name=\"oidc.mint_token\",\n+ route_name=\"oidc.audience\",\n+ require_methods=[\"GET\"],\n+ renderer=\"json\",\n+ require_csrf=False,\n+ has_translations=False,\n+)\n+def oidc_audience(request):\n+ oidc_enabled = request.registry.settings[\n+ \"warehouse.oidc.enabled\"\n+ ] and not request.flags.enabled(AdminFlagValue.DISALLOW_OIDC)\n+\n+ if not oidc_enabled:\n+ return Response(status=403, json={\"message\": \"OIDC functionality not enabled\"})\n+\n+ audience = request.registry.settings[\"warehouse.oidc.audience\"]\n+ return {\"audience\": audience}\n+\n+\n+@view_config(\n+ route_name=\"oidc.github.mint_token\",\n require_methods=[\"POST\"],\n renderer=\"json\",\n require_csrf=False,\n", "issue": "OIDC: Create an audience route\nCreating this as a reminder for myself.\r\n\r\nThe OIDC minting routes should include an `/_/oidc/audience` endpoint that just returns the OIDC audience expected by any particular Warehouse instance.\r\n\r\nxref #12465 \n", "before_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport time\n\nfrom pydantic import BaseModel, StrictStr, ValidationError\nfrom pyramid.view import view_config\nfrom sqlalchemy import func\n\nfrom warehouse.admin.flags import AdminFlagValue\nfrom warehouse.email import send_pending_oidc_publisher_invalidated_email\nfrom warehouse.events.tags import EventTag\nfrom warehouse.macaroons import caveats\nfrom warehouse.macaroons.interfaces import IMacaroonService\nfrom warehouse.oidc.interfaces import IOIDCPublisherService\nfrom warehouse.oidc.models import PendingOIDCPublisher\nfrom warehouse.packaging.interfaces import IProjectService\nfrom warehouse.packaging.models import ProjectFactory\nfrom warehouse.rate_limiting.interfaces import IRateLimiter\n\n\nclass TokenPayload(BaseModel):\n token: StrictStr\n\n\ndef _ratelimiters(request):\n return {\n \"user.oidc\": request.find_service(\n IRateLimiter, name=\"user_oidc.publisher.register\"\n ),\n \"ip.oidc\": request.find_service(\n IRateLimiter, name=\"ip_oidc.publisher.register\"\n ),\n }\n\n\n@view_config(\n route_name=\"oidc.mint_token\",\n require_methods=[\"POST\"],\n 
renderer=\"json\",\n require_csrf=False,\n has_translations=False,\n)\ndef mint_token_from_oidc(request):\n def _invalid(errors):\n request.response.status = 422\n return {\"message\": \"Token request failed\", \"errors\": errors}\n\n oidc_enabled = request.registry.settings[\n \"warehouse.oidc.enabled\"\n ] and not request.flags.enabled(AdminFlagValue.DISALLOW_OIDC)\n if not oidc_enabled:\n return _invalid(\n errors=[\n {\n \"code\": \"not-enabled\",\n \"description\": \"OIDC functionality not enabled\",\n }\n ]\n )\n\n try:\n payload = TokenPayload.parse_raw(request.body)\n unverified_jwt = payload.token\n except ValidationError as exc:\n return _invalid(errors=[{\"code\": \"invalid-payload\", \"description\": str(exc)}])\n\n # For the time being, GitHub is our only OIDC publisher.\n # In the future, this should locate the correct service based on an\n # identifier in the request body.\n oidc_service = request.find_service(IOIDCPublisherService, name=\"github\")\n claims = oidc_service.verify_jwt_signature(unverified_jwt)\n if not claims:\n return _invalid(\n errors=[\n {\"code\": \"invalid-token\", \"description\": \"malformed or invalid token\"}\n ]\n )\n\n # First, try to find a pending publisher.\n pending_publisher = oidc_service.find_publisher(claims, pending=True)\n if pending_publisher is not None:\n factory = ProjectFactory(request)\n\n # If the project already exists, this pending publisher is no longer\n # valid and needs to be removed.\n # NOTE: This is mostly a sanity check, since we dispose of invalidated\n # pending publishers below.\n if pending_publisher.project_name in factory:\n request.db.delete(pending_publisher)\n return _invalid(\n errors=[\n {\n \"code\": \"invalid-pending-publisher\",\n \"description\": \"valid token, but project already exists\",\n }\n ]\n )\n\n # Create the new project, and reify the pending publisher against it.\n project_service = request.find_service(IProjectService)\n new_project = project_service.create_project(\n pending_publisher.project_name,\n pending_publisher.added_by,\n ratelimited=False,\n )\n oidc_service.reify_pending_publisher(pending_publisher, new_project)\n\n # Successfully converting a pending publisher into a normal publisher\n # is a positive signal, so we reset the associated ratelimits.\n ratelimiters = _ratelimiters(request)\n ratelimiters[\"user.oidc\"].clear(pending_publisher.added_by.id)\n ratelimiters[\"ip.oidc\"].clear(request.remote_addr)\n\n # There might be other pending publishers for the same project name,\n # which we've now invalidated by creating the project. These would\n # be disposed of on use, but we explicitly dispose of them here while\n # also sending emails to their owners.\n stale_pending_publishers = (\n request.db.query(PendingOIDCPublisher)\n .filter(\n func.normalize_pep426_name(PendingOIDCPublisher.project_name)\n == func.normalize_pep426_name(pending_publisher.project_name)\n )\n .all()\n )\n for stale_publisher in stale_pending_publishers:\n send_pending_oidc_publisher_invalidated_email(\n request,\n stale_publisher.added_by,\n project_name=stale_publisher.project_name,\n )\n request.db.delete(stale_publisher)\n\n # We either don't have a pending OIDC publisher, or we *did*\n # have one and we've just converted it. 
Either way, look for a full publisher\n # to actually do the macaroon minting with.\n publisher = oidc_service.find_publisher(claims, pending=False)\n if not publisher:\n return _invalid(\n errors=[\n {\n \"code\": \"invalid-publisher\",\n \"description\": \"valid token, but no corresponding publisher\",\n }\n ]\n )\n\n # At this point, we've verified that the given JWT is valid for the given\n # project. All we need to do is mint a new token.\n # NOTE: For OIDC-minted API tokens, the Macaroon's description string\n # is purely an implementation detail and is not displayed to the user.\n macaroon_service = request.find_service(IMacaroonService, context=None)\n not_before = int(time.time())\n expires_at = not_before + 900\n serialized, dm = macaroon_service.create_macaroon(\n request.domain,\n f\"OpenID token: {publisher.publisher_url} ({not_before})\",\n [\n caveats.OIDCPublisher(oidc_publisher_id=str(publisher.id)),\n caveats.ProjectID(project_ids=[str(p.id) for p in publisher.projects]),\n caveats.Expiration(expires_at=expires_at, not_before=not_before),\n ],\n oidc_publisher_id=publisher.id,\n )\n for project in publisher.projects:\n project.record_event(\n tag=EventTag.Project.ShortLivedAPITokenAdded,\n ip_address=request.remote_addr,\n additional={\n \"expires\": expires_at,\n \"publisher_name\": publisher.publisher_name,\n \"publisher_url\": publisher.publisher_url,\n },\n )\n return {\"success\": True, \"token\": serialized}\n", "path": "warehouse/oidc/views.py"}, {"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom warehouse.oidc.interfaces import IOIDCPublisherService\nfrom warehouse.oidc.services import OIDCPublisherServiceFactory\nfrom warehouse.oidc.utils import GITHUB_OIDC_ISSUER_URL\n\n\ndef includeme(config):\n oidc_publisher_service_class = config.maybe_dotted(\n config.registry.settings[\"oidc.backend\"]\n )\n\n config.register_service_factory(\n OIDCPublisherServiceFactory(\n publisher=\"github\",\n issuer_url=GITHUB_OIDC_ISSUER_URL,\n service_class=oidc_publisher_service_class,\n ),\n IOIDCPublisherService,\n name=\"github\",\n )\n\n # During deployments, we separate auth routes into their own subdomain\n # to simplify caching exclusion.\n auth = config.get_settings().get(\"auth.domain\")\n\n config.add_route(\"oidc.mint_token\", \"/_/oidc/github/mint-token\", domain=auth)\n", "path": "warehouse/oidc/__init__.py"}]}
| 2,952 | 447 |
gh_patches_debug_26572
|
rasdani/github-patches
|
git_diff
|
Lightning-AI__torchmetrics-1926
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Converting from half precision to `bfloat16` in the Pearson correlation coefficient causes numerical errors
## 🐛 Bug
The lack of half-precision op support on CPUs has led to the decision to convert to `bfloat16` in the calculation of the Pearson correlation coefficient. (#1813)
However, this leads to numerical errors when the values are (meaningfully) small.
### To Reproduce
```py
>>> import torch
>>> a = torch.rand((32,), dtype=torch.half)
>>> b = torch.rand((32,), dtype=torch.half)
>>> from torchmetrics.functional import pearson_corrcoef
>>> pearson_corrcoef(a, b)
tensor(0.2240)
>>> pearson_corrcoef(a * 1e-1, b * 1e-1)
tensor(0.2251)
>>> pearson_corrcoef(a * 1e-2, b * 1e-2)
tensor(0.2213)
>>> pearson_corrcoef(a * 1e-3, b * 1e-3)
tensor(0.)
>>> pearson_corrcoef(a * 1e-4, b * 1e-4)
tensor(nan)
```
Which generally makes sense since there are only 5 exponential bits in `float16`, and some are lost in the operations. However this is not obvious during debugging since 0.001 is not *that* small...
### Expected behavior
A warning when the dynamic range is problematic, such as [what SciPy does](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.pearsonr.html#scipy-stats-pearsonr).
Or there may be some way to preserve more precision?
### Environment
- TorchMetrics version (pypi v1.0.1):
- Python & PyTorch Version (3.10, 2.0.1):
- Any other relevant information such as OS (tested on Linux_x64 and macOS_arm64):
</issue>
<code>
[start of src/torchmetrics/functional/regression/pearson.py]
1 # Copyright The Lightning team.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 from typing import Tuple
15
16 import torch
17 from torch import Tensor
18
19 from torchmetrics.functional.regression.utils import _check_data_shape_to_num_outputs
20 from torchmetrics.utilities.checks import _check_same_shape
21
22
23 def _pearson_corrcoef_update(
24 preds: Tensor,
25 target: Tensor,
26 mean_x: Tensor,
27 mean_y: Tensor,
28 var_x: Tensor,
29 var_y: Tensor,
30 corr_xy: Tensor,
31 n_prior: Tensor,
32 num_outputs: int,
33 ) -> Tuple[Tensor, Tensor, Tensor, Tensor, Tensor, Tensor]:
34 """Update and returns variables required to compute Pearson Correlation Coefficient.
35
36 Check for same shape of input tensors.
37
38 Args:
39 preds: estimated scores
40 target: ground truth scores
41 mean_x: current mean estimate of x tensor
42 mean_y: current mean estimate of y tensor
43 var_x: current variance estimate of x tensor
44 var_y: current variance estimate of y tensor
45 corr_xy: current covariance estimate between x and y tensor
46 n_prior: current number of observed observations
47 num_outputs: Number of outputs in multioutput setting
48
49 """
50 # Data checking
51 _check_same_shape(preds, target)
52 _check_data_shape_to_num_outputs(preds, target, num_outputs)
53 cond = n_prior.mean() > 0
54
55 n_obs = preds.shape[0]
56 if cond:
57 mx_new = (n_prior * mean_x + preds.sum(0)) / (n_prior + n_obs)
58 my_new = (n_prior * mean_y + target.sum(0)) / (n_prior + n_obs)
59 else:
60 mx_new = preds.mean(0)
61 my_new = target.mean(0)
62
63 n_prior += n_obs
64
65 if cond:
66 var_x += ((preds - mx_new) * (preds - mean_x)).sum(0)
67 var_y += ((target - my_new) * (target - mean_y)).sum(0)
68
69 else:
70 var_x += preds.var(0) * (n_obs - 1)
71 var_y += target.var(0) * (n_obs - 1)
72 corr_xy += ((preds - mx_new) * (target - mean_y)).sum(0)
73 mean_x = mx_new
74 mean_y = my_new
75
76 return mean_x, mean_y, var_x, var_y, corr_xy, n_prior
77
78
79 def _pearson_corrcoef_compute(
80 var_x: Tensor,
81 var_y: Tensor,
82 corr_xy: Tensor,
83 nb: Tensor,
84 ) -> Tensor:
85 """Compute the final pearson correlation based on accumulated statistics.
86
87 Args:
88 var_x: variance estimate of x tensor
89 var_y: variance estimate of y tensor
90 corr_xy: covariance estimate between x and y tensor
91 nb: number of observations
92
93 """
94 var_x /= nb - 1
95 var_y /= nb - 1
96 corr_xy /= nb - 1
97 # if var_x, var_y is float16 and on cpu, make it bfloat16 as sqrt is not supported for float16
98 # on cpu, remove this after https://github.com/pytorch/pytorch/issues/54774 is fixed
99 if var_x.dtype == torch.float16 and var_x.device == torch.device("cpu"):
100 var_x = var_x.bfloat16()
101 var_y = var_y.bfloat16()
102
103 corrcoef = (corr_xy / (var_x * var_y).sqrt()).squeeze()
104 return torch.clamp(corrcoef, -1.0, 1.0)
105
106
107 def pearson_corrcoef(preds: Tensor, target: Tensor) -> Tensor:
108 """Compute pearson correlation coefficient.
109
110 Args:
111 preds: estimated scores
112 target: ground truth scores
113
114 Example (single output regression):
115 >>> from torchmetrics.functional.regression import pearson_corrcoef
116 >>> target = torch.tensor([3, -0.5, 2, 7])
117 >>> preds = torch.tensor([2.5, 0.0, 2, 8])
118 >>> pearson_corrcoef(preds, target)
119 tensor(0.9849)
120
121 Example (multi output regression):
122 >>> from torchmetrics.functional.regression import pearson_corrcoef
123 >>> target = torch.tensor([[3, -0.5], [2, 7]])
124 >>> preds = torch.tensor([[2.5, 0.0], [2, 8]])
125 >>> pearson_corrcoef(preds, target)
126 tensor([1., 1.])
127
128 """
129 d = preds.shape[1] if preds.ndim == 2 else 1
130 _temp = torch.zeros(d, dtype=preds.dtype, device=preds.device)
131 mean_x, mean_y, var_x = _temp.clone(), _temp.clone(), _temp.clone()
132 var_y, corr_xy, nb = _temp.clone(), _temp.clone(), _temp.clone()
133 _, _, var_x, var_y, corr_xy, nb = _pearson_corrcoef_update(
134 preds, target, mean_x, mean_y, var_x, var_y, corr_xy, nb, num_outputs=1 if preds.ndim == 1 else preds.shape[-1]
135 )
136 return _pearson_corrcoef_compute(var_x, var_y, corr_xy, nb)
137
[end of src/torchmetrics/functional/regression/pearson.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/torchmetrics/functional/regression/pearson.py b/src/torchmetrics/functional/regression/pearson.py
--- a/src/torchmetrics/functional/regression/pearson.py
+++ b/src/torchmetrics/functional/regression/pearson.py
@@ -11,12 +11,14 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
+import math
from typing import Tuple
import torch
from torch import Tensor
from torchmetrics.functional.regression.utils import _check_data_shape_to_num_outputs
+from torchmetrics.utilities import rank_zero_warn
from torchmetrics.utilities.checks import _check_same_shape
@@ -100,6 +102,15 @@
var_x = var_x.bfloat16()
var_y = var_y.bfloat16()
+ bound = math.sqrt(torch.finfo(var_x.dtype).eps)
+ if (var_x < bound).any() or (var_y < bound).any():
+ rank_zero_warn(
+ "The variance of predictions or target is close to zero. This can cause instability in Pearson correlation"
+ "coefficient, leading to wrong results. Consider re-scaling the input if possible or computing using a"
+ f"larger dtype (currently using {var_x.dtype}).",
+ UserWarning,
+ )
+
corrcoef = (corr_xy / (var_x * var_y).sqrt()).squeeze()
return torch.clamp(corrcoef, -1.0, 1.0)
|
{"golden_diff": "diff --git a/src/torchmetrics/functional/regression/pearson.py b/src/torchmetrics/functional/regression/pearson.py\n--- a/src/torchmetrics/functional/regression/pearson.py\n+++ b/src/torchmetrics/functional/regression/pearson.py\n@@ -11,12 +11,14 @@\n # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n # See the License for the specific language governing permissions and\n # limitations under the License.\n+import math\n from typing import Tuple\n \n import torch\n from torch import Tensor\n \n from torchmetrics.functional.regression.utils import _check_data_shape_to_num_outputs\n+from torchmetrics.utilities import rank_zero_warn\n from torchmetrics.utilities.checks import _check_same_shape\n \n \n@@ -100,6 +102,15 @@\n var_x = var_x.bfloat16()\n var_y = var_y.bfloat16()\n \n+ bound = math.sqrt(torch.finfo(var_x.dtype).eps)\n+ if (var_x < bound).any() or (var_y < bound).any():\n+ rank_zero_warn(\n+ \"The variance of predictions or target is close to zero. This can cause instability in Pearson correlation\"\n+ \"coefficient, leading to wrong results. Consider re-scaling the input if possible or computing using a\"\n+ f\"larger dtype (currently using {var_x.dtype}).\",\n+ UserWarning,\n+ )\n+\n corrcoef = (corr_xy / (var_x * var_y).sqrt()).squeeze()\n return torch.clamp(corrcoef, -1.0, 1.0)\n", "issue": "Converting from half precision to `bfloat16` in Pearson correlation coefficient cause numerical errors\n## \ud83d\udc1b Bug\r\n\r\nThe lack of half-precision op support on CPUs has lead to the decision to convert to `bfloat16` in the calculation of Pearson correlation coefficient. (#1813)\r\n\r\nHowever this leads to broadcasting errors when the values are (meaningfully) small.\r\n\r\n\r\n### To Reproduce\r\n\r\n```py\r\n>>> import torch\r\n>>> a = torch.rand((32,), dtype=torch.half)\r\n>>> b = torch.rand((32,), dtype=torch.half)\r\n>>> from torchmetrics.functional import pearson_corrcoef\r\n>>> pearson_corrcoef(a, b)\r\ntensor(0.2240)\r\n>>> pearson_corrcoef(a * 1e-1, b * 1e-1)\r\ntensor(0.2251)\r\n>>> pearson_corrcoef(a * 1e-2, b * 1e-2)\r\ntensor(0.2213)\r\n>>> pearson_corrcoef(a * 1e-3, b * 1e-3)\r\ntensor(0.)\r\n>>> pearson_corrcoef(a * 1e-4, b * 1e-4)\r\ntensor(nan)\r\n```\r\n\r\nWhich generally makes sense since there are only 5 exponential bits in `float16`, and some are lost in the operations. 
However this is not obvious during debugging since 0.001 is not *that* small...\r\n\r\n### Expected behavior\r\n\r\nA warning when the dynamic range is problematic, such as [what SciPy does](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.pearsonr.html#scipy-stats-pearsonr).\r\n\r\nOr there may be some way to preserve more precision?\r\n\r\n### Environment\r\n\r\n- TorchMetrics version (pypi v1.0.1):\r\n- Python & PyTorch Version (3.10, 2.0.1):\r\n- Any other relevant information such as OS (tested on Linux_x64 and macOS_arm64):\r\n\n", "before_files": [{"content": "# Copyright The Lightning team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom typing import Tuple\n\nimport torch\nfrom torch import Tensor\n\nfrom torchmetrics.functional.regression.utils import _check_data_shape_to_num_outputs\nfrom torchmetrics.utilities.checks import _check_same_shape\n\n\ndef _pearson_corrcoef_update(\n preds: Tensor,\n target: Tensor,\n mean_x: Tensor,\n mean_y: Tensor,\n var_x: Tensor,\n var_y: Tensor,\n corr_xy: Tensor,\n n_prior: Tensor,\n num_outputs: int,\n) -> Tuple[Tensor, Tensor, Tensor, Tensor, Tensor, Tensor]:\n \"\"\"Update and returns variables required to compute Pearson Correlation Coefficient.\n\n Check for same shape of input tensors.\n\n Args:\n preds: estimated scores\n target: ground truth scores\n mean_x: current mean estimate of x tensor\n mean_y: current mean estimate of y tensor\n var_x: current variance estimate of x tensor\n var_y: current variance estimate of y tensor\n corr_xy: current covariance estimate between x and y tensor\n n_prior: current number of observed observations\n num_outputs: Number of outputs in multioutput setting\n\n \"\"\"\n # Data checking\n _check_same_shape(preds, target)\n _check_data_shape_to_num_outputs(preds, target, num_outputs)\n cond = n_prior.mean() > 0\n\n n_obs = preds.shape[0]\n if cond:\n mx_new = (n_prior * mean_x + preds.sum(0)) / (n_prior + n_obs)\n my_new = (n_prior * mean_y + target.sum(0)) / (n_prior + n_obs)\n else:\n mx_new = preds.mean(0)\n my_new = target.mean(0)\n\n n_prior += n_obs\n\n if cond:\n var_x += ((preds - mx_new) * (preds - mean_x)).sum(0)\n var_y += ((target - my_new) * (target - mean_y)).sum(0)\n\n else:\n var_x += preds.var(0) * (n_obs - 1)\n var_y += target.var(0) * (n_obs - 1)\n corr_xy += ((preds - mx_new) * (target - mean_y)).sum(0)\n mean_x = mx_new\n mean_y = my_new\n\n return mean_x, mean_y, var_x, var_y, corr_xy, n_prior\n\n\ndef _pearson_corrcoef_compute(\n var_x: Tensor,\n var_y: Tensor,\n corr_xy: Tensor,\n nb: Tensor,\n) -> Tensor:\n \"\"\"Compute the final pearson correlation based on accumulated statistics.\n\n Args:\n var_x: variance estimate of x tensor\n var_y: variance estimate of y tensor\n corr_xy: covariance estimate between x and y tensor\n nb: number of observations\n\n \"\"\"\n var_x /= nb - 1\n var_y /= nb - 1\n corr_xy /= nb - 1\n # if var_x, var_y is float16 and on cpu, make it bfloat16 as sqrt is not supported for float16\n # on cpu, remove this after 
https://github.com/pytorch/pytorch/issues/54774 is fixed\n if var_x.dtype == torch.float16 and var_x.device == torch.device(\"cpu\"):\n var_x = var_x.bfloat16()\n var_y = var_y.bfloat16()\n\n corrcoef = (corr_xy / (var_x * var_y).sqrt()).squeeze()\n return torch.clamp(corrcoef, -1.0, 1.0)\n\n\ndef pearson_corrcoef(preds: Tensor, target: Tensor) -> Tensor:\n \"\"\"Compute pearson correlation coefficient.\n\n Args:\n preds: estimated scores\n target: ground truth scores\n\n Example (single output regression):\n >>> from torchmetrics.functional.regression import pearson_corrcoef\n >>> target = torch.tensor([3, -0.5, 2, 7])\n >>> preds = torch.tensor([2.5, 0.0, 2, 8])\n >>> pearson_corrcoef(preds, target)\n tensor(0.9849)\n\n Example (multi output regression):\n >>> from torchmetrics.functional.regression import pearson_corrcoef\n >>> target = torch.tensor([[3, -0.5], [2, 7]])\n >>> preds = torch.tensor([[2.5, 0.0], [2, 8]])\n >>> pearson_corrcoef(preds, target)\n tensor([1., 1.])\n\n \"\"\"\n d = preds.shape[1] if preds.ndim == 2 else 1\n _temp = torch.zeros(d, dtype=preds.dtype, device=preds.device)\n mean_x, mean_y, var_x = _temp.clone(), _temp.clone(), _temp.clone()\n var_y, corr_xy, nb = _temp.clone(), _temp.clone(), _temp.clone()\n _, _, var_x, var_y, corr_xy, nb = _pearson_corrcoef_update(\n preds, target, mean_x, mean_y, var_x, var_y, corr_xy, nb, num_outputs=1 if preds.ndim == 1 else preds.shape[-1]\n )\n return _pearson_corrcoef_compute(var_x, var_y, corr_xy, nb)\n", "path": "src/torchmetrics/functional/regression/pearson.py"}]}
| 2,570 | 350 |
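A minimal, standalone sketch of the float16 dynamic-range problem described in the issue of the record above, assuming illustrative inputs; it is not torchmetrics code, and the sqrt(eps) bound shown only mirrors the form of the check in the accepted patch (which uses the accumulator's actual dtype).

```python
# Sketch only (assumed inputs, not torchmetrics internals): a variance computed
# from small-magnitude data is representable in float32 but underflows in float16,
# which is why corr_xy / sqrt(var_x * var_y) degrades to 0 or NaN in the issue.
import math

import torch

x = torch.rand(32) * 1e-4                  # float32 values in [0, 1e-4)
var32 = x.var()                            # roughly 1e-9 for this range
var16 = var32.half()                       # below float16's smallest subnormal: becomes 0.0

bound = math.sqrt(torch.finfo(torch.float16).eps)   # about 0.0312
print(float(var32), float(var16), bound)
print(bool(var16 < bound))                 # True: a check of this form would warn here
```

Casting the float32 variance down to float16 stands in for accumulating the statistics in half precision; it is enough to show the underflow without relying on half-precision reductions being available on CPU.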
gh_patches_debug_36546
|
rasdani/github-patches
|
git_diff
|
weecology__retriever-698
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Error downloading eBird_observation data.
The URL doesn't work anymore.
</issue>
<code>
[start of try_install_all.py]
1 """Attempt to install all datasets into all database management systems
2
3 This module, when run, attempts to install datasets from all Retriever scripts
4 in the /scripts folder (except for those listed in IGNORE), for each engine in
5 ENGINE_LIST() from __init__.py. In other words, it runs trys to install using
6 all possible combinations of database platform and script and checks to
7 see if there are any errors. It does not check the values in the database.
8
9 """
10 from __future__ import print_function
11 from __future__ import absolute_import
12 import os
13 import sys
14 from imp import reload
15 from retriever.lib.tools import choose_engine
16 from retriever import MODULE_LIST, ENGINE_LIST, SCRIPT_LIST
17
18 reload(sys)
19 if hasattr(sys, 'setdefaultencoding'):
20 sys.setdefaultencoding('latin-1')
21
22 MODULE_LIST = MODULE_LIST()
23 ENGINE_LIST = ENGINE_LIST()
24 if len(sys.argv) > 1:
25 ENGINE_LIST = [
26 e for e in ENGINE_LIST
27 if e.name in sys.argv[1:] or
28 e.abbreviation in sys.argv[1:]
29 ]
30 SCRIPT_LIST = SCRIPT_LIST()
31 TEST_ENGINES = {}
32 IGNORE = ["AvianBodyMass", "FIA", "Bioclim", "PRISM", "vertnet","NPN", "mammsupertree", "eBirdOD"]
33 IGNORE = [dataset.lower() for dataset in IGNORE]
34
35 for engine in ENGINE_LIST:
36 opts = {}
37 print("** %s **" % engine.name)
38 opts["engine"] = engine.abbreviation
39
40 try:
41 TEST_ENGINES[engine.abbreviation] = choose_engine(opts)
42 TEST_ENGINES[engine.abbreviation].get_input()
43 TEST_ENGINES[engine.abbreviation].get_cursor()
44 except:
45 TEST_ENGINES[engine.abbreviation] = None
46 pass
47
48 errors = []
49 for module in MODULE_LIST:
50 for (key, value) in list(TEST_ENGINES.items()):
51 if module.SCRIPT.shortname.lower() not in IGNORE:
52 if value != None:
53 print("==>", module.__name__, value.name, "..........", module.SCRIPT.shortname)
54 try:
55 module.SCRIPT.download(value)
56 except KeyboardInterrupt:
57 pass
58 except Exception as e:
59 print("ERROR.")
60 errors.append((key, module.__name__, e))
61 else:
62 errors.append((key, "No connection detected......" + module.SCRIPT.shortname))
63
64 print('')
65 if errors:
66 print("Engine, Dataset, Error")
67 for error in errors:
68 print(error)
69 else:
70 print("All tests passed")
71
[end of try_install_all.py]
[start of scripts/eBird_observation.py]
1 #retriever
2 """Data Retriever script for the eBird Observation Dataset"""
3
4 from retriever.lib.templates import Script
5 from retriever.lib.models import Table
6
7
8 class main(Script):
9 def __init__(self, **kwargs):
10 Script.__init__(self, **kwargs)
11 self.name = "eBird Observation Dataset"
12 self.shortname = "eBirdOD"
13 self.ref = "http://ebird.org/content/ebird/news/gbif/"
14 self.urls = {"main": "https://dataone.ornith.cornell.edu/metacat/d1/mn/v1/object/CLOEODDATA.05192014.1"}
15 self.retriever_minimum_version = '2.0.dev'
16 self.version = '1.0'
17 self.description = "A collection of observations from birders through portals managed and maintained by local partner conservation organizations"
18
19 def download(self, engine=None, debug=False):
20 data_file_name = "eBird_Observation_Dataset_2013.csv"
21 Script.download(self, engine, debug)
22 self.engine.download_files_from_archive(self.urls["main"],
23 [data_file_name],
24 filetype='gz')
25 table = (Table("main", delimiter=","))
26 table.columns=[("BASISOFRECORD",("char", )),
27 ("INSTITUTIONCODE",("char", )),
28 ("COLLECTIONCODE",("char", )),
29 ("CATALOGNUMBER",("char", )),
30 ("OCCURRENCEID",("char", )),
31 ("RECORDEDBY",("char", )),
32 ("YEAR",("int", )),
33 ("MONTH",("int", )),
34 ("DAY",("int", )),
35 ("COUNTRY",("char", )),
36 ("STATEPROVINCE",("char", )),
37 ("COUNTY",("char", )),
38 ("DECIMALLATITUDE",("double", )),
39 ("DECIMALLONGITUDE",("double", )),
40 ("LOCALITY",("char", )),
41 ("KINGDOM",("char", )),
42 ("PHYLUM",("char", )),
43 ("CLASS",("char", )),
44 ("SPORDER",("char", )),
45 ("FAMILY",("char", )),
46 ("GENUS",("char", )),
47 ("SPECIFICEPITHET",("char", )),
48 ("SCIENTIFICNAME",("char", )),
49 ("VERNACULARNAME",("char", )),
50 ("INDIVIDUALCOUNT",("int", ))]
51 engine.table = table
52 engine.create_table()
53 engine.insert_data_from_file(engine.format_filename(data_file_name))
54 return engine
55
56 SCRIPT = main()
57
[end of scripts/eBird_observation.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/scripts/eBird_observation.py b/scripts/eBird_observation.py
deleted file mode 100644
--- a/scripts/eBird_observation.py
+++ /dev/null
@@ -1,56 +0,0 @@
-#retriever
-"""Data Retriever script for the eBird Observation Dataset"""
-
-from retriever.lib.templates import Script
-from retriever.lib.models import Table
-
-
-class main(Script):
- def __init__(self, **kwargs):
- Script.__init__(self, **kwargs)
- self.name = "eBird Observation Dataset"
- self.shortname = "eBirdOD"
- self.ref = "http://ebird.org/content/ebird/news/gbif/"
- self.urls = {"main": "https://dataone.ornith.cornell.edu/metacat/d1/mn/v1/object/CLOEODDATA.05192014.1"}
- self.retriever_minimum_version = '2.0.dev'
- self.version = '1.0'
- self.description = "A collection of observations from birders through portals managed and maintained by local partner conservation organizations"
-
- def download(self, engine=None, debug=False):
- data_file_name = "eBird_Observation_Dataset_2013.csv"
- Script.download(self, engine, debug)
- self.engine.download_files_from_archive(self.urls["main"],
- [data_file_name],
- filetype='gz')
- table = (Table("main", delimiter=","))
- table.columns=[("BASISOFRECORD",("char", )),
- ("INSTITUTIONCODE",("char", )),
- ("COLLECTIONCODE",("char", )),
- ("CATALOGNUMBER",("char", )),
- ("OCCURRENCEID",("char", )),
- ("RECORDEDBY",("char", )),
- ("YEAR",("int", )),
- ("MONTH",("int", )),
- ("DAY",("int", )),
- ("COUNTRY",("char", )),
- ("STATEPROVINCE",("char", )),
- ("COUNTY",("char", )),
- ("DECIMALLATITUDE",("double", )),
- ("DECIMALLONGITUDE",("double", )),
- ("LOCALITY",("char", )),
- ("KINGDOM",("char", )),
- ("PHYLUM",("char", )),
- ("CLASS",("char", )),
- ("SPORDER",("char", )),
- ("FAMILY",("char", )),
- ("GENUS",("char", )),
- ("SPECIFICEPITHET",("char", )),
- ("SCIENTIFICNAME",("char", )),
- ("VERNACULARNAME",("char", )),
- ("INDIVIDUALCOUNT",("int", ))]
- engine.table = table
- engine.create_table()
- engine.insert_data_from_file(engine.format_filename(data_file_name))
- return engine
-
-SCRIPT = main()
diff --git a/try_install_all.py b/try_install_all.py
--- a/try_install_all.py
+++ b/try_install_all.py
@@ -29,7 +29,7 @@
]
SCRIPT_LIST = SCRIPT_LIST()
TEST_ENGINES = {}
-IGNORE = ["AvianBodyMass", "FIA", "Bioclim", "PRISM", "vertnet","NPN", "mammsupertree", "eBirdOD"]
+IGNORE = ["AvianBodyMass", "FIA", "Bioclim", "PRISM", "vertnet","NPN", "mammsupertree"]
IGNORE = [dataset.lower() for dataset in IGNORE]
for engine in ENGINE_LIST:
|
{"golden_diff": "diff --git a/scripts/eBird_observation.py b/scripts/eBird_observation.py\ndeleted file mode 100644\n--- a/scripts/eBird_observation.py\n+++ /dev/null\n@@ -1,56 +0,0 @@\n-#retriever\n-\"\"\"Data Retriever script for the eBird Observation Dataset\"\"\"\n-\n-from retriever.lib.templates import Script\n-from retriever.lib.models import Table\n-\n-\n-class main(Script):\n- def __init__(self, **kwargs):\n- Script.__init__(self, **kwargs)\n- self.name = \"eBird Observation Dataset\"\n- self.shortname = \"eBirdOD\"\n- self.ref = \"http://ebird.org/content/ebird/news/gbif/\"\n- self.urls = {\"main\": \"https://dataone.ornith.cornell.edu/metacat/d1/mn/v1/object/CLOEODDATA.05192014.1\"}\n- self.retriever_minimum_version = '2.0.dev'\n- self.version = '1.0'\n- self.description = \"A collection of observations from birders through portals managed and maintained by local partner conservation organizations\"\n-\n- def download(self, engine=None, debug=False):\n- data_file_name = \"eBird_Observation_Dataset_2013.csv\"\n- Script.download(self, engine, debug)\n- self.engine.download_files_from_archive(self.urls[\"main\"],\n- [data_file_name],\n- filetype='gz')\n- table = (Table(\"main\", delimiter=\",\"))\n- table.columns=[(\"BASISOFRECORD\",(\"char\", )),\n- (\"INSTITUTIONCODE\",(\"char\", )),\n- (\"COLLECTIONCODE\",(\"char\", )),\n- (\"CATALOGNUMBER\",(\"char\", )),\n- (\"OCCURRENCEID\",(\"char\", )),\n- (\"RECORDEDBY\",(\"char\", )),\n- (\"YEAR\",(\"int\", )),\n- (\"MONTH\",(\"int\", )),\n- (\"DAY\",(\"int\", )),\n- (\"COUNTRY\",(\"char\", )),\n- (\"STATEPROVINCE\",(\"char\", )),\n- (\"COUNTY\",(\"char\", )),\n- (\"DECIMALLATITUDE\",(\"double\", )),\n- (\"DECIMALLONGITUDE\",(\"double\", )),\n- (\"LOCALITY\",(\"char\", )),\n- (\"KINGDOM\",(\"char\", )),\n- (\"PHYLUM\",(\"char\", )),\n- (\"CLASS\",(\"char\", )),\n- (\"SPORDER\",(\"char\", )),\n- (\"FAMILY\",(\"char\", )),\n- (\"GENUS\",(\"char\", )),\n- (\"SPECIFICEPITHET\",(\"char\", )),\n- (\"SCIENTIFICNAME\",(\"char\", )),\n- (\"VERNACULARNAME\",(\"char\", )),\n- (\"INDIVIDUALCOUNT\",(\"int\", ))]\n- engine.table = table\n- engine.create_table()\n- engine.insert_data_from_file(engine.format_filename(data_file_name))\n- return engine\n-\n-SCRIPT = main()\ndiff --git a/try_install_all.py b/try_install_all.py\n--- a/try_install_all.py\n+++ b/try_install_all.py\n@@ -29,7 +29,7 @@\n ]\n SCRIPT_LIST = SCRIPT_LIST()\n TEST_ENGINES = {}\n-IGNORE = [\"AvianBodyMass\", \"FIA\", \"Bioclim\", \"PRISM\", \"vertnet\",\"NPN\", \"mammsupertree\", \"eBirdOD\"]\n+IGNORE = [\"AvianBodyMass\", \"FIA\", \"Bioclim\", \"PRISM\", \"vertnet\",\"NPN\", \"mammsupertree\"]\n IGNORE = [dataset.lower() for dataset in IGNORE]\n \n for engine in ENGINE_LIST:\n", "issue": "Error downloading eBird_observation data.\nThe URL doesn't work anymore.\n\n", "before_files": [{"content": "\"\"\"Attempt to install all datasets into all database management systems\n\nThis module, when run, attempts to install datasets from all Retriever scripts\nin the /scripts folder (except for those listed in IGNORE), for each engine in\nENGINE_LIST() from __init__.py. In other words, it runs trys to install using\nall possible combinations of database platform and script and checks to\nsee if there are any errors. 
It does not check the values in the database.\n\n\"\"\"\nfrom __future__ import print_function\nfrom __future__ import absolute_import\nimport os\nimport sys\nfrom imp import reload\nfrom retriever.lib.tools import choose_engine\nfrom retriever import MODULE_LIST, ENGINE_LIST, SCRIPT_LIST\n\nreload(sys)\nif hasattr(sys, 'setdefaultencoding'):\n sys.setdefaultencoding('latin-1')\n\nMODULE_LIST = MODULE_LIST()\nENGINE_LIST = ENGINE_LIST()\nif len(sys.argv) > 1:\n ENGINE_LIST = [\n e for e in ENGINE_LIST\n if e.name in sys.argv[1:] or\n e.abbreviation in sys.argv[1:]\n ]\nSCRIPT_LIST = SCRIPT_LIST()\nTEST_ENGINES = {}\nIGNORE = [\"AvianBodyMass\", \"FIA\", \"Bioclim\", \"PRISM\", \"vertnet\",\"NPN\", \"mammsupertree\", \"eBirdOD\"]\nIGNORE = [dataset.lower() for dataset in IGNORE]\n\nfor engine in ENGINE_LIST:\n opts = {}\n print(\"** %s **\" % engine.name)\n opts[\"engine\"] = engine.abbreviation\n\n try:\n TEST_ENGINES[engine.abbreviation] = choose_engine(opts)\n TEST_ENGINES[engine.abbreviation].get_input()\n TEST_ENGINES[engine.abbreviation].get_cursor()\n except:\n TEST_ENGINES[engine.abbreviation] = None\n pass\n\nerrors = []\nfor module in MODULE_LIST:\n for (key, value) in list(TEST_ENGINES.items()):\n if module.SCRIPT.shortname.lower() not in IGNORE:\n if value != None:\n print(\"==>\", module.__name__, value.name, \"..........\", module.SCRIPT.shortname)\n try:\n module.SCRIPT.download(value)\n except KeyboardInterrupt:\n pass\n except Exception as e:\n print(\"ERROR.\")\n errors.append((key, module.__name__, e))\n else:\n errors.append((key, \"No connection detected......\" + module.SCRIPT.shortname))\n\nprint('')\nif errors:\n print(\"Engine, Dataset, Error\")\n for error in errors:\n print(error)\nelse:\n print(\"All tests passed\")\n", "path": "try_install_all.py"}, {"content": "#retriever\n\"\"\"Data Retriever script for the eBird Observation Dataset\"\"\"\n\nfrom retriever.lib.templates import Script\nfrom retriever.lib.models import Table\n\n\nclass main(Script):\n def __init__(self, **kwargs):\n Script.__init__(self, **kwargs)\n self.name = \"eBird Observation Dataset\"\n self.shortname = \"eBirdOD\"\n self.ref = \"http://ebird.org/content/ebird/news/gbif/\"\n self.urls = {\"main\": \"https://dataone.ornith.cornell.edu/metacat/d1/mn/v1/object/CLOEODDATA.05192014.1\"}\n self.retriever_minimum_version = '2.0.dev'\n self.version = '1.0'\n self.description = \"A collection of observations from birders through portals managed and maintained by local partner conservation organizations\"\n\n def download(self, engine=None, debug=False):\n data_file_name = \"eBird_Observation_Dataset_2013.csv\"\n Script.download(self, engine, debug)\n self.engine.download_files_from_archive(self.urls[\"main\"],\n [data_file_name],\n filetype='gz')\n table = (Table(\"main\", delimiter=\",\"))\n table.columns=[(\"BASISOFRECORD\",(\"char\", )),\n (\"INSTITUTIONCODE\",(\"char\", )),\n (\"COLLECTIONCODE\",(\"char\", )),\n (\"CATALOGNUMBER\",(\"char\", )),\n (\"OCCURRENCEID\",(\"char\", )),\n (\"RECORDEDBY\",(\"char\", )),\n (\"YEAR\",(\"int\", )),\n (\"MONTH\",(\"int\", )),\n (\"DAY\",(\"int\", )),\n (\"COUNTRY\",(\"char\", )),\n (\"STATEPROVINCE\",(\"char\", )),\n (\"COUNTY\",(\"char\", )),\n (\"DECIMALLATITUDE\",(\"double\", )),\n (\"DECIMALLONGITUDE\",(\"double\", )),\n (\"LOCALITY\",(\"char\", )),\n (\"KINGDOM\",(\"char\", )),\n (\"PHYLUM\",(\"char\", )),\n (\"CLASS\",(\"char\", )),\n (\"SPORDER\",(\"char\", )),\n (\"FAMILY\",(\"char\", )),\n (\"GENUS\",(\"char\", )),\n 
(\"SPECIFICEPITHET\",(\"char\", )),\n (\"SCIENTIFICNAME\",(\"char\", )),\n (\"VERNACULARNAME\",(\"char\", )),\n (\"INDIVIDUALCOUNT\",(\"int\", ))]\n engine.table = table\n engine.create_table()\n engine.insert_data_from_file(engine.format_filename(data_file_name))\n return engine\n\nSCRIPT = main()\n", "path": "scripts/eBird_observation.py"}]}
| 1,919 | 820 |
gh_patches_debug_29119
|
rasdani/github-patches
|
git_diff
|
googleapis__google-cloud-python-3786
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
BigQuery: expose public helper method to convert a list of schema fields to/from a list of schema dictionaries (JSON)
I've received some feedback asking how to take a schema from the client library and save it to a JSON file. One reason to do this is the [`bq` command-line tool](https://cloud.google.com/bigquery/bq-command-line-tool#creatingtablefromfile) accepts a schema file, formatted like
```
[
{"name": "name", "type": "string", "mode": "required"},
{"name": "gender", "type": "string", "mode": "nullable"},
{"name": "count", "type": "integer", "mode": "required"}
]
```
Note: this format is the same as the API representation.
It would be great if our client libraries could read/write in this format.
</issue>
<code>
[start of bigquery/google/cloud/bigquery/schema.py]
1 # Copyright 2015 Google Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Schemas for BigQuery tables / queries."""
16
17
18 class SchemaField(object):
19 """Describe a single field within a table schema.
20
21 :type name: str
22 :param name: the name of the field.
23
24 :type field_type: str
25 :param field_type: the type of the field (one of 'STRING', 'INTEGER',
26 'FLOAT', 'BOOLEAN', 'TIMESTAMP' or 'RECORD').
27
28 :type mode: str
29 :param mode: the mode of the field (one of 'NULLABLE', 'REQUIRED',
30 or 'REPEATED').
31
32 :type description: str
33 :param description: optional description for the field.
34
35 :type fields: tuple of :class:`SchemaField`
36 :param fields: subfields (requires ``field_type`` of 'RECORD').
37 """
38 def __init__(self, name, field_type, mode='NULLABLE',
39 description=None, fields=()):
40 self._name = name
41 self._field_type = field_type
42 self._mode = mode
43 self._description = description
44 self._fields = tuple(fields)
45
46 @property
47 def name(self):
48 """str: The name of the field."""
49 return self._name
50
51 @property
52 def field_type(self):
53 """str: The type of the field.
54
55 Will be one of 'STRING', 'INTEGER', 'FLOAT', 'BOOLEAN',
56 'TIMESTAMP' or 'RECORD'.
57 """
58 return self._field_type
59
60 @property
61 def mode(self):
62 """str: The mode of the field.
63
64 Will be one of 'NULLABLE', 'REQUIRED', or 'REPEATED'.
65 """
66 return self._mode
67
68 @property
69 def is_nullable(self):
70 """Check whether 'mode' is 'nullable'."""
71 return self._mode == 'NULLABLE'
72
73 @property
74 def description(self):
75 """Optional[str]: Description for the field."""
76 return self._description
77
78 @property
79 def fields(self):
80 """tuple: Subfields contained in this field.
81
82 If ``field_type`` is not 'RECORD', this property must be
83 empty / unset.
84 """
85 return self._fields
86
87 def _key(self):
88 """A tuple key that unique-ly describes this field.
89
90 Used to compute this instance's hashcode and evaluate equality.
91
92 Returns:
93 tuple: The contents of this :class:`SchemaField`.
94 """
95 return (
96 self._name,
97 self._field_type.lower(),
98 self._mode,
99 self._description,
100 self._fields,
101 )
102
103 def __eq__(self, other):
104 if not isinstance(other, SchemaField):
105 return NotImplemented
106 return self._key() == other._key()
107
108 def __ne__(self, other):
109 return not self == other
110
111 def __hash__(self):
112 return hash(self._key())
113
114 def __repr__(self):
115 return 'SchemaField{}'.format(self._key())
116
[end of bigquery/google/cloud/bigquery/schema.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/bigquery/google/cloud/bigquery/schema.py b/bigquery/google/cloud/bigquery/schema.py
--- a/bigquery/google/cloud/bigquery/schema.py
+++ b/bigquery/google/cloud/bigquery/schema.py
@@ -43,6 +43,25 @@
self._description = description
self._fields = tuple(fields)
+ @classmethod
+ def from_api_repr(cls, api_repr):
+ """Return a ``SchemaField`` object deserialized from a dictionary.
+
+ Args:
+ api_repr (Mapping[str, str]): The serialized representation
+ of the SchemaField, such as what is output by
+ :meth:`to_api_repr`.
+
+ Returns:
+ SchemaField: The ``SchemaField`` object.
+ """
+ return cls(
+ field_type=api_repr['type'].upper(),
+ fields=[cls.from_api_repr(f) for f in api_repr.get('fields', ())],
+ mode=api_repr['mode'].upper(),
+ name=api_repr['name'],
+ )
+
@property
def name(self):
"""str: The name of the field."""
@@ -84,6 +103,28 @@
"""
return self._fields
+ def to_api_repr(self):
+ """Return a dictionary representing this schema field.
+
+ Returns:
+ dict: A dictionary representing the SchemaField in a serialized
+ form.
+ """
+ # Put together the basic representation. See http://bit.ly/2hOAT5u.
+ answer = {
+ 'mode': self.mode.lower(),
+ 'name': self.name,
+ 'type': self.field_type.lower(),
+ }
+
+ # If this is a RECORD type, then sub-fields are also included,
+ # add this to the serialized representation.
+ if self.field_type.upper() == 'RECORD':
+ answer['fields'] = [f.to_api_repr() for f in self.fields]
+
+ # Done; return the serialized dictionary.
+ return answer
+
def _key(self):
"""A tuple key that unique-ly describes this field.
|
{"golden_diff": "diff --git a/bigquery/google/cloud/bigquery/schema.py b/bigquery/google/cloud/bigquery/schema.py\n--- a/bigquery/google/cloud/bigquery/schema.py\n+++ b/bigquery/google/cloud/bigquery/schema.py\n@@ -43,6 +43,25 @@\n self._description = description\n self._fields = tuple(fields)\n \n+ @classmethod\n+ def from_api_repr(cls, api_repr):\n+ \"\"\"Return a ``SchemaField`` object deserialized from a dictionary.\n+\n+ Args:\n+ api_repr (Mapping[str, str]): The serialized representation\n+ of the SchemaField, such as what is output by\n+ :meth:`to_api_repr`.\n+\n+ Returns:\n+ SchemaField: The ``SchemaField`` object.\n+ \"\"\"\n+ return cls(\n+ field_type=api_repr['type'].upper(),\n+ fields=[cls.from_api_repr(f) for f in api_repr.get('fields', ())],\n+ mode=api_repr['mode'].upper(),\n+ name=api_repr['name'],\n+ )\n+\n @property\n def name(self):\n \"\"\"str: The name of the field.\"\"\"\n@@ -84,6 +103,28 @@\n \"\"\"\n return self._fields\n \n+ def to_api_repr(self):\n+ \"\"\"Return a dictionary representing this schema field.\n+\n+ Returns:\n+ dict: A dictionary representing the SchemaField in a serialized\n+ form.\n+ \"\"\"\n+ # Put together the basic representation. See http://bit.ly/2hOAT5u.\n+ answer = {\n+ 'mode': self.mode.lower(),\n+ 'name': self.name,\n+ 'type': self.field_type.lower(),\n+ }\n+\n+ # If this is a RECORD type, then sub-fields are also included,\n+ # add this to the serialized representation.\n+ if self.field_type.upper() == 'RECORD':\n+ answer['fields'] = [f.to_api_repr() for f in self.fields]\n+\n+ # Done; return the serialized dictionary.\n+ return answer\n+\n def _key(self):\n \"\"\"A tuple key that unique-ly describes this field.\n", "issue": "BigQuery: expose public helper method to convert a list of schema fields to/from a list of schema dictionaries (JSON)\nI've received some feedback asking how to take a schema from the client library and save it to a JSON file. 
One reason to do this is the [`bq` command-line tool](https://cloud.google.com/bigquery/bq-command-line-tool#creatingtablefromfile) accepts a schema file, formatted like\r\n\r\n```\r\n[\r\n {\"name\": \"name\", \"type\": \"string\", \"mode\": \"required\"},\r\n {\"name\": \"gender\", \"type\": \"string\", \"mode\": \"nullable\"},\r\n {\"name\": \"count\", \"type\": \"integer\", \"mode\": \"required\"}\r\n]\r\n```\r\n\r\nNote: this format is the same as the API representation.\r\n\r\nIt would be great if our client libraries could read/write in this format.\r\n\r\n\n", "before_files": [{"content": "# Copyright 2015 Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Schemas for BigQuery tables / queries.\"\"\"\n\n\nclass SchemaField(object):\n \"\"\"Describe a single field within a table schema.\n\n :type name: str\n :param name: the name of the field.\n\n :type field_type: str\n :param field_type: the type of the field (one of 'STRING', 'INTEGER',\n 'FLOAT', 'BOOLEAN', 'TIMESTAMP' or 'RECORD').\n\n :type mode: str\n :param mode: the mode of the field (one of 'NULLABLE', 'REQUIRED',\n or 'REPEATED').\n\n :type description: str\n :param description: optional description for the field.\n\n :type fields: tuple of :class:`SchemaField`\n :param fields: subfields (requires ``field_type`` of 'RECORD').\n \"\"\"\n def __init__(self, name, field_type, mode='NULLABLE',\n description=None, fields=()):\n self._name = name\n self._field_type = field_type\n self._mode = mode\n self._description = description\n self._fields = tuple(fields)\n\n @property\n def name(self):\n \"\"\"str: The name of the field.\"\"\"\n return self._name\n\n @property\n def field_type(self):\n \"\"\"str: The type of the field.\n\n Will be one of 'STRING', 'INTEGER', 'FLOAT', 'BOOLEAN',\n 'TIMESTAMP' or 'RECORD'.\n \"\"\"\n return self._field_type\n\n @property\n def mode(self):\n \"\"\"str: The mode of the field.\n\n Will be one of 'NULLABLE', 'REQUIRED', or 'REPEATED'.\n \"\"\"\n return self._mode\n\n @property\n def is_nullable(self):\n \"\"\"Check whether 'mode' is 'nullable'.\"\"\"\n return self._mode == 'NULLABLE'\n\n @property\n def description(self):\n \"\"\"Optional[str]: Description for the field.\"\"\"\n return self._description\n\n @property\n def fields(self):\n \"\"\"tuple: Subfields contained in this field.\n\n If ``field_type`` is not 'RECORD', this property must be\n empty / unset.\n \"\"\"\n return self._fields\n\n def _key(self):\n \"\"\"A tuple key that unique-ly describes this field.\n\n Used to compute this instance's hashcode and evaluate equality.\n\n Returns:\n tuple: The contents of this :class:`SchemaField`.\n \"\"\"\n return (\n self._name,\n self._field_type.lower(),\n self._mode,\n self._description,\n self._fields,\n )\n\n def __eq__(self, other):\n if not isinstance(other, SchemaField):\n return NotImplemented\n return self._key() == other._key()\n\n def __ne__(self, other):\n return not self == other\n\n def __hash__(self):\n return hash(self._key())\n\n def __repr__(self):\n 
return 'SchemaField{}'.format(self._key())\n", "path": "bigquery/google/cloud/bigquery/schema.py"}]}
| 1,748 | 468 |
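The golden diff in the record above gives `SchemaField` a round trip through its API (JSON) representation, which is what the issue asks for: a schema that can be written to and read from a `bq`-style schema file. A minimal usage sketch, assuming the patched class; the helper names `schema_to_json` and `schema_from_json` are illustrative, not part of the library:

```python
# Sketch only: assumes the patched SchemaField from the golden diff above,
# i.e. to_api_repr() and the from_api_repr() classmethod are available.
import json

from google.cloud.bigquery.schema import SchemaField


def schema_to_json(fields, path):
    """Write SchemaField objects as a bq-style schema file (a list of dicts)."""
    with open(path, "w") as fp:
        json.dump([field.to_api_repr() for field in fields], fp, indent=2)


def schema_from_json(path):
    """Read a bq-style schema file back into SchemaField objects."""
    with open(path) as fp:
        # from_api_repr() in the patch expects 'name', 'type' and 'mode' keys,
        # the same layout the bq CLI consumes.
        return [SchemaField.from_api_repr(entry) for entry in json.load(fp)]
```

Both methods handle the `fields` key recursively, so RECORD columns survive the round trip.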
gh_patches_debug_13145
|
rasdani/github-patches
|
git_diff
|
mabel-dev__opteryx-1159
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
🧹 resync SQLoxide
AST to SQL and AST visitor appear to have been added
</issue>
<code>
[start of opteryx/third_party/sqloxide/__init__.py]
1 try:
2 from .sqloxide import parse_sql
3 except ImportError as e: # pragma: no cover
4 print(e)
5 if str(e) != "PyO3 modules may only be initialized once per interpreter process":
6 raise e
7
8 __all__ = ["parse_sql"]
9
[end of opteryx/third_party/sqloxide/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/opteryx/third_party/sqloxide/__init__.py b/opteryx/third_party/sqloxide/__init__.py
--- a/opteryx/third_party/sqloxide/__init__.py
+++ b/opteryx/third_party/sqloxide/__init__.py
@@ -1,8 +1,16 @@
-try:
- from .sqloxide import parse_sql
-except ImportError as e: # pragma: no cover
- print(e)
- if str(e) != "PyO3 modules may only be initialized once per interpreter process":
- raise e
-
-__all__ = ["parse_sql"]
+"""
+This module provides an interface to the sqloxide library, which is responsible for parsing SQL,
+restoring the Abstract Syntax Tree (AST), and performing various mutations on expressions and relations.
+
+For more information about sqloxide: https://github.com/wseaton/sqloxide
+
+This module is not from sqloxide, it is written for Opteryx.
+"""
+
+from .sqloxide import mutate_expressions
+from .sqloxide import mutate_relations
+from .sqloxide import parse_sql
+from .sqloxide import restore_ast
+
+# Explicitly define the API of this module for external consumers
+__all__ = ["parse_sql", "restore_ast", "mutate_expressions", "mutate_relations"]
|
{"golden_diff": "diff --git a/opteryx/third_party/sqloxide/__init__.py b/opteryx/third_party/sqloxide/__init__.py\n--- a/opteryx/third_party/sqloxide/__init__.py\n+++ b/opteryx/third_party/sqloxide/__init__.py\n@@ -1,8 +1,16 @@\n-try:\n- from .sqloxide import parse_sql\n-except ImportError as e: # pragma: no cover\n- print(e)\n- if str(e) != \"PyO3 modules may only be initialized once per interpreter process\":\n- raise e\n-\n-__all__ = [\"parse_sql\"]\n+\"\"\"\n+This module provides an interface to the sqloxide library, which is responsible for parsing SQL,\n+restoring the Abstract Syntax Tree (AST), and performing various mutations on expressions and relations.\n+\n+For more information about sqloxide: https://github.com/wseaton/sqloxide\n+\n+This module is not from sqloxide, it is written for Opteryx.\n+\"\"\"\n+\n+from .sqloxide import mutate_expressions\n+from .sqloxide import mutate_relations\n+from .sqloxide import parse_sql\n+from .sqloxide import restore_ast\n+\n+# Explicitly define the API of this module for external consumers\n+__all__ = [\"parse_sql\", \"restore_ast\", \"mutate_expressions\", \"mutate_relations\"]\n", "issue": "\ud83e\uddf9 resync SQLoxide\nAST to SQL and AST visitor appear to have been added \n", "before_files": [{"content": "try:\n from .sqloxide import parse_sql\nexcept ImportError as e: # pragma: no cover\n print(e)\n if str(e) != \"PyO3 modules may only be initialized once per interpreter process\":\n raise e\n\n__all__ = [\"parse_sql\"]\n", "path": "opteryx/third_party/sqloxide/__init__.py"}]}
| 641 | 311 |
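The patch above widens the vendored module's public surface from `parse_sql` alone to the four functions named in the issue. A rough usage sketch; the call signatures are assumed from upstream sqloxide (`parse_sql(sql=..., dialect=...)`, `restore_ast(...)`) and should be checked against the Rust bindings Opteryx actually ships:

```python
# Sketch only: assumes the vendored bindings keep upstream-sqloxide-style
# signatures; verify against Opteryx's Rust module before relying on this.
from opteryx.third_party.sqloxide import parse_sql, restore_ast

ast = parse_sql(sql="SELECT name FROM users WHERE id = 1", dialect="ansi")
print(ast)               # nested dict/list form of the parsed statement
print(restore_ast(ast))  # round trip: AST back to SQL text
```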
gh_patches_debug_40479
|
rasdani/github-patches
|
git_diff
|
pypa__cibuildwheel-204
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Linux support in AppVeyor
FYI
https://www.appveyor.com/blog/2018/03/06/appveyor-for-linux/
https://www.appveyor.com/blog/2018/05/15/appveyor-for-linux-is-generally-available/
https://www.appveyor.com/docs/getting-started-with-appveyor-for-linux/
</issue>
<code>
[start of cibuildwheel/__main__.py]
1 from __future__ import print_function
2 import argparse, os, subprocess, sys, textwrap
3
4 import cibuildwheel
5 import cibuildwheel.linux, cibuildwheel.windows, cibuildwheel.macos
6 from cibuildwheel.environment import parse_environment, EnvironmentParseError
7 from cibuildwheel.util import BuildSelector, Unbuffered
8
9 def get_option_from_environment(option_name, platform=None, default=None):
10 '''
11 Returns an option from the environment, optionally scoped by the platform.
12
13 Example:
14 get_option_from_environment('CIBW_COLOR', platform='macos')
15
16 This will return the value of CIBW_COLOR_MACOS if it exists, otherwise the value of
17 CIBW_COLOR.
18 '''
19 if platform:
20 option = os.environ.get('%s_%s' % (option_name, platform.upper()))
21 if option is not None:
22 return option
23
24 return os.environ.get(option_name, default)
25
26
27 def main():
28 parser = argparse.ArgumentParser(
29 description='Build wheels for all the platforms.',
30 epilog=('Most options are supplied via environment variables. '
31 'See https://github.com/joerick/cibuildwheel#options for info.'))
32
33 parser.add_argument('--platform',
34 choices=['auto', 'linux', 'macos', 'windows'],
35 default=os.environ.get('CIBW_PLATFORM', 'auto'),
36 help=('Platform to build for. For "linux" you need docker running, on Mac '
37 'or Linux. For "macos", you need a Mac machine, and note that this '
38 'script is going to automatically install MacPython on your system, '
39 'so don\'t run on your development machine. For "windows", you need to '
40 'run in Windows, and it will build and test for all versions of '
41 'Python at C:\\PythonXX[-x64]. Default: auto.'))
42 parser.add_argument('--output-dir',
43 default=os.environ.get('CIBW_OUTPUT_DIR', 'wheelhouse'),
44 help='Destination folder for the wheels.')
45 parser.add_argument('project_dir',
46 default='.',
47 nargs='?',
48 help=('Path to the project that you want wheels for. Default: the current '
49 'directory.'))
50
51 parser.add_argument('--print-build-identifiers',
52 action='store_true',
53 help='Print the build identifiers matched by the current invocation and exit.')
54
55 args = parser.parse_args()
56
57 detect_obsolete_options()
58
59 if args.platform != 'auto':
60 platform = args.platform
61 else:
62 platform = None
63
64 if os.environ.get('TRAVIS_OS_NAME') == 'linux':
65 platform = 'linux'
66 elif os.environ.get('TRAVIS_OS_NAME') == 'osx':
67 platform = 'macos'
68 elif os.environ.get('TRAVIS_OS_NAME') == 'windows':
69 platform = 'windows'
70 elif 'APPVEYOR' in os.environ:
71 platform = 'windows'
72 elif 'BITRISE_BUILD_NUMBER' in os.environ:
73 platform = 'macos'
74 elif os.environ.get('CIRCLECI'):
75 if sys.platform.startswith('linux'):
76 platform = 'linux'
77 elif sys.platform.startswith('darwin'):
78 platform = 'macos'
79 elif 'AZURE_HTTP_USER_AGENT' in os.environ:
80 if os.environ['AGENT_OS'] == 'Linux':
81 platform = 'linux'
82 elif os.environ['AGENT_OS'] == 'Darwin':
83 platform = 'macos'
84 elif os.environ['AGENT_OS'] == 'Windows_NT':
85 platform = 'windows'
86
87 if platform is None:
88 print('cibuildwheel: Unable to detect platform. cibuildwheel should run on your CI server, '
89 'Travis CI, AppVeyor, and CircleCI are supported. You can run on your development '
90 'machine using the --platform argument. Check --help output for more '
91 'information.',
92 file=sys.stderr)
93 exit(2)
94
95 output_dir = args.output_dir
96 test_command = get_option_from_environment('CIBW_TEST_COMMAND', platform=platform)
97 test_requires = get_option_from_environment('CIBW_TEST_REQUIRES', platform=platform, default='').split()
98 test_extras = get_option_from_environment('CIBW_TEST_EXTRAS', platform=platform, default='')
99 project_dir = args.project_dir
100 before_build = get_option_from_environment('CIBW_BEFORE_BUILD', platform=platform)
101 build_verbosity = get_option_from_environment('CIBW_BUILD_VERBOSITY', platform=platform, default='')
102 build_config, skip_config = os.environ.get('CIBW_BUILD', '*'), os.environ.get('CIBW_SKIP', '')
103 environment_config = get_option_from_environment('CIBW_ENVIRONMENT', platform=platform, default='')
104
105 if test_extras:
106 test_extras = '[{0}]'.format(test_extras)
107
108 try:
109 build_verbosity = min(3, max(-3, int(build_verbosity)))
110 except ValueError:
111 build_verbosity = 0
112
113 try:
114 environment = parse_environment(environment_config)
115 except (EnvironmentParseError, ValueError) as e:
116 print('cibuildwheel: Malformed environment option "%s"' % environment_config, file=sys.stderr)
117 import traceback
118 traceback.print_exc(None, sys.stderr)
119 exit(2)
120
121 build_selector = BuildSelector(build_config, skip_config)
122
123 # Add CIBUILDWHEEL environment variable
124 # This needs to be passed on to the docker container in linux.py
125 os.environ['CIBUILDWHEEL'] = '1'
126
127 if not os.path.exists(os.path.join(project_dir, 'setup.py')):
128 print('cibuildwheel: Could not find setup.py at root of project', file=sys.stderr)
129 exit(2)
130
131 if args.print_build_identifiers:
132 print_build_identifiers(platform, build_selector)
133 exit(0)
134
135 build_options = dict(
136 project_dir=project_dir,
137 output_dir=output_dir,
138 test_command=test_command,
139 test_requires=test_requires,
140 test_extras=test_extras,
141 before_build=before_build,
142 build_verbosity=build_verbosity,
143 build_selector=build_selector,
144 environment=environment,
145 )
146
147 if platform == 'linux':
148 manylinux_x86_64_image = os.environ.get('CIBW_MANYLINUX_X86_64_IMAGE', 'manylinux2010')
149 manylinux_i686_image = os.environ.get('CIBW_MANYLINUX_I686_IMAGE', 'manylinux2010')
150
151 default_manylinux_images_x86_64 = {'manylinux1': 'quay.io/pypa/manylinux1_x86_64',
152 'manylinux2010': 'quay.io/pypa/manylinux2010_x86_64'}
153 default_manylinux_images_i686 = {'manylinux1': 'quay.io/pypa/manylinux1_i686',
154 'manylinux2010': 'quay.io/pypa/manylinux2010_i686'}
155
156 build_options.update(
157 manylinux_images={'x86_64': default_manylinux_images_x86_64.get(manylinux_x86_64_image) or manylinux_x86_64_image,
158 'i686': default_manylinux_images_i686.get(manylinux_i686_image) or manylinux_i686_image},
159 )
160 elif platform == 'macos':
161 pass
162 elif platform == 'windows':
163 pass
164
165 # Python is buffering by default when running on the CI platforms, giving problems interleaving subprocess call output with unflushed calls to 'print'
166 sys.stdout = Unbuffered(sys.stdout)
167
168 print_preamble(platform, build_options)
169
170 if not os.path.exists(output_dir):
171 os.makedirs(output_dir)
172
173 if platform == 'linux':
174 cibuildwheel.linux.build(**build_options)
175 elif platform == 'windows':
176 cibuildwheel.windows.build(**build_options)
177 elif platform == 'macos':
178 cibuildwheel.macos.build(**build_options)
179 else:
180 raise Exception('Unsupported platform')
181
182
183 def detect_obsolete_options():
184 # Check the old 'MANYLINUX1_*_IMAGE' options
185 for (deprecated, alternative) in [('CIBW_MANYLINUX1_X86_64_IMAGE', 'CIBW_MANYLINUX_X86_64_IMAGE'),
186 ('CIBW_MANYLINUX1_I686_IMAGE', 'CIBW_MANYLINUX_I686_IMAGE')]:
187 if deprecated in os.environ:
188 print("'{}' has been deprecated, and will be removed in a future release. Use the option '{}' instead.".format(deprecated, alternative))
189 if alternative not in os.environ:
190 print("Using value of option '{}' as replacement for '{}'".format(deprecated, alternative))
191 os.environ[alternative] = os.environ[deprecated]
192 else:
193 print("Option '{}' is not empty. Please unset '{}'".format(alternative, deprecated))
194 exit(2)
195
196 # Check for 'manylinux1' in the 'CIBW_BUILD' and 'CIBW_SKIP' options
197 for deprecated in ['CIBW_BUILD', 'CIBW_SKIP']:
198 if deprecated in os.environ and 'manylinux1' in os.environ[deprecated]:
199 print("Build identifiers with 'manylinux1' been deprecated. Replacing all occurences of 'manylinux1' by 'manylinux' in the option '{}'".format(deprecated))
200 os.environ[deprecated] = os.environ[deprecated].replace('manylinux1', 'manylinux')
201
202
203 def print_preamble(platform, build_options):
204 print(textwrap.dedent('''
205 _ _ _ _ _ _ _
206 ___|_| |_ _ _|_| |_| |_ _ _| |_ ___ ___| |
207 | _| | . | | | | | . | | | | | -_| -_| |
208 |___|_|___|___|_|_|___|_____|_|_|___|___|_|
209 '''))
210
211 print('cibuildwheel version %s\n' % cibuildwheel.__version__)
212
213
214 print('Build options:')
215 print(' platform: %r' % platform)
216 for option, value in sorted(build_options.items()):
217 print(' %s: %r' % (option, value))
218
219 warnings = detect_warnings(platform, build_options)
220 if warnings:
221 print('\nWarnings:')
222 for warning in warnings:
223 print(' ' + warning)
224
225 print('\nHere we go!\n')
226
227
228 def print_build_identifiers(platform, build_selector):
229 if platform == 'linux':
230 python_configurations = cibuildwheel.linux.get_python_configurations(build_selector)
231 elif platform == 'windows':
232 python_configurations = cibuildwheel.windows.get_python_configurations(build_selector)
233 elif platform == 'macos':
234 python_configurations = cibuildwheel.macos.get_python_configurations(build_selector)
235 else:
236 python_configurations = []
237
238 for config in python_configurations:
239 print(config.identifier)
240
241
242 def detect_warnings(platform, build_options):
243 warnings = []
244
245 # warn about deprecated {python} and {pip}
246 for option_name in ['test_command', 'before_build']:
247 option_value = build_options.get(option_name)
248
249 if option_value:
250 if '{python}' in option_value or '{pip}' in option_value:
251 warnings.append(option_name + ": '{python}' and '{pip}' are no longer needed, and will be removed in a future release. Simply use 'python' or 'pip' instead.")
252
253 return warnings
254
255
256 if __name__ == '__main__':
257 main()
258
[end of cibuildwheel/__main__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/cibuildwheel/__main__.py b/cibuildwheel/__main__.py
--- a/cibuildwheel/__main__.py
+++ b/cibuildwheel/__main__.py
@@ -24,6 +24,12 @@
return os.environ.get(option_name, default)
+def strtobool(val):
+ if val.lower() in ('y', 'yes', 't', 'true', 'on', '1'):
+ return True
+ return False
+
+
def main():
parser = argparse.ArgumentParser(
description='Build wheels for all the platforms.',
@@ -38,7 +44,7 @@
'script is going to automatically install MacPython on your system, '
'so don\'t run on your development machine. For "windows", you need to '
'run in Windows, and it will build and test for all versions of '
- 'Python at C:\\PythonXX[-x64]. Default: auto.'))
+ 'Python. Default: auto.'))
parser.add_argument('--output-dir',
default=os.environ.get('CIBW_OUTPUT_DIR', 'wheelhouse'),
help='Destination folder for the wheels.')
@@ -59,36 +65,19 @@
if args.platform != 'auto':
platform = args.platform
else:
- platform = None
-
- if os.environ.get('TRAVIS_OS_NAME') == 'linux':
- platform = 'linux'
- elif os.environ.get('TRAVIS_OS_NAME') == 'osx':
- platform = 'macos'
- elif os.environ.get('TRAVIS_OS_NAME') == 'windows':
- platform = 'windows'
- elif 'APPVEYOR' in os.environ:
- platform = 'windows'
- elif 'BITRISE_BUILD_NUMBER' in os.environ:
- platform = 'macos'
- elif os.environ.get('CIRCLECI'):
+ ci = strtobool(os.environ.get('CI', 'false')) or 'BITRISE_BUILD_NUMBER' in os.environ or 'AZURE_HTTP_USER_AGENT' in os.environ
+ if ci:
if sys.platform.startswith('linux'):
platform = 'linux'
- elif sys.platform.startswith('darwin'):
+ elif sys.platform == 'darwin':
platform = 'macos'
- elif 'AZURE_HTTP_USER_AGENT' in os.environ:
- if os.environ['AGENT_OS'] == 'Linux':
- platform = 'linux'
- elif os.environ['AGENT_OS'] == 'Darwin':
- platform = 'macos'
- elif os.environ['AGENT_OS'] == 'Windows_NT':
+ elif sys.platform == 'win32':
platform = 'windows'
-
if platform is None:
print('cibuildwheel: Unable to detect platform. cibuildwheel should run on your CI server, '
- 'Travis CI, AppVeyor, and CircleCI are supported. You can run on your development '
- 'machine using the --platform argument. Check --help output for more '
- 'information.',
+ 'Travis CI, AppVeyor, Azure Pipelines and CircleCI are supported. You can run on your '
+ 'development machine or other CI providers using the --platform argument. Check --help '
+ 'output for more information.',
file=sys.stderr)
exit(2)
|
{"golden_diff": "diff --git a/cibuildwheel/__main__.py b/cibuildwheel/__main__.py\n--- a/cibuildwheel/__main__.py\n+++ b/cibuildwheel/__main__.py\n@@ -24,6 +24,12 @@\n return os.environ.get(option_name, default)\n \n \n+def strtobool(val):\n+ if val.lower() in ('y', 'yes', 't', 'true', 'on', '1'):\n+ return True\n+ return False\n+\n+\n def main():\n parser = argparse.ArgumentParser(\n description='Build wheels for all the platforms.',\n@@ -38,7 +44,7 @@\n 'script is going to automatically install MacPython on your system, '\n 'so don\\'t run on your development machine. For \"windows\", you need to '\n 'run in Windows, and it will build and test for all versions of '\n- 'Python at C:\\\\PythonXX[-x64]. Default: auto.'))\n+ 'Python. Default: auto.'))\n parser.add_argument('--output-dir',\n default=os.environ.get('CIBW_OUTPUT_DIR', 'wheelhouse'),\n help='Destination folder for the wheels.')\n@@ -59,36 +65,19 @@\n if args.platform != 'auto':\n platform = args.platform\n else:\n- platform = None\n-\n- if os.environ.get('TRAVIS_OS_NAME') == 'linux':\n- platform = 'linux'\n- elif os.environ.get('TRAVIS_OS_NAME') == 'osx':\n- platform = 'macos'\n- elif os.environ.get('TRAVIS_OS_NAME') == 'windows':\n- platform = 'windows'\n- elif 'APPVEYOR' in os.environ:\n- platform = 'windows'\n- elif 'BITRISE_BUILD_NUMBER' in os.environ:\n- platform = 'macos'\n- elif os.environ.get('CIRCLECI'):\n+ ci = strtobool(os.environ.get('CI', 'false')) or 'BITRISE_BUILD_NUMBER' in os.environ or 'AZURE_HTTP_USER_AGENT' in os.environ\n+ if ci:\n if sys.platform.startswith('linux'):\n platform = 'linux'\n- elif sys.platform.startswith('darwin'):\n+ elif sys.platform == 'darwin':\n platform = 'macos'\n- elif 'AZURE_HTTP_USER_AGENT' in os.environ:\n- if os.environ['AGENT_OS'] == 'Linux':\n- platform = 'linux'\n- elif os.environ['AGENT_OS'] == 'Darwin':\n- platform = 'macos'\n- elif os.environ['AGENT_OS'] == 'Windows_NT':\n+ elif sys.platform == 'win32':\n platform = 'windows'\n-\n if platform is None:\n print('cibuildwheel: Unable to detect platform. cibuildwheel should run on your CI server, '\n- 'Travis CI, AppVeyor, and CircleCI are supported. You can run on your development '\n- 'machine using the --platform argument. Check --help output for more '\n- 'information.',\n+ 'Travis CI, AppVeyor, Azure Pipelines and CircleCI are supported. You can run on your '\n+ 'development machine or other CI providers using the --platform argument. 
Check --help '\n+ 'output for more information.',\n file=sys.stderr)\n exit(2)\n", "issue": "Linux support in AppVeyor\nFYI\r\nhttps://www.appveyor.com/blog/2018/03/06/appveyor-for-linux/\r\nhttps://www.appveyor.com/blog/2018/05/15/appveyor-for-linux-is-generally-available/\r\nhttps://www.appveyor.com/docs/getting-started-with-appveyor-for-linux/\n", "before_files": [{"content": "from __future__ import print_function\nimport argparse, os, subprocess, sys, textwrap\n\nimport cibuildwheel\nimport cibuildwheel.linux, cibuildwheel.windows, cibuildwheel.macos\nfrom cibuildwheel.environment import parse_environment, EnvironmentParseError\nfrom cibuildwheel.util import BuildSelector, Unbuffered\n\ndef get_option_from_environment(option_name, platform=None, default=None):\n '''\n Returns an option from the environment, optionally scoped by the platform.\n\n Example:\n get_option_from_environment('CIBW_COLOR', platform='macos')\n\n This will return the value of CIBW_COLOR_MACOS if it exists, otherwise the value of\n CIBW_COLOR.\n '''\n if platform:\n option = os.environ.get('%s_%s' % (option_name, platform.upper()))\n if option is not None:\n return option\n\n return os.environ.get(option_name, default)\n\n\ndef main():\n parser = argparse.ArgumentParser(\n description='Build wheels for all the platforms.',\n epilog=('Most options are supplied via environment variables. '\n 'See https://github.com/joerick/cibuildwheel#options for info.'))\n\n parser.add_argument('--platform',\n choices=['auto', 'linux', 'macos', 'windows'],\n default=os.environ.get('CIBW_PLATFORM', 'auto'),\n help=('Platform to build for. For \"linux\" you need docker running, on Mac '\n 'or Linux. For \"macos\", you need a Mac machine, and note that this '\n 'script is going to automatically install MacPython on your system, '\n 'so don\\'t run on your development machine. For \"windows\", you need to '\n 'run in Windows, and it will build and test for all versions of '\n 'Python at C:\\\\PythonXX[-x64]. Default: auto.'))\n parser.add_argument('--output-dir',\n default=os.environ.get('CIBW_OUTPUT_DIR', 'wheelhouse'),\n help='Destination folder for the wheels.')\n parser.add_argument('project_dir',\n default='.',\n nargs='?',\n help=('Path to the project that you want wheels for. Default: the current '\n 'directory.'))\n\n parser.add_argument('--print-build-identifiers',\n action='store_true',\n help='Print the build identifiers matched by the current invocation and exit.')\n\n args = parser.parse_args()\n\n detect_obsolete_options()\n\n if args.platform != 'auto':\n platform = args.platform\n else:\n platform = None\n\n if os.environ.get('TRAVIS_OS_NAME') == 'linux':\n platform = 'linux'\n elif os.environ.get('TRAVIS_OS_NAME') == 'osx':\n platform = 'macos'\n elif os.environ.get('TRAVIS_OS_NAME') == 'windows':\n platform = 'windows'\n elif 'APPVEYOR' in os.environ:\n platform = 'windows'\n elif 'BITRISE_BUILD_NUMBER' in os.environ:\n platform = 'macos'\n elif os.environ.get('CIRCLECI'):\n if sys.platform.startswith('linux'):\n platform = 'linux'\n elif sys.platform.startswith('darwin'):\n platform = 'macos'\n elif 'AZURE_HTTP_USER_AGENT' in os.environ:\n if os.environ['AGENT_OS'] == 'Linux':\n platform = 'linux'\n elif os.environ['AGENT_OS'] == 'Darwin':\n platform = 'macos'\n elif os.environ['AGENT_OS'] == 'Windows_NT':\n platform = 'windows'\n\n if platform is None:\n print('cibuildwheel: Unable to detect platform. cibuildwheel should run on your CI server, '\n 'Travis CI, AppVeyor, and CircleCI are supported. 
You can run on your development '\n 'machine using the --platform argument. Check --help output for more '\n 'information.',\n file=sys.stderr)\n exit(2)\n\n output_dir = args.output_dir\n test_command = get_option_from_environment('CIBW_TEST_COMMAND', platform=platform)\n test_requires = get_option_from_environment('CIBW_TEST_REQUIRES', platform=platform, default='').split()\n test_extras = get_option_from_environment('CIBW_TEST_EXTRAS', platform=platform, default='')\n project_dir = args.project_dir\n before_build = get_option_from_environment('CIBW_BEFORE_BUILD', platform=platform)\n build_verbosity = get_option_from_environment('CIBW_BUILD_VERBOSITY', platform=platform, default='')\n build_config, skip_config = os.environ.get('CIBW_BUILD', '*'), os.environ.get('CIBW_SKIP', '')\n environment_config = get_option_from_environment('CIBW_ENVIRONMENT', platform=platform, default='')\n\n if test_extras:\n test_extras = '[{0}]'.format(test_extras)\n\n try:\n build_verbosity = min(3, max(-3, int(build_verbosity)))\n except ValueError:\n build_verbosity = 0\n\n try:\n environment = parse_environment(environment_config)\n except (EnvironmentParseError, ValueError) as e:\n print('cibuildwheel: Malformed environment option \"%s\"' % environment_config, file=sys.stderr)\n import traceback\n traceback.print_exc(None, sys.stderr)\n exit(2)\n\n build_selector = BuildSelector(build_config, skip_config)\n\n # Add CIBUILDWHEEL environment variable\n # This needs to be passed on to the docker container in linux.py\n os.environ['CIBUILDWHEEL'] = '1'\n\n if not os.path.exists(os.path.join(project_dir, 'setup.py')):\n print('cibuildwheel: Could not find setup.py at root of project', file=sys.stderr)\n exit(2)\n\n if args.print_build_identifiers:\n print_build_identifiers(platform, build_selector)\n exit(0)\n\n build_options = dict(\n project_dir=project_dir,\n output_dir=output_dir,\n test_command=test_command,\n test_requires=test_requires,\n test_extras=test_extras,\n before_build=before_build,\n build_verbosity=build_verbosity,\n build_selector=build_selector,\n environment=environment,\n )\n\n if platform == 'linux':\n manylinux_x86_64_image = os.environ.get('CIBW_MANYLINUX_X86_64_IMAGE', 'manylinux2010')\n manylinux_i686_image = os.environ.get('CIBW_MANYLINUX_I686_IMAGE', 'manylinux2010')\n\n default_manylinux_images_x86_64 = {'manylinux1': 'quay.io/pypa/manylinux1_x86_64',\n 'manylinux2010': 'quay.io/pypa/manylinux2010_x86_64'}\n default_manylinux_images_i686 = {'manylinux1': 'quay.io/pypa/manylinux1_i686',\n 'manylinux2010': 'quay.io/pypa/manylinux2010_i686'}\n\n build_options.update(\n manylinux_images={'x86_64': default_manylinux_images_x86_64.get(manylinux_x86_64_image) or manylinux_x86_64_image,\n 'i686': default_manylinux_images_i686.get(manylinux_i686_image) or manylinux_i686_image},\n )\n elif platform == 'macos':\n pass\n elif platform == 'windows':\n pass\n\n # Python is buffering by default when running on the CI platforms, giving problems interleaving subprocess call output with unflushed calls to 'print'\n sys.stdout = Unbuffered(sys.stdout)\n\n print_preamble(platform, build_options)\n\n if not os.path.exists(output_dir):\n os.makedirs(output_dir)\n\n if platform == 'linux':\n cibuildwheel.linux.build(**build_options)\n elif platform == 'windows':\n cibuildwheel.windows.build(**build_options)\n elif platform == 'macos':\n cibuildwheel.macos.build(**build_options)\n else:\n raise Exception('Unsupported platform')\n\n\ndef detect_obsolete_options():\n # Check the old 'MANYLINUX1_*_IMAGE' 
options\n for (deprecated, alternative) in [('CIBW_MANYLINUX1_X86_64_IMAGE', 'CIBW_MANYLINUX_X86_64_IMAGE'),\n ('CIBW_MANYLINUX1_I686_IMAGE', 'CIBW_MANYLINUX_I686_IMAGE')]:\n if deprecated in os.environ:\n print(\"'{}' has been deprecated, and will be removed in a future release. Use the option '{}' instead.\".format(deprecated, alternative))\n if alternative not in os.environ:\n print(\"Using value of option '{}' as replacement for '{}'\".format(deprecated, alternative))\n os.environ[alternative] = os.environ[deprecated]\n else:\n print(\"Option '{}' is not empty. Please unset '{}'\".format(alternative, deprecated))\n exit(2)\n\n # Check for 'manylinux1' in the 'CIBW_BUILD' and 'CIBW_SKIP' options\n for deprecated in ['CIBW_BUILD', 'CIBW_SKIP']:\n if deprecated in os.environ and 'manylinux1' in os.environ[deprecated]:\n print(\"Build identifiers with 'manylinux1' been deprecated. Replacing all occurences of 'manylinux1' by 'manylinux' in the option '{}'\".format(deprecated))\n os.environ[deprecated] = os.environ[deprecated].replace('manylinux1', 'manylinux')\n\n\ndef print_preamble(platform, build_options):\n print(textwrap.dedent('''\n _ _ _ _ _ _ _\n ___|_| |_ _ _|_| |_| |_ _ _| |_ ___ ___| |\n | _| | . | | | | | . | | | | | -_| -_| |\n |___|_|___|___|_|_|___|_____|_|_|___|___|_|\n '''))\n\n print('cibuildwheel version %s\\n' % cibuildwheel.__version__)\n\n\n print('Build options:')\n print(' platform: %r' % platform)\n for option, value in sorted(build_options.items()):\n print(' %s: %r' % (option, value))\n\n warnings = detect_warnings(platform, build_options)\n if warnings:\n print('\\nWarnings:')\n for warning in warnings:\n print(' ' + warning)\n\n print('\\nHere we go!\\n')\n\n\ndef print_build_identifiers(platform, build_selector):\n if platform == 'linux':\n python_configurations = cibuildwheel.linux.get_python_configurations(build_selector)\n elif platform == 'windows':\n python_configurations = cibuildwheel.windows.get_python_configurations(build_selector)\n elif platform == 'macos':\n python_configurations = cibuildwheel.macos.get_python_configurations(build_selector)\n else:\n python_configurations = []\n\n for config in python_configurations:\n print(config.identifier)\n\n\ndef detect_warnings(platform, build_options):\n warnings = []\n\n # warn about deprecated {python} and {pip}\n for option_name in ['test_command', 'before_build']:\n option_value = build_options.get(option_name)\n\n if option_value:\n if '{python}' in option_value or '{pip}' in option_value:\n warnings.append(option_name + \": '{python}' and '{pip}' are no longer needed, and will be removed in a future release. Simply use 'python' or 'pip' instead.\")\n\n return warnings\n\n\nif __name__ == '__main__':\n main()\n", "path": "cibuildwheel/__main__.py"}]}
| 3,843 | 732 |
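The detection rewrite in the diff above is what lets AppVeyor-for-Linux (and other providers) work without a dedicated branch per CI service: most hosted providers export `CI=true`, so a generic check plus `sys.platform` is enough. Restated as a standalone sketch for illustration only; the real logic lives in `cibuildwheel/__main__.py`:

```python
# Standalone restatement of the platform detection the patch introduces.
import os
import sys


def strtobool(val):
    # Same permissive truth test the patch adds for the CI variable
    # (AppVeyor, Travis and CircleCI all export some spelling of CI=true).
    return val.lower() in ('y', 'yes', 't', 'true', 'on', '1')


def detect_platform():
    ci = (
        strtobool(os.environ.get('CI', 'false'))
        or 'BITRISE_BUILD_NUMBER' in os.environ
        or 'AZURE_HTTP_USER_AGENT' in os.environ
    )
    if not ci:
        return None
    if sys.platform.startswith('linux'):
        return 'linux'
    if sys.platform == 'darwin':
        return 'macos'
    if sys.platform == 'win32':
        return 'windows'
    return None
```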
gh_patches_debug_15830
|
rasdani/github-patches
|
git_diff
|
Parsl__parsl-666
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add "all" install option
We have several subpackages now. I think it would be good if we had an option to install all subpackages.
</issue>
<code>
[start of setup.py]
1 from setuptools import setup, find_packages
2
3 with open('parsl/version.py') as f:
4 exec(f.read())
5
6 with open('requirements.txt') as f:
7 install_requires = f.readlines()
8
9 setup(
10 name='parsl',
11 version=VERSION,
12 description='Simple data dependent workflows in Python',
13 long_description='Simple parallel workflows system for Python',
14 url='https://github.com/Parsl/parsl',
15 author='The Parsl Team',
16 author_email='[email protected]',
17 license='Apache 2.0',
18 download_url='https://github.com/Parsl/parsl/archive/{}.tar.gz'.format(VERSION),
19 package_data={'': ['LICENSE']},
20 packages=find_packages(),
21 install_requires=install_requires,
22 scripts = ['parsl/executors/high_throughput/process_worker_pool.py',
23 'parsl/executors/extreme_scale/mpi_worker_pool.py'],
24 extras_require = {
25 'db_logging' : ['CMRESHandler', 'psutil', 'sqlalchemy'],
26 'aws' : ['boto3'],
27 'jetstream' : ['python-novaclient'],
28 'extreme_scale' : ['mpi4py'],
29 'docs' : ['nbsphinx', 'sphinx_rtd_theme'],
30 'google_cloud' : ['google-auth', 'google-api-python-client']
31 },
32 classifiers = [
33 # Maturity
34 'Development Status :: 3 - Alpha',
35 # Intended audience
36 'Intended Audience :: Developers',
37 # Licence, must match with licence above
38 'License :: OSI Approved :: Apache Software License',
39 # Python versions supported
40 'Programming Language :: Python :: 3.5',
41 'Programming Language :: Python :: 3.6',
42 ],
43 keywords=['Workflows', 'Scientific computing'],
44 )
45
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -24,10 +24,17 @@
extras_require = {
'db_logging' : ['CMRESHandler', 'psutil', 'sqlalchemy'],
'aws' : ['boto3'],
- 'jetstream' : ['python-novaclient'],
+ # Jetstream is deprecated since the interface has not been maintained.
+ # 'jetstream' : ['python-novaclient'],
'extreme_scale' : ['mpi4py'],
'docs' : ['nbsphinx', 'sphinx_rtd_theme'],
- 'google_cloud' : ['google-auth', 'google-api-python-client']
+ 'google_cloud' : ['google-auth', 'google-api-python-client'],
+ 'all' : ['CMRESHandler', 'psutil', 'sqlalchemy',
+ 'boto3',
+ 'mpi4py',
+ 'nbsphinx', 'sphinx_rtd_theme',
+ 'google-auth', 'google-api-python-client']
+
},
classifiers = [
# Maturity
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -24,10 +24,17 @@\n extras_require = {\n 'db_logging' : ['CMRESHandler', 'psutil', 'sqlalchemy'],\n 'aws' : ['boto3'],\n- 'jetstream' : ['python-novaclient'],\n+ # Jetstream is deprecated since the interface has not been maintained.\n+ # 'jetstream' : ['python-novaclient'],\n 'extreme_scale' : ['mpi4py'],\n 'docs' : ['nbsphinx', 'sphinx_rtd_theme'],\n- 'google_cloud' : ['google-auth', 'google-api-python-client']\n+ 'google_cloud' : ['google-auth', 'google-api-python-client'],\n+ 'all' : ['CMRESHandler', 'psutil', 'sqlalchemy',\n+ 'boto3',\n+ 'mpi4py',\n+ 'nbsphinx', 'sphinx_rtd_theme',\n+ 'google-auth', 'google-api-python-client']\n+\n },\n classifiers = [\n # Maturity\n", "issue": "Add \"all\" install option\nWe have several subpackages now. I think it would be good if we had an option to install all subpackages. \n", "before_files": [{"content": "from setuptools import setup, find_packages\n\nwith open('parsl/version.py') as f:\n exec(f.read())\n\nwith open('requirements.txt') as f:\n install_requires = f.readlines()\n\nsetup(\n name='parsl',\n version=VERSION,\n description='Simple data dependent workflows in Python',\n long_description='Simple parallel workflows system for Python',\n url='https://github.com/Parsl/parsl',\n author='The Parsl Team',\n author_email='[email protected]',\n license='Apache 2.0',\n download_url='https://github.com/Parsl/parsl/archive/{}.tar.gz'.format(VERSION),\n package_data={'': ['LICENSE']},\n packages=find_packages(),\n install_requires=install_requires,\n scripts = ['parsl/executors/high_throughput/process_worker_pool.py',\n 'parsl/executors/extreme_scale/mpi_worker_pool.py'],\n extras_require = {\n 'db_logging' : ['CMRESHandler', 'psutil', 'sqlalchemy'],\n 'aws' : ['boto3'],\n 'jetstream' : ['python-novaclient'],\n 'extreme_scale' : ['mpi4py'],\n 'docs' : ['nbsphinx', 'sphinx_rtd_theme'],\n 'google_cloud' : ['google-auth', 'google-api-python-client']\n },\n classifiers = [\n # Maturity\n 'Development Status :: 3 - Alpha',\n # Intended audience\n 'Intended Audience :: Developers',\n # Licence, must match with licence above\n 'License :: OSI Approved :: Apache Software License',\n # Python versions supported\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n ],\n keywords=['Workflows', 'Scientific computing'],\n)\n", "path": "setup.py"}]}
| 1,028 | 244 |
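With the patch above, users get the requested one-shot install via `pip install "parsl[all]"`. Note that the merged change keeps `all` as a hand-written union of the other extras (with the deprecated `jetstream` extra commented out), so the two lists can drift apart as extras change. A small alternative sketch, not what the patch does, that derives the union instead:

```python
# Alternative sketch only (the merged patch lists the packages explicitly):
# derive the 'all' extra from the other extras so the lists cannot drift.
extras_require = {
    'db_logging': ['CMRESHandler', 'psutil', 'sqlalchemy'],
    'aws': ['boto3'],
    'extreme_scale': ['mpi4py'],
    'docs': ['nbsphinx', 'sphinx_rtd_theme'],
    'google_cloud': ['google-auth', 'google-api-python-client'],
}
extras_require['all'] = sorted({pkg for group in extras_require.values() for pkg in group})
```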
gh_patches_debug_15111
|
rasdani/github-patches
|
git_diff
|
wagtail__wagtail-2621
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add default icon for TableBlock
As per https://github.com/torchbox/wagtail/pull/1705#issuecomment-216053655. Best to do this after #2417 is merged, to avoid conflicts.
</issue>
<code>
[start of wagtail/contrib/table_block/fields.py]
1 from __future__ import absolute_import, unicode_literals
2
3 import json
4
5 from django import forms
6 from django.template.loader import render_to_string
7 from django.utils import translation
8 from django.utils.functional import cached_property
9
10 from wagtail.utils.widgets import WidgetWithScript
11 from wagtail.wagtailcore.blocks import FieldBlock
12
13
14 class TableInput(WidgetWithScript, forms.HiddenInput):
15
16 def __init__(self, table_options=None, attrs=None):
17 self.table_options = table_options
18 super(TableInput, self).__init__(attrs=attrs)
19
20 def render(self, name, value, attrs=None):
21 original_field_html = super(TableInput, self).render(name, value, attrs)
22 return render_to_string("table_block/widgets/table.html", {
23 'original_field_html': original_field_html,
24 'attrs': attrs,
25 'value': value,
26 })
27
28 def render_js_init(self, id_, name, value):
29 return "initTable({0}, {1});".format(json.dumps(id_), json.dumps(self.table_options))
30
31
32
33 class TableBlock(FieldBlock):
34 def __init__(self, required=True, help_text=None, table_options=None, **kwargs):
35 # CharField's 'label' and 'initial' parameters are not exposed, as Block handles that functionality
36 # natively (via 'label' and 'default')
37 # CharField's 'max_length' and 'min_length' parameters are not exposed as table data needs to
38 # have arbitrary length
39 # table_options can contain any valid handsontable options: http://docs.handsontable.com/0.18.0/Options.html
40 self.field_options = {'required': required, 'help_text': help_text}
41
42 language = translation.get_language()
43 if language is not None and len(language) > 2:
44 language = language[:2]
45
46 default_table_options = {
47 'minSpareRows': 0,
48 'startRows': 3,
49 'startCols': 3,
50 'colHeaders': False,
51 'rowHeaders': False,
52 'contextMenu': True,
53 'editor': 'text',
54 'stretchH': 'all',
55 'height': 108,
56 'language': language,
57 'renderer': 'text',
58 'autoColumnSize': False,
59 }
60 if table_options is not None:
61 default_table_options.update(table_options)
62 self.table_options = default_table_options
63 super(TableBlock, self).__init__(**kwargs)
64
65 @cached_property
66 def field(self):
67 return forms.CharField(widget=TableInput(table_options=self.table_options), **self.field_options)
68
69 def value_from_form(self, value):
70 return json.loads(value)
71
72 def value_for_form(self, value):
73 return json.dumps(value)
74
75 def is_html_renderer(self):
76 return self.table_options['renderer'] == 'html'
77
78 def render(self, value):
79 template = getattr(self.meta, 'template', None)
80 if template and value:
81 table_header = value['data'][0] if value.get('data', None) and len(value['data']) > 0 and value.get('first_row_is_table_header', False) else None
82 first_col_is_header = value.get('first_col_is_header', False)
83 context = {
84 'self': value,
85 self.TEMPLATE_VAR: value,
86 'table_header': table_header,
87 'first_col_is_header': first_col_is_header,
88 'html_renderer': self.is_html_renderer(),
89 'data': value['data'][1:] if table_header else value.get('data', [])
90 }
91 return render_to_string(template, context)
92 else:
93 return self.render_basic(value)
94
95 @property
96 def media(self):
97 return forms.Media(
98 css={'all': ['table_block/css/vendor/handsontable-0.24.2.full.min.css']},
99 js=['table_block/js/vendor/handsontable-0.24.2.full.min.js', 'table_block/js/table.js']
100 )
101
102 class Meta:
103 default = None
104 template = 'table_block/blocks/table.html'
105
[end of wagtail/contrib/table_block/fields.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/wagtail/contrib/table_block/fields.py b/wagtail/contrib/table_block/fields.py
--- a/wagtail/contrib/table_block/fields.py
+++ b/wagtail/contrib/table_block/fields.py
@@ -29,7 +29,6 @@
return "initTable({0}, {1});".format(json.dumps(id_), json.dumps(self.table_options))
-
class TableBlock(FieldBlock):
def __init__(self, required=True, help_text=None, table_options=None, **kwargs):
# CharField's 'label' and 'initial' parameters are not exposed, as Block handles that functionality
@@ -102,3 +101,4 @@
class Meta:
default = None
template = 'table_block/blocks/table.html'
+ icon = "table"
|
{"golden_diff": "diff --git a/wagtail/contrib/table_block/fields.py b/wagtail/contrib/table_block/fields.py\n--- a/wagtail/contrib/table_block/fields.py\n+++ b/wagtail/contrib/table_block/fields.py\n@@ -29,7 +29,6 @@\n return \"initTable({0}, {1});\".format(json.dumps(id_), json.dumps(self.table_options))\n \n \n-\n class TableBlock(FieldBlock):\n def __init__(self, required=True, help_text=None, table_options=None, **kwargs):\n # CharField's 'label' and 'initial' parameters are not exposed, as Block handles that functionality\n@@ -102,3 +101,4 @@\n class Meta:\n default = None\n template = 'table_block/blocks/table.html'\n+ icon = \"table\"\n", "issue": "Add default icon for TableBlock\nAs per https://github.com/torchbox/wagtail/pull/1705#issuecomment-216053655. Best to do this after #2417 is merged, to avoid conflicts.\n\n", "before_files": [{"content": "from __future__ import absolute_import, unicode_literals\n\nimport json\n\nfrom django import forms\nfrom django.template.loader import render_to_string\nfrom django.utils import translation\nfrom django.utils.functional import cached_property\n\nfrom wagtail.utils.widgets import WidgetWithScript\nfrom wagtail.wagtailcore.blocks import FieldBlock\n\n\nclass TableInput(WidgetWithScript, forms.HiddenInput):\n\n def __init__(self, table_options=None, attrs=None):\n self.table_options = table_options\n super(TableInput, self).__init__(attrs=attrs)\n\n def render(self, name, value, attrs=None):\n original_field_html = super(TableInput, self).render(name, value, attrs)\n return render_to_string(\"table_block/widgets/table.html\", {\n 'original_field_html': original_field_html,\n 'attrs': attrs,\n 'value': value,\n })\n\n def render_js_init(self, id_, name, value):\n return \"initTable({0}, {1});\".format(json.dumps(id_), json.dumps(self.table_options))\n\n\n\nclass TableBlock(FieldBlock):\n def __init__(self, required=True, help_text=None, table_options=None, **kwargs):\n # CharField's 'label' and 'initial' parameters are not exposed, as Block handles that functionality\n # natively (via 'label' and 'default')\n # CharField's 'max_length' and 'min_length' parameters are not exposed as table data needs to\n # have arbitrary length\n # table_options can contain any valid handsontable options: http://docs.handsontable.com/0.18.0/Options.html\n self.field_options = {'required': required, 'help_text': help_text}\n\n language = translation.get_language()\n if language is not None and len(language) > 2:\n language = language[:2]\n\n default_table_options = {\n 'minSpareRows': 0,\n 'startRows': 3,\n 'startCols': 3,\n 'colHeaders': False,\n 'rowHeaders': False,\n 'contextMenu': True,\n 'editor': 'text',\n 'stretchH': 'all',\n 'height': 108,\n 'language': language,\n 'renderer': 'text',\n 'autoColumnSize': False,\n }\n if table_options is not None:\n default_table_options.update(table_options)\n self.table_options = default_table_options\n super(TableBlock, self).__init__(**kwargs)\n\n @cached_property\n def field(self):\n return forms.CharField(widget=TableInput(table_options=self.table_options), **self.field_options)\n\n def value_from_form(self, value):\n return json.loads(value)\n\n def value_for_form(self, value):\n return json.dumps(value)\n\n def is_html_renderer(self):\n return self.table_options['renderer'] == 'html'\n\n def render(self, value):\n template = getattr(self.meta, 'template', None)\n if template and value:\n table_header = value['data'][0] if value.get('data', None) and len(value['data']) > 0 and 
value.get('first_row_is_table_header', False) else None\n first_col_is_header = value.get('first_col_is_header', False)\n context = {\n 'self': value,\n self.TEMPLATE_VAR: value,\n 'table_header': table_header,\n 'first_col_is_header': first_col_is_header,\n 'html_renderer': self.is_html_renderer(),\n 'data': value['data'][1:] if table_header else value.get('data', [])\n }\n return render_to_string(template, context)\n else:\n return self.render_basic(value)\n\n @property\n def media(self):\n return forms.Media(\n css={'all': ['table_block/css/vendor/handsontable-0.24.2.full.min.css']},\n js=['table_block/js/vendor/handsontable-0.24.2.full.min.js', 'table_block/js/table.js']\n )\n\n class Meta:\n default = None\n template = 'table_block/blocks/table.html'\n", "path": "wagtail/contrib/table_block/fields.py"}]}
| 1,688 | 180 |
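The one-line Meta addition above is all that is needed for TableBlock to pick up a default icon in the StreamField block chooser. Projects can still override it (or any other Meta option) by subclassing; a sketch, using the import path shown in this record and a hypothetical project template name:

```python
# Sketch only: overriding block Meta options in project code. The template
# path is a hypothetical example, not something the patch adds.
from wagtail.contrib.table_block.fields import TableBlock


class PricingTableBlock(TableBlock):
    class Meta:
        icon = 'table'                          # same default the patch sets upstream
        template = 'blocks/pricing_table.html'  # hypothetical project template
```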
gh_patches_debug_17716
|
rasdani/github-patches
|
git_diff
|
beetbox__beets-4900
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
scrub: The `auto` option should be a no-op when `import.write` is disabled
### Problem
I was importing new tracks with `import.write` disabled and seeing all tracks populated with new tags. With help from the [forum](https://discourse.beets.io/t/tags-are-being-written-to-imported-files-even-though-import-write-no/2068/4), I was able to determine that with `scrub.auto` enabled and `import.write` disabled, the scrub plugin removes tags and rewrites them from the database.
The [scrub documentation](https://beets.readthedocs.io/en/stable/plugins/scrub.html#automatic-scrubbing) indicates that scrubbing on import is only active if `import.write` is also enabled.
> When importing new files (with import.write turned on) or modifying files’ tags with the beet modify command, beets will first strip all types of tags entirely and then write the database-tracked metadata to the file.
I'd think a resolution could be:
- Update the documentation to indicate scrub doesn't read the `import.write` setting.
- Or scrub _should_ read that setting.
- Or, even better, have a separate scrub configuration that disables restoring tags after the scrub.
### Setup
* OS: Docker https://hub.docker.com/r/linuxserver/beets
* Python version: 3.9.7
* beets version: 1.6.0
* Turning off plugins made problem go away (yes/no): yes
My configuration (output of `beet config`) is:
```yaml
lyrics:
bing_lang_from: []
auto: no
fallback: ''
sources: genius
bing_client_secret: REDACTED
bing_lang_to:
google_API_key: REDACTED
google_engine_ID: REDACTED
genius_api_key: REDACTED
force: no
local: no
plugins:
- fetchart
- embedart
- convert
- scrub
- replaygain
- lyrics
- lastgenre
- the
- inline
- chroma
- web
- permissions
directory: /music
library: /config/musiclibrary.blb
art_filename: _cover
threaded: yes
original_date: yes
per_disc_numbering: yes
chroma:
auto: yes
embedart:
auto: no
maxwidth: 0
compare_threshold: 0
ifempty: no
remove_art_file: no
quality: 0
convert:
auto: no
dest: /converted
format: aac
formats:
aac:
command: ffmpeg -i $source -c:a aac -vbr 4 $dest
extension: m4a
mp3:
command: /config/gapless-mp3.sh $source $dest
extension: mp3
alac:
command: ffmpeg -i $source -y -vn -acodec alac $dest
extension: m4a
flac: ffmpeg -i $source -y -vn -acodec flac $dest
opus: ffmpeg -i $source -y -vn -acodec libopus -ab 96k $dest
ogg: ffmpeg -i $source -y -vn -acodec libvorbis -aq 3 $dest
wma: ffmpeg -i $source -y -vn -acodec wmav2 -vn $dest
pretend: no
link: no
hardlink: no
threads: 8
id3v23: inherit
max_bitrate: 500
tmpdir:
quiet: no
embed: yes
paths: {}
no_convert: ''
never_convert_lossy_files: no
copy_album_art: no
album_art_maxwidth: 0
delete_originals: no
item_fields:
disc_and_track: u'%02i.%02i' % (disc, track) if disctotal > 1 else u'%02i' % (track)
the:
a: no
the: yes
format: '{0}, {1}'
strip: no
patterns: []
paths:
default: '%the{$albumartist}/$original_year - $album%aunique{}/$disc_and_track $title'
singleton: '%the{$albumartist}/_singles/$original_year - $title'
comp: _Compilations/$original_year - $album%aunique{}/$disc_and_track $title
albumtype_soundtrack: _Soundtracks/$original_year - $album%aunique{}/$disc_and_track $title
permissions:
file: 664
dir: 775
import:
write: no
copy: no
move: yes
resume: ask
incremental: no
quiet_fallback: skip
timid: no
log: /config/beet.log
lastgenre:
auto: yes
source: album
whitelist: yes
min_weight: 10
count: 1
fallback:
canonical: no
force: yes
separator: ', '
prefer_specific: no
title_case: yes
fetchart:
auto: yes
store_source: yes
minwidth: 600
sources:
- filesystem
- coverart
- itunes
- albumart
- amazon
maxwidth: 0
quality: 0
max_filesize: 0
enforce_ratio: no
cautious: no
cover_names:
- cover
- front
- art
- album
- folder
google_key: REDACTED
google_engine: 001442825323518660753:hrh5ch1gjzm
fanarttv_key: REDACTED
lastfm_key: REDACTED
high_resolution: no
deinterlace: no
cover_format:
replaygain:
auto: yes
backend: ffmpeg
overwrite: no
threads: 8
parallel_on_import: no
per_disc: no
peak: 'true'
targetlevel: 89
r128: [Opus]
r128_targetlevel: 84
scrub:
auto: yes
replace:
^\.: _
'[\x00-\x1f]': _
'[<>:"\?\*\|]': _
'[\xE8-\xEB]': e
'[\xEC-\xEF]': i
'[\xE2-\xE6]': a
'[\xF2-\xF6]': o
'[\xF8]': o
\.$: _
\s+$: ''
web:
host: 0.0.0.0
port: 8337
cors: ''
cors_supports_credentials: no
reverse_proxy: no
include_paths: no
readonly: yes
pathfields: {}
album_fields: {}
```
</issue>
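Of the three resolutions the reporter lists, the second maps most directly onto the plugin code reproduced below: the auto listener is registered unconditionally, so one possible direction — a sketch only, not the project's decided fix — is to gate that registration on `import.write`, which is what the quoted documentation already implies:

```python
# Sketch only: gate auto-scrub on import.write so behaviour matches the docs.
# Names mirror the ScrubPlugin code in the listing below.
from beets import config


def should_register_import_scrub(plugin_config):
    """Return True only when scrubbing on import will be followed by a write."""
    return plugin_config['auto'].get(bool) and config['import']['write'].get(bool)
```

The third resolution (a separate option that skips restoring tags) would instead become a new config key consulted where `_scrub_item` decides whether to restore.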
<code>
[start of beetsplug/scrub.py]
1 # This file is part of beets.
2 # Copyright 2016, Adrian Sampson.
3 #
4 # Permission is hereby granted, free of charge, to any person obtaining
5 # a copy of this software and associated documentation files (the
6 # "Software"), to deal in the Software without restriction, including
7 # without limitation the rights to use, copy, modify, merge, publish,
8 # distribute, sublicense, and/or sell copies of the Software, and to
9 # permit persons to whom the Software is furnished to do so, subject to
10 # the following conditions:
11 #
12 # The above copyright notice and this permission notice shall be
13 # included in all copies or substantial portions of the Software.
14
15 """Cleans extraneous metadata from files' tags via a command or
16 automatically whenever tags are written.
17 """
18
19
20 from beets.plugins import BeetsPlugin
21 from beets import ui
22 from beets import util
23 from beets import config
24 import mediafile
25 import mutagen
26
27 _MUTAGEN_FORMATS = {
28 'asf': 'ASF',
29 'apev2': 'APEv2File',
30 'flac': 'FLAC',
31 'id3': 'ID3FileType',
32 'mp3': 'MP3',
33 'mp4': 'MP4',
34 'oggflac': 'OggFLAC',
35 'oggspeex': 'OggSpeex',
36 'oggtheora': 'OggTheora',
37 'oggvorbis': 'OggVorbis',
38 'oggopus': 'OggOpus',
39 'trueaudio': 'TrueAudio',
40 'wavpack': 'WavPack',
41 'monkeysaudio': 'MonkeysAudio',
42 'optimfrog': 'OptimFROG',
43 }
44
45
46 class ScrubPlugin(BeetsPlugin):
47 """Removes extraneous metadata from files' tags."""
48 def __init__(self):
49 super().__init__()
50 self.config.add({
51 'auto': True,
52 })
53
54 if self.config['auto']:
55 self.register_listener("import_task_files", self.import_task_files)
56
57 def commands(self):
58 def scrub_func(lib, opts, args):
59 # Walk through matching files and remove tags.
60 for item in lib.items(ui.decargs(args)):
61 self._log.info('scrubbing: {0}',
62 util.displayable_path(item.path))
63 self._scrub_item(item, opts.write)
64
65 scrub_cmd = ui.Subcommand('scrub', help='clean audio tags')
66 scrub_cmd.parser.add_option(
67 '-W', '--nowrite', dest='write',
68 action='store_false', default=True,
69 help='leave tags empty')
70 scrub_cmd.func = scrub_func
71
72 return [scrub_cmd]
73
74 @staticmethod
75 def _mutagen_classes():
76 """Get a list of file type classes from the Mutagen module.
77 """
78 classes = []
79 for modname, clsname in _MUTAGEN_FORMATS.items():
80 mod = __import__(f'mutagen.{modname}',
81 fromlist=[clsname])
82 classes.append(getattr(mod, clsname))
83 return classes
84
85 def _scrub(self, path):
86 """Remove all tags from a file.
87 """
88 for cls in self._mutagen_classes():
89 # Try opening the file with this type, but just skip in the
90 # event of any error.
91 try:
92 f = cls(util.syspath(path))
93 except Exception:
94 continue
95 if f.tags is None:
96 continue
97
98 # Remove the tag for this type.
99 try:
100 f.delete()
101 except NotImplementedError:
102 # Some Mutagen metadata subclasses (namely, ASFTag) do not
103 # support .delete(), presumably because it is impossible to
104 # remove them. In this case, we just remove all the tags.
105 for tag in f.keys():
106 del f[tag]
107 f.save()
108 except (OSError, mutagen.MutagenError) as exc:
109 self._log.error('could not scrub {0}: {1}',
110 util.displayable_path(path), exc)
111
112 def _scrub_item(self, item, restore=True):
113 """Remove tags from an Item's associated file and, if `restore`
114 is enabled, write the database's tags back to the file.
115 """
116 # Get album art if we need to restore it.
117 if restore:
118 try:
119 mf = mediafile.MediaFile(util.syspath(item.path),
120 config['id3v23'].get(bool))
121 except mediafile.UnreadableFileError as exc:
122 self._log.error('could not open file to scrub: {0}',
123 exc)
124 return
125 images = mf.images
126
127 # Remove all tags.
128 self._scrub(item.path)
129
130 # Restore tags, if enabled.
131 if restore:
132 self._log.debug('writing new tags after scrub')
133 item.try_write()
134 if images:
135 self._log.debug('restoring art')
136 try:
137 mf = mediafile.MediaFile(util.syspath(item.path),
138 config['id3v23'].get(bool))
139 mf.images = images
140 mf.save()
141 except mediafile.UnreadableFileError as exc:
142 self._log.error('could not write tags: {0}', exc)
143
144 def import_task_files(self, session, task):
145 """Automatically scrub imported files."""
146 for item in task.imported_items():
147 self._log.debug('auto-scrubbing {0}',
148 util.displayable_path(item.path))
149 self._scrub_item(item)
150
[end of beetsplug/scrub.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/beetsplug/scrub.py b/beetsplug/scrub.py
--- a/beetsplug/scrub.py
+++ b/beetsplug/scrub.py
@@ -109,7 +109,7 @@
self._log.error('could not scrub {0}: {1}',
util.displayable_path(path), exc)
- def _scrub_item(self, item, restore=True):
+ def _scrub_item(self, item, restore):
"""Remove tags from an Item's associated file and, if `restore`
is enabled, write the database's tags back to the file.
"""
@@ -146,4 +146,4 @@
for item in task.imported_items():
self._log.debug('auto-scrubbing {0}',
util.displayable_path(item.path))
- self._scrub_item(item)
+ self._scrub_item(item, ui.should_write())
|
{"golden_diff": "diff --git a/beetsplug/scrub.py b/beetsplug/scrub.py\n--- a/beetsplug/scrub.py\n+++ b/beetsplug/scrub.py\n@@ -109,7 +109,7 @@\n self._log.error('could not scrub {0}: {1}',\n util.displayable_path(path), exc)\n \n- def _scrub_item(self, item, restore=True):\n+ def _scrub_item(self, item, restore):\n \"\"\"Remove tags from an Item's associated file and, if `restore`\n is enabled, write the database's tags back to the file.\n \"\"\"\n@@ -146,4 +146,4 @@\n for item in task.imported_items():\n self._log.debug('auto-scrubbing {0}',\n util.displayable_path(item.path))\n- self._scrub_item(item)\n+ self._scrub_item(item, ui.should_write())\n", "issue": "scrub: The `auto` option should be a no-op when `import.write` is disabled\n### Problem\r\n\r\nI was importing new tracks with `import.write` disabled and seeing all tracks populated with new tags. With help from the [forum](https://discourse.beets.io/t/tags-are-being-written-to-imported-files-even-though-import-write-no/2068/4), I was able to determine that with `scrub.auto` enabled and `import.write` disabled, the scrub plugin removes tags and rewrites them from the database.\r\nThe [scrub documentation](https://beets.readthedocs.io/en/stable/plugins/scrub.html#automatic-scrubbing) indicates that scrubbing on import is only active if `import.write` is also enabled.\r\n> When importing new files (with import.write turned on) or modifying files\u2019 tags with the beet modify command, beets will first strip all types of tags entirely and then write the database-tracked metadata to the file.\r\n\r\nI'd think a resolution could be:\r\n- Update the documentation to indicate scrub doesn't read the `import.write` setting.\r\n- Or scrub _should_ read that setting.\r\n- Or, even better, have a separate scrub configuration that disables restoring tags after the scrub.\r\n\r\n### Setup\r\n\r\n* OS: Docker https://hub.docker.com/r/linuxserver/beets\r\n* Python version: 3.9.7\r\n* beets version: 1.6.0\r\n* Turning off plugins made problem go away (yes/no): yes\r\n\r\nMy configuration (output of `beet config`) is:\r\n\r\n```yaml\r\nlyrics:\r\n bing_lang_from: []\r\n auto: no\r\n fallback: ''\r\n sources: genius\r\n bing_client_secret: REDACTED\r\n bing_lang_to:\r\n google_API_key: REDACTED\r\n google_engine_ID: REDACTED\r\n genius_api_key: REDACTED\r\n force: no\r\n local: no\r\n\r\nplugins:\r\n- fetchart\r\n- embedart\r\n- convert\r\n- scrub\r\n- replaygain\r\n- lyrics\r\n- lastgenre\r\n- the\r\n- inline\r\n- chroma\r\n- web\r\n- permissions\r\ndirectory: /music\r\nlibrary: /config/musiclibrary.blb\r\nart_filename: _cover\r\nthreaded: yes\r\noriginal_date: yes\r\nper_disc_numbering: yes\r\nchroma:\r\n auto: yes\r\nembedart:\r\n auto: no\r\n maxwidth: 0\r\n compare_threshold: 0\r\n ifempty: no\r\n remove_art_file: no\r\n quality: 0\r\nconvert:\r\n auto: no\r\n dest: /converted\r\n format: aac\r\n formats:\r\n aac:\r\n command: ffmpeg -i $source -c:a aac -vbr 4 $dest\r\n extension: m4a\r\n mp3:\r\n command: /config/gapless-mp3.sh $source $dest\r\n extension: mp3\r\n alac:\r\n command: ffmpeg -i $source -y -vn -acodec alac $dest\r\n extension: m4a\r\n flac: ffmpeg -i $source -y -vn -acodec flac $dest\r\n opus: ffmpeg -i $source -y -vn -acodec libopus -ab 96k $dest\r\n ogg: ffmpeg -i $source -y -vn -acodec libvorbis -aq 3 $dest\r\n wma: ffmpeg -i $source -y -vn -acodec wmav2 -vn $dest\r\n pretend: no\r\n link: no\r\n hardlink: no\r\n threads: 8\r\n id3v23: inherit\r\n max_bitrate: 500\r\n tmpdir:\r\n quiet: no\r\n embed: yes\r\n\r\n paths: {}\r\n 
no_convert: ''\r\n never_convert_lossy_files: no\r\n copy_album_art: no\r\n album_art_maxwidth: 0\r\n delete_originals: no\r\nitem_fields:\r\n disc_and_track: u'%02i.%02i' % (disc, track) if disctotal > 1 else u'%02i' % (track)\r\nthe:\r\n a: no\r\n the: yes\r\n format: '{0}, {1}'\r\n strip: no\r\n patterns: []\r\n\r\npaths:\r\n default: '%the{$albumartist}/$original_year - $album%aunique{}/$disc_and_track $title'\r\n singleton: '%the{$albumartist}/_singles/$original_year - $title'\r\n comp: _Compilations/$original_year - $album%aunique{}/$disc_and_track $title\r\n albumtype_soundtrack: _Soundtracks/$original_year - $album%aunique{}/$disc_and_track $title\r\npermissions:\r\n file: 664\r\n dir: 775\r\n\r\nimport:\r\n write: no\r\n copy: no\r\n move: yes\r\n resume: ask\r\n incremental: no\r\n quiet_fallback: skip\r\n timid: no\r\n log: /config/beet.log\r\nlastgenre:\r\n auto: yes\r\n source: album\r\n whitelist: yes\r\n min_weight: 10\r\n count: 1\r\n fallback:\r\n canonical: no\r\n force: yes\r\n separator: ', '\r\n prefer_specific: no\r\n title_case: yes\r\nfetchart:\r\n auto: yes\r\n store_source: yes\r\n minwidth: 600\r\n sources:\r\n - filesystem\r\n - coverart\r\n - itunes\r\n - albumart\r\n - amazon\r\n maxwidth: 0\r\n quality: 0\r\n max_filesize: 0\r\n enforce_ratio: no\r\n cautious: no\r\n cover_names:\r\n - cover\r\n - front\r\n - art\r\n - album\r\n - folder\r\n google_key: REDACTED\r\n google_engine: 001442825323518660753:hrh5ch1gjzm\r\n fanarttv_key: REDACTED\r\n lastfm_key: REDACTED\r\n high_resolution: no\r\n deinterlace: no\r\n cover_format:\r\nreplaygain:\r\n auto: yes\r\n backend: ffmpeg\r\n overwrite: no\r\n threads: 8\r\n parallel_on_import: no\r\n per_disc: no\r\n peak: 'true'\r\n targetlevel: 89\r\n r128: [Opus]\r\n r128_targetlevel: 84\r\nscrub:\r\n auto: yes\r\n\r\nreplace:\r\n ^\\.: _\r\n '[\\x00-\\x1f]': _\r\n '[<>:\"\\?\\*\\|]': _\r\n '[\\xE8-\\xEB]': e\r\n '[\\xEC-\\xEF]': i\r\n '[\\xE2-\\xE6]': a\r\n '[\\xF2-\\xF6]': o\r\n '[\\xF8]': o\r\n \\.$: _\r\n \\s+$: ''\r\nweb:\r\n host: 0.0.0.0\r\n port: 8337\r\n cors: ''\r\n cors_supports_credentials: no\r\n reverse_proxy: no\r\n include_paths: no\r\n readonly: yes\r\npathfields: {}\r\nalbum_fields: {}\r\n\r\n```\r\n\n", "before_files": [{"content": "# This file is part of beets.\n# Copyright 2016, Adrian Sampson.\n#\n# Permission is hereby granted, free of charge, to any person obtaining\n# a copy of this software and associated documentation files (the\n# \"Software\"), to deal in the Software without restriction, including\n# without limitation the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the Software, and to\n# permit persons to whom the Software is furnished to do so, subject to\n# the following conditions:\n#\n# The above copyright notice and this permission notice shall be\n# included in all copies or substantial portions of the Software.\n\n\"\"\"Cleans extraneous metadata from files' tags via a command or\nautomatically whenever tags are written.\n\"\"\"\n\n\nfrom beets.plugins import BeetsPlugin\nfrom beets import ui\nfrom beets import util\nfrom beets import config\nimport mediafile\nimport mutagen\n\n_MUTAGEN_FORMATS = {\n 'asf': 'ASF',\n 'apev2': 'APEv2File',\n 'flac': 'FLAC',\n 'id3': 'ID3FileType',\n 'mp3': 'MP3',\n 'mp4': 'MP4',\n 'oggflac': 'OggFLAC',\n 'oggspeex': 'OggSpeex',\n 'oggtheora': 'OggTheora',\n 'oggvorbis': 'OggVorbis',\n 'oggopus': 'OggOpus',\n 'trueaudio': 'TrueAudio',\n 'wavpack': 'WavPack',\n 'monkeysaudio': 'MonkeysAudio',\n 
'optimfrog': 'OptimFROG',\n}\n\n\nclass ScrubPlugin(BeetsPlugin):\n \"\"\"Removes extraneous metadata from files' tags.\"\"\"\n def __init__(self):\n super().__init__()\n self.config.add({\n 'auto': True,\n })\n\n if self.config['auto']:\n self.register_listener(\"import_task_files\", self.import_task_files)\n\n def commands(self):\n def scrub_func(lib, opts, args):\n # Walk through matching files and remove tags.\n for item in lib.items(ui.decargs(args)):\n self._log.info('scrubbing: {0}',\n util.displayable_path(item.path))\n self._scrub_item(item, opts.write)\n\n scrub_cmd = ui.Subcommand('scrub', help='clean audio tags')\n scrub_cmd.parser.add_option(\n '-W', '--nowrite', dest='write',\n action='store_false', default=True,\n help='leave tags empty')\n scrub_cmd.func = scrub_func\n\n return [scrub_cmd]\n\n @staticmethod\n def _mutagen_classes():\n \"\"\"Get a list of file type classes from the Mutagen module.\n \"\"\"\n classes = []\n for modname, clsname in _MUTAGEN_FORMATS.items():\n mod = __import__(f'mutagen.{modname}',\n fromlist=[clsname])\n classes.append(getattr(mod, clsname))\n return classes\n\n def _scrub(self, path):\n \"\"\"Remove all tags from a file.\n \"\"\"\n for cls in self._mutagen_classes():\n # Try opening the file with this type, but just skip in the\n # event of any error.\n try:\n f = cls(util.syspath(path))\n except Exception:\n continue\n if f.tags is None:\n continue\n\n # Remove the tag for this type.\n try:\n f.delete()\n except NotImplementedError:\n # Some Mutagen metadata subclasses (namely, ASFTag) do not\n # support .delete(), presumably because it is impossible to\n # remove them. In this case, we just remove all the tags.\n for tag in f.keys():\n del f[tag]\n f.save()\n except (OSError, mutagen.MutagenError) as exc:\n self._log.error('could not scrub {0}: {1}',\n util.displayable_path(path), exc)\n\n def _scrub_item(self, item, restore=True):\n \"\"\"Remove tags from an Item's associated file and, if `restore`\n is enabled, write the database's tags back to the file.\n \"\"\"\n # Get album art if we need to restore it.\n if restore:\n try:\n mf = mediafile.MediaFile(util.syspath(item.path),\n config['id3v23'].get(bool))\n except mediafile.UnreadableFileError as exc:\n self._log.error('could not open file to scrub: {0}',\n exc)\n return\n images = mf.images\n\n # Remove all tags.\n self._scrub(item.path)\n\n # Restore tags, if enabled.\n if restore:\n self._log.debug('writing new tags after scrub')\n item.try_write()\n if images:\n self._log.debug('restoring art')\n try:\n mf = mediafile.MediaFile(util.syspath(item.path),\n config['id3v23'].get(bool))\n mf.images = images\n mf.save()\n except mediafile.UnreadableFileError as exc:\n self._log.error('could not write tags: {0}', exc)\n\n def import_task_files(self, session, task):\n \"\"\"Automatically scrub imported files.\"\"\"\n for item in task.imported_items():\n self._log.debug('auto-scrubbing {0}',\n util.displayable_path(item.path))\n self._scrub_item(item)\n", "path": "beetsplug/scrub.py"}]}
| 3,644 | 205 |
gh_patches_debug_39300
|
rasdani/github-patches
|
git_diff
|
lightly-ai__lightly-303
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add documentation for creating and uploading embeddings through the pip package (non cli)
The CLI way of obtaining and uploading embeddings is described. It's pretty minimal but works. However, there is no information on how to do this yourself without using the CLI.
I would prefer steps starting from having a pre-trained model.
```python
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(1, 32, 3, 1)
self.conv2 = nn.Conv2d(32, 64, 3, 1)
self.dropout1 = nn.Dropout(0.25)
self.dropout2 = nn.Dropout(0.5)
self.fc1 = nn.Linear(9216, 128)
self.fc2 = nn.Linear(128, 10)
def embed(self, x):
x = self.conv1(x)
x = F.relu(x)
x = self.conv2(x)
x = F.relu(x)
x = F.max_pool2d(x, 2)
x = self.dropout1(x)
x = torch.flatten(x, 1)
x = self.fc1(x)
x = F.relu(x)
x = self.dropout2(x)
x = self.fc2(x)
return x
def forward(self, x):
x = self.embed(x)
output = F.log_softmax(x, dim=1)
return output
model = Net()
# let's assume here model is pre-trained
# I can get the embeddings and predictions using
embedding = model.embed(x)
prediction = model(x)
# TODO: how to create embeddings and upload them to lightly
```
</issue>
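A hedged sketch of the step the issue asks for — going from a pre-trained model to an embeddings CSV. The only hard requirement visible in the upload code below is a `filenames` column whose values match the filenames on the platform and contain no commas; the remaining column layout here is an assumption, not lightly's documented schema:

```python
import csv

import torch


@torch.no_grad()
def write_embeddings_csv(model, dataloader, path="embeddings.csv"):
    """Embed every sample with the pre-trained model and write one CSV row per file.

    Assumes the dataloader yields (images, labels, filenames) batches; adapt the
    unpacking to however your dataset is wrapped.
    """
    model.eval()
    rows, n_dims = [], 0
    for images, labels, filenames in dataloader:
        embeddings = model.embed(images)  # the `embed` method from the snippet above
        for filename, label, embedding in zip(filenames, labels, embeddings):
            vector = embedding.tolist()
            n_dims = len(vector)
            rows.append([filename, *vector, int(label)])

    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["filenames", *[f"embedding_{i}" for i in range(n_dims)], "labels"])
        writer.writerows(rows)
```

Uploading that file is then a single call on the API client defined in `api_workflow_client.py` below (`upload_embeddings(path_to_embeddings_csv=..., name=...)`); a usage sketch follows the reference patch at the end of this entry.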
<code>
[start of lightly/api/api_workflow_client.py]
1 import time
2 import random
3 import time
4 import warnings
5 from io import IOBase
6 from typing import *
7
8 import requests
9 from requests import Response
10
11 from lightly.__init__ import __version__
12 from lightly.api.api_workflow_datasets import _DatasetsMixin
13 from lightly.api.api_workflow_download_dataset import _DownloadDatasetMixin
14 from lightly.api.api_workflow_sampling import _SamplingMixin
15 from lightly.api.api_workflow_upload_dataset import _UploadDatasetMixin
16 from lightly.api.api_workflow_upload_embeddings import _UploadEmbeddingsMixin
17 from lightly.api.utils import getenv
18 from lightly.api.version_checking import get_minimum_compatible_version, version_compare
19 from lightly.openapi_generated.swagger_client import TagData, ScoresApi, QuotaApi
20 from lightly.openapi_generated.swagger_client.api.datasets_api import DatasetsApi
21 from lightly.openapi_generated.swagger_client.api.embeddings_api import EmbeddingsApi
22 from lightly.openapi_generated.swagger_client.api.jobs_api import JobsApi
23 from lightly.openapi_generated.swagger_client.api.mappings_api import MappingsApi
24 from lightly.openapi_generated.swagger_client.api.samples_api import SamplesApi
25 from lightly.openapi_generated.swagger_client.api.samplings_api import SamplingsApi
26 from lightly.openapi_generated.swagger_client.api.tags_api import TagsApi
27 from lightly.openapi_generated.swagger_client.api_client import ApiClient
28 from lightly.openapi_generated.swagger_client.configuration import Configuration
29 from lightly.openapi_generated.swagger_client.models.dataset_data import DatasetData
30
31
32 class ApiWorkflowClient(_UploadEmbeddingsMixin, _SamplingMixin, _UploadDatasetMixin, _DownloadDatasetMixin,
33 _DatasetsMixin):
34 """Provides a uniform interface to communicate with the api
35
36     The APIWorkflowClient is used to communicate with the Lightly API. The client
37     can also run more complex workflows which include multiple API calls at once.
38
39 The client can be used in combination with the active learning agent.
40
41 Args:
42 token:
43 the token of the user, provided in webapp
44 dataset_id:
45 the id of the dataset, provided in webapp. \
46 If it is not set, but used by a workflow, \
47             the last modified dataset is taken by default.
48 embedding_id:
49 the id of the embedding to use. If it is not set, \
50 but used by a workflow, the newest embedding is taken by default
51 """
52
53 def __init__(self, token: str, dataset_id: str = None, embedding_id: str = None):
54
55 self.check_version_compatibility()
56
57 configuration = Configuration()
58 configuration.host = getenv('LIGHTLY_SERVER_LOCATION', 'https://api.lightly.ai')
59 configuration.api_key = {'token': token}
60 api_client = ApiClient(configuration=configuration)
61 self.api_client = api_client
62
63 self.token = token
64 if dataset_id is not None:
65 self._dataset_id = dataset_id
66 if embedding_id is not None:
67 self.embedding_id = embedding_id
68
69 self.datasets_api = DatasetsApi(api_client=self.api_client)
70 self.samplings_api = SamplingsApi(api_client=self.api_client)
71 self.jobs_api = JobsApi(api_client=self.api_client)
72 self.tags_api = TagsApi(api_client=self.api_client)
73 self.embeddings_api = EmbeddingsApi(api_client=api_client)
74 self.mappings_api = MappingsApi(api_client=api_client)
75 self.scores_api = ScoresApi(api_client=api_client)
76 self.samples_api = SamplesApi(api_client=api_client)
77 self.quota_api = QuotaApi(api_client=api_client)
78
79 def check_version_compatibility(self):
80 minimum_version = get_minimum_compatible_version()
81 if version_compare(__version__, minimum_version) < 0:
82 raise ValueError(f"Incompatible Version of lightly pip package. "
83 f"Please upgrade to at least version {minimum_version} "
84 f"to be able to access the api and webapp")
85
86 @property
87 def dataset_id(self) -> str:
88 ''' Returns the dataset_id
89
90 If the dataset_id is set, it is returned.
91 If it is unset, then the dataset_id of the last modified dataset is taken.
92
93 '''
94 try:
95 return self._dataset_id
96 except AttributeError:
97 all_datasets: List[DatasetData] = self.datasets_api.get_datasets()
98 datasets_sorted = sorted(all_datasets, key=lambda dataset: dataset.last_modified_at)
99 last_modified_dataset = datasets_sorted[-1]
100 self._dataset_id = last_modified_dataset.id
101 warnings.warn(UserWarning(f"Dataset has not been specified, "
102 f"taking the last modified dataset {last_modified_dataset.name} as default dataset."))
103 return self._dataset_id
104
105 def _get_all_tags(self) -> List[TagData]:
106 return self.tags_api.get_tags_by_dataset_id(self.dataset_id)
107
108 def _order_list_by_filenames(self, filenames_for_list: List[str], list_to_order: List[object]) -> List[object]:
109 """Orders a list such that it is in the order of the filenames specified on the server.
110
111 Args:
112 filenames_for_list:
113 The filenames of samples in a specific order
114 list_to_order:
115 Some values belonging to the samples
116
117 Returns:
118 The list reordered. The same reorder applied on the filenames_for_list
119 would put them in the order of the filenames in self.filenames_on_server
120
121 """
122 assert len(filenames_for_list) == len(list_to_order)
123 dict_by_filenames = dict(zip(filenames_for_list, list_to_order))
124 list_ordered = [dict_by_filenames[filename] for filename in self.filenames_on_server
125 if filename in filenames_for_list]
126 return list_ordered
127
128 @property
129 def filenames_on_server(self):
130 if not hasattr(self, "_filenames_on_server"):
131 self._filenames_on_server = self.mappings_api. \
132 get_sample_mappings_by_dataset_id(dataset_id=self.dataset_id, field="fileName")
133 return self._filenames_on_server
134
135 def upload_file_with_signed_url(self, file: IOBase, signed_write_url: str,
136 max_backoff: int = 32, max_retries: int = 5) -> Response:
137 """Uploads a file to a url via a put request.
138
139 Args:
140 file:
141 The file to upload.
142 signed_write_url:
143 The url to upload the file to. As no authorization is used,
144 the url must be a signed write url.
145 max_backoff:
146 Maximal backoff before retrying.
147 max_retries:
148 Maximum number of retries before timing out.
149
150 Returns:
151 The response of the put request, usually a 200 for the success case.
152
153 """
154
155 response = requests.put(signed_write_url, data=file)
156
157 if response.status_code != 200:
158 msg = f'Failed PUT request to {signed_write_url} with status_code'
159             msg += f'{response.status_code}!'
160 raise RuntimeError(msg)
161
162 return response
163
[end of lightly/api/api_workflow_client.py]
[start of lightly/api/api_workflow_upload_embeddings.py]
1 import csv
2 from typing import List
3
4 from lightly.openapi_generated.swagger_client.models.dataset_embedding_data import DatasetEmbeddingData
5 from lightly.openapi_generated.swagger_client.models.write_csv_url_data import WriteCSVUrlData
6
7
8
9 def _is_valid_filename(filename: str):
10 """Returns False if the filename is misformatted.
11
12 """
13 invalid_characters = [',']
14 for character in invalid_characters:
15 if character in filename:
16 return False
17 return True
18
19
20 class _UploadEmbeddingsMixin:
21
22 def set_embedding_id_by_name(self, embedding_name: str = None):
23 embeddings: List[DatasetEmbeddingData] = \
24 self.embeddings_api.get_embeddings_by_dataset_id(dataset_id=self.dataset_id)
25
26 if embedding_name is None:
27 self.embedding_id = embeddings[-1].id
28 return
29
30 try:
31 self.embedding_id = next(embedding.id for embedding in embeddings if embedding.name == embedding_name)
32 except StopIteration:
33 raise ValueError(f"No embedding with name {embedding_name} found on the server.")
34
35 def upload_embeddings(self, path_to_embeddings_csv: str, name: str):
36 """Uploads embeddings to the server.
37
38 First checks that the specified embedding name is not on ther server. If it is, the upload is aborted.
39 Then creates a new csv with the embeddings in the order specified on the server. Next it uploads it to the server.
40 The received embedding_id is saved as a property of self.
41 Args:
42 path_to_embeddings_csv: the filepath to the .csv containing the embeddings, e.g. "path/to/embeddings.csv"
43 name: The name of the embedding. If an embedding with such a name already exists on the server,
44 the upload is aborted.
45
46 Returns:
47 None
48
49 """
50 # get the names of the current embeddings on the server:
51 embeddings_on_server: List[DatasetEmbeddingData] = \
52 self.embeddings_api.get_embeddings_by_dataset_id(dataset_id=self.dataset_id)
53 names_embeddings_on_server = [embedding.name for embedding in embeddings_on_server]
54
55 if name in names_embeddings_on_server:
56 print(f"Aborting upload, embedding with name='{name}' already exists.")
57 self.embedding_id = next(embedding for embedding in embeddings_on_server if embedding.name == name).id
58 return
59
60 # create a new csv with the filenames in the desired order
61 path_to_ordered_embeddings_csv = self._order_csv_by_filenames(
62 path_to_embeddings_csv=path_to_embeddings_csv)
63
64 # get the URL to upload the csv to
65 response: WriteCSVUrlData = \
66 self.embeddings_api.get_embeddings_csv_write_url_by_id(self.dataset_id, name=name)
67 self.embedding_id = response.embedding_id
68 signed_write_url = response.signed_write_url
69
70 # upload the csv to the URL
71 with open(path_to_ordered_embeddings_csv, 'rb') as file_ordered_embeddings_csv:
72 self.upload_file_with_signed_url(file=file_ordered_embeddings_csv, signed_write_url=signed_write_url)
73
74 def _order_csv_by_filenames(self, path_to_embeddings_csv: str) -> str:
75 """Orders the rows in a csv according to the order specified on the server and saves it as a new file.
76
77 Args:
78 path_to_embeddings_csv:
79 the path to the csv to order
80
81 Returns:
82 the filepath to the new csv
83
84 """
85 with open(path_to_embeddings_csv, 'r') as f:
86 data = csv.reader(f)
87
88 rows = list(data)
89 header_row = rows[0]
90 rows_without_header = rows[1:]
91 index_filenames = header_row.index('filenames')
92 filenames = [row[index_filenames] for row in rows_without_header]
93
94 if len(filenames) != len(self.filenames_on_server):
95 raise ValueError(f'There are {len(filenames)} rows in the embedding file, but '
96 f'{len(self.filenames_on_server)} filenames/samples on the server.')
97 if set(filenames) != set(self.filenames_on_server):
98 raise ValueError(f'The filenames in the embedding file and the filenames on the server do not align')
99 invalid_filenames = [f for f in filenames if not _is_valid_filename(f)]
100 if len(invalid_filenames) > 0:
101 raise ValueError(f'Invalid filename(s) in embedding file: {invalid_filenames}')
102
103 rows_without_header_ordered = self._order_list_by_filenames(filenames, rows_without_header)
104
105 rows_to_write = [header_row]
106 rows_to_write += rows_without_header_ordered
107
108 path_to_ordered_embeddings_csv = path_to_embeddings_csv.replace('.csv', '_sorted.csv')
109 with open(path_to_ordered_embeddings_csv, 'w') as f:
110 writer = csv.writer(f)
111 writer.writerows(rows_to_write)
112
113 return path_to_ordered_embeddings_csv
114
[end of lightly/api/api_workflow_upload_embeddings.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/lightly/api/api_workflow_client.py b/lightly/api/api_workflow_client.py
--- a/lightly/api/api_workflow_client.py
+++ b/lightly/api/api_workflow_client.py
@@ -85,12 +85,11 @@
@property
def dataset_id(self) -> str:
- ''' Returns the dataset_id
+ '''The current dataset_id.
If the dataset_id is set, it is returned.
- If it is unset, then the dataset_id of the last modified dataset is taken.
-
- '''
+ If it is not set, then the dataset_id of the last modified dataset is selected.
+ '''
try:
return self._dataset_id
except AttributeError:
@@ -127,6 +126,9 @@
@property
def filenames_on_server(self):
+ '''The list of the filenames in the dataset.
+
+ '''
if not hasattr(self, "_filenames_on_server"):
self._filenames_on_server = self.mappings_api. \
get_sample_mappings_by_dataset_id(dataset_id=self.dataset_id, field="fileName")
diff --git a/lightly/api/api_workflow_upload_embeddings.py b/lightly/api/api_workflow_upload_embeddings.py
--- a/lightly/api/api_workflow_upload_embeddings.py
+++ b/lightly/api/api_workflow_upload_embeddings.py
@@ -20,6 +20,15 @@
class _UploadEmbeddingsMixin:
def set_embedding_id_by_name(self, embedding_name: str = None):
+ """Sets the embedding id of the client by embedding name.
+
+ Args:
+ embedding_name:
+ Name under which the embedding was uploaded.
+
+ Raises:
+ ValueError if the embedding does not exist.
+ """
embeddings: List[DatasetEmbeddingData] = \
self.embeddings_api.get_embeddings_by_dataset_id(dataset_id=self.dataset_id)
@@ -38,14 +47,14 @@
First checks that the specified embedding name is not on ther server. If it is, the upload is aborted.
Then creates a new csv with the embeddings in the order specified on the server. Next it uploads it to the server.
The received embedding_id is saved as a property of self.
+
Args:
- path_to_embeddings_csv: the filepath to the .csv containing the embeddings, e.g. "path/to/embeddings.csv"
- name: The name of the embedding. If an embedding with such a name already exists on the server,
+ path_to_embeddings_csv:
+ The path to the .csv containing the embeddings, e.g. "path/to/embeddings.csv"
+ name:
+ The name of the embedding. If an embedding with such a name already exists on the server,
the upload is aborted.
- Returns:
- None
-
"""
# get the names of the current embeddings on the server:
embeddings_on_server: List[DatasetEmbeddingData] = \
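For readers coming from the issue, a minimal end-to-end usage sketch built only from the client methods shown in the listings above; the token, dataset id, CSV path and embedding name are placeholders:

```python
from lightly.api.api_workflow_client import ApiWorkflowClient

# Token and dataset id are taken from the Lightly web app (placeholders here).
client = ApiWorkflowClient(token="MY_TOKEN", dataset_id="MY_DATASET_ID")

# Upload the embeddings CSV; the client reorders the rows to match the
# filenames registered on the server before uploading.
client.upload_embeddings(path_to_embeddings_csv="embeddings.csv", name="my-embedding")

# The embedding can later be selected again by the name it was uploaded under.
client.set_embedding_id_by_name("my-embedding")
```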
|
{"golden_diff": "diff --git a/lightly/api/api_workflow_client.py b/lightly/api/api_workflow_client.py\n--- a/lightly/api/api_workflow_client.py\n+++ b/lightly/api/api_workflow_client.py\n@@ -85,12 +85,11 @@\n \n @property\n def dataset_id(self) -> str:\n- ''' Returns the dataset_id\n+ '''The current dataset_id.\n \n If the dataset_id is set, it is returned.\n- If it is unset, then the dataset_id of the last modified dataset is taken.\n-\n- '''\n+ If it is not set, then the dataset_id of the last modified dataset is selected.\n+ ''' \n try:\n return self._dataset_id\n except AttributeError:\n@@ -127,6 +126,9 @@\n \n @property\n def filenames_on_server(self):\n+ '''The list of the filenames in the dataset.\n+\n+ '''\n if not hasattr(self, \"_filenames_on_server\"):\n self._filenames_on_server = self.mappings_api. \\\n get_sample_mappings_by_dataset_id(dataset_id=self.dataset_id, field=\"fileName\")\ndiff --git a/lightly/api/api_workflow_upload_embeddings.py b/lightly/api/api_workflow_upload_embeddings.py\n--- a/lightly/api/api_workflow_upload_embeddings.py\n+++ b/lightly/api/api_workflow_upload_embeddings.py\n@@ -20,6 +20,15 @@\n class _UploadEmbeddingsMixin:\n \n def set_embedding_id_by_name(self, embedding_name: str = None):\n+ \"\"\"Sets the embedding id of the client by embedding name.\n+\n+ Args:\n+ embedding_name:\n+ Name under which the embedding was uploaded.\n+ \n+ Raises:\n+ ValueError if the embedding does not exist.\n+ \"\"\"\n embeddings: List[DatasetEmbeddingData] = \\\n self.embeddings_api.get_embeddings_by_dataset_id(dataset_id=self.dataset_id)\n \n@@ -38,14 +47,14 @@\n First checks that the specified embedding name is not on ther server. If it is, the upload is aborted.\n Then creates a new csv with the embeddings in the order specified on the server. Next it uploads it to the server.\n The received embedding_id is saved as a property of self.\n+\n Args:\n- path_to_embeddings_csv: the filepath to the .csv containing the embeddings, e.g. \"path/to/embeddings.csv\"\n- name: The name of the embedding. If an embedding with such a name already exists on the server,\n+ path_to_embeddings_csv:\n+ The path to the .csv containing the embeddings, e.g. \"path/to/embeddings.csv\"\n+ name:\n+ The name of the embedding. If an embedding with such a name already exists on the server,\n the upload is aborted.\n \n- Returns:\n- None\n-\n \"\"\"\n # get the names of the current embeddings on the server:\n embeddings_on_server: List[DatasetEmbeddingData] = \\\n", "issue": "Add documentation for creating and uploading embeddings through the pip package (non cli)\nThe CLI way of obtaining and uploading embeddings is described. It's pretty minimal but works. However, there is no information on how to do this yourself without using the CLI. 
\r\n\r\nI would prefer steps starting from having a pre-trained model.\r\n```python\r\nclass Net(nn.Module):\r\n def __init__(self):\r\n super(Net, self).__init__()\r\n self.conv1 = nn.Conv2d(1, 32, 3, 1)\r\n self.conv2 = nn.Conv2d(32, 64, 3, 1)\r\n self.dropout1 = nn.Dropout(0.25)\r\n self.dropout2 = nn.Dropout(0.5)\r\n self.fc1 = nn.Linear(9216, 128)\r\n self.fc2 = nn.Linear(128, 10)\r\n\r\n def embed(self, x):\r\n x = self.conv1(x)\r\n x = F.relu(x)\r\n x = self.conv2(x)\r\n x = F.relu(x)\r\n x = F.max_pool2d(x, 2)\r\n x = self.dropout1(x)\r\n x = torch.flatten(x, 1)\r\n x = self.fc1(x)\r\n x = F.relu(x)\r\n x = self.dropout2(x)\r\n x = self.fc2(x)\r\n return x\r\n\r\n def forward(self, x):\r\n x = self.embed(x)\r\n output = F.log_softmax(x, dim=1)\r\n return output\r\n\r\nmodel = Net()\r\n\r\n# let's assume here model is pre-trained\r\n# I can get the embeddings and predictions using\r\nembedding = model.embed(x)\r\nprediction = model(x)\r\n\r\n# TODO: how to create embeddings and upload them to lightly\r\n```\n", "before_files": [{"content": "import time\nimport random\nimport time\nimport warnings\nfrom io import IOBase\nfrom typing import *\n\nimport requests\nfrom requests import Response\n\nfrom lightly.__init__ import __version__\nfrom lightly.api.api_workflow_datasets import _DatasetsMixin\nfrom lightly.api.api_workflow_download_dataset import _DownloadDatasetMixin\nfrom lightly.api.api_workflow_sampling import _SamplingMixin\nfrom lightly.api.api_workflow_upload_dataset import _UploadDatasetMixin\nfrom lightly.api.api_workflow_upload_embeddings import _UploadEmbeddingsMixin\nfrom lightly.api.utils import getenv\nfrom lightly.api.version_checking import get_minimum_compatible_version, version_compare\nfrom lightly.openapi_generated.swagger_client import TagData, ScoresApi, QuotaApi\nfrom lightly.openapi_generated.swagger_client.api.datasets_api import DatasetsApi\nfrom lightly.openapi_generated.swagger_client.api.embeddings_api import EmbeddingsApi\nfrom lightly.openapi_generated.swagger_client.api.jobs_api import JobsApi\nfrom lightly.openapi_generated.swagger_client.api.mappings_api import MappingsApi\nfrom lightly.openapi_generated.swagger_client.api.samples_api import SamplesApi\nfrom lightly.openapi_generated.swagger_client.api.samplings_api import SamplingsApi\nfrom lightly.openapi_generated.swagger_client.api.tags_api import TagsApi\nfrom lightly.openapi_generated.swagger_client.api_client import ApiClient\nfrom lightly.openapi_generated.swagger_client.configuration import Configuration\nfrom lightly.openapi_generated.swagger_client.models.dataset_data import DatasetData\n\n\nclass ApiWorkflowClient(_UploadEmbeddingsMixin, _SamplingMixin, _UploadDatasetMixin, _DownloadDatasetMixin,\n _DatasetsMixin):\n \"\"\"Provides a uniform interface to communicate with the api \n \n The APIWorkflowClient is used to communicaate with the Lightly API. The client\n can run also more complex workflows which include multiple API calls at once.\n \n The client can be used in combination with the active learning agent. \n\n Args:\n token:\n the token of the user, provided in webapp\n dataset_id:\n the id of the dataset, provided in webapp. \\\n If it is not set, but used by a workflow, \\\n the last modfied dataset is taken by default.\n embedding_id:\n the id of the embedding to use. 
If it is not set, \\\n but used by a workflow, the newest embedding is taken by default\n \"\"\"\n\n def __init__(self, token: str, dataset_id: str = None, embedding_id: str = None):\n\n self.check_version_compatibility()\n\n configuration = Configuration()\n configuration.host = getenv('LIGHTLY_SERVER_LOCATION', 'https://api.lightly.ai')\n configuration.api_key = {'token': token}\n api_client = ApiClient(configuration=configuration)\n self.api_client = api_client\n\n self.token = token\n if dataset_id is not None:\n self._dataset_id = dataset_id\n if embedding_id is not None:\n self.embedding_id = embedding_id\n\n self.datasets_api = DatasetsApi(api_client=self.api_client)\n self.samplings_api = SamplingsApi(api_client=self.api_client)\n self.jobs_api = JobsApi(api_client=self.api_client)\n self.tags_api = TagsApi(api_client=self.api_client)\n self.embeddings_api = EmbeddingsApi(api_client=api_client)\n self.mappings_api = MappingsApi(api_client=api_client)\n self.scores_api = ScoresApi(api_client=api_client)\n self.samples_api = SamplesApi(api_client=api_client)\n self.quota_api = QuotaApi(api_client=api_client)\n\n def check_version_compatibility(self):\n minimum_version = get_minimum_compatible_version()\n if version_compare(__version__, minimum_version) < 0:\n raise ValueError(f\"Incompatible Version of lightly pip package. \"\n f\"Please upgrade to at least version {minimum_version} \"\n f\"to be able to access the api and webapp\")\n\n @property\n def dataset_id(self) -> str:\n ''' Returns the dataset_id\n\n If the dataset_id is set, it is returned.\n If it is unset, then the dataset_id of the last modified dataset is taken.\n\n '''\n try:\n return self._dataset_id\n except AttributeError:\n all_datasets: List[DatasetData] = self.datasets_api.get_datasets()\n datasets_sorted = sorted(all_datasets, key=lambda dataset: dataset.last_modified_at)\n last_modified_dataset = datasets_sorted[-1]\n self._dataset_id = last_modified_dataset.id\n warnings.warn(UserWarning(f\"Dataset has not been specified, \"\n f\"taking the last modified dataset {last_modified_dataset.name} as default dataset.\"))\n return self._dataset_id\n\n def _get_all_tags(self) -> List[TagData]:\n return self.tags_api.get_tags_by_dataset_id(self.dataset_id)\n\n def _order_list_by_filenames(self, filenames_for_list: List[str], list_to_order: List[object]) -> List[object]:\n \"\"\"Orders a list such that it is in the order of the filenames specified on the server.\n\n Args:\n filenames_for_list:\n The filenames of samples in a specific order\n list_to_order:\n Some values belonging to the samples\n\n Returns:\n The list reordered. The same reorder applied on the filenames_for_list\n would put them in the order of the filenames in self.filenames_on_server\n\n \"\"\"\n assert len(filenames_for_list) == len(list_to_order)\n dict_by_filenames = dict(zip(filenames_for_list, list_to_order))\n list_ordered = [dict_by_filenames[filename] for filename in self.filenames_on_server\n if filename in filenames_for_list]\n return list_ordered\n\n @property\n def filenames_on_server(self):\n if not hasattr(self, \"_filenames_on_server\"):\n self._filenames_on_server = self.mappings_api. 
\\\n get_sample_mappings_by_dataset_id(dataset_id=self.dataset_id, field=\"fileName\")\n return self._filenames_on_server\n\n def upload_file_with_signed_url(self, file: IOBase, signed_write_url: str,\n max_backoff: int = 32, max_retries: int = 5) -> Response:\n \"\"\"Uploads a file to a url via a put request.\n\n Args:\n file:\n The file to upload.\n signed_write_url:\n The url to upload the file to. As no authorization is used,\n the url must be a signed write url.\n max_backoff:\n Maximal backoff before retrying.\n max_retries:\n Maximum number of retries before timing out.\n\n Returns:\n The response of the put request, usually a 200 for the success case.\n\n \"\"\"\n\n response = requests.put(signed_write_url, data=file)\n\n if response.status_code != 200:\n msg = f'Failed PUT request to {signed_write_url} with status_code'\n msg += f'{response.status__code}!'\n raise RuntimeError(msg)\n\n return response\n", "path": "lightly/api/api_workflow_client.py"}, {"content": "import csv\nfrom typing import List\n\nfrom lightly.openapi_generated.swagger_client.models.dataset_embedding_data import DatasetEmbeddingData\nfrom lightly.openapi_generated.swagger_client.models.write_csv_url_data import WriteCSVUrlData\n\n\n\ndef _is_valid_filename(filename: str):\n \"\"\"Returns False if the filename is misformatted.\n\n \"\"\"\n invalid_characters = [',']\n for character in invalid_characters:\n if character in filename:\n return False\n return True\n\n\nclass _UploadEmbeddingsMixin:\n\n def set_embedding_id_by_name(self, embedding_name: str = None):\n embeddings: List[DatasetEmbeddingData] = \\\n self.embeddings_api.get_embeddings_by_dataset_id(dataset_id=self.dataset_id)\n\n if embedding_name is None:\n self.embedding_id = embeddings[-1].id\n return\n\n try:\n self.embedding_id = next(embedding.id for embedding in embeddings if embedding.name == embedding_name)\n except StopIteration:\n raise ValueError(f\"No embedding with name {embedding_name} found on the server.\")\n\n def upload_embeddings(self, path_to_embeddings_csv: str, name: str):\n \"\"\"Uploads embeddings to the server.\n\n First checks that the specified embedding name is not on ther server. If it is, the upload is aborted.\n Then creates a new csv with the embeddings in the order specified on the server. Next it uploads it to the server.\n The received embedding_id is saved as a property of self.\n Args:\n path_to_embeddings_csv: the filepath to the .csv containing the embeddings, e.g. \"path/to/embeddings.csv\"\n name: The name of the embedding. 
If an embedding with such a name already exists on the server,\n the upload is aborted.\n\n Returns:\n None\n\n \"\"\"\n # get the names of the current embeddings on the server:\n embeddings_on_server: List[DatasetEmbeddingData] = \\\n self.embeddings_api.get_embeddings_by_dataset_id(dataset_id=self.dataset_id)\n names_embeddings_on_server = [embedding.name for embedding in embeddings_on_server]\n\n if name in names_embeddings_on_server:\n print(f\"Aborting upload, embedding with name='{name}' already exists.\")\n self.embedding_id = next(embedding for embedding in embeddings_on_server if embedding.name == name).id\n return\n\n # create a new csv with the filenames in the desired order\n path_to_ordered_embeddings_csv = self._order_csv_by_filenames(\n path_to_embeddings_csv=path_to_embeddings_csv)\n\n # get the URL to upload the csv to\n response: WriteCSVUrlData = \\\n self.embeddings_api.get_embeddings_csv_write_url_by_id(self.dataset_id, name=name)\n self.embedding_id = response.embedding_id\n signed_write_url = response.signed_write_url\n\n # upload the csv to the URL\n with open(path_to_ordered_embeddings_csv, 'rb') as file_ordered_embeddings_csv:\n self.upload_file_with_signed_url(file=file_ordered_embeddings_csv, signed_write_url=signed_write_url)\n\n def _order_csv_by_filenames(self, path_to_embeddings_csv: str) -> str:\n \"\"\"Orders the rows in a csv according to the order specified on the server and saves it as a new file.\n\n Args:\n path_to_embeddings_csv:\n the path to the csv to order\n\n Returns:\n the filepath to the new csv\n\n \"\"\"\n with open(path_to_embeddings_csv, 'r') as f:\n data = csv.reader(f)\n\n rows = list(data)\n header_row = rows[0]\n rows_without_header = rows[1:]\n index_filenames = header_row.index('filenames')\n filenames = [row[index_filenames] for row in rows_without_header]\n\n if len(filenames) != len(self.filenames_on_server):\n raise ValueError(f'There are {len(filenames)} rows in the embedding file, but '\n f'{len(self.filenames_on_server)} filenames/samples on the server.')\n if set(filenames) != set(self.filenames_on_server):\n raise ValueError(f'The filenames in the embedding file and the filenames on the server do not align')\n invalid_filenames = [f for f in filenames if not _is_valid_filename(f)]\n if len(invalid_filenames) > 0:\n raise ValueError(f'Invalid filename(s) in embedding file: {invalid_filenames}')\n\n rows_without_header_ordered = self._order_list_by_filenames(filenames, rows_without_header)\n\n rows_to_write = [header_row]\n rows_to_write += rows_without_header_ordered\n\n path_to_ordered_embeddings_csv = path_to_embeddings_csv.replace('.csv', '_sorted.csv')\n with open(path_to_ordered_embeddings_csv, 'w') as f:\n writer = csv.writer(f)\n writer.writerows(rows_to_write)\n\n return path_to_ordered_embeddings_csv\n", "path": "lightly/api/api_workflow_upload_embeddings.py"}]}
| 4,023 | 634 |
gh_patches_debug_27387
|
rasdani/github-patches
|
git_diff
|
e-valuation__EvaP-1250
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Participant count incorrect on results index
E.g. for "Lehrerworkshop bei der Endrunde des Bundeswettbewerb Informatik (Sommersemester 2017)" the overview shows "0/0", or for MINT-Camps it says "1/1" or "3/3" participants - the correct numbers are 12, 24 and 22.

</issue>
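To make the report concrete, an illustrative way to check the quoted numbers directly against the database; the lookup by `name_en` is an assumption, while `participants` and `voters` are the relations counted in the view code below:

```python
# Run in a Django shell on the EvaP instance (./manage.py shell); illustrative only.
from evap.evaluation.models import Course

for fragment in ("Lehrerworkshop", "MINT-Camp"):
    for course in Course.objects.filter(name_en__icontains=fragment):
        print(course, course.participants.count(), course.voters.count())
```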
<code>
[start of evap/results/views.py]
1 from collections import defaultdict
2 from statistics import median
3
4 from django.conf import settings
5 from django.db.models import QuerySet, Prefetch, Count
6 from django.core.cache import caches
7 from django.core.cache.utils import make_template_fragment_key
8 from django.core.exceptions import PermissionDenied
9 from django.shortcuts import get_object_or_404, render
10 from django.template.loader import get_template
11 from django.contrib.auth.decorators import login_required
12 from django.utils import translation
13
14 from evap.evaluation.models import Semester, Degree, Contribution, Course, CourseType
15 from evap.evaluation.auth import internal_required
16 from evap.results.tools import collect_results, calculate_average_distribution, distribution_to_grade, \
17 TextAnswer, TextResult, HeadingResult, get_single_result_rating_result
18
19
20 def get_course_result_template_fragment_cache_key(course_id, language, can_user_see_results_page):
21 return make_template_fragment_key('course_result_template_fragment', [course_id, language, can_user_see_results_page])
22
23
24 def delete_template_cache(course):
25 assert course.state != 'published'
26 caches['results'].delete(get_course_result_template_fragment_cache_key(course.id, 'en', True))
27 caches['results'].delete(get_course_result_template_fragment_cache_key(course.id, 'en', False))
28 caches['results'].delete(get_course_result_template_fragment_cache_key(course.id, 'de', True))
29 caches['results'].delete(get_course_result_template_fragment_cache_key(course.id, 'de', False))
30
31
32 def warm_up_template_cache(courses):
33 courses = get_courses_with_prefetched_data(courses)
34 current_language = translation.get_language()
35 try:
36 for course in courses:
37 assert course.state == 'published'
38 translation.activate('en')
39 get_template('results_index_course.html').render(dict(course=course, can_user_see_results_page=True))
40 get_template('results_index_course.html').render(dict(course=course, can_user_see_results_page=False))
41 translation.activate('de')
42 get_template('results_index_course.html').render(dict(course=course, can_user_see_results_page=True))
43 get_template('results_index_course.html').render(dict(course=course, can_user_see_results_page=False))
44 assert get_course_result_template_fragment_cache_key(course.id, 'en', True) in caches['results']
45 assert get_course_result_template_fragment_cache_key(course.id, 'en', False) in caches['results']
46 assert get_course_result_template_fragment_cache_key(course.id, 'de', True) in caches['results']
47 assert get_course_result_template_fragment_cache_key(course.id, 'de', False) in caches['results']
48 finally:
49 translation.activate(current_language) # reset to previously set language to prevent unwanted side effects
50
51
52 def get_courses_with_prefetched_data(courses):
53 if isinstance(courses, QuerySet):
54 courses = (courses
55 .annotate(num_participants=Count("participants", distinct=True), num_voters=Count("voters", distinct=True))
56 .select_related("type")
57 .prefetch_related(
58 "degrees",
59 "semester",
60 Prefetch("contributions", queryset=Contribution.objects.filter(responsible=True).select_related("contributor"), to_attr="responsible_contributions")
61 )
62 )
63 for course in courses:
64 course.responsible_contributors = [contribution.contributor for contribution in course.responsible_contributions]
65 for course in courses:
66 if not course.is_single_result:
67 course.distribution = calculate_average_distribution(course)
68 course.avg_grade = distribution_to_grade(course.distribution)
69 else:
70 course.single_result_rating_result = get_single_result_rating_result(course)
71 return courses
72
73
74 @internal_required
75 def index(request):
76 semesters = Semester.get_all_with_published_unarchived_results()
77 courses = Course.objects.filter(semester__in=semesters, state='published')
78 courses = [course for course in courses if course.can_user_see_course(request.user)]
79
80 if request.user.is_reviewer:
81 additional_courses = Course.objects.filter(semester__in=semesters, state__in=['in_evaluation', 'evaluated', 'reviewed'])
82 courses += get_courses_with_prefetched_data(additional_courses)
83
84 course_pks = [course.pk for course in courses]
85 degrees = Degree.objects.filter(courses__pk__in=course_pks).distinct()
86 course_types = CourseType.objects.filter(courses__pk__in=course_pks).distinct()
87 template_data = dict(
88 courses=courses,
89 degrees=degrees,
90 course_types=sorted(course_types, key=lambda course_type: course_type.name),
91 semesters=semesters,
92 )
93 return render(request, "results_index.html", template_data)
94
95
96 @login_required
97 def course_detail(request, semester_id, course_id):
98 semester = get_object_or_404(Semester, id=semester_id)
99 course = get_object_or_404(semester.course_set, id=course_id, semester=semester)
100
101 if not course.can_user_see_results_page(request.user):
102 raise PermissionDenied
103
104 course_result = collect_results(course)
105
106 if request.user.is_reviewer:
107 public_view = request.GET.get('public_view') != 'false' # if parameter is not given, show public view.
108 else:
109 public_view = request.GET.get('public_view') == 'true' # if parameter is not given, show own view.
110
111 # redirect to non-public view if there is none because the results have not been published
112 if not course.can_publish_rating_results:
113 public_view = False
114
115 represented_users = list(request.user.represented_users.all()) + [request.user]
116
117 # remove text answers if the user may not see them
118 for questionnaire_result in course_result.questionnaire_results:
119 for question_result in questionnaire_result.question_results:
120 if isinstance(question_result, TextResult):
121 question_result.answers = [answer for answer in question_result.answers if user_can_see_text_answer(request.user, represented_users, answer, public_view)]
122 # remove empty TextResults
123 questionnaire_result.question_results = [result for result in questionnaire_result.question_results if not isinstance(result, TextResult) or len(result.answers) > 0]
124
125 # filter empty headings
126 for questionnaire_result in course_result.questionnaire_results:
127 filtered_question_results = []
128 for index, question_result in enumerate(questionnaire_result.question_results):
129 # filter out if there are no more questions or the next question is also a heading question
130 if isinstance(question_result, HeadingResult):
131 if index == len(questionnaire_result.question_results) - 1 or isinstance(questionnaire_result.question_results[index + 1], HeadingResult):
132 continue
133 filtered_question_results.append(question_result)
134 questionnaire_result.question_results = filtered_question_results
135
136 # remove empty questionnaire_results and contribution_results
137 for contribution_result in course_result.contribution_results:
138 contribution_result.questionnaire_results = [questionnaire_result for questionnaire_result in contribution_result.questionnaire_results if questionnaire_result.question_results]
139 course_result.contribution_results = [contribution_result for contribution_result in course_result.contribution_results if contribution_result.questionnaire_results]
140
141 add_warnings(course, course_result)
142
143 # split course_result into different lists
144 course_questionnaire_results_top = []
145 course_questionnaire_results_bottom = []
146 contributor_contribution_results = []
147 for contribution_result in course_result.contribution_results:
148 if contribution_result.contributor is None:
149 for questionnaire_result in contribution_result.questionnaire_results:
150 if questionnaire_result.questionnaire.is_below_contributors:
151 course_questionnaire_results_bottom.append(questionnaire_result)
152 else:
153 course_questionnaire_results_top.append(questionnaire_result)
154 else:
155 contributor_contribution_results.append(contribution_result)
156
157 if not contributor_contribution_results:
158 course_questionnaire_results_top += course_questionnaire_results_bottom
159 course_questionnaire_results_bottom = []
160
161 course.distribution = calculate_average_distribution(course)
162 course.avg_grade = distribution_to_grade(course.distribution)
163
164 template_data = dict(
165 course=course,
166 course_questionnaire_results_top=course_questionnaire_results_top,
167 course_questionnaire_results_bottom=course_questionnaire_results_bottom,
168 contributor_contribution_results=contributor_contribution_results,
169 reviewer=request.user.is_reviewer,
170 contributor=course.is_user_contributor_or_delegate(request.user),
171 can_download_grades=request.user.can_download_grades,
172 public_view=public_view)
173 return render(request, "results_course_detail.html", template_data)
174
175
176 def add_warnings(course, course_result):
177 if not course.can_publish_rating_results:
178 return
179
180 # calculate the median values of how many people answered a questionnaire across all contributions
181 questionnaire_max_answers = defaultdict(list)
182 for questionnaire_result in course_result.questionnaire_results:
183 max_answers = max((question_result.count_sum for question_result in questionnaire_result.question_results if question_result.question.is_rating_question), default=0)
184 questionnaire_max_answers[questionnaire_result.questionnaire].append(max_answers)
185
186 questionnaire_warning_thresholds = {}
187 for questionnaire, max_answers_list in questionnaire_max_answers.items():
188 questionnaire_warning_thresholds[questionnaire] = max(settings.RESULTS_WARNING_PERCENTAGE * median(max_answers_list), settings.RESULTS_WARNING_COUNT)
189
190 for questionnaire_result in course_result.questionnaire_results:
191 rating_results = [question_result for question_result in questionnaire_result.question_results if question_result.question.is_rating_question]
192 max_answers = max((rating_result.count_sum for rating_result in rating_results), default=0)
193 questionnaire_result.warning = 0 < max_answers < questionnaire_warning_thresholds[questionnaire_result.questionnaire]
194
195 for rating_result in rating_results:
196 rating_result.warning = questionnaire_result.warning or rating_result.has_answers and rating_result.count_sum < questionnaire_warning_thresholds[questionnaire_result.questionnaire]
197
198
199 def user_can_see_text_answer(user, represented_users, text_answer, public_view=False):
200 assert text_answer.state in [TextAnswer.PRIVATE, TextAnswer.PUBLISHED]
201
202 if public_view:
203 return False
204 if user.is_reviewer:
205 return True
206
207 contributor = text_answer.contribution.contributor
208
209 if text_answer.is_private:
210 return contributor == user
211
212 if text_answer.is_published:
213 if text_answer.contribution.responsible:
214 return contributor == user or user in contributor.delegates.all()
215
216 if contributor in represented_users:
217 return True
218 if text_answer.contribution.course.contributions.filter(
219 contributor__in=represented_users, comment_visibility=Contribution.ALL_COMMENTS).exists():
220 return True
221 if text_answer.contribution.is_general and text_answer.contribution.course.contributions.filter(
222 contributor__in=represented_users, comment_visibility=Contribution.COURSE_COMMENTS).exists():
223 return True
224
225 return False
226
[end of evap/results/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/evap/results/views.py b/evap/results/views.py
--- a/evap/results/views.py
+++ b/evap/results/views.py
@@ -51,8 +51,9 @@
def get_courses_with_prefetched_data(courses):
if isinstance(courses, QuerySet):
+ participant_counts = courses.annotate(num_participants=Count("participants")).values_list("num_participants", flat=True)
+ voter_counts = courses.annotate(num_voters=Count("voters")).values_list("num_voters", flat=True)
courses = (courses
- .annotate(num_participants=Count("participants", distinct=True), num_voters=Count("voters", distinct=True))
.select_related("type")
.prefetch_related(
"degrees",
@@ -60,7 +61,10 @@
Prefetch("contributions", queryset=Contribution.objects.filter(responsible=True).select_related("contributor"), to_attr="responsible_contributions")
)
)
- for course in courses:
+ for course, participant_count, voter_count in zip(courses, participant_counts, voter_counts):
+ if course._participant_count is None:
+ course.num_participants = participant_count
+ course.num_voters = voter_count
course.responsible_contributors = [contribution.contributor for contribution in course.responsible_contributions]
for course in courses:
if not course.is_single_result:
|
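As a usage-level note on the pattern in this diff — computing each aggregate in its own query and attaching the values manually — here is a minimal sketch. It reuses the relation names visible in the diff, but the helper itself is hypothetical and not code from the repository; the zip relies on the three querysets sharing the same ordering.

```python
from django.db.models import Count

def attach_counts(courses):
    # One annotate() per aggregate, evaluated as flat value lists.
    participant_counts = courses.annotate(
        num_participants=Count("participants")
    ).values_list("num_participants", flat=True)
    voter_counts = courses.annotate(
        num_voters=Count("voters")
    ).values_list("num_voters", flat=True)

    # Attach the values positionally; all three iterables must keep the
    # same row order for zip() to line the counts up with their course.
    for course, participants, voters in zip(courses, participant_counts, voter_counts):
        course.num_participants = participants
        course.num_voters = voters
    return courses
```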
{"golden_diff": "diff --git a/evap/results/views.py b/evap/results/views.py\n--- a/evap/results/views.py\n+++ b/evap/results/views.py\n@@ -51,8 +51,9 @@\n \n def get_courses_with_prefetched_data(courses):\n if isinstance(courses, QuerySet):\n+ participant_counts = courses.annotate(num_participants=Count(\"participants\")).values_list(\"num_participants\", flat=True)\n+ voter_counts = courses.annotate(num_voters=Count(\"voters\")).values_list(\"num_voters\", flat=True)\n courses = (courses\n- .annotate(num_participants=Count(\"participants\", distinct=True), num_voters=Count(\"voters\", distinct=True))\n .select_related(\"type\")\n .prefetch_related(\n \"degrees\",\n@@ -60,7 +61,10 @@\n Prefetch(\"contributions\", queryset=Contribution.objects.filter(responsible=True).select_related(\"contributor\"), to_attr=\"responsible_contributions\")\n )\n )\n- for course in courses:\n+ for course, participant_count, voter_count in zip(courses, participant_counts, voter_counts):\n+ if course._participant_count is None:\n+ course.num_participants = participant_count\n+ course.num_voters = voter_count\n course.responsible_contributors = [contribution.contributor for contribution in course.responsible_contributions]\n for course in courses:\n if not course.is_single_result:\n", "issue": "Participant count incorrect on results index\nE.g. for \"Lehrerworkshop bei der Endrunde des Bundeswettbewerb Informatik (Sommersemester 2017)\" the overview shows \"0/0\", or for MINT-Camps it says \"1/1\" or \"3/3\" participants - the correct numbers are 12, 24 and 22.\r\n\r\n\r\n\n", "before_files": [{"content": "from collections import defaultdict\nfrom statistics import median\n\nfrom django.conf import settings\nfrom django.db.models import QuerySet, Prefetch, Count\nfrom django.core.cache import caches\nfrom django.core.cache.utils import make_template_fragment_key\nfrom django.core.exceptions import PermissionDenied\nfrom django.shortcuts import get_object_or_404, render\nfrom django.template.loader import get_template\nfrom django.contrib.auth.decorators import login_required\nfrom django.utils import translation\n\nfrom evap.evaluation.models import Semester, Degree, Contribution, Course, CourseType\nfrom evap.evaluation.auth import internal_required\nfrom evap.results.tools import collect_results, calculate_average_distribution, distribution_to_grade, \\\n TextAnswer, TextResult, HeadingResult, get_single_result_rating_result\n\n\ndef get_course_result_template_fragment_cache_key(course_id, language, can_user_see_results_page):\n return make_template_fragment_key('course_result_template_fragment', [course_id, language, can_user_see_results_page])\n\n\ndef delete_template_cache(course):\n assert course.state != 'published'\n caches['results'].delete(get_course_result_template_fragment_cache_key(course.id, 'en', True))\n caches['results'].delete(get_course_result_template_fragment_cache_key(course.id, 'en', False))\n caches['results'].delete(get_course_result_template_fragment_cache_key(course.id, 'de', True))\n caches['results'].delete(get_course_result_template_fragment_cache_key(course.id, 'de', False))\n\n\ndef warm_up_template_cache(courses):\n courses = get_courses_with_prefetched_data(courses)\n current_language = translation.get_language()\n try:\n for course in courses:\n assert course.state == 'published'\n translation.activate('en')\n get_template('results_index_course.html').render(dict(course=course, can_user_see_results_page=True))\n 
get_template('results_index_course.html').render(dict(course=course, can_user_see_results_page=False))\n translation.activate('de')\n get_template('results_index_course.html').render(dict(course=course, can_user_see_results_page=True))\n get_template('results_index_course.html').render(dict(course=course, can_user_see_results_page=False))\n assert get_course_result_template_fragment_cache_key(course.id, 'en', True) in caches['results']\n assert get_course_result_template_fragment_cache_key(course.id, 'en', False) in caches['results']\n assert get_course_result_template_fragment_cache_key(course.id, 'de', True) in caches['results']\n assert get_course_result_template_fragment_cache_key(course.id, 'de', False) in caches['results']\n finally:\n translation.activate(current_language) # reset to previously set language to prevent unwanted side effects\n\n\ndef get_courses_with_prefetched_data(courses):\n if isinstance(courses, QuerySet):\n courses = (courses\n .annotate(num_participants=Count(\"participants\", distinct=True), num_voters=Count(\"voters\", distinct=True))\n .select_related(\"type\")\n .prefetch_related(\n \"degrees\",\n \"semester\",\n Prefetch(\"contributions\", queryset=Contribution.objects.filter(responsible=True).select_related(\"contributor\"), to_attr=\"responsible_contributions\")\n )\n )\n for course in courses:\n course.responsible_contributors = [contribution.contributor for contribution in course.responsible_contributions]\n for course in courses:\n if not course.is_single_result:\n course.distribution = calculate_average_distribution(course)\n course.avg_grade = distribution_to_grade(course.distribution)\n else:\n course.single_result_rating_result = get_single_result_rating_result(course)\n return courses\n\n\n@internal_required\ndef index(request):\n semesters = Semester.get_all_with_published_unarchived_results()\n courses = Course.objects.filter(semester__in=semesters, state='published')\n courses = [course for course in courses if course.can_user_see_course(request.user)]\n\n if request.user.is_reviewer:\n additional_courses = Course.objects.filter(semester__in=semesters, state__in=['in_evaluation', 'evaluated', 'reviewed'])\n courses += get_courses_with_prefetched_data(additional_courses)\n\n course_pks = [course.pk for course in courses]\n degrees = Degree.objects.filter(courses__pk__in=course_pks).distinct()\n course_types = CourseType.objects.filter(courses__pk__in=course_pks).distinct()\n template_data = dict(\n courses=courses,\n degrees=degrees,\n course_types=sorted(course_types, key=lambda course_type: course_type.name),\n semesters=semesters,\n )\n return render(request, \"results_index.html\", template_data)\n\n\n@login_required\ndef course_detail(request, semester_id, course_id):\n semester = get_object_or_404(Semester, id=semester_id)\n course = get_object_or_404(semester.course_set, id=course_id, semester=semester)\n\n if not course.can_user_see_results_page(request.user):\n raise PermissionDenied\n\n course_result = collect_results(course)\n\n if request.user.is_reviewer:\n public_view = request.GET.get('public_view') != 'false' # if parameter is not given, show public view.\n else:\n public_view = request.GET.get('public_view') == 'true' # if parameter is not given, show own view.\n\n # redirect to non-public view if there is none because the results have not been published\n if not course.can_publish_rating_results:\n public_view = False\n\n represented_users = list(request.user.represented_users.all()) + [request.user]\n\n # remove text answers 
if the user may not see them\n for questionnaire_result in course_result.questionnaire_results:\n for question_result in questionnaire_result.question_results:\n if isinstance(question_result, TextResult):\n question_result.answers = [answer for answer in question_result.answers if user_can_see_text_answer(request.user, represented_users, answer, public_view)]\n # remove empty TextResults\n questionnaire_result.question_results = [result for result in questionnaire_result.question_results if not isinstance(result, TextResult) or len(result.answers) > 0]\n\n # filter empty headings\n for questionnaire_result in course_result.questionnaire_results:\n filtered_question_results = []\n for index, question_result in enumerate(questionnaire_result.question_results):\n # filter out if there are no more questions or the next question is also a heading question\n if isinstance(question_result, HeadingResult):\n if index == len(questionnaire_result.question_results) - 1 or isinstance(questionnaire_result.question_results[index + 1], HeadingResult):\n continue\n filtered_question_results.append(question_result)\n questionnaire_result.question_results = filtered_question_results\n\n # remove empty questionnaire_results and contribution_results\n for contribution_result in course_result.contribution_results:\n contribution_result.questionnaire_results = [questionnaire_result for questionnaire_result in contribution_result.questionnaire_results if questionnaire_result.question_results]\n course_result.contribution_results = [contribution_result for contribution_result in course_result.contribution_results if contribution_result.questionnaire_results]\n\n add_warnings(course, course_result)\n\n # split course_result into different lists\n course_questionnaire_results_top = []\n course_questionnaire_results_bottom = []\n contributor_contribution_results = []\n for contribution_result in course_result.contribution_results:\n if contribution_result.contributor is None:\n for questionnaire_result in contribution_result.questionnaire_results:\n if questionnaire_result.questionnaire.is_below_contributors:\n course_questionnaire_results_bottom.append(questionnaire_result)\n else:\n course_questionnaire_results_top.append(questionnaire_result)\n else:\n contributor_contribution_results.append(contribution_result)\n\n if not contributor_contribution_results:\n course_questionnaire_results_top += course_questionnaire_results_bottom\n course_questionnaire_results_bottom = []\n\n course.distribution = calculate_average_distribution(course)\n course.avg_grade = distribution_to_grade(course.distribution)\n\n template_data = dict(\n course=course,\n course_questionnaire_results_top=course_questionnaire_results_top,\n course_questionnaire_results_bottom=course_questionnaire_results_bottom,\n contributor_contribution_results=contributor_contribution_results,\n reviewer=request.user.is_reviewer,\n contributor=course.is_user_contributor_or_delegate(request.user),\n can_download_grades=request.user.can_download_grades,\n public_view=public_view)\n return render(request, \"results_course_detail.html\", template_data)\n\n\ndef add_warnings(course, course_result):\n if not course.can_publish_rating_results:\n return\n\n # calculate the median values of how many people answered a questionnaire across all contributions\n questionnaire_max_answers = defaultdict(list)\n for questionnaire_result in course_result.questionnaire_results:\n max_answers = max((question_result.count_sum for question_result in 
questionnaire_result.question_results if question_result.question.is_rating_question), default=0)\n questionnaire_max_answers[questionnaire_result.questionnaire].append(max_answers)\n\n questionnaire_warning_thresholds = {}\n for questionnaire, max_answers_list in questionnaire_max_answers.items():\n questionnaire_warning_thresholds[questionnaire] = max(settings.RESULTS_WARNING_PERCENTAGE * median(max_answers_list), settings.RESULTS_WARNING_COUNT)\n\n for questionnaire_result in course_result.questionnaire_results:\n rating_results = [question_result for question_result in questionnaire_result.question_results if question_result.question.is_rating_question]\n max_answers = max((rating_result.count_sum for rating_result in rating_results), default=0)\n questionnaire_result.warning = 0 < max_answers < questionnaire_warning_thresholds[questionnaire_result.questionnaire]\n\n for rating_result in rating_results:\n rating_result.warning = questionnaire_result.warning or rating_result.has_answers and rating_result.count_sum < questionnaire_warning_thresholds[questionnaire_result.questionnaire]\n\n\ndef user_can_see_text_answer(user, represented_users, text_answer, public_view=False):\n assert text_answer.state in [TextAnswer.PRIVATE, TextAnswer.PUBLISHED]\n\n if public_view:\n return False\n if user.is_reviewer:\n return True\n\n contributor = text_answer.contribution.contributor\n\n if text_answer.is_private:\n return contributor == user\n\n if text_answer.is_published:\n if text_answer.contribution.responsible:\n return contributor == user or user in contributor.delegates.all()\n\n if contributor in represented_users:\n return True\n if text_answer.contribution.course.contributions.filter(\n contributor__in=represented_users, comment_visibility=Contribution.ALL_COMMENTS).exists():\n return True\n if text_answer.contribution.is_general and text_answer.contribution.course.contributions.filter(\n contributor__in=represented_users, comment_visibility=Contribution.COURSE_COMMENTS).exists():\n return True\n\n return False\n", "path": "evap/results/views.py"}]}
| num_tokens_prompt: 3,505 | num_tokens_diff: 310 |
| problem_id: gh_patches_debug_23780 | source: rasdani/github-patches | task_type: git_diff | in_source_id: pypa__setuptools-2256 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Unexpected behavior when distutils is invoked before Setuptools
TL;DR We are forced to use distutils because setuptools has broken symlink processing and this causes `distutils.errors.DistutilsClassError: command class <class '__main__.SDistCommand'> must subclass Command`.
It works with `setuptools<48` and the changelog doesn't document any breaking behaviors for this version.
Repro:
```console
$ git clone https://github.com/ansible/ansible.git
$ cd ansible
$ pip install -U 'setuptools>=48'
$ python setup.py sdist
```
(tried under Python 3.8)
Ref: https://github.com/ansible/ansible/issues/70456
</issue>
<code>
[start of setuptools/distutils_patch.py]
1 """
2 Ensure that the local copy of distutils is preferred over stdlib.
3
4 See https://github.com/pypa/setuptools/issues/417#issuecomment-392298401
5 for more motivation.
6 """
7
8 import sys
9 import re
10 import os
11 import importlib
12 import warnings
13
14
15 def clear_distutils():
16 if 'distutils' not in sys.modules:
17 return
18 warnings.warn("Setuptools is replacing distutils")
19 mods = [name for name in sys.modules if re.match(r'distutils\b', name)]
20 for name in mods:
21 del sys.modules[name]
22
23
24 def enabled():
25 """
26 Allow selection of distutils by environment variable.
27 """
28 which = os.environ.get('SETUPTOOLS_USE_DISTUTILS', 'stdlib')
29 return which == 'local'
30
31
32 def ensure_local_distutils():
33 clear_distutils()
34 distutils = importlib.import_module('setuptools._distutils')
35 distutils.__name__ = 'distutils'
36 sys.modules['distutils'] = distutils
37
38 # sanity check that submodules load as expected
39 core = importlib.import_module('distutils.core')
40 assert '_distutils' in core.__file__, core.__file__
41
42
43 if enabled():
44 ensure_local_distutils()
45
[end of setuptools/distutils_patch.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/setuptools/distutils_patch.py b/setuptools/distutils_patch.py
--- a/setuptools/distutils_patch.py
+++ b/setuptools/distutils_patch.py
@@ -12,10 +12,26 @@
import warnings
+is_pypy = '__pypy__' in sys.builtin_module_names
+
+
+def warn_distutils_present():
+ if 'distutils' not in sys.modules:
+ return
+ if is_pypy and sys.version_info < (3, 7):
+ # PyPy for 3.6 unconditionally imports distutils, so bypass the warning
+ # https://foss.heptapod.net/pypy/pypy/-/blob/be829135bc0d758997b3566062999ee8b23872b4/lib-python/3/site.py#L250
+ return
+ warnings.warn(
+ "Distutils was imported before Setuptools. This usage is discouraged "
+ "and may exhibit undesirable behaviors or errors. Please use "
+ "Setuptools' objects directly or at least import Setuptools first.")
+
+
def clear_distutils():
if 'distutils' not in sys.modules:
return
- warnings.warn("Setuptools is replacing distutils")
+ warnings.warn("Setuptools is replacing distutils.")
mods = [name for name in sys.modules if re.match(r'distutils\b', name)]
for name in mods:
del sys.modules[name]
@@ -40,5 +56,6 @@
assert '_distutils' in core.__file__, core.__file__
+warn_distutils_present()
if enabled():
ensure_local_distutils()
|
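The core of the change above is a presence check on `sys.modules` that fires a warning when `distutils` was imported first. A small standalone sketch of that mechanism, independent of the Setuptools codebase:

```python
import sys
import warnings

def warn_if_distutils_imported_first():
    # Modules imported earlier in the process are already registered in
    # sys.modules, so a membership test is enough to detect the ordering.
    if "distutils" in sys.modules:
        warnings.warn(
            "distutils was imported before setuptools; command classes "
            "from the stdlib copy may end up being used."
        )

import distutils.core  # importing distutils first ...
warn_if_distutils_imported_first()  # ... makes this call emit the warning
```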
{"golden_diff": "diff --git a/setuptools/distutils_patch.py b/setuptools/distutils_patch.py\n--- a/setuptools/distutils_patch.py\n+++ b/setuptools/distutils_patch.py\n@@ -12,10 +12,26 @@\n import warnings\n \n \n+is_pypy = '__pypy__' in sys.builtin_module_names\n+\n+\n+def warn_distutils_present():\n+ if 'distutils' not in sys.modules:\n+ return\n+ if is_pypy and sys.version_info < (3, 7):\n+ # PyPy for 3.6 unconditionally imports distutils, so bypass the warning\n+ # https://foss.heptapod.net/pypy/pypy/-/blob/be829135bc0d758997b3566062999ee8b23872b4/lib-python/3/site.py#L250\n+ return\n+ warnings.warn(\n+ \"Distutils was imported before Setuptools. This usage is discouraged \"\n+ \"and may exhibit undesirable behaviors or errors. Please use \"\n+ \"Setuptools' objects directly or at least import Setuptools first.\")\n+\n+\n def clear_distutils():\n if 'distutils' not in sys.modules:\n return\n- warnings.warn(\"Setuptools is replacing distutils\")\n+ warnings.warn(\"Setuptools is replacing distutils.\")\n mods = [name for name in sys.modules if re.match(r'distutils\\b', name)]\n for name in mods:\n del sys.modules[name]\n@@ -40,5 +56,6 @@\n assert '_distutils' in core.__file__, core.__file__\n \n \n+warn_distutils_present()\n if enabled():\n ensure_local_distutils()\n", "issue": "Unexpected behavior when distutils is invoked before Setuptools\nTL;DR We are forced to use distutils because setuptools has broken symlink processing and this causes `distutils.errors.DistutilsClassError: command class <class '__main__.SDistCommand'> must subclass Command`.\r\n\r\nIt works with `setuptools<48` and the changelog doesn't document any breaking behaviors for this version.\r\n\r\nRepro:\r\n```console\r\n$ git clone https://github.com/ansible/ansible.git\r\n$ cd ansible\r\n$ pip install -U 'setuptools>=48'\r\n$ python setup.py sdist\r\n```\r\n(tried under Python 3.8)\r\n\r\nRef: https://github.com/ansible/ansible/issues/70456\n", "before_files": [{"content": "\"\"\"\nEnsure that the local copy of distutils is preferred over stdlib.\n\nSee https://github.com/pypa/setuptools/issues/417#issuecomment-392298401\nfor more motivation.\n\"\"\"\n\nimport sys\nimport re\nimport os\nimport importlib\nimport warnings\n\n\ndef clear_distutils():\n if 'distutils' not in sys.modules:\n return\n warnings.warn(\"Setuptools is replacing distutils\")\n mods = [name for name in sys.modules if re.match(r'distutils\\b', name)]\n for name in mods:\n del sys.modules[name]\n\n\ndef enabled():\n \"\"\"\n Allow selection of distutils by environment variable.\n \"\"\"\n which = os.environ.get('SETUPTOOLS_USE_DISTUTILS', 'stdlib')\n return which == 'local'\n\n\ndef ensure_local_distutils():\n clear_distutils()\n distutils = importlib.import_module('setuptools._distutils')\n distutils.__name__ = 'distutils'\n sys.modules['distutils'] = distutils\n\n # sanity check that submodules load as expected\n core = importlib.import_module('distutils.core')\n assert '_distutils' in core.__file__, core.__file__\n\n\nif enabled():\n ensure_local_distutils()\n", "path": "setuptools/distutils_patch.py"}]}
| num_tokens_prompt: 1,045 | num_tokens_diff: 378 |
| problem_id: gh_patches_debug_3506 | source: rasdani/github-patches | task_type: git_diff | in_source_id: vega__altair-692 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Two renderers with same name
Working with a fresh Anaconda installation of Jupyter 3.6. Followed Altair Notebook installation instructions. Basic example (flower petals scatter plot) rendered, but with JSON underneath the graph (issue #634). I thought this might be due to having selected the wrong renderer, so I went to list the available renderers.
`alt.renderers.names()`
returns
`['default', 'json', 'notebook', 'notebook']`
Secretly hoping the second `notebook` renderer solves #634. In any case, I think you shouldn't be able to have two renderers with the same name.
(hs teacher, maybe I'm missing something)
</issue>
<code>
[start of altair/utils/plugin_registry.py]
1 from typing import Callable, Generic, List, TypeVar, Union, cast
2
3 import entrypoints
4
5
6 PluginType = TypeVar('PluginType')
7
8
9 class PluginRegistry(Generic[PluginType]):
10 """A registry for plugins.
11
12 This is a plugin registry that allows plugins to be loaded/registered
13 in two ways:
14
15 1. Through an explicit call to ``.register(name, value)``.
16 2. By looking for other Python packages that are installed and provide
17 a setuptools entry point group.
18
19 When you create an instance of this class, provide the name of the
20 entry point group to use::
21
22 reg = PluginRegister('my_entrypoint_group')
23
24 """
25 # this is a mapping of name to error message to allow custom error messages
26 # in case an entrypoint is not found
27 entrypoint_err_messages = {}
28
29 def __init__(self, entry_point_group: str = '', plugin_type=object) -> None:
30 """Create a PluginRegistry for a named entry point group.
31
32 Parameters
33 ==========
34 entry_point_group: str
35 The name of the entry point group.
36 plugin_type: object
37 A type that will optionally be used for runtime type checking of
38 loaded plugins using isinstance.
39 """
40 self.entry_point_group = entry_point_group
41 self.plugin_type = plugin_type
42 self._active = None # type: None
43 self._active_name = '' # type: str
44 self._plugins = {} # type: dict
45 self._options = {} # type: dict
46
47 def register(self, name: str, value: Union[PluginType,None]) -> PluginType:
48 """Register a plugin by name and value.
49
50 This method is used for explicit registration of a plugin and shouldn't be
51 used to manage entry point managed plugins, which are auto-loaded.
52
53 Parameters
54 ==========
55 name: str
56 The name of the plugin.
57 value: PluginType or None
58 The actual plugin object to register or None to unregister that plugin.
59
60 Returns
61 =======
62 plugin: PluginType
63 The plugin that was registered or unregistered.
64 """
65 if value is None and name in self._plugins:
66 return self._plugins.pop(name)
67 else:
68 assert isinstance(value, self.plugin_type)
69 self._plugins[name] = value
70 return value
71
72
73 def names(self) -> List[str]:
74 """List the names of the registered and entry points plugins."""
75 exts = list(self._plugins.keys())
76 more_exts = [ep.name for ep in entrypoints.get_group_all(self.entry_point_group)]
77 exts.extend(more_exts)
78 return exts
79
80
81 def enable(self, name: str) -> None:
82 """Enable a plugin by name."""
83 if name not in self._plugins:
84 try:
85 ep = entrypoints.get_single(self.entry_point_group, name)
86 except entrypoints.NoSuchEntryPoint as err:
87 if name in self.entrypoint_err_messages:
88 raise ValueError(self.entrypoint_err_messages[name])
89 else:
90 raise
91 value = cast(PluginType, ep.load())
92 assert isinstance(value, self.plugin_type)
93 self.register(name, value)
94 self._active_name = name
95 self._active = self._plugins[name]
96
97 @property
98 def active(self) -> str:
99 """Return the name of the currently active plugin"""
100 return self._active_name
101
102 def get(self) -> PluginType:
103 """Return the currently active plugin."""
104 return self._active
105
106 def __repr__(self) -> str:
107 return ("{0}(active={1!r}, registered={2!r})"
108 "".format(self.__class__.__name__,
109 self._active_name,
110 list(self.names())))
111
[end of altair/utils/plugin_registry.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/altair/utils/plugin_registry.py b/altair/utils/plugin_registry.py
--- a/altair/utils/plugin_registry.py
+++ b/altair/utils/plugin_registry.py
@@ -75,7 +75,7 @@
exts = list(self._plugins.keys())
more_exts = [ep.name for ep in entrypoints.get_group_all(self.entry_point_group)]
exts.extend(more_exts)
- return exts
+ return sorted(set(exts))
def enable(self, name: str) -> None:
|
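The one-line change deduplicates and orders the combined list of registered and entry-point names. A tiny illustration of the effect, with made-up values standing in for the registry contents:

```python
registered = ["default", "json", "notebook"]
entrypoint_names = ["notebook"]  # same plugin also advertised via an entry point

names = registered + entrypoint_names
print(names)               # ['default', 'json', 'notebook', 'notebook']
print(sorted(set(names)))  # ['default', 'json', 'notebook']
```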
{"golden_diff": "diff --git a/altair/utils/plugin_registry.py b/altair/utils/plugin_registry.py\n--- a/altair/utils/plugin_registry.py\n+++ b/altair/utils/plugin_registry.py\n@@ -75,7 +75,7 @@\n exts = list(self._plugins.keys())\n more_exts = [ep.name for ep in entrypoints.get_group_all(self.entry_point_group)]\n exts.extend(more_exts)\n- return exts\n+ return sorted(set(exts))\n \n \n def enable(self, name: str) -> None:\n", "issue": "Two renderers with same name\nWorking with a fresh Anaconda installation of Jupyter 3.6. Followed Altair Notebook installation instructions. Basic example (flower petals scatter plot) rendered but with JSON underneath graph (issue #634). I thought this might be due to having selected the wrong renderer., so I went to list the available renderers. \r\n\r\n`alt.renderers.names()`\r\nreturns\r\n`['default', 'json', 'notebook', 'notebook']`\r\n\r\nSecretly hoping the second `notebook` renderer solves #634. In any case, I think you shouldn't be able to have two renderers with the same name. \r\n\r\n(hs teacher, maybe I'm missing something)\n", "before_files": [{"content": "from typing import Callable, Generic, List, TypeVar, Union, cast\n\nimport entrypoints\n\n\nPluginType = TypeVar('PluginType')\n\n\nclass PluginRegistry(Generic[PluginType]):\n \"\"\"A registry for plugins.\n\n This is a plugin registry that allows plugins to be loaded/registered\n in two ways:\n\n 1. Through an explicit call to ``.register(name, value)``.\n 2. By looking for other Python packages that are installed and provide\n a setuptools entry point group.\n\n When you create an instance of this class, provide the name of the\n entry point group to use::\n\n reg = PluginRegister('my_entrypoint_group')\n\n \"\"\"\n # this is a mapping of name to error message to allow custom error messages\n # in case an entrypoint is not found\n entrypoint_err_messages = {}\n\n def __init__(self, entry_point_group: str = '', plugin_type=object) -> None:\n \"\"\"Create a PluginRegistry for a named entry point group.\n\n Parameters\n ==========\n entry_point_group: str\n The name of the entry point group.\n plugin_type: object\n A type that will optionally be used for runtime type checking of\n loaded plugins using isinstance.\n \"\"\"\n self.entry_point_group = entry_point_group\n self.plugin_type = plugin_type\n self._active = None # type: None\n self._active_name = '' # type: str\n self._plugins = {} # type: dict\n self._options = {} # type: dict\n\n def register(self, name: str, value: Union[PluginType,None]) -> PluginType:\n \"\"\"Register a plugin by name and value.\n\n This method is used for explicit registration of a plugin and shouldn't be\n used to manage entry point managed plugins, which are auto-loaded.\n\n Parameters\n ==========\n name: str\n The name of the plugin.\n value: PluginType or None\n The actual plugin object to register or None to unregister that plugin.\n\n Returns\n =======\n plugin: PluginType\n The plugin that was registered or unregistered.\n \"\"\"\n if value is None and name in self._plugins:\n return self._plugins.pop(name)\n else:\n assert isinstance(value, self.plugin_type)\n self._plugins[name] = value\n return value\n\n\n def names(self) -> List[str]:\n \"\"\"List the names of the registered and entry points plugins.\"\"\"\n exts = list(self._plugins.keys())\n more_exts = [ep.name for ep in entrypoints.get_group_all(self.entry_point_group)]\n exts.extend(more_exts)\n return exts\n\n\n def enable(self, name: str) -> None:\n \"\"\"Enable a plugin by name.\"\"\"\n if name 
not in self._plugins:\n try:\n ep = entrypoints.get_single(self.entry_point_group, name)\n except entrypoints.NoSuchEntryPoint as err:\n if name in self.entrypoint_err_messages:\n raise ValueError(self.entrypoint_err_messages[name])\n else:\n raise\n value = cast(PluginType, ep.load())\n assert isinstance(value, self.plugin_type)\n self.register(name, value)\n self._active_name = name\n self._active = self._plugins[name]\n\n @property\n def active(self) -> str:\n \"\"\"Return the name of the currently active plugin\"\"\"\n return self._active_name\n\n def get(self) -> PluginType:\n \"\"\"Return the currently active plugin.\"\"\"\n return self._active\n\n def __repr__(self) -> str:\n return (\"{0}(active={1!r}, registered={2!r})\"\n \"\".format(self.__class__.__name__,\n self._active_name,\n list(self.names())))\n", "path": "altair/utils/plugin_registry.py"}]}
| num_tokens_prompt: 1,732 | num_tokens_diff: 119 |
| problem_id: gh_patches_debug_18401 | source: rasdani/github-patches | task_type: git_diff | in_source_id: pyjanitor-devs__pyjanitor-941 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[ENH] Add `softmax` to math submodule
In reviewing the `math` module, I'd like to add `softmax` to the functions. This will complete the "activation" functions :) as we have sigmoids but no softmax.
</issue>
<code>
[start of janitor/math.py]
1 """ Miscellaneous mathematical operators. """
2
3 import warnings
4 from typing import Tuple
5
6 import numpy as np
7 import pandas as pd
8 import pandas_flavor as pf
9 from pandas.api.types import is_numeric_dtype
10 from scipy.special import expit
11 from scipy.stats import norm
12
13
14 @pf.register_series_method
15 def log(s: pd.Series, error: str = "warn") -> pd.Series:
16 """
17 Take natural logarithm of the Series.
18
19 :param s: Input Series.
20 :param error: Determines behavior when taking the log of nonpositive
21 entries. If `'warn'` then a `RuntimeWarning` is thrown. If `'raise'`,
22 then a `RuntimeError` is thrown. Otherwise, nothing is thrown and
23 log of nonpositive values is `np.nan`; defaults to `'warn'`.
24 :raises RuntimeError: Raised when there are nonpositive values in the
25 Series and `error='raise'`.
26 :return: Transformed Series.
27 """
28 s = s.copy()
29 nonpositive = s <= 0
30 if (nonpositive).any():
31 msg = f"Log taken on {nonpositive.sum()} nonpositive value(s)"
32 if error.lower() == "warn":
33 warnings.warn(msg, RuntimeWarning)
34 if error.lower() == "raise":
35 raise RuntimeError(msg)
36 else:
37 pass
38 s[nonpositive] = np.nan
39 return np.log(s)
40
41
42 @pf.register_series_method
43 def exp(s: pd.Series) -> pd.Series:
44 """
45 Take the exponential transform of the series.
46
47 :param s: Input Series.
48 :return: Transformed Series.
49 """
50 return np.exp(s)
51
52
53 @pf.register_series_method
54 def sigmoid(s: pd.Series) -> pd.Series:
55 """
56 Take the sigmoid transform of the series where:
57
58 ```python
59 sigmoid(x) = 1 / (1 + exp(-x))
60 ```
61
62 :param s: Input Series.
63 :return: Transformed Series.
64 """
65 return expit(s)
66
67
68 @pf.register_series_method
69 def logit(s: pd.Series, error: str = "warn") -> pd.Series:
70 """
71 Take logit transform of the Series where:
72
73 ```python
74 logit(p) = log(p/(1-p))
75 ```
76
77 :param s: Input Series.
78 :param error: Determines behavior when `s / (1-s)` is outside of `(0, 1)`.
79 If `'warn'` then a `RuntimeWarning` is thrown. If `'raise'`, then a
80 `RuntimeError` is thrown. Otherwise, nothing is thrown and `np.nan`
81 is returned for the problematic entries; defaults to `'warn'`.
82 :return: Transformed Series.
83 :raises RuntimeError: if `error` is set to `'raise'`.
84 """
85 s = s.copy()
86 odds_ratio = s / (1 - s)
87 outside_support = (odds_ratio <= 0) | (odds_ratio >= 1)
88 if (outside_support).any():
89 msg = f"Odds ratio for {outside_support.sum()} value(s) \
90 are outside of (0, 1)"
91 if error.lower() == "warn":
92 warnings.warn(msg, RuntimeWarning)
93 if error.lower() == "raise":
94 raise RuntimeError(msg)
95 else:
96 pass
97 odds_ratio[outside_support] = np.nan
98 return odds_ratio.log(error="ignore")
99
100
101 @pf.register_series_method
102 def normal_cdf(s: pd.Series) -> pd.Series:
103 """
104 Transforms the Series via the CDF of the Normal distribution.
105
106 :param s: Input Series.
107 :return: Transformed Series.
108 """
109 return pd.Series(norm.cdf(s), index=s.index)
110
111
112 @pf.register_series_method
113 def probit(s: pd.Series, error: str = "warn") -> pd.Series:
114 """
115 Transforms the Series via the inverse CDF of the Normal distribution.
116
117 :param s: Input Series.
118 :param error: Determines behavior when `s` is outside of `(0, 1)`.
119 If `'warn'` then a `RuntimeWarning` is thrown. If `'raise'`, then
120 a `RuntimeError` is thrown. Otherwise, nothing is thrown and `np.nan`
121 is returned for the problematic entries; defaults to `'warn'`.
122 :raises RuntimeError: Raised when there are problematic values
123 in the Series and `error='raise'`.
124 :return: Transformed Series
125 """
126 s = s.copy()
127 outside_support = (s <= 0) | (s >= 1)
128 if (outside_support).any():
129 msg = f"{outside_support.sum()} value(s) are outside of (0, 1)"
130 if error.lower() == "warn":
131 warnings.warn(msg, RuntimeWarning)
132 if error.lower() == "raise":
133 raise RuntimeError(msg)
134 else:
135 pass
136 s[outside_support] = np.nan
137 with np.errstate(all="ignore"):
138 out = pd.Series(norm.ppf(s), index=s.index)
139 return out
140
141
142 @pf.register_series_method
143 def z_score(
144 s: pd.Series,
145 moments_dict: dict = None,
146 keys: Tuple[str, str] = ("mean", "std"),
147 ) -> pd.Series:
148 """
149 Transforms the Series into z-scores where:
150
151 ```python
152 z = (s - s.mean()) / s.std()
153 ```
154
155 :param s: Input Series.
156 :param moments_dict: If not `None`, then the mean and standard
157 deviation used to compute the z-score transformation is
158 saved as entries in `moments_dict` with keys determined by
159 the `keys` argument; defaults to `None`.
160 :param keys: Determines the keys saved in `moments_dict`
161 if moments are saved; defaults to (`'mean'`, `'std'`).
162 :return: Transformed Series.
163 """
164 mean = s.mean()
165 std = s.std()
166 if std == 0:
167 return 0
168 if moments_dict is not None:
169 moments_dict[keys[0]] = mean
170 moments_dict[keys[1]] = std
171 return (s - mean) / std
172
173
174 @pf.register_series_method
175 def ecdf(s: pd.Series) -> Tuple[np.ndarray, np.ndarray]:
176 """
177 Return cumulative distribution of values in a series.
178
179 Intended to be used with the following pattern:
180
181 ```python
182 df = pd.DataFrame(...)
183
184 # Obtain ECDF values to be plotted
185 x, y = df["column_name"].ecdf()
186
187 # Plot ECDF values
188 plt.scatter(x, y)
189 ```
190
191 Null values must be dropped from the series,
192 otherwise a `ValueError` is raised.
193
194 Also, if the `dtype` of the series is not numeric,
195 a `TypeError` is raised.
196
197 :param s: A pandas series. `dtype` should be numeric.
198 :returns: `(x, y)`.
199 `x`: sorted array of values.
200 `y`: cumulative fraction of data points with value `x` or lower.
201 :raises TypeError: if series is not numeric.
202 :raises ValueError: if series contains nulls.
203 """
204 if not is_numeric_dtype(s):
205 raise TypeError(f"series {s.name} must be numeric!")
206 if not s.isna().sum() == 0:
207 raise ValueError(f"series {s.name} contains nulls. Please drop them.")
208
209 n = len(s)
210 x = np.sort(s)
211 y = np.arange(1, n + 1) / n
212
213 return x, y
214
[end of janitor/math.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/janitor/math.py b/janitor/math.py
--- a/janitor/math.py
+++ b/janitor/math.py
@@ -8,6 +8,7 @@
import pandas_flavor as pf
from pandas.api.types import is_numeric_dtype
from scipy.special import expit
+from scipy.special import softmax as scipy_softmax
from scipy.stats import norm
@@ -65,6 +66,27 @@
return expit(s)
[email protected]_series_method
+def softmax(s: pd.Series) -> pd.Series:
+ """
+ Take the softmax transform of the series.
+
+ The softmax function transforms each element of a collection by
+ computing the exponential of each element divided by the sum of the
+ exponentials of all the elements.
+
+ That is, if x is a one-dimensional numpy array or pandas Series:
+
+ ```python
+ softmax(x) = exp(x)/sum(exp(x))
+ ```
+
+ :param s: Input Series.
+ :return: Transformed Series.
+ """
+ return scipy_softmax(s)
+
+
@pf.register_series_method
def logit(s: pd.Series, error: str = "warn") -> pd.Series:
"""
|
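A quick usage sketch for the new method, assuming the patched package is installed and that importing `janitor.math` runs the `pandas_flavor` registration; the input values are arbitrary.

```python
import janitor.math  # noqa: F401  (registers the Series methods)
import pandas as pd

s = pd.Series([1.0, 2.0, 3.0])
out = s.softmax()        # exp(s) / sum(exp(s)), computed via scipy
print(out)
print(float(out.sum()))  # ~1.0 — softmax outputs sum to one
```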
{"golden_diff": "diff --git a/janitor/math.py b/janitor/math.py\n--- a/janitor/math.py\n+++ b/janitor/math.py\n@@ -8,6 +8,7 @@\n import pandas_flavor as pf\n from pandas.api.types import is_numeric_dtype\n from scipy.special import expit\n+from scipy.special import softmax as scipy_softmax\n from scipy.stats import norm\n \n \n@@ -65,6 +66,27 @@\n return expit(s)\n \n \[email protected]_series_method\n+def softmax(s: pd.Series) -> pd.Series:\n+ \"\"\"\n+ Take the softmax transform of the series.\n+\n+ The softmax function transforms each element of a collection by\n+ computing the exponential of each element divided by the sum of the\n+ exponentials of all the elements.\n+\n+ That is, if x is a one-dimensional numpy array or pandas Series:\n+\n+ ```python\n+ softmax(x) = exp(x)/sum(exp(x))\n+ ```\n+\n+ :param s: Input Series.\n+ :return: Transformed Series.\n+ \"\"\"\n+ return scipy_softmax(s)\n+\n+\n @pf.register_series_method\n def logit(s: pd.Series, error: str = \"warn\") -> pd.Series:\n \"\"\"\n", "issue": "[ENH] Add `softmax` to math submodule\nIn reviewing the `math` module, I'd like to add `softmax` to the functions. This will complete the \"activation\" functions :) as we have sigmoids but no softmax.\n[ENH] Add `softmax` to math submodule\nIn reviewing the `math` module, I'd like to add `softmax` to the functions. This will complete the \"activation\" functions :) as we have sigmoids but no softmax.\n", "before_files": [{"content": "\"\"\" Miscellaneous mathematical operators. \"\"\"\n\nimport warnings\nfrom typing import Tuple\n\nimport numpy as np\nimport pandas as pd\nimport pandas_flavor as pf\nfrom pandas.api.types import is_numeric_dtype\nfrom scipy.special import expit\nfrom scipy.stats import norm\n\n\[email protected]_series_method\ndef log(s: pd.Series, error: str = \"warn\") -> pd.Series:\n \"\"\"\n Take natural logarithm of the Series.\n\n :param s: Input Series.\n :param error: Determines behavior when taking the log of nonpositive\n entries. If `'warn'` then a `RuntimeWarning` is thrown. If `'raise'`,\n then a `RuntimeError` is thrown. Otherwise, nothing is thrown and\n log of nonpositive values is `np.nan`; defaults to `'warn'`.\n :raises RuntimeError: Raised when there are nonpositive values in the\n Series and `error='raise'`.\n :return: Transformed Series.\n \"\"\"\n s = s.copy()\n nonpositive = s <= 0\n if (nonpositive).any():\n msg = f\"Log taken on {nonpositive.sum()} nonpositive value(s)\"\n if error.lower() == \"warn\":\n warnings.warn(msg, RuntimeWarning)\n if error.lower() == \"raise\":\n raise RuntimeError(msg)\n else:\n pass\n s[nonpositive] = np.nan\n return np.log(s)\n\n\[email protected]_series_method\ndef exp(s: pd.Series) -> pd.Series:\n \"\"\"\n Take the exponential transform of the series.\n\n :param s: Input Series.\n :return: Transformed Series.\n \"\"\"\n return np.exp(s)\n\n\[email protected]_series_method\ndef sigmoid(s: pd.Series) -> pd.Series:\n \"\"\"\n Take the sigmoid transform of the series where:\n\n ```python\n sigmoid(x) = 1 / (1 + exp(-x))\n ```\n\n :param s: Input Series.\n :return: Transformed Series.\n \"\"\"\n return expit(s)\n\n\[email protected]_series_method\ndef logit(s: pd.Series, error: str = \"warn\") -> pd.Series:\n \"\"\"\n Take logit transform of the Series where:\n\n ```python\n logit(p) = log(p/(1-p))\n ```\n\n :param s: Input Series.\n :param error: Determines behavior when `s / (1-s)` is outside of `(0, 1)`.\n If `'warn'` then a `RuntimeWarning` is thrown. If `'raise'`, then a\n `RuntimeError` is thrown. 
Otherwise, nothing is thrown and `np.nan`\n is returned for the problematic entries; defaults to `'warn'`.\n :return: Transformed Series.\n :raises RuntimeError: if `error` is set to `'raise'`.\n \"\"\"\n s = s.copy()\n odds_ratio = s / (1 - s)\n outside_support = (odds_ratio <= 0) | (odds_ratio >= 1)\n if (outside_support).any():\n msg = f\"Odds ratio for {outside_support.sum()} value(s) \\\nare outside of (0, 1)\"\n if error.lower() == \"warn\":\n warnings.warn(msg, RuntimeWarning)\n if error.lower() == \"raise\":\n raise RuntimeError(msg)\n else:\n pass\n odds_ratio[outside_support] = np.nan\n return odds_ratio.log(error=\"ignore\")\n\n\[email protected]_series_method\ndef normal_cdf(s: pd.Series) -> pd.Series:\n \"\"\"\n Transforms the Series via the CDF of the Normal distribution.\n\n :param s: Input Series.\n :return: Transformed Series.\n \"\"\"\n return pd.Series(norm.cdf(s), index=s.index)\n\n\[email protected]_series_method\ndef probit(s: pd.Series, error: str = \"warn\") -> pd.Series:\n \"\"\"\n Transforms the Series via the inverse CDF of the Normal distribution.\n\n :param s: Input Series.\n :param error: Determines behavior when `s` is outside of `(0, 1)`.\n If `'warn'` then a `RuntimeWarning` is thrown. If `'raise'`, then\n a `RuntimeError` is thrown. Otherwise, nothing is thrown and `np.nan`\n is returned for the problematic entries; defaults to `'warn'`.\n :raises RuntimeError: Raised when there are problematic values\n in the Series and `error='raise'`.\n :return: Transformed Series\n \"\"\"\n s = s.copy()\n outside_support = (s <= 0) | (s >= 1)\n if (outside_support).any():\n msg = f\"{outside_support.sum()} value(s) are outside of (0, 1)\"\n if error.lower() == \"warn\":\n warnings.warn(msg, RuntimeWarning)\n if error.lower() == \"raise\":\n raise RuntimeError(msg)\n else:\n pass\n s[outside_support] = np.nan\n with np.errstate(all=\"ignore\"):\n out = pd.Series(norm.ppf(s), index=s.index)\n return out\n\n\[email protected]_series_method\ndef z_score(\n s: pd.Series,\n moments_dict: dict = None,\n keys: Tuple[str, str] = (\"mean\", \"std\"),\n) -> pd.Series:\n \"\"\"\n Transforms the Series into z-scores where:\n\n ```python\n z = (s - s.mean()) / s.std()\n ```\n\n :param s: Input Series.\n :param moments_dict: If not `None`, then the mean and standard\n deviation used to compute the z-score transformation is\n saved as entries in `moments_dict` with keys determined by\n the `keys` argument; defaults to `None`.\n :param keys: Determines the keys saved in `moments_dict`\n if moments are saved; defaults to (`'mean'`, `'std'`).\n :return: Transformed Series.\n \"\"\"\n mean = s.mean()\n std = s.std()\n if std == 0:\n return 0\n if moments_dict is not None:\n moments_dict[keys[0]] = mean\n moments_dict[keys[1]] = std\n return (s - mean) / std\n\n\[email protected]_series_method\ndef ecdf(s: pd.Series) -> Tuple[np.ndarray, np.ndarray]:\n \"\"\"\n Return cumulative distribution of values in a series.\n\n Intended to be used with the following pattern:\n\n ```python\n df = pd.DataFrame(...)\n\n # Obtain ECDF values to be plotted\n x, y = df[\"column_name\"].ecdf()\n\n # Plot ECDF values\n plt.scatter(x, y)\n ```\n\n Null values must be dropped from the series,\n otherwise a `ValueError` is raised.\n\n Also, if the `dtype` of the series is not numeric,\n a `TypeError` is raised.\n\n :param s: A pandas series. 
`dtype` should be numeric.\n :returns: `(x, y)`.\n `x`: sorted array of values.\n `y`: cumulative fraction of data points with value `x` or lower.\n :raises TypeError: if series is not numeric.\n :raises ValueError: if series contains nulls.\n \"\"\"\n if not is_numeric_dtype(s):\n raise TypeError(f\"series {s.name} must be numeric!\")\n if not s.isna().sum() == 0:\n raise ValueError(f\"series {s.name} contains nulls. Please drop them.\")\n\n n = len(s)\n x = np.sort(s)\n y = np.arange(1, n + 1) / n\n\n return x, y\n", "path": "janitor/math.py"}]}
| num_tokens_prompt: 2,822 | num_tokens_diff: 270 |
| problem_id: gh_patches_debug_25592 | source: rasdani/github-patches | task_type: git_diff | in_source_id: scverse__scanpy-155 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Recipes with plotting option throw import error
Some of the preprocessing recipes have a `plot` argument, but as far as I can tell, they'll only throw an error. `recipe_zheng17` and `recipe_seurat` have the lines:
```python
if plot:
from .. import plotting as pl # should not import at the top of the file
pl.filter_genes_dispersion(filter_result, log=True)
```
But `plotting` doesn't have the function `filter_genes_dispersion` exposed. Here's an example of the error using `scanpy` pulled from github, but the same issue occurs on the release on pypi:
```python
In [1]: import numpy as np
...: import pandas as pd
...: import scanpy.api as sc
...:
...: sc.settings.verbosity = 3 # verbosity: errors (0), warnings (1), info (2), hints (3)
...: sc.settings.set_figure_params(dpi=80) # low dpi (dots per inch) yields small inline figures
...: sc.logging.print_versions()
/Users/isaac/miniconda3/envs/scanpy/lib/python3.5/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
from ._conv import register_converters as _register_converters
adatascanpy==1.0.4+91.ge9ae4ff anndata==0.6 numpy==1.14.3 scipy==1.1.0 pandas==0.22.0 scikit-learn==0.19.1 statsmodels==0.8.0
In [2]: adata = sc.read("./data/pbmc3k_filtered_gene_bc_matrices/hg19/matrix.mtx").T
--> This might be very slow. Consider passing `cache=True`, which enables much faster reading from a cache file.
In [3]: sc.pp.recipe_zheng17(adata, plot=True)
running recipe zheng17
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-3-c19f237f1c6e> in <module>()
----> 1 sc.pp.recipe_zheng17(adata, plot=True)
~/github/scanpy/scanpy/preprocessing/recipes.py in recipe_zheng17(adata, n_top_genes, log, plot, copy)
106 if plot:
107 from .. import plotting as pl # should not import at the top of the file
--> 108 pl.filter_genes_dispersion(filter_result, log=True)
109 # actually filter the genes, the following is the inplace version of
110 # adata = adata[:, filter_result.gene_subset]
AttributeError: module 'scanpy.plotting' has no attribute 'filter_genes_dispersion'
```
It looks like there's a pretty easy fix here, so I'd be up for making a pull request if you'd like.
</issue>
<code>
[start of scanpy/preprocessing/recipes.py]
1 """Preprocessing recipes from the literature
2 """
3
4 from . import simple as pp
5 from .. import logging as logg
6
7
8 def recipe_weinreb17(adata, log=True, mean_threshold=0.01, cv_threshold=2,
9 n_pcs=50, svd_solver='randomized', random_state=0,
10 copy=False):
11 """Normalization and filtering as of [Weinreb17]_.
12
13 Expects non-logarithmized data. If using logarithmized data, pass `log=False`.
14
15 Parameters
16 ----------
17 adata : :class:`~scanpy.api.AnnData`
18 Annotated data matrix.
19 copy : bool (default: False)
20 Return a copy if true.
21 """
22 from scipy.sparse import issparse
23 if issparse(adata.X):
24 raise ValueError('`recipe_weinreb16 does not support sparse matrices.')
25 if copy: adata = adata.copy()
26 if log: pp.log1p(adata)
27 adata.X = pp.normalize_per_cell_weinreb16_deprecated(adata.X,
28 max_fraction=0.05,
29 mult_with_mean=True)
30 gene_subset = pp.filter_genes_cv_deprecated(adata.X, mean_threshold, cv_threshold)
31 adata._inplace_subset_var(gene_subset) # this modifies the object itself
32 X_pca = pp.pca(pp.zscore_deprecated(adata.X),
33 n_comps=n_pcs, svd_solver=svd_solver, random_state=random_state)
34 # update adata
35 adata.obsm['X_pca'] = X_pca
36 return adata if copy else None
37
38
39 def recipe_seurat(adata, log=True, plot=False, copy=False):
40 """Normalization and filtering as of Seurat [Satija15]_.
41
42 This uses a particular preprocessing.
43
44 Expects non-logarithmized data. If using logarithmized data, pass `log=False`.
45 """
46 if copy: adata = adata.copy()
47 pp.filter_cells(adata, min_genes=200)
48 pp.filter_genes(adata, min_cells=3)
49 pp.normalize_per_cell(adata, counts_per_cell_after=1e4)
50 filter_result = pp.filter_genes_dispersion(
51 adata.X, min_mean=0.0125, max_mean=3, min_disp=0.5, log=not log)
52 if plot:
53 from .. import plotting as pl # should not import at the top of the file
54 pl.filter_genes_dispersion(filter_result, log=not log)
55 adata._inplace_subset_var(filter_result.gene_subset) # filter genes
56 if log: pp.log1p(adata)
57 pp.scale(adata, max_value=10)
58 return adata if copy else None
59
60
61 def recipe_zheng17(adata, n_top_genes=1000, log=True, plot=False, copy=False):
62 """Normalization and filtering as of [Zheng17]_.
63
64 Reproduces the preprocessing of [Zheng17]_ - the Cell Ranger R Kit of 10x
65 Genomics.
66
67 Expects non-logarithmized data. If using logarithmized data, pass `log=False`.
68
69 The recipe runs the following steps::
70
71 sc.pp.filter_genes(adata, min_counts=1) # only consider genes with more than 1 count
72 sc.pp.normalize_per_cell( # normalize with total UMI count per cell
73 adata, key_n_counts='n_counts_all')
74 filter_result = sc.pp.filter_genes_dispersion( # select highly-variable genes
75 adata.X, flavor='cell_ranger', n_top_genes=n_top_genes, log=False)
76 adata = adata[:, filter_result.gene_subset] # subset the genes
77 sc.pp.normalize_per_cell(adata) # renormalize after filtering
78 if log: sc.pp.log1p(adata) # log transform: adata.X = log(adata.X + 1)
79 sc.pp.scale(adata) # scale to unit variance and shift to zero mean
80
81
82 Parameters
83 ----------
84 adata : :class:`~scanpy.api.AnnData`
85 Annotated data matrix.
86 n_top_genes : `int`, optional (default: 1000)
87 Number of genes to keep.
88 log : `bool`, optional (default: `True`)
89 Take logarithm.
90 plot : `bool`, optional (default: `True`)
91 Show a plot of the gene dispersion vs. mean relation.
92 copy : `bool`, optional (default: `False`)
93 Return a copy of `adata` instead of updating it.
94
95 Returns
96 -------
97 Returns or updates `adata` depending on `copy`.
98 """
99 logg.info('running recipe zheng17', reset=True)
100 if copy: adata = adata.copy()
101 pp.filter_genes(adata, min_counts=1) # only consider genes with more than 1 count
102 pp.normalize_per_cell(adata, # normalize with total UMI count per cell
103 key_n_counts='n_counts_all')
104 filter_result = pp.filter_genes_dispersion(
105 adata.X, flavor='cell_ranger', n_top_genes=n_top_genes, log=False)
106 if plot:
107 from .. import plotting as pl # should not import at the top of the file
108 pl.filter_genes_dispersion(filter_result, log=True)
109 # actually filter the genes, the following is the inplace version of
110 # adata = adata[:, filter_result.gene_subset]
111 adata._inplace_subset_var(filter_result.gene_subset) # filter genes
112 pp.normalize_per_cell(adata) # renormalize after filtering
113 if log: pp.log1p(adata) # log transform: X = log(X + 1)
114 pp.scale(adata)
115 logg.info(' finished', time=True)
116 return adata if copy else None
117
[end of scanpy/preprocessing/recipes.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/scanpy/preprocessing/recipes.py b/scanpy/preprocessing/recipes.py
--- a/scanpy/preprocessing/recipes.py
+++ b/scanpy/preprocessing/recipes.py
@@ -51,7 +51,7 @@
adata.X, min_mean=0.0125, max_mean=3, min_disp=0.5, log=not log)
if plot:
from .. import plotting as pl # should not import at the top of the file
- pl.filter_genes_dispersion(filter_result, log=not log)
+ pl.preprocessing.filter_genes_dispersion(filter_result, log=not log)
adata._inplace_subset_var(filter_result.gene_subset) # filter genes
if log: pp.log1p(adata)
pp.scale(adata, max_value=10)
@@ -105,7 +105,7 @@
adata.X, flavor='cell_ranger', n_top_genes=n_top_genes, log=False)
if plot:
from .. import plotting as pl # should not import at the top of the file
- pl.filter_genes_dispersion(filter_result, log=True)
+ pl.preprocessing.filter_genes_dispersion(filter_result, log=True)
# actually filter the genes, the following is the inplace version of
# adata = adata[:, filter_result.gene_subset]
adata._inplace_subset_var(filter_result.gene_subset) # filter genes
|
{"golden_diff": "diff --git a/scanpy/preprocessing/recipes.py b/scanpy/preprocessing/recipes.py\n--- a/scanpy/preprocessing/recipes.py\n+++ b/scanpy/preprocessing/recipes.py\n@@ -51,7 +51,7 @@\n adata.X, min_mean=0.0125, max_mean=3, min_disp=0.5, log=not log)\n if plot:\n from .. import plotting as pl # should not import at the top of the file\n- pl.filter_genes_dispersion(filter_result, log=not log)\n+ pl.preprocessing.filter_genes_dispersion(filter_result, log=not log)\n adata._inplace_subset_var(filter_result.gene_subset) # filter genes\n if log: pp.log1p(adata)\n pp.scale(adata, max_value=10)\n@@ -105,7 +105,7 @@\n adata.X, flavor='cell_ranger', n_top_genes=n_top_genes, log=False)\n if plot:\n from .. import plotting as pl # should not import at the top of the file\n- pl.filter_genes_dispersion(filter_result, log=True)\n+ pl.preprocessing.filter_genes_dispersion(filter_result, log=True)\n # actually filter the genes, the following is the inplace version of\n # adata = adata[:, filter_result.gene_subset]\n adata._inplace_subset_var(filter_result.gene_subset) # filter genes\n", "issue": "Recipes with plotting option throw import error\nSome of the preprocessing recipes have a `plot` argument, but as far as I can tell, they'll only throw an error. `recipe_zheng17` and `recipe_seurat` have the lines:\r\n\r\n```python\r\n if plot:\r\n from .. import plotting as pl # should not import at the top of the file\r\n pl.filter_genes_dispersion(filter_result, log=True)\r\n```\r\n\r\nBut `plotting` doesn't have the function `filter_genes_dispersion` exposed. Here's an example of the error using `scanpy` pulled from github, but the same issue occurs on the release on pypi:\r\n\r\n```python\r\nIn [1]: import numpy as np\r\n ...: import pandas as pd\r\n ...: import scanpy.api as sc\r\n ...: \r\n ...: sc.settings.verbosity = 3 # verbosity: errors (0), warnings (1), info (2), hints (3)\r\n ...: sc.settings.set_figure_params(dpi=80) # low dpi (dots per inch) yields small inline figures\r\n ...: sc.logging.print_versions()\r\n/Users/isaac/miniconda3/envs/scanpy/lib/python3.5/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.\r\n from ._conv import register_converters as _register_converters\r\nadatascanpy==1.0.4+91.ge9ae4ff anndata==0.6 numpy==1.14.3 scipy==1.1.0 pandas==0.22.0 scikit-learn==0.19.1 statsmodels==0.8.0 \r\n\r\nIn [2]: adata = sc.read(\"./data/pbmc3k_filtered_gene_bc_matrices/hg19/matrix.mtx\").T\r\n--> This might be very slow. Consider passing `cache=True`, which enables much faster reading from a cache file.\r\nIn [3]: sc.pp.recipe_zheng17(adata, plot=True)\r\nrunning recipe zheng17\r\n---------------------------------------------------------------------------\r\nAttributeError Traceback (most recent call last)\r\n<ipython-input-3-c19f237f1c6e> in <module>()\r\n----> 1 sc.pp.recipe_zheng17(adata, plot=True)\r\n\r\n~/github/scanpy/scanpy/preprocessing/recipes.py in recipe_zheng17(adata, n_top_genes, log, plot, copy)\r\n 106 if plot:\r\n 107 from .. 
import plotting as pl # should not import at the top of the file\r\n--> 108 pl.filter_genes_dispersion(filter_result, log=True)\r\n 109 # actually filter the genes, the following is the inplace version of\r\n 110 # adata = adata[:, filter_result.gene_subset]\r\n\r\nAttributeError: module 'scanpy.plotting' has no attribute 'filter_genes_dispersion'\r\n```\r\n\r\nIt looks like there's a pretty easy fix here, so I'd be up for making a pull request if you'd like.\n", "before_files": [{"content": "\"\"\"Preprocessing recipes from the literature\n\"\"\"\n\nfrom . import simple as pp\nfrom .. import logging as logg\n\n\ndef recipe_weinreb17(adata, log=True, mean_threshold=0.01, cv_threshold=2,\n n_pcs=50, svd_solver='randomized', random_state=0,\n copy=False):\n \"\"\"Normalization and filtering as of [Weinreb17]_.\n\n Expects non-logarithmized data. If using logarithmized data, pass `log=False`.\n\n Parameters\n ----------\n adata : :class:`~scanpy.api.AnnData`\n Annotated data matrix.\n copy : bool (default: False)\n Return a copy if true.\n \"\"\"\n from scipy.sparse import issparse\n if issparse(adata.X):\n raise ValueError('`recipe_weinreb16 does not support sparse matrices.')\n if copy: adata = adata.copy()\n if log: pp.log1p(adata)\n adata.X = pp.normalize_per_cell_weinreb16_deprecated(adata.X,\n max_fraction=0.05,\n mult_with_mean=True)\n gene_subset = pp.filter_genes_cv_deprecated(adata.X, mean_threshold, cv_threshold)\n adata._inplace_subset_var(gene_subset) # this modifies the object itself\n X_pca = pp.pca(pp.zscore_deprecated(adata.X),\n n_comps=n_pcs, svd_solver=svd_solver, random_state=random_state)\n # update adata\n adata.obsm['X_pca'] = X_pca\n return adata if copy else None\n\n\ndef recipe_seurat(adata, log=True, plot=False, copy=False):\n \"\"\"Normalization and filtering as of Seurat [Satija15]_.\n\n This uses a particular preprocessing.\n\n Expects non-logarithmized data. If using logarithmized data, pass `log=False`.\n \"\"\"\n if copy: adata = adata.copy()\n pp.filter_cells(adata, min_genes=200)\n pp.filter_genes(adata, min_cells=3)\n pp.normalize_per_cell(adata, counts_per_cell_after=1e4)\n filter_result = pp.filter_genes_dispersion(\n adata.X, min_mean=0.0125, max_mean=3, min_disp=0.5, log=not log)\n if plot:\n from .. import plotting as pl # should not import at the top of the file\n pl.filter_genes_dispersion(filter_result, log=not log)\n adata._inplace_subset_var(filter_result.gene_subset) # filter genes\n if log: pp.log1p(adata)\n pp.scale(adata, max_value=10)\n return adata if copy else None\n\n\ndef recipe_zheng17(adata, n_top_genes=1000, log=True, plot=False, copy=False):\n \"\"\"Normalization and filtering as of [Zheng17]_.\n\n Reproduces the preprocessing of [Zheng17]_ - the Cell Ranger R Kit of 10x\n Genomics.\n\n Expects non-logarithmized data. 
If using logarithmized data, pass `log=False`.\n\n The recipe runs the following steps::\n\n sc.pp.filter_genes(adata, min_counts=1) # only consider genes with more than 1 count\n sc.pp.normalize_per_cell( # normalize with total UMI count per cell\n adata, key_n_counts='n_counts_all')\n filter_result = sc.pp.filter_genes_dispersion( # select highly-variable genes\n adata.X, flavor='cell_ranger', n_top_genes=n_top_genes, log=False)\n adata = adata[:, filter_result.gene_subset] # subset the genes\n sc.pp.normalize_per_cell(adata) # renormalize after filtering\n if log: sc.pp.log1p(adata) # log transform: adata.X = log(adata.X + 1)\n sc.pp.scale(adata) # scale to unit variance and shift to zero mean\n\n\n Parameters\n ----------\n adata : :class:`~scanpy.api.AnnData`\n Annotated data matrix.\n n_top_genes : `int`, optional (default: 1000)\n Number of genes to keep.\n log : `bool`, optional (default: `True`)\n Take logarithm.\n plot : `bool`, optional (default: `True`)\n Show a plot of the gene dispersion vs. mean relation.\n copy : `bool`, optional (default: `False`)\n Return a copy of `adata` instead of updating it.\n\n Returns\n -------\n Returns or updates `adata` depending on `copy`.\n \"\"\"\n logg.info('running recipe zheng17', reset=True)\n if copy: adata = adata.copy()\n pp.filter_genes(adata, min_counts=1) # only consider genes with more than 1 count\n pp.normalize_per_cell(adata, # normalize with total UMI count per cell\n key_n_counts='n_counts_all')\n filter_result = pp.filter_genes_dispersion(\n adata.X, flavor='cell_ranger', n_top_genes=n_top_genes, log=False)\n if plot:\n from .. import plotting as pl # should not import at the top of the file\n pl.filter_genes_dispersion(filter_result, log=True)\n # actually filter the genes, the following is the inplace version of\n # adata = adata[:, filter_result.gene_subset]\n adata._inplace_subset_var(filter_result.gene_subset) # filter genes\n pp.normalize_per_cell(adata) # renormalize after filtering\n if log: pp.log1p(adata) # log transform: X = log(X + 1)\n pp.scale(adata)\n logg.info(' finished', time=True)\n return adata if copy else None\n", "path": "scanpy/preprocessing/recipes.py"}]}
| 2,781 | 318 |
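A minimal usage sketch for the scanpy record above, showing the `plot=True` path that used to raise `AttributeError`. It assumes a scanpy build with the patched recipes and a local 10x-style `matrix.mtx`; the path is a placeholder, not taken from the record:

```python
import scanpy.api as sc

# Placeholder path; any 10x-style matrix works for exercising the recipe.
adata = sc.read('data/matrix.mtx', cache=True).T
# With the patch, plot=True calls pl.preprocessing.filter_genes_dispersion(...)
# instead of the missing top-level pl.filter_genes_dispersion.
sc.pp.recipe_zheng17(adata, n_top_genes=1000, plot=True)
```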
gh_patches_debug_51314
|
rasdani/github-patches
|
git_diff
|
scikit-image__scikit-image-2643
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
module 'skimage.filters' has no attribute 'denoise_tv_chambolle'
There are a couple of undefined symbols in [`filters` module](https://github.com/scikit-image/scikit-image/blob/master/skimage/filters/__init__.py#L46-L48)
Thus `from skimage.filters import *` gives:
```
AttributeError: module 'skimage.filters' has no attribute 'denoise_tv_chambolle'
```
</issue>
<code>
[start of skimage/filters/__init__.py]
1 from .lpi_filter import inverse, wiener, LPIFilter2D
2 from ._gaussian import gaussian
3 from .edges import (sobel, sobel_h, sobel_v,
4 scharr, scharr_h, scharr_v,
5 prewitt, prewitt_h, prewitt_v,
6 roberts, roberts_pos_diag, roberts_neg_diag,
7 laplace)
8 from ._rank_order import rank_order
9 from ._gabor import gabor_kernel, gabor
10 from ._frangi import frangi, hessian
11 from .thresholding import (threshold_local,
12 threshold_adaptive, threshold_otsu, threshold_yen,
13 threshold_isodata, threshold_li, threshold_minimum,
14 threshold_mean, threshold_triangle,
15 threshold_niblack, threshold_sauvola,
16 try_all_threshold)
17 from . import rank
18 from .rank import median
19
20 from .._shared.utils import deprecated, copy_func
21
22
23 gaussian_filter = copy_func(gaussian, name='gaussian_filter')
24 gaussian_filter = deprecated('skimage.filters.gaussian')(gaussian_filter)
25 gabor_filter = copy_func(gabor, name='gabor_filter')
26 gabor_filter = deprecated('skimage.filters.gabor')(gabor_filter)
27
28 __all__ = ['inverse',
29 'wiener',
30 'LPIFilter2D',
31 'gaussian',
32 'median',
33 'sobel',
34 'sobel_h',
35 'sobel_v',
36 'scharr',
37 'scharr_h',
38 'scharr_v',
39 'prewitt',
40 'prewitt_h',
41 'prewitt_v',
42 'roberts',
43 'roberts_pos_diag',
44 'roberts_neg_diag',
45 'laplace',
46 'denoise_tv_chambolle',
47 'denoise_bilateral',
48 'denoise_tv_bregman',
49 'rank_order',
50 'gabor_kernel',
51 'gabor',
52 'try_all_threshold',
53 'frangi',
54 'hessian',
55 'threshold_adaptive',
56 'threshold_otsu',
57 'threshold_yen',
58 'threshold_isodata',
59 'threshold_li',
60 'threshold_minimum',
61 'threshold_mean',
62 'threshold_niblack',
63 'threshold_sauvola',
64 'threshold_triangle',
65 'rank']
66
[end of skimage/filters/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/skimage/filters/__init__.py b/skimage/filters/__init__.py
--- a/skimage/filters/__init__.py
+++ b/skimage/filters/__init__.py
@@ -43,9 +43,6 @@
'roberts_pos_diag',
'roberts_neg_diag',
'laplace',
- 'denoise_tv_chambolle',
- 'denoise_bilateral',
- 'denoise_tv_bregman',
'rank_order',
'gabor_kernel',
'gabor',
|
{"golden_diff": "diff --git a/skimage/filters/__init__.py b/skimage/filters/__init__.py\n--- a/skimage/filters/__init__.py\n+++ b/skimage/filters/__init__.py\n@@ -43,9 +43,6 @@\n 'roberts_pos_diag',\n 'roberts_neg_diag',\n 'laplace',\n- 'denoise_tv_chambolle',\n- 'denoise_bilateral',\n- 'denoise_tv_bregman',\n 'rank_order',\n 'gabor_kernel',\n 'gabor',\n", "issue": "module 'skimage.filters' has no attribute 'denoise_tv_chambolle' \nThere are a couple of undefined symbols in [`filters` module](https://github.com/scikit-image/scikit-image/blob/master/skimage/filters/__init__.py#L46-L48)\r\n\r\nThus `from skimage.filters import *` gives:\r\n\r\n```\r\nAttributeError: module 'skimage.filters' has no attribute 'denoise_tv_chambolle'\r\n```\n", "before_files": [{"content": "from .lpi_filter import inverse, wiener, LPIFilter2D\nfrom ._gaussian import gaussian\nfrom .edges import (sobel, sobel_h, sobel_v,\n scharr, scharr_h, scharr_v,\n prewitt, prewitt_h, prewitt_v,\n roberts, roberts_pos_diag, roberts_neg_diag,\n laplace)\nfrom ._rank_order import rank_order\nfrom ._gabor import gabor_kernel, gabor\nfrom ._frangi import frangi, hessian\nfrom .thresholding import (threshold_local,\n threshold_adaptive, threshold_otsu, threshold_yen,\n threshold_isodata, threshold_li, threshold_minimum,\n threshold_mean, threshold_triangle,\n threshold_niblack, threshold_sauvola,\n try_all_threshold)\nfrom . import rank\nfrom .rank import median\n\nfrom .._shared.utils import deprecated, copy_func\n\n\ngaussian_filter = copy_func(gaussian, name='gaussian_filter')\ngaussian_filter = deprecated('skimage.filters.gaussian')(gaussian_filter)\ngabor_filter = copy_func(gabor, name='gabor_filter')\ngabor_filter = deprecated('skimage.filters.gabor')(gabor_filter)\n\n__all__ = ['inverse',\n 'wiener',\n 'LPIFilter2D',\n 'gaussian',\n 'median',\n 'sobel',\n 'sobel_h',\n 'sobel_v',\n 'scharr',\n 'scharr_h',\n 'scharr_v',\n 'prewitt',\n 'prewitt_h',\n 'prewitt_v',\n 'roberts',\n 'roberts_pos_diag',\n 'roberts_neg_diag',\n 'laplace',\n 'denoise_tv_chambolle',\n 'denoise_bilateral',\n 'denoise_tv_bregman',\n 'rank_order',\n 'gabor_kernel',\n 'gabor',\n 'try_all_threshold',\n 'frangi',\n 'hessian',\n 'threshold_adaptive',\n 'threshold_otsu',\n 'threshold_yen',\n 'threshold_isodata',\n 'threshold_li',\n 'threshold_minimum',\n 'threshold_mean',\n 'threshold_niblack',\n 'threshold_sauvola',\n 'threshold_triangle',\n 'rank']\n", "path": "skimage/filters/__init__.py"}]}
| 1,249 | 125 |
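A quick check sketch for the scikit-image record above: once the missing names are dropped from `__all__`, the star import no longer fails. The denoise functions themselves are expected under `skimage.restoration` — an assumption about the package layout of that era, not something stated in the record:

```python
from skimage.filters import *  # noqa: F401,F403 -- no AttributeError once __all__ is fixed
from skimage import restoration

# The denoise functions are importable from the restoration subpackage.
print(restoration.denoise_tv_chambolle)
```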
gh_patches_debug_14631
|
rasdani/github-patches
|
git_diff
|
joke2k__faker-1840
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Incorrect type annotation on Generator.add_provider
* Faker version: 18.3.2
* OS: MacOS
The type annotation on `Generator.add_provider` is needlessly restrictive - the implementation clearly supports passing either a type or an instance, but the type annotation only allows instances:
```python
# faker/generator.py
def add_provider(self, provider: "BaseProvider") -> None:
if isinstance(provider, type):
provider = provider(self)
```
### Steps to reproduce
1. Write code that adds a provider class to a `Generator`
2. Run mypy
```python
class MyCustomProvider(BaseProvider):
pass
fake = Factory.create("en_US")
fake.add_provider(MyCustomProvider)
```
### Expected behavior
`mypy` check should pass
### Actual behavior
`mypy` produces an error:
```
error: Argument 1 to "add_provider" of "Generator" has incompatible type "Type[MyCustomProvider]"; expected "BaseProvider" [arg-type]
```
</issue>
<code>
[start of faker/generator.py]
1 import random as random_module
2 import re
3
4 from typing import TYPE_CHECKING, Any, Callable, Dict, Hashable, List, Optional
5
6 from .typing import SeedType
7
8 if TYPE_CHECKING:
9 from .providers import BaseProvider
10
11 _re_token = re.compile(r"\{\{\s*(\w+)(:\s*\w+?)?\s*\}\}")
12 random = random_module.Random()
13 mod_random = random # compat with name released in 0.8
14
15
16 Sentinel = object()
17
18
19 class Generator:
20 __config: Dict[str, Dict[Hashable, Any]] = {
21 "arguments": {},
22 }
23
24 _is_seeded = False
25 _global_seed = Sentinel
26
27 def __init__(self, **config: Dict) -> None:
28 self.providers: List["BaseProvider"] = []
29 self.__config = dict(list(self.__config.items()) + list(config.items()))
30 self.__random = random
31
32 def add_provider(self, provider: "BaseProvider") -> None:
33 if isinstance(provider, type):
34 provider = provider(self)
35
36 self.providers.insert(0, provider)
37
38 for method_name in dir(provider):
39 # skip 'private' method
40 if method_name.startswith("_"):
41 continue
42
43 faker_function = getattr(provider, method_name)
44
45 if callable(faker_function):
46 # add all faker method to generator
47 self.set_formatter(method_name, faker_function)
48
49 def provider(self, name: str) -> Optional["BaseProvider"]:
50 try:
51 lst = [p for p in self.get_providers() if hasattr(p, "__provider__") and p.__provider__ == name.lower()]
52 return lst[0]
53 except IndexError:
54 return None
55
56 def get_providers(self) -> List["BaseProvider"]:
57 """Returns added providers."""
58 return self.providers
59
60 @property
61 def random(self) -> random_module.Random:
62 return self.__random
63
64 @random.setter
65 def random(self, value: random_module.Random) -> None:
66 self.__random = value
67
68 def seed_instance(self, seed: Optional[SeedType] = None) -> "Generator":
69 """Calls random.seed"""
70 if self.__random == random:
71 # create per-instance random obj when first time seed_instance() is
72 # called
73 self.__random = random_module.Random()
74 self.__random.seed(seed)
75 self._is_seeded = True
76 return self
77
78 @classmethod
79 def seed(cls, seed: Optional[SeedType] = None) -> None:
80 random.seed(seed)
81 cls._global_seed = seed
82 cls._is_seeded = True
83
84 def format(self, formatter: str, *args: Any, **kwargs: Any) -> str:
85 """
86 This is a secure way to make a fake from another Provider.
87 """
88 return self.get_formatter(formatter)(*args, **kwargs)
89
90 def get_formatter(self, formatter: str) -> Callable:
91 try:
92 return getattr(self, formatter)
93 except AttributeError:
94 if "locale" in self.__config:
95 msg = f'Unknown formatter {formatter!r} with locale {self.__config["locale"]!r}'
96 else:
97 raise AttributeError(f"Unknown formatter {formatter!r}")
98 raise AttributeError(msg)
99
100 def set_formatter(self, name: str, method: Callable) -> None:
101 """
102 This method adds a provider method to generator.
103 Override this method to add some decoration or logging stuff.
104 """
105 setattr(self, name, method)
106
107 def set_arguments(self, group: str, argument: str, value: Optional[Any] = None) -> None:
108 """
109 Creates an argument group, with an individual argument or a dictionary
110 of arguments. The argument groups is used to apply arguments to tokens,
111 when using the generator.parse() method. To further manage argument
112 groups, use get_arguments() and del_arguments() methods.
113
114 generator.set_arguments('small', 'max_value', 10)
115 generator.set_arguments('small', {'min_value': 5, 'max_value': 10})
116 """
117 if group not in self.__config["arguments"]:
118 self.__config["arguments"][group] = {}
119
120 if isinstance(argument, dict):
121 self.__config["arguments"][group] = argument
122 elif not isinstance(argument, str):
123 raise ValueError("Arguments must be either a string or dictionary")
124 else:
125 self.__config["arguments"][group][argument] = value
126
127 def get_arguments(self, group: str, argument: Optional[str] = None) -> Any:
128 """
129 Get the value of an argument configured within a argument group, or
130 the entire group as a dictionary. Used in conjunction with the
131 set_arguments() method.
132
133 generator.get_arguments('small', 'max_value')
134 generator.get_arguments('small')
135 """
136 if group in self.__config["arguments"] and argument:
137 result = self.__config["arguments"][group].get(argument)
138 else:
139 result = self.__config["arguments"].get(group)
140
141 return result
142
143 def del_arguments(self, group: str, argument: Optional[str] = None) -> Any:
144 """
145 Delete an argument from an argument group or the entire argument group.
146 Used in conjunction with the set_arguments() method.
147
148 generator.del_arguments('small')
149 generator.del_arguments('small', 'max_value')
150 """
151 if group in self.__config["arguments"]:
152 if argument:
153 result = self.__config["arguments"][group].pop(argument)
154 else:
155 result = self.__config["arguments"].pop(group)
156 else:
157 result = None
158
159 return result
160
161 def parse(self, text: str) -> str:
162 """
163 Replaces tokens like '{{ tokenName }}' or '{{tokenName}}' in a string with
164 the result from the token method call. Arguments can be parsed by using an
165 argument group. For more information on the use of argument groups, please
166 refer to the set_arguments() method.
167
168 Example:
169
170 generator.set_arguments('red_rgb', {'hue': 'red', 'color_format': 'rgb'})
171 generator.set_arguments('small', 'max_value', 10)
172
173 generator.parse('{{ color:red_rgb }} - {{ pyint:small }}')
174 """
175 return _re_token.sub(self.__format_token, text)
176
177 def __format_token(self, matches):
178 formatter, argument_group = list(matches.groups())
179 argument_group = argument_group.lstrip(":").strip() if argument_group else ""
180
181 if argument_group:
182 try:
183 arguments = self.__config["arguments"][argument_group]
184 except KeyError:
185 raise AttributeError(f"Unknown argument group {argument_group!r}")
186
187 formatted = str(self.format(formatter, **arguments))
188 else:
189 formatted = str(self.format(formatter))
190
191 return "".join(formatted)
192
[end of faker/generator.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/faker/generator.py b/faker/generator.py
--- a/faker/generator.py
+++ b/faker/generator.py
@@ -1,7 +1,7 @@
import random as random_module
import re
-from typing import TYPE_CHECKING, Any, Callable, Dict, Hashable, List, Optional
+from typing import TYPE_CHECKING, Any, Callable, Dict, Hashable, List, Optional, Type, Union
from .typing import SeedType
@@ -29,7 +29,7 @@
self.__config = dict(list(self.__config.items()) + list(config.items()))
self.__random = random
- def add_provider(self, provider: "BaseProvider") -> None:
+ def add_provider(self, provider: Union["BaseProvider", Type["BaseProvider"]]) -> None:
if isinstance(provider, type):
provider = provider(self)
|
{"golden_diff": "diff --git a/faker/generator.py b/faker/generator.py\n--- a/faker/generator.py\n+++ b/faker/generator.py\n@@ -1,7 +1,7 @@\n import random as random_module\n import re\n \n-from typing import TYPE_CHECKING, Any, Callable, Dict, Hashable, List, Optional\n+from typing import TYPE_CHECKING, Any, Callable, Dict, Hashable, List, Optional, Type, Union\n \n from .typing import SeedType\n \n@@ -29,7 +29,7 @@\n self.__config = dict(list(self.__config.items()) + list(config.items()))\n self.__random = random\n \n- def add_provider(self, provider: \"BaseProvider\") -> None:\n+ def add_provider(self, provider: Union[\"BaseProvider\", Type[\"BaseProvider\"]]) -> None:\n if isinstance(provider, type):\n provider = provider(self)\n", "issue": "Incorrect type annotation on Generator.add_provider\n* Faker version: 18.3.2\r\n* OS: MacOS\r\n\r\nThe type annotation on `Generator.add_provider` is needlessly restrictive - the implementation clearly supports passing either a type or an instance, but the type annotation only allows instances:\r\n\r\n```python\r\n # faker/generator.py\r\n def add_provider(self, provider: \"BaseProvider\") -> None:\r\n if isinstance(provider, type):\r\n provider = provider(self)\r\n```\r\n\r\n\r\n### Steps to reproduce\r\n\r\n1. Write code that adds a provider class to a `Generator`\r\n2. Run mypy\r\n\r\n```python\r\nclass MyCustomProvider(BaseProvider):\r\n pass\r\n\r\nfake = Factory.create(\"en_US\")\r\nfake.add_provider(MyCustomProvider)\r\n```\r\n\r\n### Expected behavior\r\n\r\n`mypy` check should pass\r\n\r\n### Actual behavior\r\n\r\n`mypy` produces an error:\r\n\r\n```\r\nerror: Argument 1 to \"add_provider\" of \"Generator\" has incompatible type \"Type[MyCustomProvider]\"; expected \"BaseProvider\" [arg-type]\r\n```\r\n\nIncorrect type annotation on Generator.add_provider\n* Faker version: 18.3.2\r\n* OS: MacOS\r\n\r\nThe type annotation on `Generator.add_provider` is needlessly restrictive - the implementation clearly supports passing either a type or an instance, but the type annotation only allows instances:\r\n\r\n```python\r\n # faker/generator.py\r\n def add_provider(self, provider: \"BaseProvider\") -> None:\r\n if isinstance(provider, type):\r\n provider = provider(self)\r\n```\r\n\r\n\r\n### Steps to reproduce\r\n\r\n1. Write code that adds a provider class to a `Generator`\r\n2. 
Run mypy\r\n\r\n```python\r\nclass MyCustomProvider(BaseProvider):\r\n pass\r\n\r\nfake = Factory.create(\"en_US\")\r\nfake.add_provider(MyCustomProvider)\r\n```\r\n\r\n### Expected behavior\r\n\r\n`mypy` check should pass\r\n\r\n### Actual behavior\r\n\r\n`mypy` produces an error:\r\n\r\n```\r\nerror: Argument 1 to \"add_provider\" of \"Generator\" has incompatible type \"Type[MyCustomProvider]\"; expected \"BaseProvider\" [arg-type]\r\n```\r\n\n", "before_files": [{"content": "import random as random_module\nimport re\n\nfrom typing import TYPE_CHECKING, Any, Callable, Dict, Hashable, List, Optional\n\nfrom .typing import SeedType\n\nif TYPE_CHECKING:\n from .providers import BaseProvider\n\n_re_token = re.compile(r\"\\{\\{\\s*(\\w+)(:\\s*\\w+?)?\\s*\\}\\}\")\nrandom = random_module.Random()\nmod_random = random # compat with name released in 0.8\n\n\nSentinel = object()\n\n\nclass Generator:\n __config: Dict[str, Dict[Hashable, Any]] = {\n \"arguments\": {},\n }\n\n _is_seeded = False\n _global_seed = Sentinel\n\n def __init__(self, **config: Dict) -> None:\n self.providers: List[\"BaseProvider\"] = []\n self.__config = dict(list(self.__config.items()) + list(config.items()))\n self.__random = random\n\n def add_provider(self, provider: \"BaseProvider\") -> None:\n if isinstance(provider, type):\n provider = provider(self)\n\n self.providers.insert(0, provider)\n\n for method_name in dir(provider):\n # skip 'private' method\n if method_name.startswith(\"_\"):\n continue\n\n faker_function = getattr(provider, method_name)\n\n if callable(faker_function):\n # add all faker method to generator\n self.set_formatter(method_name, faker_function)\n\n def provider(self, name: str) -> Optional[\"BaseProvider\"]:\n try:\n lst = [p for p in self.get_providers() if hasattr(p, \"__provider__\") and p.__provider__ == name.lower()]\n return lst[0]\n except IndexError:\n return None\n\n def get_providers(self) -> List[\"BaseProvider\"]:\n \"\"\"Returns added providers.\"\"\"\n return self.providers\n\n @property\n def random(self) -> random_module.Random:\n return self.__random\n\n @random.setter\n def random(self, value: random_module.Random) -> None:\n self.__random = value\n\n def seed_instance(self, seed: Optional[SeedType] = None) -> \"Generator\":\n \"\"\"Calls random.seed\"\"\"\n if self.__random == random:\n # create per-instance random obj when first time seed_instance() is\n # called\n self.__random = random_module.Random()\n self.__random.seed(seed)\n self._is_seeded = True\n return self\n\n @classmethod\n def seed(cls, seed: Optional[SeedType] = None) -> None:\n random.seed(seed)\n cls._global_seed = seed\n cls._is_seeded = True\n\n def format(self, formatter: str, *args: Any, **kwargs: Any) -> str:\n \"\"\"\n This is a secure way to make a fake from another Provider.\n \"\"\"\n return self.get_formatter(formatter)(*args, **kwargs)\n\n def get_formatter(self, formatter: str) -> Callable:\n try:\n return getattr(self, formatter)\n except AttributeError:\n if \"locale\" in self.__config:\n msg = f'Unknown formatter {formatter!r} with locale {self.__config[\"locale\"]!r}'\n else:\n raise AttributeError(f\"Unknown formatter {formatter!r}\")\n raise AttributeError(msg)\n\n def set_formatter(self, name: str, method: Callable) -> None:\n \"\"\"\n This method adds a provider method to generator.\n Override this method to add some decoration or logging stuff.\n \"\"\"\n setattr(self, name, method)\n\n def set_arguments(self, group: str, argument: str, value: Optional[Any] = None) -> None:\n \"\"\"\n 
Creates an argument group, with an individual argument or a dictionary\n of arguments. The argument groups is used to apply arguments to tokens,\n when using the generator.parse() method. To further manage argument\n groups, use get_arguments() and del_arguments() methods.\n\n generator.set_arguments('small', 'max_value', 10)\n generator.set_arguments('small', {'min_value': 5, 'max_value': 10})\n \"\"\"\n if group not in self.__config[\"arguments\"]:\n self.__config[\"arguments\"][group] = {}\n\n if isinstance(argument, dict):\n self.__config[\"arguments\"][group] = argument\n elif not isinstance(argument, str):\n raise ValueError(\"Arguments must be either a string or dictionary\")\n else:\n self.__config[\"arguments\"][group][argument] = value\n\n def get_arguments(self, group: str, argument: Optional[str] = None) -> Any:\n \"\"\"\n Get the value of an argument configured within a argument group, or\n the entire group as a dictionary. Used in conjunction with the\n set_arguments() method.\n\n generator.get_arguments('small', 'max_value')\n generator.get_arguments('small')\n \"\"\"\n if group in self.__config[\"arguments\"] and argument:\n result = self.__config[\"arguments\"][group].get(argument)\n else:\n result = self.__config[\"arguments\"].get(group)\n\n return result\n\n def del_arguments(self, group: str, argument: Optional[str] = None) -> Any:\n \"\"\"\n Delete an argument from an argument group or the entire argument group.\n Used in conjunction with the set_arguments() method.\n\n generator.del_arguments('small')\n generator.del_arguments('small', 'max_value')\n \"\"\"\n if group in self.__config[\"arguments\"]:\n if argument:\n result = self.__config[\"arguments\"][group].pop(argument)\n else:\n result = self.__config[\"arguments\"].pop(group)\n else:\n result = None\n\n return result\n\n def parse(self, text: str) -> str:\n \"\"\"\n Replaces tokens like '{{ tokenName }}' or '{{tokenName}}' in a string with\n the result from the token method call. Arguments can be parsed by using an\n argument group. For more information on the use of argument groups, please\n refer to the set_arguments() method.\n\n Example:\n\n generator.set_arguments('red_rgb', {'hue': 'red', 'color_format': 'rgb'})\n generator.set_arguments('small', 'max_value', 10)\n\n generator.parse('{{ color:red_rgb }} - {{ pyint:small }}')\n \"\"\"\n return _re_token.sub(self.__format_token, text)\n\n def __format_token(self, matches):\n formatter, argument_group = list(matches.groups())\n argument_group = argument_group.lstrip(\":\").strip() if argument_group else \"\"\n\n if argument_group:\n try:\n arguments = self.__config[\"arguments\"][argument_group]\n except KeyError:\n raise AttributeError(f\"Unknown argument group {argument_group!r}\")\n\n formatted = str(self.format(formatter, **arguments))\n else:\n formatted = str(self.format(formatter))\n\n return \"\".join(formatted)\n", "path": "faker/generator.py"}]}
| 2,909 | 191 |
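A small sketch of the call pattern the widened `add_provider` annotation above is meant to accept. It reuses the record's `Factory.create`/`BaseProvider` usage; `foo` is just an illustrative provider method:

```python
from faker import Factory
from faker.providers import BaseProvider


class MyCustomProvider(BaseProvider):
    def foo(self) -> str:
        return "foo"


fake = Factory.create("en_US")
fake.add_provider(MyCustomProvider)        # class: add_provider instantiates it
fake.add_provider(MyCustomProvider(fake))  # instance: inserted as-is
print(fake.foo())
```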
gh_patches_debug_12472
|
rasdani/github-patches
|
git_diff
|
azavea__raster-vision-506
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Unit Tests: `command` Directory
</issue>
<code>
[start of rastervision/utils/files.py]
1 import os
2 import shutil
3 from threading import Timer
4 import logging
5
6 from google.protobuf import json_format
7
8 from rastervision.filesystem.filesystem import FileSystem
9 from rastervision.filesystem.filesystem import ProtobufParseException
10 from rastervision.filesystem.local_filesystem import make_dir
11
12 log = logging.getLogger(__name__)
13
14
15 def get_local_path(uri, download_dir, fs=None):
16 """Convert a URI into a corresponding local path.
17
18 If a uri is local, return it. If it's remote, we generate a path for it
19 within download_dir. For an S3 path of form s3://<bucket>/<key>, the path
20 is <download_dir>/s3/<bucket>/<key>.
21
22 Args:
23 uri: (string) URI of file
24 download_dir: (string) path to directory
25 fs: Optional FileSystem to use
26
27 Returns:
28 (string) a local path
29 """
30 if uri is None:
31 return None
32
33 if not fs:
34 fs = FileSystem.get_file_system(uri, 'r')
35 path = fs.local_path(uri, download_dir)
36
37 return path
38
39
40 def sync_to_dir(src_dir_uri, dest_dir_uri, delete=False, fs=None):
41 """Synchronize a local to a local or remote directory.
42
43 Transfers files from source to destination directories so that the
44 destination has all the source files. If delete is True, also delete
45 files in the destination to match those in the source directory.
46
47 Args:
48 src_dir_uri: (string) URI of local source directory
49 dest_dir_uri: (string) URI of destination directory
50 delete: (bool)
51 fs: Optional FileSystem to use for destination
52 """
53 if not fs:
54 fs = FileSystem.get_file_system(dest_dir_uri, 'w')
55 fs.sync_to_dir(src_dir_uri, dest_dir_uri, delete=delete)
56
57
58 def sync_from_dir(src_dir_uri, dest_dir_uri, delete=False, fs=None):
59 """Synchronize a local or remote directory to a local directory.
60
61 Transfers files from source to destination directories so that the
62 destination has all the source files. If delete is True, also delete
63 files in the destination to match those in the source directory.
64
65 Args:
66 src_dir_uri: (string) URI of source directory
67 dest_dir_uri: (string) URI of local destination directory
68 delete: (bool)
69 fs: Optional FileSystem to use
70 """
71 if not fs:
72 fs = FileSystem.get_file_system(src_dir_uri, 'r')
73 fs.sync_from_dir(src_dir_uri, dest_dir_uri, delete=delete)
74
75
76 def start_sync(src_dir_uri, dest_dir_uri, sync_interval=600, fs=None):
77 """Start syncing a directory on a schedule.
78
79 Calls sync_to_dir on a schedule.
80
81 Args:
82 src_dir_uri: (string) Path of the local source directory
83 dest_dir_uri: (string) URI of destination directory
84 sync_interval: (int) period in seconds for syncing
85 fs: Optional FileSystem to use
86 """
87
88 def _sync_dir():
89 log.info('Syncing {} to {}...'.format(src_dir_uri, dest_dir_uri))
90 sync_to_dir(src_dir_uri, dest_dir_uri, delete=False, fs=fs)
91
92 class SyncThread:
93 def __init__(self):
94 thread = Timer(sync_interval, _sync_dir)
95 thread.daemon = True
96 thread.start()
97 self.thread = thread
98
99 def __enter__(self):
100 return self.thread
101
102 def __exit__(self, type, value, traceback):
103 self.thread.cancel()
104
105 return SyncThread()
106
107
108 def download_if_needed(uri, download_dir, fs=None):
109 """Download a file into a directory if it's remote.
110
111 If uri is local, there is no need to download the file.
112
113 Args:
114 uri: (string) URI of file
115 download_dir: (string) local directory to download file into
116 fs: Optional FileSystem to use.
117
118 Returns:
119 (string) path to local file
120
121 Raises:
122 NotReadableError if URI cannot be read from
123 """
124 if uri is None:
125 return None
126
127 if not fs:
128 fs = FileSystem.get_file_system(uri, 'r')
129
130 path = get_local_path(uri, download_dir, fs=fs)
131 make_dir(path, use_dirname=True)
132
133 if path != uri:
134 log.info('Downloading {} to {}'.format(uri, path))
135
136 fs.copy_from(uri, path)
137
138 return path
139
140
141 def download_or_copy(uri, target_dir, fs=None):
142 """Downloads or copies a file to a directory
143
144 Args:
145 uri: (string) URI of file
146 target_dir: (string) local directory to copy file to
147 fs: Optional FileSystem to use
148 """
149 local_path = download_if_needed(uri, target_dir, fs=fs)
150 shutil.copy(local_path, target_dir)
151 return local_path
152
153
154 def file_exists(uri, fs=None):
155 if not fs:
156 fs = FileSystem.get_file_system(uri, 'r')
157 return fs.file_exists(uri)
158
159
160 def list_paths(uri, ext=None, fs=None):
161 if uri is None:
162 return None
163
164 if not fs:
165 fs = FileSystem.get_file_system(uri, 'r')
166
167 return fs.list_paths(uri, ext=ext)
168
169
170 def upload_or_copy(src_path, dst_uri, fs=None):
171 """Upload a file if the destination is remote.
172
173 If dst_uri is local, the file is copied.
174
175 Args:
176 src_path: (string) path to source file
177 dst_uri: (string) URI of destination for file
178 fs: Optional FileSystem to use
179 Raises:
180 NotWritableError if URI cannot be written to
181 """
182 if dst_uri is None:
183 return
184
185 if not (os.path.isfile(src_path) or os.path.isdir(src_path)):
186 raise Exception('{} does not exist.'.format(src_path))
187
188 if not src_path == dst_uri:
189 log.info('Uploading {} to {}'.format(src_path, dst_uri))
190
191 if not fs:
192 fs = FileSystem.get_file_system(dst_uri, 'w')
193 fs.copy_to(src_path, dst_uri)
194
195
196 def file_to_str(uri, fs=None):
197 """Download contents of text file into a string.
198
199 Args:
200 uri: (string) URI of file
201 fs: Optional FileSystem to use
202
203 Returns:
204 (string) with contents of text file
205
206 Raises:
207 NotReadableError if URI cannot be read from
208 """
209 if not fs:
210 fs = FileSystem.get_file_system(uri, 'r')
211 return fs.read_str(uri)
212
213
214 def str_to_file(content_str, uri, fs=None):
215 """Writes string to text file.
216
217 Args:
218 content_str: string to write
219 uri: (string) URI of file to write
220 fs: Optional FileSystem to use
221
222 Raise:
223 NotWritableError if file_uri cannot be written
224 """
225 if not fs:
226 fs = FileSystem.get_file_system(uri, 'r')
227 return fs.write_str(uri, content_str)
228
229
230 def load_json_config(uri, message, fs=None):
231 """Load a JSON-formatted protobuf config file.
232
233 Args:
234 uri: (string) URI of config file
235 message: (google.protobuf.message.Message) empty protobuf message of
236 to load the config into. The type needs to match the content of
237 uri.
238 fs: Optional FileSystem to use.
239
240 Returns:
241 the same message passed as input with fields filled in from uri
242
243 Raises:
244 ProtobufParseException if uri cannot be parsed
245 """
246 try:
247 return json_format.Parse(file_to_str(uri, fs=fs), message)
248 except json_format.ParseError as e:
249 error_msg = ('Problem parsing protobuf file {}. '.format(uri) +
250 'You might need to run scripts/compile')
251 raise ProtobufParseException(error_msg) from e
252
253
254 def save_json_config(message, uri, fs=None):
255 """Save a protobuf object to a JSON file.
256
257 Args:
258 message: (google.protobuf.message.Message) protobuf message
259 uri: (string) URI of JSON file to write message to
260 fs: Optional FileSystem to use
261
262 Raises:
263 NotWritableError if uri cannot be written
264 """
265 json_str = json_format.MessageToJson(message)
266 str_to_file(json_str, uri, fs=fs)
267
[end of rastervision/utils/files.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/rastervision/utils/files.py b/rastervision/utils/files.py
--- a/rastervision/utils/files.py
+++ b/rastervision/utils/files.py
@@ -73,7 +73,8 @@
fs.sync_from_dir(src_dir_uri, dest_dir_uri, delete=delete)
-def start_sync(src_dir_uri, dest_dir_uri, sync_interval=600, fs=None):
+def start_sync(src_dir_uri, dest_dir_uri, sync_interval=600,
+ fs=None): # pragma: no cover
"""Start syncing a directory on a schedule.
Calls sync_to_dir on a schedule.
@@ -157,7 +158,7 @@
return fs.file_exists(uri)
-def list_paths(uri, ext=None, fs=None):
+def list_paths(uri, ext='', fs=None):
if uri is None:
return None
|
{"golden_diff": "diff --git a/rastervision/utils/files.py b/rastervision/utils/files.py\n--- a/rastervision/utils/files.py\n+++ b/rastervision/utils/files.py\n@@ -73,7 +73,8 @@\n fs.sync_from_dir(src_dir_uri, dest_dir_uri, delete=delete)\n \n \n-def start_sync(src_dir_uri, dest_dir_uri, sync_interval=600, fs=None):\n+def start_sync(src_dir_uri, dest_dir_uri, sync_interval=600,\n+ fs=None): # pragma: no cover\n \"\"\"Start syncing a directory on a schedule.\n \n Calls sync_to_dir on a schedule.\n@@ -157,7 +158,7 @@\n return fs.file_exists(uri)\n \n \n-def list_paths(uri, ext=None, fs=None):\n+def list_paths(uri, ext='', fs=None):\n if uri is None:\n return None\n", "issue": "Unit Tests: `command` Directory\n\n", "before_files": [{"content": "import os\nimport shutil\nfrom threading import Timer\nimport logging\n\nfrom google.protobuf import json_format\n\nfrom rastervision.filesystem.filesystem import FileSystem\nfrom rastervision.filesystem.filesystem import ProtobufParseException\nfrom rastervision.filesystem.local_filesystem import make_dir\n\nlog = logging.getLogger(__name__)\n\n\ndef get_local_path(uri, download_dir, fs=None):\n \"\"\"Convert a URI into a corresponding local path.\n\n If a uri is local, return it. If it's remote, we generate a path for it\n within download_dir. For an S3 path of form s3://<bucket>/<key>, the path\n is <download_dir>/s3/<bucket>/<key>.\n\n Args:\n uri: (string) URI of file\n download_dir: (string) path to directory\n fs: Optional FileSystem to use\n\n Returns:\n (string) a local path\n \"\"\"\n if uri is None:\n return None\n\n if not fs:\n fs = FileSystem.get_file_system(uri, 'r')\n path = fs.local_path(uri, download_dir)\n\n return path\n\n\ndef sync_to_dir(src_dir_uri, dest_dir_uri, delete=False, fs=None):\n \"\"\"Synchronize a local to a local or remote directory.\n\n Transfers files from source to destination directories so that the\n destination has all the source files. If delete is True, also delete\n files in the destination to match those in the source directory.\n\n Args:\n src_dir_uri: (string) URI of local source directory\n dest_dir_uri: (string) URI of destination directory\n delete: (bool)\n fs: Optional FileSystem to use for destination\n \"\"\"\n if not fs:\n fs = FileSystem.get_file_system(dest_dir_uri, 'w')\n fs.sync_to_dir(src_dir_uri, dest_dir_uri, delete=delete)\n\n\ndef sync_from_dir(src_dir_uri, dest_dir_uri, delete=False, fs=None):\n \"\"\"Synchronize a local or remote directory to a local directory.\n\n Transfers files from source to destination directories so that the\n destination has all the source files. 
If delete is True, also delete\n files in the destination to match those in the source directory.\n\n Args:\n src_dir_uri: (string) URI of source directory\n dest_dir_uri: (string) URI of local destination directory\n delete: (bool)\n fs: Optional FileSystem to use\n \"\"\"\n if not fs:\n fs = FileSystem.get_file_system(src_dir_uri, 'r')\n fs.sync_from_dir(src_dir_uri, dest_dir_uri, delete=delete)\n\n\ndef start_sync(src_dir_uri, dest_dir_uri, sync_interval=600, fs=None):\n \"\"\"Start syncing a directory on a schedule.\n\n Calls sync_to_dir on a schedule.\n\n Args:\n src_dir_uri: (string) Path of the local source directory\n dest_dir_uri: (string) URI of destination directory\n sync_interval: (int) period in seconds for syncing\n fs: Optional FileSystem to use\n \"\"\"\n\n def _sync_dir():\n log.info('Syncing {} to {}...'.format(src_dir_uri, dest_dir_uri))\n sync_to_dir(src_dir_uri, dest_dir_uri, delete=False, fs=fs)\n\n class SyncThread:\n def __init__(self):\n thread = Timer(sync_interval, _sync_dir)\n thread.daemon = True\n thread.start()\n self.thread = thread\n\n def __enter__(self):\n return self.thread\n\n def __exit__(self, type, value, traceback):\n self.thread.cancel()\n\n return SyncThread()\n\n\ndef download_if_needed(uri, download_dir, fs=None):\n \"\"\"Download a file into a directory if it's remote.\n\n If uri is local, there is no need to download the file.\n\n Args:\n uri: (string) URI of file\n download_dir: (string) local directory to download file into\n fs: Optional FileSystem to use.\n\n Returns:\n (string) path to local file\n\n Raises:\n NotReadableError if URI cannot be read from\n \"\"\"\n if uri is None:\n return None\n\n if not fs:\n fs = FileSystem.get_file_system(uri, 'r')\n\n path = get_local_path(uri, download_dir, fs=fs)\n make_dir(path, use_dirname=True)\n\n if path != uri:\n log.info('Downloading {} to {}'.format(uri, path))\n\n fs.copy_from(uri, path)\n\n return path\n\n\ndef download_or_copy(uri, target_dir, fs=None):\n \"\"\"Downloads or copies a file to a directory\n\n Args:\n uri: (string) URI of file\n target_dir: (string) local directory to copy file to\n fs: Optional FileSystem to use\n \"\"\"\n local_path = download_if_needed(uri, target_dir, fs=fs)\n shutil.copy(local_path, target_dir)\n return local_path\n\n\ndef file_exists(uri, fs=None):\n if not fs:\n fs = FileSystem.get_file_system(uri, 'r')\n return fs.file_exists(uri)\n\n\ndef list_paths(uri, ext=None, fs=None):\n if uri is None:\n return None\n\n if not fs:\n fs = FileSystem.get_file_system(uri, 'r')\n\n return fs.list_paths(uri, ext=ext)\n\n\ndef upload_or_copy(src_path, dst_uri, fs=None):\n \"\"\"Upload a file if the destination is remote.\n\n If dst_uri is local, the file is copied.\n\n Args:\n src_path: (string) path to source file\n dst_uri: (string) URI of destination for file\n fs: Optional FileSystem to use\n Raises:\n NotWritableError if URI cannot be written to\n \"\"\"\n if dst_uri is None:\n return\n\n if not (os.path.isfile(src_path) or os.path.isdir(src_path)):\n raise Exception('{} does not exist.'.format(src_path))\n\n if not src_path == dst_uri:\n log.info('Uploading {} to {}'.format(src_path, dst_uri))\n\n if not fs:\n fs = FileSystem.get_file_system(dst_uri, 'w')\n fs.copy_to(src_path, dst_uri)\n\n\ndef file_to_str(uri, fs=None):\n \"\"\"Download contents of text file into a string.\n\n Args:\n uri: (string) URI of file\n fs: Optional FileSystem to use\n\n Returns:\n (string) with contents of text file\n\n Raises:\n NotReadableError if URI cannot be read from\n 
\"\"\"\n if not fs:\n fs = FileSystem.get_file_system(uri, 'r')\n return fs.read_str(uri)\n\n\ndef str_to_file(content_str, uri, fs=None):\n \"\"\"Writes string to text file.\n\n Args:\n content_str: string to write\n uri: (string) URI of file to write\n fs: Optional FileSystem to use\n\n Raise:\n NotWritableError if file_uri cannot be written\n \"\"\"\n if not fs:\n fs = FileSystem.get_file_system(uri, 'r')\n return fs.write_str(uri, content_str)\n\n\ndef load_json_config(uri, message, fs=None):\n \"\"\"Load a JSON-formatted protobuf config file.\n\n Args:\n uri: (string) URI of config file\n message: (google.protobuf.message.Message) empty protobuf message of\n to load the config into. The type needs to match the content of\n uri.\n fs: Optional FileSystem to use.\n\n Returns:\n the same message passed as input with fields filled in from uri\n\n Raises:\n ProtobufParseException if uri cannot be parsed\n \"\"\"\n try:\n return json_format.Parse(file_to_str(uri, fs=fs), message)\n except json_format.ParseError as e:\n error_msg = ('Problem parsing protobuf file {}. '.format(uri) +\n 'You might need to run scripts/compile')\n raise ProtobufParseException(error_msg) from e\n\n\ndef save_json_config(message, uri, fs=None):\n \"\"\"Save a protobuf object to a JSON file.\n\n Args:\n message: (google.protobuf.message.Message) protobuf message\n uri: (string) URI of JSON file to write message to\n fs: Optional FileSystem to use\n\n Raises:\n NotWritableError if uri cannot be written\n \"\"\"\n json_str = json_format.MessageToJson(message)\n str_to_file(json_str, uri, fs=fs)\n", "path": "rastervision/utils/files.py"}]}
| 3,073 | 196 |
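A hypothetical unit-test sketch in the spirit of the record above, exercising the `rastervision.utils.files` helpers on the local filesystem. It assumes `list_paths` returns path strings ending in the file names, which the listing does not show:

```python
import os
import tempfile
import unittest

from rastervision.utils.files import file_to_str, str_to_file, list_paths


class TestFileUtils(unittest.TestCase):
    def test_str_round_trip_and_listing(self):
        with tempfile.TemporaryDirectory() as tmp_dir:
            uri = os.path.join(tmp_dir, 'config.json')
            str_to_file('{"a": 1}', uri)                    # local write
            self.assertEqual(file_to_str(uri), '{"a": 1}')  # local read
            self.assertTrue(
                any(p.endswith('config.json') for p in list_paths(tmp_dir)))


if __name__ == '__main__':
    unittest.main()
```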
gh_patches_debug_19300
|
rasdani/github-patches
|
git_diff
|
ytdl-org__youtube-dl-16427
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Ignore subdomains on reddit URLs
### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2018.05.09*. If it's not, read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected.
- [x] I've **verified** and **I assure** that I'm running youtube-dl **2018.05.09**
### Before submitting an *issue* make sure you have:
- [x] At least skimmed through the [README](https://github.com/rg3/youtube-dl/blob/master/README.md), **most notably** the [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections
- [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones
- [x] Checked that provided video/audio/playlist URLs (if any) are alive and playable in a browser
### What is the purpose of your *issue*?
- [ ] Bug report (encountered problems with youtube-dl)
- [x] Site support request (request for adding support for a new site)
- [ ] Feature request (request for a new functionality)
- [ ] Question
- [ ] Other
---
### If the purpose of this *issue* is a *site support request* please provide all kinds of example URLs support for which should be included (replace following example URLs by **yours**):
- Single video: https://nm.reddit.com/r/Cricket/comments/8idvby/lousy_cameraman_finds_himself_in_cairns_line_of/
Note that **youtube-dl does not support sites dedicated to [copyright infringement](https://github.com/rg3/youtube-dl#can-you-add-support-for-this-anime-video-site-or-site-which-shows-current-movies-for-free)**. In order for site support request to be accepted all provided example URLs should not violate any copyrights.
---
### Description of your *issue*, suggested solution and other information
Various subs use the dm or nm subdomain, which applies different CSS; there's also the np subdomain, which does the same. reddit also has a massive number of [2-letter subdomains that serve the site in different languages](https://np.reddit.com/r/redditdev/comments/2dh18g/is_there_a_list_of_redditcom_subdomains_not/).
youtube-dl already supports www and old, but it's not possible to add support for each and every subdomain, so it'd be better to ignore the subdomain altogether, since doing so doesn't affect reddit detection or .json API availability.
I'll submit a PR if this solution is acceptable.
</issue>
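To make the request concrete, here is a minimal sketch (not the actual extractor code; the pattern is only an illustration of "ignore the subdomain") checked against the example URLs above:

```python
import re

# Illustrative subdomain-agnostic pattern: any subdomain (or none) before reddit.com.
VALID_URL = r'https?://(?:[^/]+\.)?reddit\.com/r/[^/]+/comments/(?P<id>[^/?#&]+)'

urls = [
    'https://www.reddit.com/r/videos/comments/6rrwyj/that_small_heart_attack/',
    'https://old.reddit.com/r/MadeMeSmile/comments/6t7wi5/wait_for_it/',
    'https://nm.reddit.com/r/Cricket/comments/8idvby/lousy_cameraman_finds_himself_in_cairns_line_of/',
]

for url in urls:
    match = re.match(VALID_URL, url)
    # Every example URL should match and expose its comment id via the named group.
    print(url, '->', match.group('id') if match else 'no match')
```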
<code>
[start of youtube_dl/extractor/reddit.py]
1 from __future__ import unicode_literals
2
3 import re
4
5 from .common import InfoExtractor
6 from ..utils import (
7 ExtractorError,
8 int_or_none,
9 float_or_none,
10 )
11
12
13 class RedditIE(InfoExtractor):
14 _VALID_URL = r'https?://v\.redd\.it/(?P<id>[^/?#&]+)'
15 _TEST = {
16 # from https://www.reddit.com/r/videos/comments/6rrwyj/that_small_heart_attack/
17 'url': 'https://v.redd.it/zv89llsvexdz',
18 'md5': '0a070c53eba7ec4534d95a5a1259e253',
19 'info_dict': {
20 'id': 'zv89llsvexdz',
21 'ext': 'mp4',
22 'title': 'zv89llsvexdz',
23 },
24 'params': {
25 'format': 'bestvideo',
26 },
27 }
28
29 def _real_extract(self, url):
30 video_id = self._match_id(url)
31
32 formats = self._extract_m3u8_formats(
33 'https://v.redd.it/%s/HLSPlaylist.m3u8' % video_id, video_id,
34 'mp4', entry_protocol='m3u8_native', m3u8_id='hls', fatal=False)
35
36 formats.extend(self._extract_mpd_formats(
37 'https://v.redd.it/%s/DASHPlaylist.mpd' % video_id, video_id,
38 mpd_id='dash', fatal=False))
39
40 self._sort_formats(formats)
41
42 return {
43 'id': video_id,
44 'title': video_id,
45 'formats': formats,
46 }
47
48
49 class RedditRIE(InfoExtractor):
50 _VALID_URL = r'(?P<url>https?://(?:(?:www|old)\.)?reddit\.com/r/[^/]+/comments/(?P<id>[^/?#&]+))'
51 _TESTS = [{
52 'url': 'https://www.reddit.com/r/videos/comments/6rrwyj/that_small_heart_attack/',
53 'info_dict': {
54 'id': 'zv89llsvexdz',
55 'ext': 'mp4',
56 'title': 'That small heart attack.',
57 'thumbnail': r're:^https?://.*\.jpg$',
58 'timestamp': 1501941939,
59 'upload_date': '20170805',
60 'uploader': 'Antw87',
61 'like_count': int,
62 'dislike_count': int,
63 'comment_count': int,
64 'age_limit': 0,
65 },
66 'params': {
67 'format': 'bestvideo',
68 'skip_download': True,
69 },
70 }, {
71 'url': 'https://www.reddit.com/r/videos/comments/6rrwyj',
72 'only_matching': True,
73 }, {
74 # imgur
75 'url': 'https://www.reddit.com/r/MadeMeSmile/comments/6t7wi5/wait_for_it/',
76 'only_matching': True,
77 }, {
78 # imgur @ old reddit
79 'url': 'https://old.reddit.com/r/MadeMeSmile/comments/6t7wi5/wait_for_it/',
80 'only_matching': True,
81 }, {
82 # streamable
83 'url': 'https://www.reddit.com/r/videos/comments/6t7sg9/comedians_hilarious_joke_about_the_guam_flag/',
84 'only_matching': True,
85 }, {
86 # youtube
87 'url': 'https://www.reddit.com/r/videos/comments/6t75wq/southern_man_tries_to_speak_without_an_accent/',
88 'only_matching': True,
89 }]
90
91 def _real_extract(self, url):
92 mobj = re.match(self._VALID_URL, url)
93 url, video_id = mobj.group('url', 'id')
94
95 video_id = self._match_id(url)
96
97 data = self._download_json(
98 url + '/.json', video_id)[0]['data']['children'][0]['data']
99
100 video_url = data['url']
101
102 # Avoid recursing into the same reddit URL
103 if 'reddit.com/' in video_url and '/%s/' % video_id in video_url:
104 raise ExtractorError('No media found', expected=True)
105
106 over_18 = data.get('over_18')
107 if over_18 is True:
108 age_limit = 18
109 elif over_18 is False:
110 age_limit = 0
111 else:
112 age_limit = None
113
114 return {
115 '_type': 'url_transparent',
116 'url': video_url,
117 'title': data.get('title'),
118 'thumbnail': data.get('thumbnail'),
119 'timestamp': float_or_none(data.get('created_utc')),
120 'uploader': data.get('author'),
121 'like_count': int_or_none(data.get('ups')),
122 'dislike_count': int_or_none(data.get('downs')),
123 'comment_count': int_or_none(data.get('num_comments')),
124 'age_limit': age_limit,
125 }
126
[end of youtube_dl/extractor/reddit.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/youtube_dl/extractor/reddit.py b/youtube_dl/extractor/reddit.py
--- a/youtube_dl/extractor/reddit.py
+++ b/youtube_dl/extractor/reddit.py
@@ -47,7 +47,7 @@
class RedditRIE(InfoExtractor):
- _VALID_URL = r'(?P<url>https?://(?:(?:www|old)\.)?reddit\.com/r/[^/]+/comments/(?P<id>[^/?#&]+))'
+ _VALID_URL = r'(?P<url>https?://(?:[^/]+\.)?reddit\.com/r/[^/]+/comments/(?P<id>[^/?#&]+))'
_TESTS = [{
'url': 'https://www.reddit.com/r/videos/comments/6rrwyj/that_small_heart_attack/',
'info_dict': {
@@ -86,6 +86,10 @@
# youtube
'url': 'https://www.reddit.com/r/videos/comments/6t75wq/southern_man_tries_to_speak_without_an_accent/',
'only_matching': True,
+ }, {
+ # reddit video @ nm reddit
+ 'url': 'https://nm.reddit.com/r/Cricket/comments/8idvby/lousy_cameraman_finds_himself_in_cairns_line_of/',
+ 'only_matching': True,
}]
def _real_extract(self, url):
|
{"golden_diff": "diff --git a/youtube_dl/extractor/reddit.py b/youtube_dl/extractor/reddit.py\n--- a/youtube_dl/extractor/reddit.py\n+++ b/youtube_dl/extractor/reddit.py\n@@ -47,7 +47,7 @@\n \n \n class RedditRIE(InfoExtractor):\n- _VALID_URL = r'(?P<url>https?://(?:(?:www|old)\\.)?reddit\\.com/r/[^/]+/comments/(?P<id>[^/?#&]+))'\n+ _VALID_URL = r'(?P<url>https?://(?:[^/]+\\.)?reddit\\.com/r/[^/]+/comments/(?P<id>[^/?#&]+))'\n _TESTS = [{\n 'url': 'https://www.reddit.com/r/videos/comments/6rrwyj/that_small_heart_attack/',\n 'info_dict': {\n@@ -86,6 +86,10 @@\n # youtube\n 'url': 'https://www.reddit.com/r/videos/comments/6t75wq/southern_man_tries_to_speak_without_an_accent/',\n 'only_matching': True,\n+ }, {\n+ # reddit video @ nm reddit\n+ 'url': 'https://nm.reddit.com/r/Cricket/comments/8idvby/lousy_cameraman_finds_himself_in_cairns_line_of/',\n+ 'only_matching': True,\n }]\n \n def _real_extract(self, url):\n", "issue": "Ignore subdomains on reddit URLs\n### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2018.05.09*. If it's not, read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected.\r\n- [x] I've **verified** and **I assure** that I'm running youtube-dl **2018.05.09**\r\n\r\n### Before submitting an *issue* make sure you have:\r\n- [x] At least skimmed through the [README](https://github.com/rg3/youtube-dl/blob/master/README.md), **most notably** the [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections\r\n- [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones\r\n- [x] Checked that provided video/audio/playlist URLs (if any) are alive and playable in a browser\r\n\r\n### What is the purpose of your *issue*?\r\n- [ ] Bug report (encountered problems with youtube-dl)\r\n- [x] Site support request (request for adding support for a new site)\r\n- [ ] Feature request (request for a new functionality)\r\n- [ ] Question\r\n- [ ] Other\r\n\r\n---\r\n\r\n### If the purpose of this *issue* is a *site support request* please provide all kinds of example URLs support for which should be included (replace following example URLs by **yours**):\r\n- Single video: https://nm.reddit.com/r/Cricket/comments/8idvby/lousy_cameraman_finds_himself_in_cairns_line_of/\r\n\r\n\r\nNote that **youtube-dl does not support sites dedicated to [copyright infringement](https://github.com/rg3/youtube-dl#can-you-add-support-for-this-anime-video-site-or-site-which-shows-current-movies-for-free)**. In order for site support request to be accepted all provided example URLs should not violate any copyrights.\r\n\r\n---\r\n\r\n### Description of your *issue*, suggested solution and other information\r\n\r\nVarious subs use dm or nm subdomain which apply different css, there's also np subdomain which does the same. 
reddit also has a massive number of [2 alphabet subdomains to serve the site in different languages](https://np.reddit.com/r/redditdev/comments/2dh18g/is_there_a_list_of_redditcom_subdomains_not/).\r\n\r\nyoutube-dl already supports www and old, but it's not possible to add support for each and every one of them so it'd be better to ignore it altogether since it doesn't affect reddit detection or .json API availability.\r\n\r\nI'll submit a PR if this solution is acceptable.\nIgnore subdomains on reddit URLs\n### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2018.05.09*. If it's not, read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected.\r\n- [x] I've **verified** and **I assure** that I'm running youtube-dl **2018.05.09**\r\n\r\n### Before submitting an *issue* make sure you have:\r\n- [x] At least skimmed through the [README](https://github.com/rg3/youtube-dl/blob/master/README.md), **most notably** the [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections\r\n- [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones\r\n- [x] Checked that provided video/audio/playlist URLs (if any) are alive and playable in a browser\r\n\r\n### What is the purpose of your *issue*?\r\n- [ ] Bug report (encountered problems with youtube-dl)\r\n- [x] Site support request (request for adding support for a new site)\r\n- [ ] Feature request (request for a new functionality)\r\n- [ ] Question\r\n- [ ] Other\r\n\r\n---\r\n\r\n### If the purpose of this *issue* is a *site support request* please provide all kinds of example URLs support for which should be included (replace following example URLs by **yours**):\r\n- Single video: https://nm.reddit.com/r/Cricket/comments/8idvby/lousy_cameraman_finds_himself_in_cairns_line_of/\r\n\r\n\r\nNote that **youtube-dl does not support sites dedicated to [copyright infringement](https://github.com/rg3/youtube-dl#can-you-add-support-for-this-anime-video-site-or-site-which-shows-current-movies-for-free)**. In order for site support request to be accepted all provided example URLs should not violate any copyrights.\r\n\r\n---\r\n\r\n### Description of your *issue*, suggested solution and other information\r\n\r\nVarious subs use dm or nm subdomain which apply different css, there's also np subdomain which does the same. 
reddit also has a massive number of [2 alphabet subdomains to serve the site in different languages](https://np.reddit.com/r/redditdev/comments/2dh18g/is_there_a_list_of_redditcom_subdomains_not/).\r\n\r\nyoutube-dl already supports www and old, but it's not possible to add support for each and every one of them so it'd be better to ignore it altogether since it doesn't affect reddit detection or .json API availability.\r\n\r\nI'll submit a PR if this solution is acceptable.\n", "before_files": [{"content": "from __future__ import unicode_literals\n\nimport re\n\nfrom .common import InfoExtractor\nfrom ..utils import (\n ExtractorError,\n int_or_none,\n float_or_none,\n)\n\n\nclass RedditIE(InfoExtractor):\n _VALID_URL = r'https?://v\\.redd\\.it/(?P<id>[^/?#&]+)'\n _TEST = {\n # from https://www.reddit.com/r/videos/comments/6rrwyj/that_small_heart_attack/\n 'url': 'https://v.redd.it/zv89llsvexdz',\n 'md5': '0a070c53eba7ec4534d95a5a1259e253',\n 'info_dict': {\n 'id': 'zv89llsvexdz',\n 'ext': 'mp4',\n 'title': 'zv89llsvexdz',\n },\n 'params': {\n 'format': 'bestvideo',\n },\n }\n\n def _real_extract(self, url):\n video_id = self._match_id(url)\n\n formats = self._extract_m3u8_formats(\n 'https://v.redd.it/%s/HLSPlaylist.m3u8' % video_id, video_id,\n 'mp4', entry_protocol='m3u8_native', m3u8_id='hls', fatal=False)\n\n formats.extend(self._extract_mpd_formats(\n 'https://v.redd.it/%s/DASHPlaylist.mpd' % video_id, video_id,\n mpd_id='dash', fatal=False))\n\n self._sort_formats(formats)\n\n return {\n 'id': video_id,\n 'title': video_id,\n 'formats': formats,\n }\n\n\nclass RedditRIE(InfoExtractor):\n _VALID_URL = r'(?P<url>https?://(?:(?:www|old)\\.)?reddit\\.com/r/[^/]+/comments/(?P<id>[^/?#&]+))'\n _TESTS = [{\n 'url': 'https://www.reddit.com/r/videos/comments/6rrwyj/that_small_heart_attack/',\n 'info_dict': {\n 'id': 'zv89llsvexdz',\n 'ext': 'mp4',\n 'title': 'That small heart attack.',\n 'thumbnail': r're:^https?://.*\\.jpg$',\n 'timestamp': 1501941939,\n 'upload_date': '20170805',\n 'uploader': 'Antw87',\n 'like_count': int,\n 'dislike_count': int,\n 'comment_count': int,\n 'age_limit': 0,\n },\n 'params': {\n 'format': 'bestvideo',\n 'skip_download': True,\n },\n }, {\n 'url': 'https://www.reddit.com/r/videos/comments/6rrwyj',\n 'only_matching': True,\n }, {\n # imgur\n 'url': 'https://www.reddit.com/r/MadeMeSmile/comments/6t7wi5/wait_for_it/',\n 'only_matching': True,\n }, {\n # imgur @ old reddit\n 'url': 'https://old.reddit.com/r/MadeMeSmile/comments/6t7wi5/wait_for_it/',\n 'only_matching': True,\n }, {\n # streamable\n 'url': 'https://www.reddit.com/r/videos/comments/6t7sg9/comedians_hilarious_joke_about_the_guam_flag/',\n 'only_matching': True,\n }, {\n # youtube\n 'url': 'https://www.reddit.com/r/videos/comments/6t75wq/southern_man_tries_to_speak_without_an_accent/',\n 'only_matching': True,\n }]\n\n def _real_extract(self, url):\n mobj = re.match(self._VALID_URL, url)\n url, video_id = mobj.group('url', 'id')\n\n video_id = self._match_id(url)\n\n data = self._download_json(\n url + '/.json', video_id)[0]['data']['children'][0]['data']\n\n video_url = data['url']\n\n # Avoid recursing into the same reddit URL\n if 'reddit.com/' in video_url and '/%s/' % video_id in video_url:\n raise ExtractorError('No media found', expected=True)\n\n over_18 = data.get('over_18')\n if over_18 is True:\n age_limit = 18\n elif over_18 is False:\n age_limit = 0\n else:\n age_limit = None\n\n return {\n '_type': 'url_transparent',\n 'url': video_url,\n 'title': data.get('title'),\n 'thumbnail': 
data.get('thumbnail'),\n 'timestamp': float_or_none(data.get('created_utc')),\n 'uploader': data.get('author'),\n 'like_count': int_or_none(data.get('ups')),\n 'dislike_count': int_or_none(data.get('downs')),\n 'comment_count': int_or_none(data.get('num_comments')),\n 'age_limit': age_limit,\n }\n", "path": "youtube_dl/extractor/reddit.py"}]}
| 3,208 | 326 |
gh_patches_debug_35109
|
rasdani/github-patches
|
git_diff
|
open-telemetry__opentelemetry-python-1823
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
B3 propagator package does not conform to spec.
According to the spec, `OTEL_PROPAGATORS` env var should support two types of B3 propagators, `b3` and `b3multi`.
`b3` is supposed to inject a single header with `-`-separated values. https://github.com/openzipkin/b3-propagation#single-header
`b3multi` is supposed to inject one header per value. https://github.com/openzipkin/b3-propagation#multiple-headers
Currently, we have a single implementation at `opentelemetry.propagators.b3.B3Format`, which is actually the multi-header implementation.
This implementation is exposed as a propagator entry point with the name `b3` instead of `b3multi`.
This means setting `OTEL_PROPAGATORS=b3` will actually use a multi-header B3 propagator instead of a single-header one, whereas setting `OTEL_PROPAGATORS=b3multi` will raise an exception.
IMO, this is a bug and should be fixed as follows:
1. Add two new propagators called `B3SingleFormat` and `B3MultiFormat`.
2. Deprecate the existing `B3Format` propagator without changing behavior. Deprecation notice should recommend using `B3MultiFormat`.
3. Change `b3` entrypoint to point to `B3SingleFormat` implementation (*breaking behavioral change*).
4. Add a new `b3multi` entrypoint to point to `B3MultiFormat` implementation.
</issue>
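To make the two formats concrete, here is a minimal sketch of the headers each propagator is expected to emit; the ids are invented example values, not API calls:

```python
# Example values only (hex-encoded trace/span ids as in the B3 spec).
trace_id = "80f198ee56343ba864fe8b2a57d3eff7"
span_id = "e457b5a2e4d86bd1"
sampled = "1"

# b3multi: one header per value.
multi_headers = {
    "x-b3-traceid": trace_id,
    "x-b3-spanid": span_id,
    "x-b3-sampled": sampled,
}

# b3 (single header): one "b3" header with dash-separated values.
single_header = {"b3": "-".join([trace_id, span_id, sampled])}

print(multi_headers)
print(single_header)  # {'b3': '<traceid>-<spanid>-1'}
```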
<code>
[start of propagator/opentelemetry-propagator-b3/src/opentelemetry/propagators/b3/__init__.py]
1 # Copyright The OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import typing
16 from re import compile as re_compile
17
18 import opentelemetry.trace as trace
19 from opentelemetry.context import Context
20 from opentelemetry.propagators.textmap import (
21 CarrierT,
22 Getter,
23 Setter,
24 TextMapPropagator,
25 default_getter,
26 default_setter,
27 )
28 from opentelemetry.trace import format_span_id, format_trace_id
29
30
31 class B3Format(TextMapPropagator):
32 """Propagator for the B3 HTTP header format.
33
34 See: https://github.com/openzipkin/b3-propagation
35 """
36
37 SINGLE_HEADER_KEY = "b3"
38 TRACE_ID_KEY = "x-b3-traceid"
39 SPAN_ID_KEY = "x-b3-spanid"
40 PARENT_SPAN_ID_KEY = "x-b3-parentspanid"
41 SAMPLED_KEY = "x-b3-sampled"
42 FLAGS_KEY = "x-b3-flags"
43 _SAMPLE_PROPAGATE_VALUES = set(["1", "True", "true", "d"])
44 _trace_id_regex = re_compile(r"[\da-fA-F]{16}|[\da-fA-F]{32}")
45 _span_id_regex = re_compile(r"[\da-fA-F]{16}")
46
47 def extract(
48 self,
49 carrier: CarrierT,
50 context: typing.Optional[Context] = None,
51 getter: Getter = default_getter,
52 ) -> Context:
53 if context is None:
54 context = Context()
55 trace_id = trace.INVALID_TRACE_ID
56 span_id = trace.INVALID_SPAN_ID
57 sampled = "0"
58 flags = None
59
60 single_header = _extract_first_element(
61 getter.get(carrier, self.SINGLE_HEADER_KEY)
62 )
63 if single_header:
64 # The b3 spec calls for the sampling state to be
65 # "deferred", which is unspecified. This concept does not
66 # translate to SpanContext, so we set it as recorded.
67 sampled = "1"
68 fields = single_header.split("-", 4)
69
70 if len(fields) == 1:
71 sampled = fields[0]
72 elif len(fields) == 2:
73 trace_id, span_id = fields
74 elif len(fields) == 3:
75 trace_id, span_id, sampled = fields
76 elif len(fields) == 4:
77 trace_id, span_id, sampled, _ = fields
78 else:
79 trace_id = (
80 _extract_first_element(getter.get(carrier, self.TRACE_ID_KEY))
81 or trace_id
82 )
83 span_id = (
84 _extract_first_element(getter.get(carrier, self.SPAN_ID_KEY))
85 or span_id
86 )
87 sampled = (
88 _extract_first_element(getter.get(carrier, self.SAMPLED_KEY))
89 or sampled
90 )
91 flags = (
92 _extract_first_element(getter.get(carrier, self.FLAGS_KEY))
93 or flags
94 )
95
96 if (
97 trace_id == trace.INVALID_TRACE_ID
98 or span_id == trace.INVALID_SPAN_ID
99 or self._trace_id_regex.fullmatch(trace_id) is None
100 or self._span_id_regex.fullmatch(span_id) is None
101 ):
102 return context
103
104 trace_id = int(trace_id, 16)
105 span_id = int(span_id, 16)
106 options = 0
107 # The b3 spec provides no defined behavior for both sample and
108 # flag values set. Since the setting of at least one implies
109 # the desire for some form of sampling, propagate if either
110 # header is set to allow.
111 if sampled in self._SAMPLE_PROPAGATE_VALUES or flags == "1":
112 options |= trace.TraceFlags.SAMPLED
113
114 return trace.set_span_in_context(
115 trace.NonRecordingSpan(
116 trace.SpanContext(
117 # trace an span ids are encoded in hex, so must be converted
118 trace_id=trace_id,
119 span_id=span_id,
120 is_remote=True,
121 trace_flags=trace.TraceFlags(options),
122 trace_state=trace.TraceState(),
123 )
124 ),
125 context,
126 )
127
128 def inject(
129 self,
130 carrier: CarrierT,
131 context: typing.Optional[Context] = None,
132 setter: Setter = default_setter,
133 ) -> None:
134 span = trace.get_current_span(context=context)
135
136 span_context = span.get_span_context()
137 if span_context == trace.INVALID_SPAN_CONTEXT:
138 return
139
140 sampled = (trace.TraceFlags.SAMPLED & span_context.trace_flags) != 0
141 setter.set(
142 carrier,
143 self.TRACE_ID_KEY,
144 format_trace_id(span_context.trace_id),
145 )
146 setter.set(
147 carrier, self.SPAN_ID_KEY, format_span_id(span_context.span_id)
148 )
149 span_parent = getattr(span, "parent", None)
150 if span_parent is not None:
151 setter.set(
152 carrier,
153 self.PARENT_SPAN_ID_KEY,
154 format_span_id(span_parent.span_id),
155 )
156 setter.set(carrier, self.SAMPLED_KEY, "1" if sampled else "0")
157
158 @property
159 def fields(self) -> typing.Set[str]:
160 return {
161 self.TRACE_ID_KEY,
162 self.SPAN_ID_KEY,
163 self.PARENT_SPAN_ID_KEY,
164 self.SAMPLED_KEY,
165 }
166
167
168 def _extract_first_element(
169 items: typing.Iterable[CarrierT],
170 ) -> typing.Optional[CarrierT]:
171 if items is None:
172 return None
173 return next(iter(items), None)
174
[end of propagator/opentelemetry-propagator-b3/src/opentelemetry/propagators/b3/__init__.py]
</code>
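For orientation, a minimal usage sketch of the propagator defined above (assuming the package is installed); per the `inject` code, the carrier is left untouched when there is no active span:

```python
from opentelemetry.propagators.b3 import B3Format

carrier = {}
B3Format().inject(carrier)  # uses the current span from the active context
print(carrier)  # {} without a valid active span; x-b3-* keys otherwise
```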
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/propagator/opentelemetry-propagator-b3/src/opentelemetry/propagators/b3/__init__.py b/propagator/opentelemetry-propagator-b3/src/opentelemetry/propagators/b3/__init__.py
--- a/propagator/opentelemetry-propagator-b3/src/opentelemetry/propagators/b3/__init__.py
+++ b/propagator/opentelemetry-propagator-b3/src/opentelemetry/propagators/b3/__init__.py
@@ -15,6 +15,8 @@
import typing
from re import compile as re_compile
+from deprecated import deprecated
+
import opentelemetry.trace as trace
from opentelemetry.context import Context
from opentelemetry.propagators.textmap import (
@@ -28,10 +30,11 @@
from opentelemetry.trace import format_span_id, format_trace_id
-class B3Format(TextMapPropagator):
- """Propagator for the B3 HTTP header format.
+class B3MultiFormat(TextMapPropagator):
+ """Propagator for the B3 HTTP multi-header format.
See: https://github.com/openzipkin/b3-propagation
+ https://github.com/openzipkin/b3-propagation#multiple-headers
"""
SINGLE_HEADER_KEY = "b3"
@@ -165,6 +168,53 @@
}
+class B3SingleFormat(B3MultiFormat):
+ """Propagator for the B3 HTTP single-header format.
+
+ See: https://github.com/openzipkin/b3-propagation
+ https://github.com/openzipkin/b3-propagation#single-header
+ """
+
+ def inject(
+ self,
+ carrier: CarrierT,
+ context: typing.Optional[Context] = None,
+ setter: Setter = default_setter,
+ ) -> None:
+ span = trace.get_current_span(context=context)
+
+ span_context = span.get_span_context()
+ if span_context == trace.INVALID_SPAN_CONTEXT:
+ return
+
+ sampled = (trace.TraceFlags.SAMPLED & span_context.trace_flags) != 0
+
+ fields = [
+ format_trace_id(span_context.trace_id),
+ format_span_id(span_context.span_id),
+ "1" if sampled else "0",
+ ]
+
+ span_parent = getattr(span, "parent", None)
+ if span_parent:
+ fields.append(format_span_id(span_parent.span_id))
+
+ setter.set(carrier, self.SINGLE_HEADER_KEY, "-".join(fields))
+
+ @property
+ def fields(self) -> typing.Set[str]:
+ return {self.SINGLE_HEADER_KEY}
+
+
+class B3Format(B3MultiFormat):
+ @deprecated(
+ version="1.2.0",
+ reason="B3Format is deprecated in favor of B3MultiFormat",
+ )
+ def __init__(self, *args, **kwargs):
+ super().__init__(*args, **kwargs)
+
+
def _extract_first_element(
items: typing.Iterable[CarrierT],
) -> typing.Optional[CarrierT]:
|
{"golden_diff": "diff --git a/propagator/opentelemetry-propagator-b3/src/opentelemetry/propagators/b3/__init__.py b/propagator/opentelemetry-propagator-b3/src/opentelemetry/propagators/b3/__init__.py\n--- a/propagator/opentelemetry-propagator-b3/src/opentelemetry/propagators/b3/__init__.py\n+++ b/propagator/opentelemetry-propagator-b3/src/opentelemetry/propagators/b3/__init__.py\n@@ -15,6 +15,8 @@\n import typing\n from re import compile as re_compile\n \n+from deprecated import deprecated\n+\n import opentelemetry.trace as trace\n from opentelemetry.context import Context\n from opentelemetry.propagators.textmap import (\n@@ -28,10 +30,11 @@\n from opentelemetry.trace import format_span_id, format_trace_id\n \n \n-class B3Format(TextMapPropagator):\n- \"\"\"Propagator for the B3 HTTP header format.\n+class B3MultiFormat(TextMapPropagator):\n+ \"\"\"Propagator for the B3 HTTP multi-header format.\n \n See: https://github.com/openzipkin/b3-propagation\n+ https://github.com/openzipkin/b3-propagation#multiple-headers\n \"\"\"\n \n SINGLE_HEADER_KEY = \"b3\"\n@@ -165,6 +168,53 @@\n }\n \n \n+class B3SingleFormat(B3MultiFormat):\n+ \"\"\"Propagator for the B3 HTTP single-header format.\n+\n+ See: https://github.com/openzipkin/b3-propagation\n+ https://github.com/openzipkin/b3-propagation#single-header\n+ \"\"\"\n+\n+ def inject(\n+ self,\n+ carrier: CarrierT,\n+ context: typing.Optional[Context] = None,\n+ setter: Setter = default_setter,\n+ ) -> None:\n+ span = trace.get_current_span(context=context)\n+\n+ span_context = span.get_span_context()\n+ if span_context == trace.INVALID_SPAN_CONTEXT:\n+ return\n+\n+ sampled = (trace.TraceFlags.SAMPLED & span_context.trace_flags) != 0\n+\n+ fields = [\n+ format_trace_id(span_context.trace_id),\n+ format_span_id(span_context.span_id),\n+ \"1\" if sampled else \"0\",\n+ ]\n+\n+ span_parent = getattr(span, \"parent\", None)\n+ if span_parent:\n+ fields.append(format_span_id(span_parent.span_id))\n+\n+ setter.set(carrier, self.SINGLE_HEADER_KEY, \"-\".join(fields))\n+\n+ @property\n+ def fields(self) -> typing.Set[str]:\n+ return {self.SINGLE_HEADER_KEY}\n+\n+\n+class B3Format(B3MultiFormat):\n+ @deprecated(\n+ version=\"1.2.0\",\n+ reason=\"B3Format is deprecated in favor of B3MultiFormat\",\n+ )\n+ def __init__(self, *args, **kwargs):\n+ super().__init__(*args, **kwargs)\n+\n+\n def _extract_first_element(\n items: typing.Iterable[CarrierT],\n ) -> typing.Optional[CarrierT]:\n", "issue": "B3 propagator package does not confirm to spec.\nAccording to the spec, `OTEL_PROPAGATORS` env var should support two types of B3 propagators, `b3` and `b3multi`. \r\n\r\n`b3` is supposed to inject a single header with `-` separate values. https://github.com/openzipkin/b3-propagation#single-header\r\n`b3multi` is supposed to inject one header per value. https://github.com/openzipkin/b3-propagation#multiple-headers\r\n\r\nCurrently, we have a single implement at `opentelemetry.propgators.b3.B3Format` which actually is the multi-header implementation. \r\n\r\nThis implementation is exposed as an propagator entry-point with the name `b3` instead of `b3multi`.\r\n\r\nThis means setting `OTEL_PROPAGATORS=b3` will actually use a multi-header B3 propagator instead of single header while as setting `OTEL_PROPAGATORS=b3multi` will raise an exception.\r\n\r\nIMO, this is a bug and should be fixed as follows:\r\n\r\n1. Add two new propagators called `B3SingleFormat` and `B3MultiFormat`.\r\n2. Deprecate the existing `B3Format` propagator without changing behavior. 
Deprecation notice should recommend using `B3MultiFormat`.\r\n3. Change `b3` entrypoint to point to `B3SingleFormat` implementation (*breaking behavioral change*).\r\n4. Add a new `b3multi` entrypoint to point to `B3MultiFormat` implementation.\r\n\r\n\n", "before_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport typing\nfrom re import compile as re_compile\n\nimport opentelemetry.trace as trace\nfrom opentelemetry.context import Context\nfrom opentelemetry.propagators.textmap import (\n CarrierT,\n Getter,\n Setter,\n TextMapPropagator,\n default_getter,\n default_setter,\n)\nfrom opentelemetry.trace import format_span_id, format_trace_id\n\n\nclass B3Format(TextMapPropagator):\n \"\"\"Propagator for the B3 HTTP header format.\n\n See: https://github.com/openzipkin/b3-propagation\n \"\"\"\n\n SINGLE_HEADER_KEY = \"b3\"\n TRACE_ID_KEY = \"x-b3-traceid\"\n SPAN_ID_KEY = \"x-b3-spanid\"\n PARENT_SPAN_ID_KEY = \"x-b3-parentspanid\"\n SAMPLED_KEY = \"x-b3-sampled\"\n FLAGS_KEY = \"x-b3-flags\"\n _SAMPLE_PROPAGATE_VALUES = set([\"1\", \"True\", \"true\", \"d\"])\n _trace_id_regex = re_compile(r\"[\\da-fA-F]{16}|[\\da-fA-F]{32}\")\n _span_id_regex = re_compile(r\"[\\da-fA-F]{16}\")\n\n def extract(\n self,\n carrier: CarrierT,\n context: typing.Optional[Context] = None,\n getter: Getter = default_getter,\n ) -> Context:\n if context is None:\n context = Context()\n trace_id = trace.INVALID_TRACE_ID\n span_id = trace.INVALID_SPAN_ID\n sampled = \"0\"\n flags = None\n\n single_header = _extract_first_element(\n getter.get(carrier, self.SINGLE_HEADER_KEY)\n )\n if single_header:\n # The b3 spec calls for the sampling state to be\n # \"deferred\", which is unspecified. This concept does not\n # translate to SpanContext, so we set it as recorded.\n sampled = \"1\"\n fields = single_header.split(\"-\", 4)\n\n if len(fields) == 1:\n sampled = fields[0]\n elif len(fields) == 2:\n trace_id, span_id = fields\n elif len(fields) == 3:\n trace_id, span_id, sampled = fields\n elif len(fields) == 4:\n trace_id, span_id, sampled, _ = fields\n else:\n trace_id = (\n _extract_first_element(getter.get(carrier, self.TRACE_ID_KEY))\n or trace_id\n )\n span_id = (\n _extract_first_element(getter.get(carrier, self.SPAN_ID_KEY))\n or span_id\n )\n sampled = (\n _extract_first_element(getter.get(carrier, self.SAMPLED_KEY))\n or sampled\n )\n flags = (\n _extract_first_element(getter.get(carrier, self.FLAGS_KEY))\n or flags\n )\n\n if (\n trace_id == trace.INVALID_TRACE_ID\n or span_id == trace.INVALID_SPAN_ID\n or self._trace_id_regex.fullmatch(trace_id) is None\n or self._span_id_regex.fullmatch(span_id) is None\n ):\n return context\n\n trace_id = int(trace_id, 16)\n span_id = int(span_id, 16)\n options = 0\n # The b3 spec provides no defined behavior for both sample and\n # flag values set. 
Since the setting of at least one implies\n # the desire for some form of sampling, propagate if either\n # header is set to allow.\n if sampled in self._SAMPLE_PROPAGATE_VALUES or flags == \"1\":\n options |= trace.TraceFlags.SAMPLED\n\n return trace.set_span_in_context(\n trace.NonRecordingSpan(\n trace.SpanContext(\n # trace an span ids are encoded in hex, so must be converted\n trace_id=trace_id,\n span_id=span_id,\n is_remote=True,\n trace_flags=trace.TraceFlags(options),\n trace_state=trace.TraceState(),\n )\n ),\n context,\n )\n\n def inject(\n self,\n carrier: CarrierT,\n context: typing.Optional[Context] = None,\n setter: Setter = default_setter,\n ) -> None:\n span = trace.get_current_span(context=context)\n\n span_context = span.get_span_context()\n if span_context == trace.INVALID_SPAN_CONTEXT:\n return\n\n sampled = (trace.TraceFlags.SAMPLED & span_context.trace_flags) != 0\n setter.set(\n carrier,\n self.TRACE_ID_KEY,\n format_trace_id(span_context.trace_id),\n )\n setter.set(\n carrier, self.SPAN_ID_KEY, format_span_id(span_context.span_id)\n )\n span_parent = getattr(span, \"parent\", None)\n if span_parent is not None:\n setter.set(\n carrier,\n self.PARENT_SPAN_ID_KEY,\n format_span_id(span_parent.span_id),\n )\n setter.set(carrier, self.SAMPLED_KEY, \"1\" if sampled else \"0\")\n\n @property\n def fields(self) -> typing.Set[str]:\n return {\n self.TRACE_ID_KEY,\n self.SPAN_ID_KEY,\n self.PARENT_SPAN_ID_KEY,\n self.SAMPLED_KEY,\n }\n\n\ndef _extract_first_element(\n items: typing.Iterable[CarrierT],\n) -> typing.Optional[CarrierT]:\n if items is None:\n return None\n return next(iter(items), None)\n", "path": "propagator/opentelemetry-propagator-b3/src/opentelemetry/propagators/b3/__init__.py"}]}
| 2,621 | 701 |
gh_patches_debug_19018
|
rasdani/github-patches
|
git_diff
|
scoutapp__scout_apm_python-512
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Django log levels are ignored by core agent
No matter what I did to the Django log levels, I kept getting this message when running Django management tasks:
```
[2020-04-02T09:36:06][core_agent][INFO] Initializing logger with log level: Info
```
This is super annoying, because it causes programs in my crontab to start sending out lots of e-mails, since previously quiet tasks now produce output.
After a lot of trial and error, I figured out that you can set `SCOUT_LOG_LEVEL` in Django settings to suppress this message.
This really should be fixed so the agent also honors the Django logging settings, or at least be documented under the logging section in the Python documentation. It's really non-obvious that the agent is a separate thing which ignores the Django logging settings.
</issue>
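For reference, the workaround mentioned above boils down to one Django setting; the exact level name accepted here is an assumption (anything above INFO should hide the message):

```python
# settings.py -- sketch of the workaround described in the issue.
SCOUT_LOG_LEVEL = "ERROR"  # assumed level name; raises the core agent's log level above INFO
```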
<code>
[start of src/scout_apm/core/core_agent_manager.py]
1 # coding=utf-8
2 from __future__ import absolute_import, division, print_function, unicode_literals
3
4 import hashlib
5 import json
6 import logging
7 import os
8 import subprocess
9 import tarfile
10 import time
11
12 from urllib3.exceptions import HTTPError
13
14 from scout_apm.compat import urllib3_cert_pool_manager
15 from scout_apm.core.config import scout_config
16
17 logger = logging.getLogger(__name__)
18
19
20 class CoreAgentManager(object):
21 def __init__(self):
22 self.core_agent_bin_path = None
23 self.core_agent_bin_version = None
24 self.core_agent_dir = "{}/{}".format(
25 scout_config.value("core_agent_dir"),
26 scout_config.value("core_agent_full_name"),
27 )
28 self.downloader = CoreAgentDownloader(
29 self.core_agent_dir, scout_config.value("core_agent_full_name")
30 )
31
32 def launch(self):
33 if not scout_config.value("core_agent_launch"):
34 logger.debug(
35 "Not attempting to launch Core Agent "
36 "due to 'core_agent_launch' setting."
37 )
38 return False
39
40 if not self.verify():
41 if not scout_config.value("core_agent_download"):
42 logger.debug(
43 "Not attempting to download Core Agent due "
44 "to 'core_agent_download' setting."
45 )
46 return False
47
48 self.download()
49
50 if not self.verify():
51 logger.debug("Failed to verify Core Agent. Not launching Core Agent.")
52 return False
53
54 return self.run()
55
56 def download(self):
57 self.downloader.download()
58
59 def run(self):
60 try:
61 subprocess.check_call(
62 (
63 self.agent_binary()
64 + self.daemonize_flag()
65 + self.log_level()
66 + self.log_file()
67 + self.config_file()
68 + self.socket_path()
69 ),
70 close_fds=True,
71 )
72 except Exception:
73 # TODO detect failure of launch properly
74 logger.exception("Error running Core Agent")
75 return False
76 return True
77
78 def agent_binary(self):
79 return [self.core_agent_bin_path, "start"]
80
81 def daemonize_flag(self):
82 return ["--daemonize", "true"]
83
84 def socket_path(self):
85 # Old deprecated name "socket_path"
86 socket_path = scout_config.value("socket_path")
87 if socket_path is None:
88 socket_path = scout_config.value("core_agent_socket_path")
89 return ["--socket", socket_path]
90
91 def log_level(self):
92 # Old deprecated name "log_level"
93 log_level = scout_config.value("log_level")
94 if log_level is None:
95 log_level = scout_config.value("core_agent_log_level")
96 return ["--log-level", log_level]
97
98 def log_file(self):
99 # Old deprecated name "log_file"
100 path = scout_config.value("log_file")
101 if path is None:
102 path = scout_config.value("core_agent_log_file")
103
104 if path is not None:
105 return ["--log-file", path]
106 else:
107 return []
108
109 def config_file(self):
110 # Old deprecated name "config_file"
111 path = scout_config.value("config_file")
112 if path is None:
113 path = scout_config.value("core_agent_config_file")
114
115 if path is not None:
116 return ["--config-file", path]
117 else:
118 return []
119
120 def verify(self):
121 manifest = CoreAgentManifest(self.core_agent_dir + "/manifest.json")
122 if not manifest.is_valid():
123 logger.debug(
124 "Core Agent verification failed: CoreAgentManifest is not valid."
125 )
126 self.core_agent_bin_path = None
127 self.core_agent_bin_version = None
128 return False
129
130 bin_path = self.core_agent_dir + "/" + manifest.bin_name
131 if sha256_digest(bin_path) == manifest.sha256:
132 self.core_agent_bin_path = bin_path
133 self.core_agent_bin_version = manifest.bin_version
134 return True
135 else:
136 logger.debug("Core Agent verification failed: SHA mismatch.")
137 self.core_agent_bin_path = None
138 self.core_agent_bin_version = None
139 return False
140
141
142 class CoreAgentDownloader(object):
143 def __init__(self, download_destination, core_agent_full_name):
144 self.stale_download_secs = 120
145 self.destination = download_destination
146 self.core_agent_full_name = core_agent_full_name
147 self.package_location = self.destination + "/{}.tgz".format(
148 self.core_agent_full_name
149 )
150 self.download_lock_path = self.destination + "/download.lock"
151 self.download_lock_fd = None
152
153 def download(self):
154 self.create_core_agent_dir()
155 self.obtain_download_lock()
156 if self.download_lock_fd is not None:
157 try:
158 downloaded = self.download_package()
159 if downloaded:
160 self.untar()
161 except (OSError, HTTPError):
162 logger.exception("Exception raised while downloading Core Agent")
163 finally:
164 self.release_download_lock()
165
166 def create_core_agent_dir(self):
167 try:
168 os.makedirs(self.destination, scout_config.core_agent_permissions())
169 except OSError:
170 pass
171
172 def obtain_download_lock(self):
173 self.clean_stale_download_lock()
174 try:
175 self.download_lock_fd = os.open(
176 self.download_lock_path,
177 os.O_RDWR | os.O_CREAT | os.O_EXCL | os.O_NONBLOCK,
178 )
179 except OSError as exc:
180 logger.debug(
181 "Could not obtain download lock on %s",
182 self.download_lock_path,
183 exc_info=exc,
184 )
185 self.download_lock_fd = None
186
187 def clean_stale_download_lock(self):
188 try:
189 delta = time.time() - os.stat(self.download_lock_path).st_ctime
190 if delta > self.stale_download_secs:
191 logger.debug("Clearing stale download lock file.")
192 os.unlink(self.download_lock_path)
193 except OSError:
194 pass
195
196 def release_download_lock(self):
197 if self.download_lock_fd is not None:
198 os.unlink(self.download_lock_path)
199 os.close(self.download_lock_fd)
200
201 def download_package(self):
202 full_url = self.full_url()
203 logger.debug("Downloading: %s to %s", full_url, self.package_location)
204 http = urllib3_cert_pool_manager()
205 response = http.request(
206 "GET", full_url, preload_content=False, timeout=10.0, retries=3
207 )
208 try:
209 if response.status != 200:
210 return False
211 with open(self.package_location, "wb") as fp:
212 for chunk in response.stream():
213 fp.write(chunk)
214 finally:
215 response.release_conn()
216 return True
217
218 def untar(self):
219 t = tarfile.open(self.package_location, "r")
220 t.extractall(self.destination)
221
222 def full_url(self):
223 return "{root_url}/{core_agent_full_name}.tgz".format(
224 root_url=self.root_url(), core_agent_full_name=self.core_agent_full_name
225 )
226
227 def root_url(self):
228 return scout_config.value("download_url")
229
230
231 class CoreAgentManifest(object):
232 def __init__(self, path):
233 self.manifest_path = path
234 self.bin_name = None
235 self.bin_version = None
236 self.sha256 = None
237 self.valid = False
238 try:
239 self.parse()
240 # noqa for this issue: https://github.com/PyCQA/flake8-bugbear/issues/110
241 except (ValueError, TypeError, OSError, IOError) as exc: # noqa: B014
242 logger.debug("Error parsing Core Agent Manifest", exc_info=exc)
243
244 def parse(self):
245 logger.debug("Parsing Core Agent manifest path: %s", self.manifest_path)
246 with open(self.manifest_path) as manifest_file:
247 self.raw = manifest_file.read()
248 self.json = json.loads(self.raw)
249 self.version = self.json["version"]
250 self.bin_version = self.json["core_agent_version"]
251 self.bin_name = self.json["core_agent_binary"]
252 self.sha256 = self.json["core_agent_binary_sha256"]
253 self.valid = True
254 logger.debug("Core Agent manifest json: %s", self.json)
255
256 def is_valid(self):
257 return self.valid
258
259
260 def sha256_digest(filename, block_size=65536):
261 try:
262 sha256 = hashlib.sha256()
263 with open(filename, "rb") as f:
264 for block in iter(lambda: f.read(block_size), b""):
265 sha256.update(block)
266 return sha256.hexdigest()
267 except OSError as exc:
268 logger.debug("Error on digest", exc_info=exc)
269 return None
270
[end of src/scout_apm/core/core_agent_manager.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/scout_apm/core/core_agent_manager.py b/src/scout_apm/core/core_agent_manager.py
--- a/src/scout_apm/core/core_agent_manager.py
+++ b/src/scout_apm/core/core_agent_manager.py
@@ -58,17 +58,19 @@
def run(self):
try:
- subprocess.check_call(
- (
- self.agent_binary()
- + self.daemonize_flag()
- + self.log_level()
- + self.log_file()
- + self.config_file()
- + self.socket_path()
- ),
- close_fds=True,
- )
+ with open(os.devnull) as devnull:
+ subprocess.check_call(
+ (
+ self.agent_binary()
+ + self.daemonize_flag()
+ + self.log_level()
+ + self.log_file()
+ + self.config_file()
+ + self.socket_path()
+ ),
+ close_fds=True,
+ stdout=devnull,
+ )
except Exception:
# TODO detect failure of launch properly
logger.exception("Error running Core Agent")
|
{"golden_diff": "diff --git a/src/scout_apm/core/core_agent_manager.py b/src/scout_apm/core/core_agent_manager.py\n--- a/src/scout_apm/core/core_agent_manager.py\n+++ b/src/scout_apm/core/core_agent_manager.py\n@@ -58,17 +58,19 @@\n \n def run(self):\n try:\n- subprocess.check_call(\n- (\n- self.agent_binary()\n- + self.daemonize_flag()\n- + self.log_level()\n- + self.log_file()\n- + self.config_file()\n- + self.socket_path()\n- ),\n- close_fds=True,\n- )\n+ with open(os.devnull) as devnull:\n+ subprocess.check_call(\n+ (\n+ self.agent_binary()\n+ + self.daemonize_flag()\n+ + self.log_level()\n+ + self.log_file()\n+ + self.config_file()\n+ + self.socket_path()\n+ ),\n+ close_fds=True,\n+ stdout=devnull,\n+ )\n except Exception:\n # TODO detect failure of launch properly\n logger.exception(\"Error running Core Agent\")\n", "issue": "Django log levels are ignored by core agent\nNo matter what I did to the Django log levels, I kept getting this message when running Django management tasks:\r\n\r\n```\r\n[2020-04-02T09:36:06][core_agent][INFO] Initializing logger with log level: Info\r\n```\r\n\r\nThis is super annoying, because this causes programs in my crontab to start sending out lots of e-mails because previously quiet tasks would now start producing output.\r\n\r\nAfter a lot of trial and error, I figured out that you can set `SCOUT_LOG_LEVEL` in Django settings to suppress this message.\r\n\r\nThis really should be fixed so the agent also honors the Django logging settings, or at least documented under the logging section in the Python documentation. It's really non-obvious that this the agent is a separate thing which ignores the Django logging settings.\n", "before_files": [{"content": "# coding=utf-8\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport hashlib\nimport json\nimport logging\nimport os\nimport subprocess\nimport tarfile\nimport time\n\nfrom urllib3.exceptions import HTTPError\n\nfrom scout_apm.compat import urllib3_cert_pool_manager\nfrom scout_apm.core.config import scout_config\n\nlogger = logging.getLogger(__name__)\n\n\nclass CoreAgentManager(object):\n def __init__(self):\n self.core_agent_bin_path = None\n self.core_agent_bin_version = None\n self.core_agent_dir = \"{}/{}\".format(\n scout_config.value(\"core_agent_dir\"),\n scout_config.value(\"core_agent_full_name\"),\n )\n self.downloader = CoreAgentDownloader(\n self.core_agent_dir, scout_config.value(\"core_agent_full_name\")\n )\n\n def launch(self):\n if not scout_config.value(\"core_agent_launch\"):\n logger.debug(\n \"Not attempting to launch Core Agent \"\n \"due to 'core_agent_launch' setting.\"\n )\n return False\n\n if not self.verify():\n if not scout_config.value(\"core_agent_download\"):\n logger.debug(\n \"Not attempting to download Core Agent due \"\n \"to 'core_agent_download' setting.\"\n )\n return False\n\n self.download()\n\n if not self.verify():\n logger.debug(\"Failed to verify Core Agent. 
Not launching Core Agent.\")\n return False\n\n return self.run()\n\n def download(self):\n self.downloader.download()\n\n def run(self):\n try:\n subprocess.check_call(\n (\n self.agent_binary()\n + self.daemonize_flag()\n + self.log_level()\n + self.log_file()\n + self.config_file()\n + self.socket_path()\n ),\n close_fds=True,\n )\n except Exception:\n # TODO detect failure of launch properly\n logger.exception(\"Error running Core Agent\")\n return False\n return True\n\n def agent_binary(self):\n return [self.core_agent_bin_path, \"start\"]\n\n def daemonize_flag(self):\n return [\"--daemonize\", \"true\"]\n\n def socket_path(self):\n # Old deprecated name \"socket_path\"\n socket_path = scout_config.value(\"socket_path\")\n if socket_path is None:\n socket_path = scout_config.value(\"core_agent_socket_path\")\n return [\"--socket\", socket_path]\n\n def log_level(self):\n # Old deprecated name \"log_level\"\n log_level = scout_config.value(\"log_level\")\n if log_level is None:\n log_level = scout_config.value(\"core_agent_log_level\")\n return [\"--log-level\", log_level]\n\n def log_file(self):\n # Old deprecated name \"log_file\"\n path = scout_config.value(\"log_file\")\n if path is None:\n path = scout_config.value(\"core_agent_log_file\")\n\n if path is not None:\n return [\"--log-file\", path]\n else:\n return []\n\n def config_file(self):\n # Old deprecated name \"config_file\"\n path = scout_config.value(\"config_file\")\n if path is None:\n path = scout_config.value(\"core_agent_config_file\")\n\n if path is not None:\n return [\"--config-file\", path]\n else:\n return []\n\n def verify(self):\n manifest = CoreAgentManifest(self.core_agent_dir + \"/manifest.json\")\n if not manifest.is_valid():\n logger.debug(\n \"Core Agent verification failed: CoreAgentManifest is not valid.\"\n )\n self.core_agent_bin_path = None\n self.core_agent_bin_version = None\n return False\n\n bin_path = self.core_agent_dir + \"/\" + manifest.bin_name\n if sha256_digest(bin_path) == manifest.sha256:\n self.core_agent_bin_path = bin_path\n self.core_agent_bin_version = manifest.bin_version\n return True\n else:\n logger.debug(\"Core Agent verification failed: SHA mismatch.\")\n self.core_agent_bin_path = None\n self.core_agent_bin_version = None\n return False\n\n\nclass CoreAgentDownloader(object):\n def __init__(self, download_destination, core_agent_full_name):\n self.stale_download_secs = 120\n self.destination = download_destination\n self.core_agent_full_name = core_agent_full_name\n self.package_location = self.destination + \"/{}.tgz\".format(\n self.core_agent_full_name\n )\n self.download_lock_path = self.destination + \"/download.lock\"\n self.download_lock_fd = None\n\n def download(self):\n self.create_core_agent_dir()\n self.obtain_download_lock()\n if self.download_lock_fd is not None:\n try:\n downloaded = self.download_package()\n if downloaded:\n self.untar()\n except (OSError, HTTPError):\n logger.exception(\"Exception raised while downloading Core Agent\")\n finally:\n self.release_download_lock()\n\n def create_core_agent_dir(self):\n try:\n os.makedirs(self.destination, scout_config.core_agent_permissions())\n except OSError:\n pass\n\n def obtain_download_lock(self):\n self.clean_stale_download_lock()\n try:\n self.download_lock_fd = os.open(\n self.download_lock_path,\n os.O_RDWR | os.O_CREAT | os.O_EXCL | os.O_NONBLOCK,\n )\n except OSError as exc:\n logger.debug(\n \"Could not obtain download lock on %s\",\n self.download_lock_path,\n exc_info=exc,\n )\n 
self.download_lock_fd = None\n\n def clean_stale_download_lock(self):\n try:\n delta = time.time() - os.stat(self.download_lock_path).st_ctime\n if delta > self.stale_download_secs:\n logger.debug(\"Clearing stale download lock file.\")\n os.unlink(self.download_lock_path)\n except OSError:\n pass\n\n def release_download_lock(self):\n if self.download_lock_fd is not None:\n os.unlink(self.download_lock_path)\n os.close(self.download_lock_fd)\n\n def download_package(self):\n full_url = self.full_url()\n logger.debug(\"Downloading: %s to %s\", full_url, self.package_location)\n http = urllib3_cert_pool_manager()\n response = http.request(\n \"GET\", full_url, preload_content=False, timeout=10.0, retries=3\n )\n try:\n if response.status != 200:\n return False\n with open(self.package_location, \"wb\") as fp:\n for chunk in response.stream():\n fp.write(chunk)\n finally:\n response.release_conn()\n return True\n\n def untar(self):\n t = tarfile.open(self.package_location, \"r\")\n t.extractall(self.destination)\n\n def full_url(self):\n return \"{root_url}/{core_agent_full_name}.tgz\".format(\n root_url=self.root_url(), core_agent_full_name=self.core_agent_full_name\n )\n\n def root_url(self):\n return scout_config.value(\"download_url\")\n\n\nclass CoreAgentManifest(object):\n def __init__(self, path):\n self.manifest_path = path\n self.bin_name = None\n self.bin_version = None\n self.sha256 = None\n self.valid = False\n try:\n self.parse()\n # noqa for this issue: https://github.com/PyCQA/flake8-bugbear/issues/110\n except (ValueError, TypeError, OSError, IOError) as exc: # noqa: B014\n logger.debug(\"Error parsing Core Agent Manifest\", exc_info=exc)\n\n def parse(self):\n logger.debug(\"Parsing Core Agent manifest path: %s\", self.manifest_path)\n with open(self.manifest_path) as manifest_file:\n self.raw = manifest_file.read()\n self.json = json.loads(self.raw)\n self.version = self.json[\"version\"]\n self.bin_version = self.json[\"core_agent_version\"]\n self.bin_name = self.json[\"core_agent_binary\"]\n self.sha256 = self.json[\"core_agent_binary_sha256\"]\n self.valid = True\n logger.debug(\"Core Agent manifest json: %s\", self.json)\n\n def is_valid(self):\n return self.valid\n\n\ndef sha256_digest(filename, block_size=65536):\n try:\n sha256 = hashlib.sha256()\n with open(filename, \"rb\") as f:\n for block in iter(lambda: f.read(block_size), b\"\"):\n sha256.update(block)\n return sha256.hexdigest()\n except OSError as exc:\n logger.debug(\"Error on digest\", exc_info=exc)\n return None\n", "path": "src/scout_apm/core/core_agent_manager.py"}]}
| 3,271 | 244 |
gh_patches_debug_32234
|
rasdani/github-patches
|
git_diff
|
modin-project__modin-2252
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[REFACTOR]: remove dead code in modin/pandas/index folder
<!--
General questions should be asked on the mailing list [email protected].
Before submitting an issue, please fill out the following form.
-->
### System information
- **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**:
- **Modin installed from (source or binary)**:
- **Modin version**:
- **Python version**:
- **Exact command to reproduce**:
<!--
You can obtain the Modin version with
python -c "import modin; print(modin.__version__)"
-->
### Describe the problem
<!-- Describe the problem clearly here. -->
### Source code / logs
<!-- Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached. Try to provide a reproducible test case that is the bare minimum necessary to generate the problem. -->
</issue>
<code>
[start of modin/pandas/index/partitioned_index.py]
1 # Licensed to Modin Development Team under one or more contributor license agreements.
2 # See the NOTICE file distributed with this work for additional information regarding
3 # copyright ownership. The Modin Development Team licenses this file to you under the
4 # Apache License, Version 2.0 (the "License"); you may not use this file except in
5 # compliance with the License. You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software distributed under
10 # the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
11 # ANY KIND, either express or implied. See the License for the specific language
12 # governing permissions and limitations under the License.
13
14
15 class PartitionedIndex(object):
16
17 _index_lengths_cache = None
18
19 def _get_partition_lengths(self):
20 if self._index_lengths_cache is None:
21 self._index_lengths_cache = [
22 obj.apply(len).get() for obj in self.index_partitions[:0]
23 ]
24 return self._index_lengths_cache
25
26 def _set_partition_lengths(self, new_value):
27 self._partition_length_cache = new_value
28
29 index_lengths = property(_get_partition_lengths, _set_partition_lengths)
30
31 def __getitem__(self, key):
32 cls = type(self)
33 return cls(self.index_partitions[key])
34
35
36 class RayPartitionedIndex(PartitionedIndex):
37 def __init__(self, index_partitions):
38 self.index_partitions = index_partitions
39
[end of modin/pandas/index/partitioned_index.py]
[start of modin/pandas/index/__init__.py]
1 # Licensed to Modin Development Team under one or more contributor license agreements.
2 # See the NOTICE file distributed with this work for additional information regarding
3 # copyright ownership. The Modin Development Team licenses this file to you under the
4 # Apache License, Version 2.0 (the "License"); you may not use this file except in
5 # compliance with the License. You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software distributed under
10 # the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
11 # ANY KIND, either express or implied. See the License for the specific language
12 # governing permissions and limitations under the License.
13
[end of modin/pandas/index/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/modin/pandas/index/__init__.py b/modin/pandas/index/__init__.py
deleted file mode 100644
--- a/modin/pandas/index/__init__.py
+++ /dev/null
@@ -1,12 +0,0 @@
-# Licensed to Modin Development Team under one or more contributor license agreements.
-# See the NOTICE file distributed with this work for additional information regarding
-# copyright ownership. The Modin Development Team licenses this file to you under the
-# Apache License, Version 2.0 (the "License"); you may not use this file except in
-# compliance with the License. You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software distributed under
-# the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
-# ANY KIND, either express or implied. See the License for the specific language
-# governing permissions and limitations under the License.
diff --git a/modin/pandas/index/partitioned_index.py b/modin/pandas/index/partitioned_index.py
deleted file mode 100644
--- a/modin/pandas/index/partitioned_index.py
+++ /dev/null
@@ -1,38 +0,0 @@
-# Licensed to Modin Development Team under one or more contributor license agreements.
-# See the NOTICE file distributed with this work for additional information regarding
-# copyright ownership. The Modin Development Team licenses this file to you under the
-# Apache License, Version 2.0 (the "License"); you may not use this file except in
-# compliance with the License. You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software distributed under
-# the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
-# ANY KIND, either express or implied. See the License for the specific language
-# governing permissions and limitations under the License.
-
-
-class PartitionedIndex(object):
-
- _index_lengths_cache = None
-
- def _get_partition_lengths(self):
- if self._index_lengths_cache is None:
- self._index_lengths_cache = [
- obj.apply(len).get() for obj in self.index_partitions[:0]
- ]
- return self._index_lengths_cache
-
- def _set_partition_lengths(self, new_value):
- self._partition_length_cache = new_value
-
- index_lengths = property(_get_partition_lengths, _set_partition_lengths)
-
- def __getitem__(self, key):
- cls = type(self)
- return cls(self.index_partitions[key])
-
-
-class RayPartitionedIndex(PartitionedIndex):
- def __init__(self, index_partitions):
- self.index_partitions = index_partitions
|
{"golden_diff": "diff --git a/modin/pandas/index/__init__.py b/modin/pandas/index/__init__.py\ndeleted file mode 100644\n--- a/modin/pandas/index/__init__.py\n+++ /dev/null\n@@ -1,12 +0,0 @@\n-# Licensed to Modin Development Team under one or more contributor license agreements.\n-# See the NOTICE file distributed with this work for additional information regarding\n-# copyright ownership. The Modin Development Team licenses this file to you under the\n-# Apache License, Version 2.0 (the \"License\"); you may not use this file except in\n-# compliance with the License. You may obtain a copy of the License at\n-#\n-# http://www.apache.org/licenses/LICENSE-2.0\n-#\n-# Unless required by applicable law or agreed to in writing, software distributed under\n-# the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF\n-# ANY KIND, either express or implied. See the License for the specific language\n-# governing permissions and limitations under the License.\ndiff --git a/modin/pandas/index/partitioned_index.py b/modin/pandas/index/partitioned_index.py\ndeleted file mode 100644\n--- a/modin/pandas/index/partitioned_index.py\n+++ /dev/null\n@@ -1,38 +0,0 @@\n-# Licensed to Modin Development Team under one or more contributor license agreements.\n-# See the NOTICE file distributed with this work for additional information regarding\n-# copyright ownership. The Modin Development Team licenses this file to you under the\n-# Apache License, Version 2.0 (the \"License\"); you may not use this file except in\n-# compliance with the License. You may obtain a copy of the License at\n-#\n-# http://www.apache.org/licenses/LICENSE-2.0\n-#\n-# Unless required by applicable law or agreed to in writing, software distributed under\n-# the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF\n-# ANY KIND, either express or implied. See the License for the specific language\n-# governing permissions and limitations under the License.\n-\n-\n-class PartitionedIndex(object):\n-\n- _index_lengths_cache = None\n-\n- def _get_partition_lengths(self):\n- if self._index_lengths_cache is None:\n- self._index_lengths_cache = [\n- obj.apply(len).get() for obj in self.index_partitions[:0]\n- ]\n- return self._index_lengths_cache\n-\n- def _set_partition_lengths(self, new_value):\n- self._partition_length_cache = new_value\n-\n- index_lengths = property(_get_partition_lengths, _set_partition_lengths)\n-\n- def __getitem__(self, key):\n- cls = type(self)\n- return cls(self.index_partitions[key])\n-\n-\n-class RayPartitionedIndex(PartitionedIndex):\n- def __init__(self, index_partitions):\n- self.index_partitions = index_partitions\n", "issue": "[REFACTOR]: remove dead code in modin/pandas/index folder\n<!--\r\nGeneral questions should be asked on the mailing list [email protected].\r\n\r\nBefore submitting an issue, please fill out the following form.\r\n-->\r\n\r\n### System information\r\n- **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**:\r\n- **Modin installed from (source or binary)**:\r\n- **Modin version**:\r\n- **Python version**:\r\n- **Exact command to reproduce**:\r\n\r\n<!--\r\nYou can obtain the Modin version with\r\n\r\npython -c \"import modin; print(modin.__version__)\"\r\n-->\r\n\r\n### Describe the problem\r\n<!-- Describe the problem clearly here. -->\r\n\r\n### Source code / logs\r\n<!-- Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. 
Large logs and files should be attached. Try to provide a reproducible test case that is the bare minimum necessary to generate the problem. -->\r\n\n", "before_files": [{"content": "# Licensed to Modin Development Team under one or more contributor license agreements.\n# See the NOTICE file distributed with this work for additional information regarding\n# copyright ownership. The Modin Development Team licenses this file to you under the\n# Apache License, Version 2.0 (the \"License\"); you may not use this file except in\n# compliance with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software distributed under\n# the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF\n# ANY KIND, either express or implied. See the License for the specific language\n# governing permissions and limitations under the License.\n\n\nclass PartitionedIndex(object):\n\n _index_lengths_cache = None\n\n def _get_partition_lengths(self):\n if self._index_lengths_cache is None:\n self._index_lengths_cache = [\n obj.apply(len).get() for obj in self.index_partitions[:0]\n ]\n return self._index_lengths_cache\n\n def _set_partition_lengths(self, new_value):\n self._partition_length_cache = new_value\n\n index_lengths = property(_get_partition_lengths, _set_partition_lengths)\n\n def __getitem__(self, key):\n cls = type(self)\n return cls(self.index_partitions[key])\n\n\nclass RayPartitionedIndex(PartitionedIndex):\n def __init__(self, index_partitions):\n self.index_partitions = index_partitions\n", "path": "modin/pandas/index/partitioned_index.py"}, {"content": "# Licensed to Modin Development Team under one or more contributor license agreements.\n# See the NOTICE file distributed with this work for additional information regarding\n# copyright ownership. The Modin Development Team licenses this file to you under the\n# Apache License, Version 2.0 (the \"License\"); you may not use this file except in\n# compliance with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software distributed under\n# the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF\n# ANY KIND, either express or implied. See the License for the specific language\n# governing permissions and limitations under the License.\n", "path": "modin/pandas/index/__init__.py"}]}
| 1,358 | 659 |
gh_patches_debug_2951
|
rasdani/github-patches
|
git_diff
|
e2nIEE__pandapower-1738
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
plotting.geo convert_gis_to_geodata leads to issue if run after convert_geodata_to_gis
```python
import pandapower.plotting.geo as geo
import pandapower.networks as pn
net = pn.mv_oberrhein()
geo.convert_geodata_to_gis(net)
geo.convert_gis_to_geodata(net)
```
results in `AttributeError: 'Series' object has no attribute 'geometry'`
</issue>
<code>
[start of pandapower/plotting/geo.py]
1 # -*- coding: utf-8 -*-
2
3 # Copyright (c) 2016-2022 by University of Kassel and Fraunhofer Institute for Energy Economics
4 # and Energy System Technology (IEE), Kassel. All rights reserved.
5
6 import sys
7 from numpy import array, setdiff1d
8
9 from pandapower.auxiliary import soft_dependency_error
10
11 try:
12 from shapely.geometry import Point, LineString
13 shapely_INSTALLED = True
14 except ImportError:
15 shapely_INSTALLED = False
16
17 try:
18 from geopandas import GeoDataFrame, GeoSeries
19 geopandas_INSTALLED = True
20 except ImportError:
21 geopandas_INSTALLED = False
22
23 try:
24 from pyproj import Proj, transform
25 pyproj_INSTALLED = True
26 except ImportError:
27 pyproj_INSTALLED = False
28
29
30 def _node_geometries_from_geodata(node_geo, epsg=31467):
31 """
32 Creates a geopandas geodataframe from a given dataframe of with node coordinates as x and y
33 values.
34
35 :param node_geo: The dataframe containing the node coordinates (x and y values)
36 :type node_geo: pandas.dataframe
37 :param epsg: The epsg projection of the node coordinates
38 :type epsg: int, default 31467 (= Gauss-Krüger Zone 3)
39 :return: node_geodata - a geodataframe containing the node_geo and Points in the geometry column
40 """
41 missing_packages = array(["shapely", "geopandas"])[~array([
42 shapely_INSTALLED, geopandas_INSTALLED])]
43 if len(missing_packages):
44 soft_dependency_error(str(sys._getframe().f_code.co_name)+"()", missing_packages)
45 geoms = [Point(x, y) for x, y in node_geo[["x", "y"]].values]
46 return GeoDataFrame(node_geo, crs=f"epsg:{epsg}", geometry=geoms, index=node_geo.index)
47
48
49 def _branch_geometries_from_geodata(branch_geo, epsg=31467):
50 missing_packages = array(["shapely", "geopandas"])[~array([
51 shapely_INSTALLED, geopandas_INSTALLED])]
52 if len(missing_packages):
53 soft_dependency_error(str(sys._getframe().f_code.co_name)+"()", missing_packages)
54 geoms = GeoSeries([LineString(x) for x in branch_geo.coords.values], index=branch_geo.index,
55 crs=f"epsg:{epsg}")
56 return GeoDataFrame(branch_geo, crs=f"epsg:{epsg}", geometry=geoms, index=branch_geo.index)
57
58
59 def _transform_node_geometry_to_geodata(node_geo):
60 """
61 Create x and y values from geodataframe
62
63 :param node_geo: The dataframe containing the node geometries (as shapely points)
64 :type node_geo: geopandas.GeoDataFrame
65 :return: bus_geo - The given geodataframe with x and y values
66 """
67 node_geo["x"] = [p.x for p in node_geo.geometry]
68 node_geo["y"] = [p.y for p in node_geo.geometry]
69 return node_geo
70
71
72 def _transform_branch_geometry_to_coords(branch_geo):
73 """
74 Create coords entries from geodataframe geometries
75
76 :param branch_geo: The dataframe containing the branch geometries (as shapely LineStrings)
77 :type branch_geo: geopandas.GeoDataFrame
78 :return: branch_geo - The given geodataframe with coords
79 """
80 branch_geo["coords"] = branch_geo["coords"].geometry.apply(lambda x: list(x.coords))
81 return branch_geo
82
83
84 def _convert_xy_epsg(x, y, epsg_in=4326, epsg_out=31467):
85 """
86 Converts the given x and y coordinates according to the defined epsg projections.
87
88 :param x: x-values of coordinates
89 :type x: iterable
90 :param y: y-values of coordinates
91 :type y: iterable
92 :param epsg_in: current epsg projection
93 :type epsg_in: int, default 4326 (= WGS84)
94 :param epsg_out: epsg projection to be transformed to
95 :type epsg_out: int, default 31467 (= Gauss-Krüger Zone 3)
96 :return: transformed_coords - x and y values in new coordinate system
97 """
98 if not pyproj_INSTALLED:
99 soft_dependency_error(str(sys._getframe().f_code.co_name)+"()", "pyproj")
100 in_proj = Proj(init='epsg:%i' % epsg_in)
101 out_proj = Proj(init='epsg:%i' % epsg_out)
102 return transform(in_proj, out_proj, x, y)
103
104
105 def convert_gis_to_geodata(net, node_geodata=True, branch_geodata=True):
106 """
107 Extracts information on bus and line geodata from the geometries of a geopandas geodataframe.
108
109 :param net: The net for which to convert the geodata
110 :type net: pandapowerNet
111 :param node_geodata: flag if to extract x and y values for bus geodata
112 :type node_geodata: bool, default True
113 :param branch_geodata: flag if to extract coordinates values for line geodata
114 :type branch_geodata: bool, default True
115 :return: No output.
116 """
117 if node_geodata:
118 _transform_node_geometry_to_geodata(net.bus_geodata)
119 if branch_geodata:
120 _transform_branch_geometry_to_coords(net.line_geodata)
121
122
123 def convert_geodata_to_gis(net, epsg=31467, node_geodata=True, branch_geodata=True):
124 """
125 Transforms the bus and line geodata of a net into a geopandaas geodataframe with the respective
126 geometries.
127
128 :param net: The net for which to convert the geodata
129 :type net: pandapowerNet
130 :param epsg: current epsg projection
131 :type epsg: int, default 4326 (= WGS84)
132 :param node_geodata: flag if to transform the bus geodata table
133 :type node_geodata: bool, default True
134 :param branch_geodata: flag if to transform the line geodata table
135 :type branch_geodata: bool, default True
136 :return: No output.
137 """
138 if node_geodata:
139 net["bus_geodata"] = _node_geometries_from_geodata(net["bus_geodata"], epsg)
140 if branch_geodata:
141 net["line_geodata"] = _branch_geometries_from_geodata(net["line_geodata"], epsg)
142 net["gis_epsg_code"] = epsg
143
144
145 def convert_epsg_bus_geodata(net, epsg_in=4326, epsg_out=31467):
146 """
147 Converts bus geodata in net from epsg_in to epsg_out
148
149 :param net: The pandapower network
150 :type net: pandapowerNet
151 :param epsg_in: current epsg projection
152 :type epsg_in: int, default 4326 (= WGS84)
153 :param epsg_out: epsg projection to be transformed to
154 :type epsg_out: int, default 31467 (= Gauss-Krüger Zone 3)
155 :return: net - the given pandapower network (no copy!)
156 """
157 net['bus_geodata'].loc[:, "x"], net['bus_geodata'].loc[:, "y"] = _convert_xy_epsg(
158 net['bus_geodata'].loc[:, "x"], net['bus_geodata'].loc[:, "y"], epsg_in, epsg_out)
159 return net
160
[end of pandapower/plotting/geo.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pandapower/plotting/geo.py b/pandapower/plotting/geo.py
--- a/pandapower/plotting/geo.py
+++ b/pandapower/plotting/geo.py
@@ -77,7 +77,7 @@
:type branch_geo: geopandas.GeoDataFrame
:return: branch_geo - The given geodataframe with coords
"""
- branch_geo["coords"] = branch_geo["coords"].geometry.apply(lambda x: list(x.coords))
+ branch_geo["coords"] = branch_geo.geometry.apply(lambda x: list(x.coords))
return branch_geo
|
{"golden_diff": "diff --git a/pandapower/plotting/geo.py b/pandapower/plotting/geo.py\n--- a/pandapower/plotting/geo.py\n+++ b/pandapower/plotting/geo.py\n@@ -77,7 +77,7 @@\n :type branch_geo: geopandas.GeoDataFrame\n :return: branch_geo - The given geodataframe with coords\n \"\"\"\n- branch_geo[\"coords\"] = branch_geo[\"coords\"].geometry.apply(lambda x: list(x.coords))\n+ branch_geo[\"coords\"] = branch_geo.geometry.apply(lambda x: list(x.coords))\n return branch_geo\n", "issue": "plotting.geo convert_gis_to_geodata leads to issue if run after convert_geodata_to_gis\n```python\r\nimport pandapower.plotting.geo as geo\r\nimport pandapower.networks as pn\r\n\r\nnet = pn.mv_oberrhein()\r\n\r\ngeo.convert_geodata_to_gis(net)\r\ngeo.convert_gis_to_geodata(net)\r\n```\r\n\r\nresults in `AttributeError: 'Series' object has no attribute 'geometry'`\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Copyright (c) 2016-2022 by University of Kassel and Fraunhofer Institute for Energy Economics\n# and Energy System Technology (IEE), Kassel. All rights reserved.\n\nimport sys\nfrom numpy import array, setdiff1d\n\nfrom pandapower.auxiliary import soft_dependency_error\n\ntry:\n from shapely.geometry import Point, LineString\n shapely_INSTALLED = True\nexcept ImportError:\n shapely_INSTALLED = False\n\ntry:\n from geopandas import GeoDataFrame, GeoSeries\n geopandas_INSTALLED = True\nexcept ImportError:\n geopandas_INSTALLED = False\n\ntry:\n from pyproj import Proj, transform\n pyproj_INSTALLED = True\nexcept ImportError:\n pyproj_INSTALLED = False\n\n\ndef _node_geometries_from_geodata(node_geo, epsg=31467):\n \"\"\"\n Creates a geopandas geodataframe from a given dataframe of with node coordinates as x and y\n values.\n\n :param node_geo: The dataframe containing the node coordinates (x and y values)\n :type node_geo: pandas.dataframe\n :param epsg: The epsg projection of the node coordinates\n :type epsg: int, default 31467 (= Gauss-Kr\u00fcger Zone 3)\n :return: node_geodata - a geodataframe containing the node_geo and Points in the geometry column\n \"\"\"\n missing_packages = array([\"shapely\", \"geopandas\"])[~array([\n shapely_INSTALLED, geopandas_INSTALLED])]\n if len(missing_packages):\n soft_dependency_error(str(sys._getframe().f_code.co_name)+\"()\", missing_packages)\n geoms = [Point(x, y) for x, y in node_geo[[\"x\", \"y\"]].values]\n return GeoDataFrame(node_geo, crs=f\"epsg:{epsg}\", geometry=geoms, index=node_geo.index)\n\n\ndef _branch_geometries_from_geodata(branch_geo, epsg=31467):\n missing_packages = array([\"shapely\", \"geopandas\"])[~array([\n shapely_INSTALLED, geopandas_INSTALLED])]\n if len(missing_packages):\n soft_dependency_error(str(sys._getframe().f_code.co_name)+\"()\", missing_packages)\n geoms = GeoSeries([LineString(x) for x in branch_geo.coords.values], index=branch_geo.index,\n crs=f\"epsg:{epsg}\")\n return GeoDataFrame(branch_geo, crs=f\"epsg:{epsg}\", geometry=geoms, index=branch_geo.index)\n\n\ndef _transform_node_geometry_to_geodata(node_geo):\n \"\"\"\n Create x and y values from geodataframe\n\n :param node_geo: The dataframe containing the node geometries (as shapely points)\n :type node_geo: geopandas.GeoDataFrame\n :return: bus_geo - The given geodataframe with x and y values\n \"\"\"\n node_geo[\"x\"] = [p.x for p in node_geo.geometry]\n node_geo[\"y\"] = [p.y for p in node_geo.geometry]\n return node_geo\n\n\ndef _transform_branch_geometry_to_coords(branch_geo):\n \"\"\"\n Create coords entries from geodataframe geometries\n\n :param 
branch_geo: The dataframe containing the branch geometries (as shapely LineStrings)\n :type branch_geo: geopandas.GeoDataFrame\n :return: branch_geo - The given geodataframe with coords\n \"\"\"\n branch_geo[\"coords\"] = branch_geo[\"coords\"].geometry.apply(lambda x: list(x.coords))\n return branch_geo\n\n\ndef _convert_xy_epsg(x, y, epsg_in=4326, epsg_out=31467):\n \"\"\"\n Converts the given x and y coordinates according to the defined epsg projections.\n\n :param x: x-values of coordinates\n :type x: iterable\n :param y: y-values of coordinates\n :type y: iterable\n :param epsg_in: current epsg projection\n :type epsg_in: int, default 4326 (= WGS84)\n :param epsg_out: epsg projection to be transformed to\n :type epsg_out: int, default 31467 (= Gauss-Kr\u00fcger Zone 3)\n :return: transformed_coords - x and y values in new coordinate system\n \"\"\"\n if not pyproj_INSTALLED:\n soft_dependency_error(str(sys._getframe().f_code.co_name)+\"()\", \"pyproj\")\n in_proj = Proj(init='epsg:%i' % epsg_in)\n out_proj = Proj(init='epsg:%i' % epsg_out)\n return transform(in_proj, out_proj, x, y)\n\n\ndef convert_gis_to_geodata(net, node_geodata=True, branch_geodata=True):\n \"\"\"\n Extracts information on bus and line geodata from the geometries of a geopandas geodataframe.\n\n :param net: The net for which to convert the geodata\n :type net: pandapowerNet\n :param node_geodata: flag if to extract x and y values for bus geodata\n :type node_geodata: bool, default True\n :param branch_geodata: flag if to extract coordinates values for line geodata\n :type branch_geodata: bool, default True\n :return: No output.\n \"\"\"\n if node_geodata:\n _transform_node_geometry_to_geodata(net.bus_geodata)\n if branch_geodata:\n _transform_branch_geometry_to_coords(net.line_geodata)\n\n\ndef convert_geodata_to_gis(net, epsg=31467, node_geodata=True, branch_geodata=True):\n \"\"\"\n Transforms the bus and line geodata of a net into a geopandaas geodataframe with the respective\n geometries.\n\n :param net: The net for which to convert the geodata\n :type net: pandapowerNet\n :param epsg: current epsg projection\n :type epsg: int, default 4326 (= WGS84)\n :param node_geodata: flag if to transform the bus geodata table\n :type node_geodata: bool, default True\n :param branch_geodata: flag if to transform the line geodata table\n :type branch_geodata: bool, default True\n :return: No output.\n \"\"\"\n if node_geodata:\n net[\"bus_geodata\"] = _node_geometries_from_geodata(net[\"bus_geodata\"], epsg)\n if branch_geodata:\n net[\"line_geodata\"] = _branch_geometries_from_geodata(net[\"line_geodata\"], epsg)\n net[\"gis_epsg_code\"] = epsg\n\n\ndef convert_epsg_bus_geodata(net, epsg_in=4326, epsg_out=31467):\n \"\"\"\n Converts bus geodata in net from epsg_in to epsg_out\n\n :param net: The pandapower network\n :type net: pandapowerNet\n :param epsg_in: current epsg projection\n :type epsg_in: int, default 4326 (= WGS84)\n :param epsg_out: epsg projection to be transformed to\n :type epsg_out: int, default 31467 (= Gauss-Kr\u00fcger Zone 3)\n :return: net - the given pandapower network (no copy!)\n \"\"\"\n net['bus_geodata'].loc[:, \"x\"], net['bus_geodata'].loc[:, \"y\"] = _convert_xy_epsg(\n net['bus_geodata'].loc[:, \"x\"], net['bus_geodata'].loc[:, \"y\"], epsg_in, epsg_out)\n return net\n", "path": "pandapower/plotting/geo.py"}]}
| 2,728 | 140 |
gh_patches_debug_9364
|
rasdani/github-patches
|
git_diff
|
cloud-custodian__cloud-custodian-2258
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Azure VM - We are not getting power state
The VM data we are getting back does not tell you if the VM is running or not.
I think perhaps you have to tell the `list_all` api what you want - we want `instanceview`
https://docs.microsoft.com/en-us/rest/api/compute/virtualmachines/instanceview
Not sure how this happens via SDK.
</issue>
<code>
[start of tools/c7n_azure/c7n_azure/resources/vm.py]
1 # Copyright 2018 Capital One Services, LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from c7n_azure.query import QueryResourceManager
16 from c7n_azure.provider import resources
17 from c7n.filters.core import ValueFilter, type_schema
18
19 @resources.register('vm')
20 class VirtualMachine(QueryResourceManager):
21
22 class resource_type(object):
23 service = 'azure.mgmt.compute'
24 client = 'ComputeManagementClient'
25 enum_spec = ('virtual_machines', 'list_all')
26 id = 'id'
27 name = 'name'
28 default_report_fields = (
29 'name',
30 'location',
31 'resourceGroup',
32 'properties.hardwareProfile.vmSize',
33 )
34
[end of tools/c7n_azure/c7n_azure/resources/vm.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/tools/c7n_azure/c7n_azure/resources/vm.py b/tools/c7n_azure/c7n_azure/resources/vm.py
--- a/tools/c7n_azure/c7n_azure/resources/vm.py
+++ b/tools/c7n_azure/c7n_azure/resources/vm.py
@@ -31,3 +31,15 @@
'resourceGroup',
'properties.hardwareProfile.vmSize',
)
+
[email protected]_registry.register('instance-view')
+class InstanceViewFilter(ValueFilter):
+ schema = type_schema('instance-view', rinherit=ValueFilter.schema)
+
+ def __call__(self, i):
+ if 'instanceView' not in i:
+ client = self.manager.get_client()
+ instance = client.virtual_machines.get(i['resourceGroup'], i['name'], expand='instanceview').instance_view
+ i['instanceView'] = instance.serialize()
+
+ return super(InstanceViewFilter, self).__call__(i['instanceView'])
|
{"golden_diff": "diff --git a/tools/c7n_azure/c7n_azure/resources/vm.py b/tools/c7n_azure/c7n_azure/resources/vm.py\n--- a/tools/c7n_azure/c7n_azure/resources/vm.py\n+++ b/tools/c7n_azure/c7n_azure/resources/vm.py\n@@ -31,3 +31,15 @@\n 'resourceGroup',\n 'properties.hardwareProfile.vmSize',\n )\n+\[email protected]_registry.register('instance-view')\n+class InstanceViewFilter(ValueFilter):\n+ schema = type_schema('instance-view', rinherit=ValueFilter.schema)\n+\n+ def __call__(self, i):\n+ if 'instanceView' not in i:\n+ client = self.manager.get_client()\n+ instance = client.virtual_machines.get(i['resourceGroup'], i['name'], expand='instanceview').instance_view\n+ i['instanceView'] = instance.serialize()\n+\n+ return super(InstanceViewFilter, self).__call__(i['instanceView'])\n", "issue": "Azure VM - We are not getting power state\nThe VM data we are getting back does not tell you if the VM is running or not.\r\n\r\nI think perhaps you have to tell the `list_all` api what you want - we want `instanceview`\r\n\r\nhttps://docs.microsoft.com/en-us/rest/api/compute/virtualmachines/instanceview\r\n\r\nNot sure how this happens via SDK.\n", "before_files": [{"content": "# Copyright 2018 Capital One Services, LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom c7n_azure.query import QueryResourceManager\nfrom c7n_azure.provider import resources\nfrom c7n.filters.core import ValueFilter, type_schema\n\[email protected]('vm')\nclass VirtualMachine(QueryResourceManager):\n\n class resource_type(object):\n service = 'azure.mgmt.compute'\n client = 'ComputeManagementClient'\n enum_spec = ('virtual_machines', 'list_all')\n id = 'id'\n name = 'name'\n default_report_fields = (\n 'name',\n 'location',\n 'resourceGroup',\n 'properties.hardwareProfile.vmSize',\n )\n", "path": "tools/c7n_azure/c7n_azure/resources/vm.py"}]}
| 952 | 223 |
gh_patches_debug_22534
|
rasdani/github-patches
|
git_diff
|
ipython__ipython-12056
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
IPython/utils/py3compat.py unused `with_metaclass` function.
The function `with_metaclass` seems to be unused;
we should check whether it is used and, if not, remove the function.
</issue>
<code>
[start of IPython/utils/py3compat.py]
1 # coding: utf-8
2 """Compatibility tricks for Python 3. Mainly to do with unicode.
3
4 This file is deprecated and will be removed in a future version.
5 """
6 import functools
7 import os
8 import sys
9 import re
10 import shutil
11 import types
12 import platform
13
14 from .encoding import DEFAULT_ENCODING
15
16
17 def decode(s, encoding=None):
18 encoding = encoding or DEFAULT_ENCODING
19 return s.decode(encoding, "replace")
20
21 def encode(u, encoding=None):
22 encoding = encoding or DEFAULT_ENCODING
23 return u.encode(encoding, "replace")
24
25
26 def cast_unicode(s, encoding=None):
27 if isinstance(s, bytes):
28 return decode(s, encoding)
29 return s
30
31 def cast_bytes(s, encoding=None):
32 if not isinstance(s, bytes):
33 return encode(s, encoding)
34 return s
35
36 def buffer_to_bytes(buf):
37 """Cast a buffer object to bytes"""
38 if not isinstance(buf, bytes):
39 buf = bytes(buf)
40 return buf
41
42 def _modify_str_or_docstring(str_change_func):
43 @functools.wraps(str_change_func)
44 def wrapper(func_or_str):
45 if isinstance(func_or_str, (str,)):
46 func = None
47 doc = func_or_str
48 else:
49 func = func_or_str
50 doc = func.__doc__
51
52 # PYTHONOPTIMIZE=2 strips docstrings, so they can disappear unexpectedly
53 if doc is not None:
54 doc = str_change_func(doc)
55
56 if func:
57 func.__doc__ = doc
58 return func
59 return doc
60 return wrapper
61
62 def safe_unicode(e):
63 """unicode(e) with various fallbacks. Used for exceptions, which may not be
64 safe to call unicode() on.
65 """
66 try:
67 return str(e)
68 except UnicodeError:
69 pass
70
71 try:
72 return repr(e)
73 except UnicodeError:
74 pass
75
76 return u'Unrecoverably corrupt evalue'
77
78 # shutil.which from Python 3.4
79 def _shutil_which(cmd, mode=os.F_OK | os.X_OK, path=None):
80 """Given a command, mode, and a PATH string, return the path which
81 conforms to the given mode on the PATH, or None if there is no such
82 file.
83
84 `mode` defaults to os.F_OK | os.X_OK. `path` defaults to the result
85 of os.environ.get("PATH"), or can be overridden with a custom search
86 path.
87
88 This is a backport of shutil.which from Python 3.4
89 """
90 # Check that a given file can be accessed with the correct mode.
91 # Additionally check that `file` is not a directory, as on Windows
92 # directories pass the os.access check.
93 def _access_check(fn, mode):
94 return (os.path.exists(fn) and os.access(fn, mode)
95 and not os.path.isdir(fn))
96
97 # If we're given a path with a directory part, look it up directly rather
98 # than referring to PATH directories. This includes checking relative to the
99 # current directory, e.g. ./script
100 if os.path.dirname(cmd):
101 if _access_check(cmd, mode):
102 return cmd
103 return None
104
105 if path is None:
106 path = os.environ.get("PATH", os.defpath)
107 if not path:
108 return None
109 path = path.split(os.pathsep)
110
111 if sys.platform == "win32":
112 # The current directory takes precedence on Windows.
113 if not os.curdir in path:
114 path.insert(0, os.curdir)
115
116 # PATHEXT is necessary to check on Windows.
117 pathext = os.environ.get("PATHEXT", "").split(os.pathsep)
118 # See if the given file matches any of the expected path extensions.
119 # This will allow us to short circuit when given "python.exe".
120 # If it does match, only test that one, otherwise we have to try
121 # others.
122 if any(cmd.lower().endswith(ext.lower()) for ext in pathext):
123 files = [cmd]
124 else:
125 files = [cmd + ext for ext in pathext]
126 else:
127 # On other platforms you don't have things like PATHEXT to tell you
128 # what file suffixes are executable, so just pass on cmd as-is.
129 files = [cmd]
130
131 seen = set()
132 for dir in path:
133 normdir = os.path.normcase(dir)
134 if not normdir in seen:
135 seen.add(normdir)
136 for thefile in files:
137 name = os.path.join(dir, thefile)
138 if _access_check(name, mode):
139 return name
140 return None
141
142 PY3 = True
143
144 # keep reference to builtin_mod because the kernel overrides that value
145 # to forward requests to a frontend.
146 def input(prompt=''):
147 return builtin_mod.input(prompt)
148
149 builtin_mod_name = "builtins"
150 import builtins as builtin_mod
151
152
153 which = shutil.which
154
155 def isidentifier(s, dotted=False):
156 if dotted:
157 return all(isidentifier(a) for a in s.split("."))
158 return s.isidentifier()
159
160 getcwd = os.getcwd
161
162 MethodType = types.MethodType
163
164 def execfile(fname, glob, loc=None, compiler=None):
165 loc = loc if (loc is not None) else glob
166 with open(fname, 'rb') as f:
167 compiler = compiler or compile
168 exec(compiler(f.read(), fname, 'exec'), glob, loc)
169
170 # Refactor print statements in doctests.
171 _print_statement_re = re.compile(r"\bprint (?P<expr>.*)$", re.MULTILINE)
172 def _print_statement_sub(match):
173 expr = match.groups('expr')
174 return "print(%s)" % expr
175
176 # Abstract u'abc' syntax:
177 @_modify_str_or_docstring
178 def u_format(s):
179 """"{u}'abc'" --> "'abc'" (Python 3)
180
181 Accepts a string or a function, so it can be used as a decorator."""
182 return s.format(u='')
183
184 def get_closure(f):
185 """Get a function's closure attribute"""
186 return f.__closure__
187
188
189 PY2 = not PY3
190 PYPY = platform.python_implementation() == "PyPy"
191
192
193
194
195 # Parts below taken from six:
196 # Copyright (c) 2010-2013 Benjamin Peterson
197 #
198 # Permission is hereby granted, free of charge, to any person obtaining a copy
199 # of this software and associated documentation files (the "Software"), to deal
200 # in the Software without restriction, including without limitation the rights
201 # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
202 # copies of the Software, and to permit persons to whom the Software is
203 # furnished to do so, subject to the following conditions:
204 #
205 # The above copyright notice and this permission notice shall be included in all
206 # copies or substantial portions of the Software.
207 #
208 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
209 # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
210 # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
211 # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
212 # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
213 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
214 # SOFTWARE.
215
216 def with_metaclass(meta, *bases):
217 """Create a base class with a metaclass."""
218 return meta("_NewBase", bases, {})
219
[end of IPython/utils/py3compat.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/IPython/utils/py3compat.py b/IPython/utils/py3compat.py
--- a/IPython/utils/py3compat.py
+++ b/IPython/utils/py3compat.py
@@ -189,30 +189,3 @@
PY2 = not PY3
PYPY = platform.python_implementation() == "PyPy"
-
-
-
-# Parts below taken from six:
-# Copyright (c) 2010-2013 Benjamin Peterson
-#
-# Permission is hereby granted, free of charge, to any person obtaining a copy
-# of this software and associated documentation files (the "Software"), to deal
-# in the Software without restriction, including without limitation the rights
-# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
-# copies of the Software, and to permit persons to whom the Software is
-# furnished to do so, subject to the following conditions:
-#
-# The above copyright notice and this permission notice shall be included in all
-# copies or substantial portions of the Software.
-#
-# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
-# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
-# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
-# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
-# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
-# SOFTWARE.
-
-def with_metaclass(meta, *bases):
- """Create a base class with a metaclass."""
- return meta("_NewBase", bases, {})
|
{"golden_diff": "diff --git a/IPython/utils/py3compat.py b/IPython/utils/py3compat.py\n--- a/IPython/utils/py3compat.py\n+++ b/IPython/utils/py3compat.py\n@@ -189,30 +189,3 @@\n PY2 = not PY3\n PYPY = platform.python_implementation() == \"PyPy\"\n \n-\n-\n-\n-# Parts below taken from six:\n-# Copyright (c) 2010-2013 Benjamin Peterson\n-#\n-# Permission is hereby granted, free of charge, to any person obtaining a copy\n-# of this software and associated documentation files (the \"Software\"), to deal\n-# in the Software without restriction, including without limitation the rights\n-# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n-# copies of the Software, and to permit persons to whom the Software is\n-# furnished to do so, subject to the following conditions:\n-#\n-# The above copyright notice and this permission notice shall be included in all\n-# copies or substantial portions of the Software.\n-#\n-# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n-# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n-# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n-# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n-# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n-# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\n-# SOFTWARE.\n-\n-def with_metaclass(meta, *bases):\n- \"\"\"Create a base class with a metaclass.\"\"\"\n- return meta(\"_NewBase\", bases, {})\n", "issue": "IPython/utils/py3compat.py unused `with_metaclass` function. \nThe function `with_metaclass` seem to be unused; \r\n\r\nwe should : check if it is used, if not remove the function .\n", "before_files": [{"content": "# coding: utf-8\n\"\"\"Compatibility tricks for Python 3. Mainly to do with unicode.\n\nThis file is deprecated and will be removed in a future version.\n\"\"\"\nimport functools\nimport os\nimport sys\nimport re\nimport shutil\nimport types\nimport platform\n\nfrom .encoding import DEFAULT_ENCODING\n\n\ndef decode(s, encoding=None):\n encoding = encoding or DEFAULT_ENCODING\n return s.decode(encoding, \"replace\")\n\ndef encode(u, encoding=None):\n encoding = encoding or DEFAULT_ENCODING\n return u.encode(encoding, \"replace\")\n\n\ndef cast_unicode(s, encoding=None):\n if isinstance(s, bytes):\n return decode(s, encoding)\n return s\n\ndef cast_bytes(s, encoding=None):\n if not isinstance(s, bytes):\n return encode(s, encoding)\n return s\n\ndef buffer_to_bytes(buf):\n \"\"\"Cast a buffer object to bytes\"\"\"\n if not isinstance(buf, bytes):\n buf = bytes(buf)\n return buf\n\ndef _modify_str_or_docstring(str_change_func):\n @functools.wraps(str_change_func)\n def wrapper(func_or_str):\n if isinstance(func_or_str, (str,)):\n func = None\n doc = func_or_str\n else:\n func = func_or_str\n doc = func.__doc__\n\n # PYTHONOPTIMIZE=2 strips docstrings, so they can disappear unexpectedly\n if doc is not None:\n doc = str_change_func(doc)\n\n if func:\n func.__doc__ = doc\n return func\n return doc\n return wrapper\n\ndef safe_unicode(e):\n \"\"\"unicode(e) with various fallbacks. 
Used for exceptions, which may not be\n safe to call unicode() on.\n \"\"\"\n try:\n return str(e)\n except UnicodeError:\n pass\n\n try:\n return repr(e)\n except UnicodeError:\n pass\n\n return u'Unrecoverably corrupt evalue'\n\n# shutil.which from Python 3.4\ndef _shutil_which(cmd, mode=os.F_OK | os.X_OK, path=None):\n \"\"\"Given a command, mode, and a PATH string, return the path which\n conforms to the given mode on the PATH, or None if there is no such\n file.\n\n `mode` defaults to os.F_OK | os.X_OK. `path` defaults to the result\n of os.environ.get(\"PATH\"), or can be overridden with a custom search\n path.\n\n This is a backport of shutil.which from Python 3.4\n \"\"\"\n # Check that a given file can be accessed with the correct mode.\n # Additionally check that `file` is not a directory, as on Windows\n # directories pass the os.access check.\n def _access_check(fn, mode):\n return (os.path.exists(fn) and os.access(fn, mode)\n and not os.path.isdir(fn))\n\n # If we're given a path with a directory part, look it up directly rather\n # than referring to PATH directories. This includes checking relative to the\n # current directory, e.g. ./script\n if os.path.dirname(cmd):\n if _access_check(cmd, mode):\n return cmd\n return None\n\n if path is None:\n path = os.environ.get(\"PATH\", os.defpath)\n if not path:\n return None\n path = path.split(os.pathsep)\n\n if sys.platform == \"win32\":\n # The current directory takes precedence on Windows.\n if not os.curdir in path:\n path.insert(0, os.curdir)\n\n # PATHEXT is necessary to check on Windows.\n pathext = os.environ.get(\"PATHEXT\", \"\").split(os.pathsep)\n # See if the given file matches any of the expected path extensions.\n # This will allow us to short circuit when given \"python.exe\".\n # If it does match, only test that one, otherwise we have to try\n # others.\n if any(cmd.lower().endswith(ext.lower()) for ext in pathext):\n files = [cmd]\n else:\n files = [cmd + ext for ext in pathext]\n else:\n # On other platforms you don't have things like PATHEXT to tell you\n # what file suffixes are executable, so just pass on cmd as-is.\n files = [cmd]\n\n seen = set()\n for dir in path:\n normdir = os.path.normcase(dir)\n if not normdir in seen:\n seen.add(normdir)\n for thefile in files:\n name = os.path.join(dir, thefile)\n if _access_check(name, mode):\n return name\n return None\n\nPY3 = True\n\n# keep reference to builtin_mod because the kernel overrides that value\n# to forward requests to a frontend.\ndef input(prompt=''):\n return builtin_mod.input(prompt)\n\nbuiltin_mod_name = \"builtins\"\nimport builtins as builtin_mod\n\n\nwhich = shutil.which\n\ndef isidentifier(s, dotted=False):\n if dotted:\n return all(isidentifier(a) for a in s.split(\".\"))\n return s.isidentifier()\n\ngetcwd = os.getcwd\n\nMethodType = types.MethodType\n\ndef execfile(fname, glob, loc=None, compiler=None):\n loc = loc if (loc is not None) else glob\n with open(fname, 'rb') as f:\n compiler = compiler or compile\n exec(compiler(f.read(), fname, 'exec'), glob, loc)\n\n# Refactor print statements in doctests.\n_print_statement_re = re.compile(r\"\\bprint (?P<expr>.*)$\", re.MULTILINE)\ndef _print_statement_sub(match):\n expr = match.groups('expr')\n return \"print(%s)\" % expr\n\n# Abstract u'abc' syntax:\n@_modify_str_or_docstring\ndef u_format(s):\n \"\"\"\"{u}'abc'\" --> \"'abc'\" (Python 3)\n\n Accepts a string or a function, so it can be used as a decorator.\"\"\"\n return s.format(u='')\n\ndef get_closure(f):\n \"\"\"Get a function's 
closure attribute\"\"\"\n return f.__closure__\n\n\nPY2 = not PY3\nPYPY = platform.python_implementation() == \"PyPy\"\n\n\n\n\n# Parts below taken from six:\n# Copyright (c) 2010-2013 Benjamin Peterson\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in all\n# copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\n# SOFTWARE.\n\ndef with_metaclass(meta, *bases):\n \"\"\"Create a base class with a metaclass.\"\"\"\n return meta(\"_NewBase\", bases, {})\n", "path": "IPython/utils/py3compat.py"}]}
| 2,755 | 378 |
gh_patches_debug_50933
|
rasdani/github-patches
|
git_diff
|
apache__airflow-15117
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Remove 'user_id', 'role_id' from User and Role in OpenAPI schema
It would be good to remove the 'id' of both the User and Role schemas from what is dumped in the REST API endpoints. The IDs of the User and Role tables are sensitive data that would be fine to hide from the endpoints.
</issue>
<code>
[start of airflow/api_connexion/schemas/user_schema.py]
1 # Licensed to the Apache Software Foundation (ASF) under one
2 # or more contributor license agreements. See the NOTICE file
3 # distributed with this work for additional information
4 # regarding copyright ownership. The ASF licenses this file
5 # to you under the Apache License, Version 2.0 (the
6 # "License"); you may not use this file except in compliance
7 # with the License. You may obtain a copy of the License at
8 #
9 # http://www.apache.org/licenses/LICENSE-2.0
10 #
11 # Unless required by applicable law or agreed to in writing,
12 # software distributed under the License is distributed on an
13 # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
14 # KIND, either express or implied. See the License for the
15 # specific language governing permissions and limitations
16 # under the License.
17 from typing import List, NamedTuple
18
19 from flask_appbuilder.security.sqla.models import User
20 from marshmallow import Schema, fields
21 from marshmallow_sqlalchemy import SQLAlchemySchema, auto_field
22
23 from airflow.api_connexion.parameters import validate_istimezone
24 from airflow.api_connexion.schemas.role_and_permission_schema import RoleSchema
25
26
27 class UserCollectionItemSchema(SQLAlchemySchema):
28 """user collection item schema"""
29
30 class Meta:
31 """Meta"""
32
33 model = User
34 dateformat = "iso"
35
36 user_id = auto_field('id', dump_only=True)
37 first_name = auto_field()
38 last_name = auto_field()
39 username = auto_field()
40 active = auto_field(dump_only=True)
41 email = auto_field()
42 last_login = auto_field(dump_only=True)
43 login_count = auto_field(dump_only=True)
44 fail_login_count = auto_field(dump_only=True)
45 roles = fields.List(fields.Nested(RoleSchema, only=('name',)))
46 created_on = auto_field(validate=validate_istimezone, dump_only=True)
47 changed_on = auto_field(validate=validate_istimezone, dump_only=True)
48
49
50 class UserSchema(UserCollectionItemSchema):
51 """User schema"""
52
53 password = auto_field(load_only=True)
54
55
56 class UserCollection(NamedTuple):
57 """User collection"""
58
59 users: List[User]
60 total_entries: int
61
62
63 class UserCollectionSchema(Schema):
64 """User collection schema"""
65
66 users = fields.List(fields.Nested(UserCollectionItemSchema))
67 total_entries = fields.Int()
68
69
70 user_collection_item_schema = UserCollectionItemSchema()
71 user_schema = UserSchema()
72 user_collection_schema = UserCollectionSchema()
73
[end of airflow/api_connexion/schemas/user_schema.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/airflow/api_connexion/schemas/user_schema.py b/airflow/api_connexion/schemas/user_schema.py
--- a/airflow/api_connexion/schemas/user_schema.py
+++ b/airflow/api_connexion/schemas/user_schema.py
@@ -33,7 +33,6 @@
model = User
dateformat = "iso"
- user_id = auto_field('id', dump_only=True)
first_name = auto_field()
last_name = auto_field()
username = auto_field()
|
{"golden_diff": "diff --git a/airflow/api_connexion/schemas/user_schema.py b/airflow/api_connexion/schemas/user_schema.py\n--- a/airflow/api_connexion/schemas/user_schema.py\n+++ b/airflow/api_connexion/schemas/user_schema.py\n@@ -33,7 +33,6 @@\n model = User\n dateformat = \"iso\"\n \n- user_id = auto_field('id', dump_only=True)\n first_name = auto_field()\n last_name = auto_field()\n username = auto_field()\n", "issue": "Remove 'user_id', 'role_id' from User and Role in OpenAPI schema \nWould be good to remove the 'id' of both User and Role schemas from what is dumped in REST API endpoints. ID of User and Role table are sensitive data that would be fine to hide from the endpoints\r\n\n", "before_files": [{"content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements. See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership. The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied. See the License for the\n# specific language governing permissions and limitations\n# under the License.\nfrom typing import List, NamedTuple\n\nfrom flask_appbuilder.security.sqla.models import User\nfrom marshmallow import Schema, fields\nfrom marshmallow_sqlalchemy import SQLAlchemySchema, auto_field\n\nfrom airflow.api_connexion.parameters import validate_istimezone\nfrom airflow.api_connexion.schemas.role_and_permission_schema import RoleSchema\n\n\nclass UserCollectionItemSchema(SQLAlchemySchema):\n \"\"\"user collection item schema\"\"\"\n\n class Meta:\n \"\"\"Meta\"\"\"\n\n model = User\n dateformat = \"iso\"\n\n user_id = auto_field('id', dump_only=True)\n first_name = auto_field()\n last_name = auto_field()\n username = auto_field()\n active = auto_field(dump_only=True)\n email = auto_field()\n last_login = auto_field(dump_only=True)\n login_count = auto_field(dump_only=True)\n fail_login_count = auto_field(dump_only=True)\n roles = fields.List(fields.Nested(RoleSchema, only=('name',)))\n created_on = auto_field(validate=validate_istimezone, dump_only=True)\n changed_on = auto_field(validate=validate_istimezone, dump_only=True)\n\n\nclass UserSchema(UserCollectionItemSchema):\n \"\"\"User schema\"\"\"\n\n password = auto_field(load_only=True)\n\n\nclass UserCollection(NamedTuple):\n \"\"\"User collection\"\"\"\n\n users: List[User]\n total_entries: int\n\n\nclass UserCollectionSchema(Schema):\n \"\"\"User collection schema\"\"\"\n\n users = fields.List(fields.Nested(UserCollectionItemSchema))\n total_entries = fields.Int()\n\n\nuser_collection_item_schema = UserCollectionItemSchema()\nuser_schema = UserSchema()\nuser_collection_schema = UserCollectionSchema()\n", "path": "airflow/api_connexion/schemas/user_schema.py"}]}
| 1,276 | 115 |
gh_patches_debug_24467
|
rasdani/github-patches
|
git_diff
|
searx__searx-2102
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Wikipedia crash on specific search query
Instance: searx.be
Browser: Chrome Version 84.0.4147.89
Hello,
Not sure whether this is a bug, a problem with the instance itself, or my browser, but when I search for a certain query, e.g. `port dover`, I get an error message:
```
Engines cannot retrieve results:
wikipedia (unexpected crash 'NoneType' object has no attribute 'replace')
```

This error doesn't happen if I change the language to Deutsch, Espanol, or another language. It seems like it only happens with the English language.


This error doesn't happen with other search queries.

I also tested this on other searx instances and the results are mixed.
No error:
https://searx.ninja
https://search.snopyta.org/
Both are still on 0.16.0
Same error:
https://searx.fmac.xyz
https://searx.xyz/
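
For reference, judging from `searx/engines/wikipedia.py` below, the crash appears to happen when `extract_first_paragraph()` returns `None` and the subsequent `summary.replace('() ', '')` call fails. A minimal standalone sketch of that suspected failure mode (an assumption on my part, not searx code):

```python
# Standalone sketch of the suspected failure mode; the stub below is hypothetical.
def extract_first_paragraph(content, title, image):
    # simplified stand-in: returns None when no paragraph passes the heuristics
    return None

summary = extract_first_paragraph("short extract", "Port Dover", None)
try:
    summary.replace('() ', '')
except AttributeError as exc:
    print(exc)  # 'NoneType' object has no attribute 'replace'
```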
</issue>
<code>
[start of searx/engines/wikipedia.py]
1 """
2 Wikipedia (Web)
3
4 @website https://{language}.wikipedia.org
5 @provide-api yes
6
7 @using-api yes
8 @results JSON
9 @stable yes
10 @parse url, infobox
11 """
12
13 from json import loads
14 from lxml.html import fromstring
15 from searx.url_utils import quote, urlencode
16 from searx.utils import match_language
17
18 # search-url
19 base_url = u'https://{language}.wikipedia.org/'
20 search_url = base_url + u'w/api.php?'\
21 'action=query'\
22 '&format=json'\
23 '&{query}'\
24 '&prop=extracts|pageimages|pageprops'\
25 '&ppprop=disambiguation'\
26 '&exintro'\
27 '&explaintext'\
28 '&pithumbsize=300'\
29 '&redirects'
30 supported_languages_url = 'https://meta.wikimedia.org/wiki/List_of_Wikipedias'
31
32
33 # set language in base_url
34 def url_lang(lang):
35 lang_pre = lang.split('-')[0]
36 if lang_pre == 'all' or lang_pre not in supported_languages and lang_pre not in language_aliases:
37 return 'en'
38 return match_language(lang, supported_languages, language_aliases).split('-')[0]
39
40
41 # do search-request
42 def request(query, params):
43 if query.islower():
44 query = u'{0}|{1}'.format(query.decode('utf-8'), query.decode('utf-8').title()).encode('utf-8')
45
46 params['url'] = search_url.format(query=urlencode({'titles': query}),
47 language=url_lang(params['language']))
48
49 return params
50
51
52 # get first meaningful paragraph
53 # this should filter out disambiguation pages and notes above first paragraph
54 # "magic numbers" were obtained by fine tuning
55 def extract_first_paragraph(content, title, image):
56 first_paragraph = None
57
58 failed_attempts = 0
59 for paragraph in content.split('\n'):
60
61 starts_with_title = paragraph.lower().find(title.lower(), 0, len(title) + 35)
62 length = len(paragraph)
63
64 if length >= 200 or (starts_with_title >= 0 and (image or length >= 150)):
65 first_paragraph = paragraph
66 break
67
68 failed_attempts += 1
69 if failed_attempts > 3:
70 return None
71
72 return first_paragraph
73
74
75 # get response from search-request
76 def response(resp):
77 results = []
78
79 search_result = loads(resp.text)
80
81 # wikipedia article's unique id
82 # first valid id is assumed to be the requested article
83 if 'pages' not in search_result['query']:
84 return results
85
86 for article_id in search_result['query']['pages']:
87 page = search_result['query']['pages'][article_id]
88 if int(article_id) > 0:
89 break
90
91 if int(article_id) < 0 or 'disambiguation' in page.get('pageprops', {}):
92 return []
93
94 title = page.get('title')
95
96 image = page.get('thumbnail')
97 if image:
98 image = image.get('source')
99
100 extract = page.get('extract')
101
102 summary = extract_first_paragraph(extract, title, image)
103 summary = summary.replace('() ', '')
104
105 # link to wikipedia article
106 wikipedia_link = base_url.format(language=url_lang(resp.search_params['language'])) \
107 + 'wiki/' + quote(title.replace(' ', '_').encode('utf8'))
108
109 results.append({'url': wikipedia_link, 'title': title})
110
111 results.append({'infobox': title,
112 'id': wikipedia_link,
113 'content': summary,
114 'img_src': image,
115 'urls': [{'title': 'Wikipedia', 'url': wikipedia_link}]})
116
117 return results
118
119
120 # get supported languages from their site
121 def _fetch_supported_languages(resp):
122 supported_languages = {}
123 dom = fromstring(resp.text)
124 tables = dom.xpath('//table[contains(@class,"sortable")]')
125 for table in tables:
126 # exclude header row
127 trs = table.xpath('.//tr')[1:]
128 for tr in trs:
129 td = tr.xpath('./td')
130 code = td[3].xpath('./a')[0].text
131 name = td[2].xpath('./a')[0].text
132 english_name = td[1].xpath('./a')[0].text
133 articles = int(td[4].xpath('./a/b')[0].text.replace(',', ''))
134 # exclude languages with too few articles
135 if articles >= 100:
136 supported_languages[code] = {"name": name, "english_name": english_name, "articles": articles}
137
138 return supported_languages
139
[end of searx/engines/wikipedia.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/searx/engines/wikipedia.py b/searx/engines/wikipedia.py
--- a/searx/engines/wikipedia.py
+++ b/searx/engines/wikipedia.py
@@ -49,29 +49,6 @@
return params
-# get first meaningful paragraph
-# this should filter out disambiguation pages and notes above first paragraph
-# "magic numbers" were obtained by fine tuning
-def extract_first_paragraph(content, title, image):
- first_paragraph = None
-
- failed_attempts = 0
- for paragraph in content.split('\n'):
-
- starts_with_title = paragraph.lower().find(title.lower(), 0, len(title) + 35)
- length = len(paragraph)
-
- if length >= 200 or (starts_with_title >= 0 and (image or length >= 150)):
- first_paragraph = paragraph
- break
-
- failed_attempts += 1
- if failed_attempts > 3:
- return None
-
- return first_paragraph
-
-
# get response from search-request
def response(resp):
results = []
@@ -97,10 +74,7 @@
if image:
image = image.get('source')
- extract = page.get('extract')
-
- summary = extract_first_paragraph(extract, title, image)
- summary = summary.replace('() ', '')
+ summary = page.get('extract', '').split('\n')[0].replace('()', '')
# link to wikipedia article
wikipedia_link = base_url.format(language=url_lang(resp.search_params['language'])) \
|
{"golden_diff": "diff --git a/searx/engines/wikipedia.py b/searx/engines/wikipedia.py\n--- a/searx/engines/wikipedia.py\n+++ b/searx/engines/wikipedia.py\n@@ -49,29 +49,6 @@\n return params\n \n \n-# get first meaningful paragraph\n-# this should filter out disambiguation pages and notes above first paragraph\n-# \"magic numbers\" were obtained by fine tuning\n-def extract_first_paragraph(content, title, image):\n- first_paragraph = None\n-\n- failed_attempts = 0\n- for paragraph in content.split('\\n'):\n-\n- starts_with_title = paragraph.lower().find(title.lower(), 0, len(title) + 35)\n- length = len(paragraph)\n-\n- if length >= 200 or (starts_with_title >= 0 and (image or length >= 150)):\n- first_paragraph = paragraph\n- break\n-\n- failed_attempts += 1\n- if failed_attempts > 3:\n- return None\n-\n- return first_paragraph\n-\n-\n # get response from search-request\n def response(resp):\n results = []\n@@ -97,10 +74,7 @@\n if image:\n image = image.get('source')\n \n- extract = page.get('extract')\n-\n- summary = extract_first_paragraph(extract, title, image)\n- summary = summary.replace('() ', '')\n+ summary = page.get('extract', '').split('\\n')[0].replace('()', '')\n \n # link to wikipedia article\n wikipedia_link = base_url.format(language=url_lang(resp.search_params['language'])) \\\n", "issue": "Wikipedia crash on specific search query\nInstance: searx.be\r\nBrowser: Chrome Version 84.0.4147.89\r\n\r\nHello,\r\n\r\nNot sure whether this is a bug, problem with the instance itself , or my browser, but when I search for certain query e.g. `port dover`, I get an error message:\r\n```\r\nEngines cannot retrieve results:\r\nwikipedia (unexpected crash 'NoneType' object has no attribute 'replace')\r\n```\r\n\r\n\r\n\r\n\r\n\r\nThis error doesn't happen if I change the language to Deutsch, Espanol, or other language. Seems like it only happens with English language. \r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\nThis error doesn't happen with other search query.\r\n\r\n\r\n\r\n\r\nI also test this on other searx instances and the results are mixed. 
\r\n\r\nNo error:\r\nhttps://searx.ninja\r\nhttps://search.snopyta.org/\r\nBoth are still on 0.16.0\r\n\r\nSame error:\r\nhttps://searx.fmac.xyz\r\nhttps://searx.xyz/\r\n\r\n\n", "before_files": [{"content": "\"\"\"\n Wikipedia (Web)\n\n @website https://{language}.wikipedia.org\n @provide-api yes\n\n @using-api yes\n @results JSON\n @stable yes\n @parse url, infobox\n\"\"\"\n\nfrom json import loads\nfrom lxml.html import fromstring\nfrom searx.url_utils import quote, urlencode\nfrom searx.utils import match_language\n\n# search-url\nbase_url = u'https://{language}.wikipedia.org/'\nsearch_url = base_url + u'w/api.php?'\\\n 'action=query'\\\n '&format=json'\\\n '&{query}'\\\n '&prop=extracts|pageimages|pageprops'\\\n '&ppprop=disambiguation'\\\n '&exintro'\\\n '&explaintext'\\\n '&pithumbsize=300'\\\n '&redirects'\nsupported_languages_url = 'https://meta.wikimedia.org/wiki/List_of_Wikipedias'\n\n\n# set language in base_url\ndef url_lang(lang):\n lang_pre = lang.split('-')[0]\n if lang_pre == 'all' or lang_pre not in supported_languages and lang_pre not in language_aliases:\n return 'en'\n return match_language(lang, supported_languages, language_aliases).split('-')[0]\n\n\n# do search-request\ndef request(query, params):\n if query.islower():\n query = u'{0}|{1}'.format(query.decode('utf-8'), query.decode('utf-8').title()).encode('utf-8')\n\n params['url'] = search_url.format(query=urlencode({'titles': query}),\n language=url_lang(params['language']))\n\n return params\n\n\n# get first meaningful paragraph\n# this should filter out disambiguation pages and notes above first paragraph\n# \"magic numbers\" were obtained by fine tuning\ndef extract_first_paragraph(content, title, image):\n first_paragraph = None\n\n failed_attempts = 0\n for paragraph in content.split('\\n'):\n\n starts_with_title = paragraph.lower().find(title.lower(), 0, len(title) + 35)\n length = len(paragraph)\n\n if length >= 200 or (starts_with_title >= 0 and (image or length >= 150)):\n first_paragraph = paragraph\n break\n\n failed_attempts += 1\n if failed_attempts > 3:\n return None\n\n return first_paragraph\n\n\n# get response from search-request\ndef response(resp):\n results = []\n\n search_result = loads(resp.text)\n\n # wikipedia article's unique id\n # first valid id is assumed to be the requested article\n if 'pages' not in search_result['query']:\n return results\n\n for article_id in search_result['query']['pages']:\n page = search_result['query']['pages'][article_id]\n if int(article_id) > 0:\n break\n\n if int(article_id) < 0 or 'disambiguation' in page.get('pageprops', {}):\n return []\n\n title = page.get('title')\n\n image = page.get('thumbnail')\n if image:\n image = image.get('source')\n\n extract = page.get('extract')\n\n summary = extract_first_paragraph(extract, title, image)\n summary = summary.replace('() ', '')\n\n # link to wikipedia article\n wikipedia_link = base_url.format(language=url_lang(resp.search_params['language'])) \\\n + 'wiki/' + quote(title.replace(' ', '_').encode('utf8'))\n\n results.append({'url': wikipedia_link, 'title': title})\n\n results.append({'infobox': title,\n 'id': wikipedia_link,\n 'content': summary,\n 'img_src': image,\n 'urls': [{'title': 'Wikipedia', 'url': wikipedia_link}]})\n\n return results\n\n\n# get supported languages from their site\ndef _fetch_supported_languages(resp):\n supported_languages = {}\n dom = fromstring(resp.text)\n tables = dom.xpath('//table[contains(@class,\"sortable\")]')\n for table in tables:\n # exclude header row\n trs = 
table.xpath('.//tr')[1:]\n for tr in trs:\n td = tr.xpath('./td')\n code = td[3].xpath('./a')[0].text\n name = td[2].xpath('./a')[0].text\n english_name = td[1].xpath('./a')[0].text\n articles = int(td[4].xpath('./a/b')[0].text.replace(',', ''))\n # exclude languages with too few articles\n if articles >= 100:\n supported_languages[code] = {\"name\": name, \"english_name\": english_name, \"articles\": articles}\n\n return supported_languages\n", "path": "searx/engines/wikipedia.py"}]}
| 2,337 | 363 |
gh_patches_debug_2094
|
rasdani/github-patches
|
git_diff
|
piskvorky__gensim-3441
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
annoy.py conversion of cosine distance to cosine similarity is incorrect
in [this function](https://github.com/RaRe-Technologies/gensim/blob/f35faae7a7b0c3c8586fb61208560522e37e0e7e/gensim/similarities/annoy.py#L169) the code to calculate cosine similarity is incorrect
def most_similar(self, vector, num_neighbors):
"""Find `num_neighbors` most similar items.
Parameters
----------
vector : numpy.array
Vector for word/document.
num_neighbors : int
Number of most similar items
Returns
-------
list of (str, float)
List of most similar items in format [(`item`, `cosine_distance`), ... ]
"""
ids, distances = self.index.get_nns_by_vector(
vector, num_neighbors, include_distances=True)
return [(self.labels[ids[i]], 1 - distances[i] / 2) for i in range(len(ids))]
According to the Annoy documentation, `get_nns_by_vector` with `include_distances=True` returns the distance itself, not the squared distance (this was changed in Aug 2016):
_`a.get_distance(i, j)` returns the distance between items i and j. NOTE: this used to return the squared distance, but has been changed as of Aug 2016._
[link](https://github.com/spotify/annoy#:~:text=a.get_distance(i%2C%20j)%20returns%20the%20distance%20between%20items%20i%20and%20j.%20NOTE%3A%20this%20used%20to%20return%20the%20squared%20distance%2C%20but%20has%20been%20changed%20as%20of%20Aug%202016.)
also:
Annoy uses Euclidean distance of normalized vectors for its angular distance, which for two vectors u,v is equal to sqrt(2(1-cos(u,v)))
[link](https://github.com/spotify/annoy#:~:text=Annoy%20uses%20Euclidean%20distance%20of%20normalized%20vectors%20for%20its%20angular%20distance%2C%20which%20for%20two%20vectors%20u%2Cv%20is%20equal%20to%20sqrt(2(1%2Dcos(u%2Cv))))
So this means that, in order to calculate the cosine similarity correctly, we should do this:

`return [(self.labels[ids[i]], 1 - distances[i] ** 2 / 2) for i in range(len(ids))]`
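
As a quick sanity check of that inversion (a standalone sketch, not gensim code): Annoy's angular distance is `d = sqrt(2 * (1 - cos(u, v)))`, so `1 - d**2 / 2` should give 1 for identical vectors and 0 for orthogonal ones.

```python
# Sanity check: invert d = sqrt(2 * (1 - cos)) back to cosine similarity.
import math

def cosine_from_angular(d):
    return 1 - d ** 2 / 2

assert cosine_from_angular(0.0) == 1.0                       # identical vectors
assert math.isclose(cosine_from_angular(math.sqrt(2)), 0.0,  # orthogonal vectors
                    abs_tol=1e-12)
```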
</issue>
<code>
[start of gensim/similarities/annoy.py]
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 #
4 # Copyright (C) 2013 Radim Rehurek <[email protected]>
5 # Licensed under the GNU LGPL v2.1 - http://www.gnu.org/licenses/lgpl.html
6
7 """
8 This module integrates Spotify's `Annoy <https://github.com/spotify/annoy>`_ (Approximate Nearest Neighbors Oh Yeah)
9 library with Gensim's :class:`~gensim.models.word2vec.Word2Vec`, :class:`~gensim.models.doc2vec.Doc2Vec`,
10 :class:`~gensim.models.fasttext.FastText` and :class:`~gensim.models.keyedvectors.KeyedVectors` word embeddings.
11
12 .. Important::
13 To use this module, you must have the ``annoy`` library installed.
14 To install it, run ``pip install annoy``.
15
16 """
17
18 # Avoid import collisions on py2: this module has the same name as the actual Annoy library.
19 from __future__ import absolute_import
20
21 import os
22
23 try:
24 import cPickle as _pickle
25 except ImportError:
26 import pickle as _pickle
27
28 from gensim import utils
29 from gensim.models.doc2vec import Doc2Vec
30 from gensim.models.word2vec import Word2Vec
31 from gensim.models.fasttext import FastText
32 from gensim.models import KeyedVectors
33
34
35 _NOANNOY = ImportError("Annoy not installed. To use the Annoy indexer, please run `pip install annoy`.")
36
37
38 class AnnoyIndexer():
39 """This class allows the use of `Annoy <https://github.com/spotify/annoy>`_ for fast (approximate)
40 vector retrieval in `most_similar()` calls of
41 :class:`~gensim.models.word2vec.Word2Vec`, :class:`~gensim.models.doc2vec.Doc2Vec`,
42 :class:`~gensim.models.fasttext.FastText` and :class:`~gensim.models.keyedvectors.Word2VecKeyedVectors` models.
43
44 """
45
46 def __init__(self, model=None, num_trees=None):
47 """
48 Parameters
49 ----------
50 model : trained model, optional
51 Use vectors from this model as the source for the index.
52 num_trees : int, optional
53 Number of trees for Annoy indexer.
54
55 Examples
56 --------
57 .. sourcecode:: pycon
58
59 >>> from gensim.similarities.annoy import AnnoyIndexer
60 >>> from gensim.models import Word2Vec
61 >>>
62 >>> sentences = [['cute', 'cat', 'say', 'meow'], ['cute', 'dog', 'say', 'woof']]
63 >>> model = Word2Vec(sentences, min_count=1, seed=1)
64 >>>
65 >>> indexer = AnnoyIndexer(model, 2)
66 >>> model.most_similar("cat", topn=2, indexer=indexer)
67 [('cat', 1.0), ('dog', 0.32011348009109497)]
68
69 """
70 self.index = None
71 self.labels = None
72 self.model = model
73 self.num_trees = num_trees
74
75 if model and num_trees:
76 # Extract the KeyedVectors object from whatever model we were given.
77 if isinstance(self.model, Doc2Vec):
78 kv = self.model.dv
79 elif isinstance(self.model, (Word2Vec, FastText)):
80 kv = self.model.wv
81 elif isinstance(self.model, (KeyedVectors,)):
82 kv = self.model
83 else:
84 raise ValueError("Only a Word2Vec, Doc2Vec, FastText or KeyedVectors instance can be used")
85 self._build_from_model(kv.get_normed_vectors(), kv.index_to_key, kv.vector_size)
86
87 def save(self, fname, protocol=utils.PICKLE_PROTOCOL):
88 """Save AnnoyIndexer instance to disk.
89
90 Parameters
91 ----------
92 fname : str
93 Path to output. Save will produce 2 files:
94 `fname`: Annoy index itself.
95 `fname.dict`: Index metadata.
96 protocol : int, optional
97 Protocol for pickle.
98
99 Notes
100 -----
101 This method saves **only the index**. The trained model isn't preserved.
102
103 """
104 self.index.save(fname)
105 d = {'f': self.model.vector_size, 'num_trees': self.num_trees, 'labels': self.labels}
106 with utils.open(fname + '.dict', 'wb') as fout:
107 _pickle.dump(d, fout, protocol=protocol)
108
109 def load(self, fname):
110 """Load an AnnoyIndexer instance from disk.
111
112 Parameters
113 ----------
114 fname : str
115 The path as previously used by ``save()``.
116
117 Examples
118 --------
119 .. sourcecode:: pycon
120
121 >>> from gensim.similarities.index import AnnoyIndexer
122 >>> from gensim.models import Word2Vec
123 >>> from tempfile import mkstemp
124 >>>
125 >>> sentences = [['cute', 'cat', 'say', 'meow'], ['cute', 'dog', 'say', 'woof']]
126 >>> model = Word2Vec(sentences, min_count=1, seed=1, epochs=10)
127 >>>
128 >>> indexer = AnnoyIndexer(model, 2)
129 >>> _, temp_fn = mkstemp()
130 >>> indexer.save(temp_fn)
131 >>>
132 >>> new_indexer = AnnoyIndexer()
133 >>> new_indexer.load(temp_fn)
134 >>> new_indexer.model = model
135
136 """
137 fname_dict = fname + '.dict'
138 if not (os.path.exists(fname) and os.path.exists(fname_dict)):
139 raise IOError(
140 f"Can't find index files '{fname}' and '{fname_dict}' - unable to restore AnnoyIndexer state."
141 )
142 try:
143 from annoy import AnnoyIndex
144 except ImportError:
145 raise _NOANNOY
146
147 with utils.open(fname_dict, 'rb') as f:
148 d = _pickle.loads(f.read())
149 self.num_trees = d['num_trees']
150 self.index = AnnoyIndex(d['f'], metric='angular')
151 self.index.load(fname)
152 self.labels = d['labels']
153
154 def _build_from_model(self, vectors, labels, num_features):
155 try:
156 from annoy import AnnoyIndex
157 except ImportError:
158 raise _NOANNOY
159
160 index = AnnoyIndex(num_features, metric='angular')
161
162 for vector_num, vector in enumerate(vectors):
163 index.add_item(vector_num, vector)
164
165 index.build(self.num_trees)
166 self.index = index
167 self.labels = labels
168
169 def most_similar(self, vector, num_neighbors):
170 """Find `num_neighbors` most similar items.
171
172 Parameters
173 ----------
174 vector : numpy.array
175 Vector for word/document.
176 num_neighbors : int
177 Number of most similar items
178
179 Returns
180 -------
181 list of (str, float)
182 List of most similar items in format [(`item`, `cosine_distance`), ... ]
183
184 """
185 ids, distances = self.index.get_nns_by_vector(
186 vector, num_neighbors, include_distances=True)
187
188 return [(self.labels[ids[i]], 1 - distances[i] / 2) for i in range(len(ids))]
189
[end of gensim/similarities/annoy.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/gensim/similarities/annoy.py b/gensim/similarities/annoy.py
--- a/gensim/similarities/annoy.py
+++ b/gensim/similarities/annoy.py
@@ -185,4 +185,4 @@
ids, distances = self.index.get_nns_by_vector(
vector, num_neighbors, include_distances=True)
- return [(self.labels[ids[i]], 1 - distances[i] / 2) for i in range(len(ids))]
+ return [(self.labels[ids[i]], 1 - distances[i] ** 2 / 2) for i in range(len(ids))]
|
{"golden_diff": "diff --git a/gensim/similarities/annoy.py b/gensim/similarities/annoy.py\n--- a/gensim/similarities/annoy.py\n+++ b/gensim/similarities/annoy.py\n@@ -185,4 +185,4 @@\n ids, distances = self.index.get_nns_by_vector(\n vector, num_neighbors, include_distances=True)\n \n- return [(self.labels[ids[i]], 1 - distances[i] / 2) for i in range(len(ids))]\n+ return [(self.labels[ids[i]], 1 - distances[i] ** 2 / 2) for i in range(len(ids))]\n", "issue": "annoy.py conversion of cosine distance to cosine similarity is incorrect\nin [this function](https://github.com/RaRe-Technologies/gensim/blob/f35faae7a7b0c3c8586fb61208560522e37e0e7e/gensim/similarities/annoy.py#L169) the code to calculate cosine similarity is incorrect\r\n\r\n def most_similar(self, vector, num_neighbors):\r\n \"\"\"Find `num_neighbors` most similar items.\r\n\r\n Parameters\r\n ----------\r\n vector : numpy.array\r\n Vector for word/document.\r\n num_neighbors : int\r\n Number of most similar items\r\n\r\n Returns\r\n -------\r\n list of (str, float)\r\n List of most similar items in format [(`item`, `cosine_distance`), ... ]\r\n\r\n \"\"\"\r\n ids, distances = self.index.get_nns_by_vector(\r\n vector, num_neighbors, include_distances=True)\r\n\r\n return [(self.labels[ids[i]], 1 - distances[i] / 2) for i in range(len(ids))]\r\n\r\naccording to annoy documentation `get_nns_by_vector` with `include_distances=True` will return the distances and not the square power of the distance (this was changed since aug 2016):\r\n\r\n_`a.get_distance(i, j)` returns the distance between items i and j. NOTE: this used to return the squared distance, but has been changed as of Aug 2016._\r\n\r\n[link](https://github.com/spotify/annoy#:~:text=a.get_distance(i%2C%20j)%20returns%20the%20distance%20between%20items%20i%20and%20j.%20NOTE%3A%20this%20used%20to%20return%20the%20squared%20distance%2C%20but%20has%20been%20changed%20as%20of%20Aug%202016.)\r\n\r\nalso:\r\nAnnoy uses Euclidean distance of normalized vectors for its angular distance, which for two vectors u,v is equal to sqrt(2(1-cos(u,v)))\r\n[link](https://github.com/spotify/annoy#:~:text=Annoy%20uses%20Euclidean%20distance%20of%20normalized%20vectors%20for%20its%20angular%20distance%2C%20which%20for%20two%20vectors%20u%2Cv%20is%20equal%20to%20sqrt(2(1%2Dcos(u%2Cv))))\r\n\r\nso this means that in order to calculate the cosine similarity correctly we should do this:\r\n`\r\nreturn [(self.labels[ids[i]], 1 - distances[i]^2 / 2) for i in range(len(ids))]\r\n`\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n#\n# Copyright (C) 2013 Radim Rehurek <[email protected]>\n# Licensed under the GNU LGPL v2.1 - http://www.gnu.org/licenses/lgpl.html\n\n\"\"\"\nThis module integrates Spotify's `Annoy <https://github.com/spotify/annoy>`_ (Approximate Nearest Neighbors Oh Yeah)\nlibrary with Gensim's :class:`~gensim.models.word2vec.Word2Vec`, :class:`~gensim.models.doc2vec.Doc2Vec`,\n:class:`~gensim.models.fasttext.FastText` and :class:`~gensim.models.keyedvectors.KeyedVectors` word embeddings.\n\n.. 
Important::\n To use this module, you must have the ``annoy`` library installed.\n To install it, run ``pip install annoy``.\n\n\"\"\"\n\n# Avoid import collisions on py2: this module has the same name as the actual Annoy library.\nfrom __future__ import absolute_import\n\nimport os\n\ntry:\n import cPickle as _pickle\nexcept ImportError:\n import pickle as _pickle\n\nfrom gensim import utils\nfrom gensim.models.doc2vec import Doc2Vec\nfrom gensim.models.word2vec import Word2Vec\nfrom gensim.models.fasttext import FastText\nfrom gensim.models import KeyedVectors\n\n\n_NOANNOY = ImportError(\"Annoy not installed. To use the Annoy indexer, please run `pip install annoy`.\")\n\n\nclass AnnoyIndexer():\n \"\"\"This class allows the use of `Annoy <https://github.com/spotify/annoy>`_ for fast (approximate)\n vector retrieval in `most_similar()` calls of\n :class:`~gensim.models.word2vec.Word2Vec`, :class:`~gensim.models.doc2vec.Doc2Vec`,\n :class:`~gensim.models.fasttext.FastText` and :class:`~gensim.models.keyedvectors.Word2VecKeyedVectors` models.\n\n \"\"\"\n\n def __init__(self, model=None, num_trees=None):\n \"\"\"\n Parameters\n ----------\n model : trained model, optional\n Use vectors from this model as the source for the index.\n num_trees : int, optional\n Number of trees for Annoy indexer.\n\n Examples\n --------\n .. sourcecode:: pycon\n\n >>> from gensim.similarities.annoy import AnnoyIndexer\n >>> from gensim.models import Word2Vec\n >>>\n >>> sentences = [['cute', 'cat', 'say', 'meow'], ['cute', 'dog', 'say', 'woof']]\n >>> model = Word2Vec(sentences, min_count=1, seed=1)\n >>>\n >>> indexer = AnnoyIndexer(model, 2)\n >>> model.most_similar(\"cat\", topn=2, indexer=indexer)\n [('cat', 1.0), ('dog', 0.32011348009109497)]\n\n \"\"\"\n self.index = None\n self.labels = None\n self.model = model\n self.num_trees = num_trees\n\n if model and num_trees:\n # Extract the KeyedVectors object from whatever model we were given.\n if isinstance(self.model, Doc2Vec):\n kv = self.model.dv\n elif isinstance(self.model, (Word2Vec, FastText)):\n kv = self.model.wv\n elif isinstance(self.model, (KeyedVectors,)):\n kv = self.model\n else:\n raise ValueError(\"Only a Word2Vec, Doc2Vec, FastText or KeyedVectors instance can be used\")\n self._build_from_model(kv.get_normed_vectors(), kv.index_to_key, kv.vector_size)\n\n def save(self, fname, protocol=utils.PICKLE_PROTOCOL):\n \"\"\"Save AnnoyIndexer instance to disk.\n\n Parameters\n ----------\n fname : str\n Path to output. Save will produce 2 files:\n `fname`: Annoy index itself.\n `fname.dict`: Index metadata.\n protocol : int, optional\n Protocol for pickle.\n\n Notes\n -----\n This method saves **only the index**. The trained model isn't preserved.\n\n \"\"\"\n self.index.save(fname)\n d = {'f': self.model.vector_size, 'num_trees': self.num_trees, 'labels': self.labels}\n with utils.open(fname + '.dict', 'wb') as fout:\n _pickle.dump(d, fout, protocol=protocol)\n\n def load(self, fname):\n \"\"\"Load an AnnoyIndexer instance from disk.\n\n Parameters\n ----------\n fname : str\n The path as previously used by ``save()``.\n\n Examples\n --------\n .. 
sourcecode:: pycon\n\n >>> from gensim.similarities.index import AnnoyIndexer\n >>> from gensim.models import Word2Vec\n >>> from tempfile import mkstemp\n >>>\n >>> sentences = [['cute', 'cat', 'say', 'meow'], ['cute', 'dog', 'say', 'woof']]\n >>> model = Word2Vec(sentences, min_count=1, seed=1, epochs=10)\n >>>\n >>> indexer = AnnoyIndexer(model, 2)\n >>> _, temp_fn = mkstemp()\n >>> indexer.save(temp_fn)\n >>>\n >>> new_indexer = AnnoyIndexer()\n >>> new_indexer.load(temp_fn)\n >>> new_indexer.model = model\n\n \"\"\"\n fname_dict = fname + '.dict'\n if not (os.path.exists(fname) and os.path.exists(fname_dict)):\n raise IOError(\n f\"Can't find index files '{fname}' and '{fname_dict}' - unable to restore AnnoyIndexer state.\"\n )\n try:\n from annoy import AnnoyIndex\n except ImportError:\n raise _NOANNOY\n\n with utils.open(fname_dict, 'rb') as f:\n d = _pickle.loads(f.read())\n self.num_trees = d['num_trees']\n self.index = AnnoyIndex(d['f'], metric='angular')\n self.index.load(fname)\n self.labels = d['labels']\n\n def _build_from_model(self, vectors, labels, num_features):\n try:\n from annoy import AnnoyIndex\n except ImportError:\n raise _NOANNOY\n\n index = AnnoyIndex(num_features, metric='angular')\n\n for vector_num, vector in enumerate(vectors):\n index.add_item(vector_num, vector)\n\n index.build(self.num_trees)\n self.index = index\n self.labels = labels\n\n def most_similar(self, vector, num_neighbors):\n \"\"\"Find `num_neighbors` most similar items.\n\n Parameters\n ----------\n vector : numpy.array\n Vector for word/document.\n num_neighbors : int\n Number of most similar items\n\n Returns\n -------\n list of (str, float)\n List of most similar items in format [(`item`, `cosine_distance`), ... ]\n\n \"\"\"\n ids, distances = self.index.get_nns_by_vector(\n vector, num_neighbors, include_distances=True)\n\n return [(self.labels[ids[i]], 1 - distances[i] / 2) for i in range(len(ids))]\n", "path": "gensim/similarities/annoy.py"}]}
| 3,262 | 147 |
gh_patches_debug_3162
|
rasdani/github-patches
|
git_diff
|
mantl__mantl-1652
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Unblocking Neutron ports fails when public_url is unset (or Neutron is named differently)
I am trying to install mantl on openstack and am using the sample.yml playbook. When I run the playbook, it fails here
```
TASK: [calico | unlock neutron ports to allow calico traffic] *****************
failed: [mantl-worker-001] => (item=mantl-edge-01) => {"changed": false, "cmd": "/usr/local/bin/neutron_port_update.py \"192.168.0.0/16\" \"fa:16:3e:47:b1:91\"", "delta": "0:00:00.329717", "end": "2016-06-15 21:18:58.223490", "failed": true, "failed_when_result": true, "item": "mantl-edge-01", "rc": 1, "start": "2016-06-15 21:18:57.893773", "stdout_lines": [], "warnings": []}
stderr: Traceback (most recent call last):
File "/usr/local/bin/neutron_port_update.py", line 128, in <module>
ports = list_ports(token, public_url)
File "/usr/local/bin/neutron_port_update.py", line 82, in list_ports
auth_url = public_url + "v2.0/ports"
TypeError: unsupported operand type(s) for +: 'NoneType' and 'str'
```
It seems that public_url is unset. I manually ran neutron_port_update.py, and tried to debug it. public_url gets set by neutron_public_url, and it appears that is looking for an endpoint named neutron. I have attached the json input and there is no endpoint with that name.
[parsed_json.txt](https://github.com/CiscoCloud/mantl/files/317249/parsed_json.txt)
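
For what it's worth, matching the catalog entry by service type instead of by display name would avoid depending on how the deployment names Neutron. A sketch of that lookup (assuming the standard Keystone v2 catalog layout shown in the attached JSON):

```python
def neutron_public_url(catalog):
    """Return the networking service publicURL, matched by type rather than by name."""
    for service in catalog['access']['serviceCatalog']:
        # the service 'type' stays 'network' even when the service is not named 'neutron'
        if service['type'] == 'network':
            for endpoint in service['endpoints']:
                return endpoint['publicURL']
    return None  # callers should handle a missing networking endpoint
```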
</issue>
<code>
[start of roles/calico/files/neutron_port_update.py]
1 #!/usr/bin/env python
2 # This script updates the allowed address pairs in Neutron with the
3 # 'neutron port-update' command. This is required by Calico in OpenStack,
4 # otherwise BGP will not be working. We query OpenStack API directly to prevent
5 # installing any dependencies such as python-neutronclient.
6 #
7 # USAGE: script_name arg1 arg2...argN
8 # arg1 - Calico network, i.e. 192.168.0.0/24
9 # arg2...argN - VMs MAC addresses
10 #
11 # Script exit codes (for Ansible)
12 # 0 - port has been updated
13 # 1 - error
14 # 2 - no update to port [default]
15
16 import json
17 import os
18 import requests
19 import sys
20
21 def credentials():
22 """Retrieves credentials"""
23
24 username = os.environ.get('OS_USERNAME')
25 password = os.environ.get('OS_PASSWORD')
26 tenant_name = os.environ.get('OS_TENANT_NAME')
27 auth_url = os.environ.get('OS_AUTH_URL')
28
29 if not all((username, password, tenant_name, auth_url)):
30 sys.stderr.write("ERROR: Unable to get Keystone credentials\n")
31 exit(1)
32
33 return {
34 'username': username,
35 'password': password,
36 'tenant_name': tenant_name,
37 'auth_url': auth_url
38 }
39
40 def get_catalog():
41 """Get service catalog from Keystone with token and all endpoints"""
42
43 creds = credentials()
44 headers = {'Content-Type': 'application/json'}
45 payload = {
46 "auth":
47 {
48 "tenantName": creds['tenant_name'],
49 "passwordCredentials": {
50 "username": creds['username'],
51 "password": creds['password']
52 }
53 }
54 }
55 auth_url = creds['auth_url'] + "/tokens"
56 r = requests.post(auth_url, headers=headers, data=json.dumps(payload))
57
58 parsed_json = json.loads(r.text)
59 if not parsed_json or 'error' in parsed_json:
60 sys.stderr.write("ERROR: Unable to get authentication token\n")
61 exit(1)
62
63 return parsed_json
64
65 def get_token(catalog):
66 """Get Keystone authentication token"""
67
68 return catalog['access']['token']['id']
69
70 def neutron_public_url(catalog):
71 """Get Neutron publicURL"""
72
73 for i in catalog['access']['serviceCatalog']:
74 if i['name'] == 'neutron':
75 for endpoint in i['endpoints']:
76 return endpoint['publicURL']
77
78 def list_ports(token, public_url):
79 """List Neutron ports"""
80
81 headers = {'X-Auth-Token': token}
82 auth_url = public_url + "v2.0/ports"
83 r = requests.get(auth_url, headers=headers)
84
85 if r.text:
86 parsed_json = json.loads(r.text)
87 return parsed_json['ports']
88 else:
89 sys.stderr.write("ERROR: Unable to retrieve Neutron ports list\n")
90 exit(1)
91
92 def update_port(token, public_url, port_id, mac_address, calico_network):
93 """Update Neutron port with the allowed address pairs"""
94
95 headers = {'Content-Type': 'application/json', 'X-Auth-Token': token}
96 payload = {
97 "port": {
98 "allowed_address_pairs": [
99 {
100 "ip_address": calico_network,
101 "mac_address": mac_address
102 }
103 ]
104 }
105 }
106 auth_url = public_url + "v2.0/ports/" + port_id
107 r = requests.put(auth_url, headers=headers, data=json.dumps(payload))
108
109 parsed_json = json.loads(r.text)
110 if r.status_code != 200 or 'NeutronError' in parsed_json:
111 sys.stderr.write("ERROR: Unable to update port: %s\n" % parsed_json['NeutronError'])
112 exit(1)
113 else:
114 return r.status_code
115
116 if __name__ == "__main__":
117
118 if len(sys.argv) < 3:
119 sys.stderr.write("ERROR: Please run script with the correct arguments\n")
120 exit(1)
121
122 calico_network = sys.argv[1]
123 vms_mac_addresses = sys.argv[2:]
124
125 catalog = get_catalog()
126 token = get_token(catalog)
127 public_url = neutron_public_url(catalog)
128 ports = list_ports(token, public_url)
129
130 exit_code = 0 # no update to port
131
132 for port in ports:
133 port_id = port['id']
134 mac_address = port['mac_address']
135 if mac_address in vms_mac_addresses and not port['allowed_address_pairs']:
136 status_code = update_port(token, public_url, port_id, mac_address, calico_network)
137 if status_code == 200:
138 exit_code = 2 # port has been updated
139
140 exit(exit_code)
141
[end of roles/calico/files/neutron_port_update.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/roles/calico/files/neutron_port_update.py b/roles/calico/files/neutron_port_update.py
--- a/roles/calico/files/neutron_port_update.py
+++ b/roles/calico/files/neutron_port_update.py
@@ -71,7 +71,7 @@
"""Get Neutron publicURL"""
for i in catalog['access']['serviceCatalog']:
- if i['name'] == 'neutron':
+ if i['type'] == 'network':
for endpoint in i['endpoints']:
return endpoint['publicURL']
|
{"golden_diff": "diff --git a/roles/calico/files/neutron_port_update.py b/roles/calico/files/neutron_port_update.py\n--- a/roles/calico/files/neutron_port_update.py\n+++ b/roles/calico/files/neutron_port_update.py\n@@ -71,7 +71,7 @@\n \"\"\"Get Neutron publicURL\"\"\"\n \n for i in catalog['access']['serviceCatalog']:\n- if i['name'] == 'neutron':\n+ if i['type'] == 'network':\n for endpoint in i['endpoints']:\n return endpoint['publicURL']\n", "issue": "Unblocking Neutron ports fails when public_url is unset (or Neutron is named differently)\nI am trying to install mantl on openstack and am using the sample.yml playbook. When I run the playbook, it fails here\n\n```\nTASK: [calico | unlock neutron ports to allow calico traffic] *****************\nfailed: [mantl-worker-001] => (item=mantl-edge-01) => {\"changed\": false, \"cmd\": \"/usr/local/bin/neutron_port_update.py \\\"192.168.0.0/16\\\" \\\"fa:16:3e:47:b1:91\\\"\", \"delta\": \"0:00:00.329717\", \"end\": \"2016-06-15 21:18:58.223490\", \"failed\": true, \"failed_when_result\": true, \"item\": \"mantl-edge-01\", \"rc\": 1, \"start\": \"2016-06-15 21:18:57.893773\", \"stdout_lines\": [], \"warnings\": []}\nstderr: Traceback (most recent call last):\n File \"/usr/local/bin/neutron_port_update.py\", line 128, in <module>\n ports = list_ports(token, public_url)\n File \"/usr/local/bin/neutron_port_update.py\", line 82, in list_ports\n auth_url = public_url + \"v2.0/ports\"\nTypeError: unsupported operand type(s) for +: 'NoneType' and 'str'\n```\n\nIt seems that public_url is unset. I manually ran neutron_port_update.py, and tried to debug it. public_url gets set by neutron_public_url, and it appears that is looking for an endpoint named neutron. I have attached the json input and there is no endpoint with that name.\n\n[parsed_json.txt](https://github.com/CiscoCloud/mantl/files/317249/parsed_json.txt)\n\nUnblocking Neutron ports fails when public_url is unset (or Neutron is named differently)\nI am trying to install mantl on openstack and am using the sample.yml playbook. When I run the playbook, it fails here\n\n```\nTASK: [calico | unlock neutron ports to allow calico traffic] *****************\nfailed: [mantl-worker-001] => (item=mantl-edge-01) => {\"changed\": false, \"cmd\": \"/usr/local/bin/neutron_port_update.py \\\"192.168.0.0/16\\\" \\\"fa:16:3e:47:b1:91\\\"\", \"delta\": \"0:00:00.329717\", \"end\": \"2016-06-15 21:18:58.223490\", \"failed\": true, \"failed_when_result\": true, \"item\": \"mantl-edge-01\", \"rc\": 1, \"start\": \"2016-06-15 21:18:57.893773\", \"stdout_lines\": [], \"warnings\": []}\nstderr: Traceback (most recent call last):\n File \"/usr/local/bin/neutron_port_update.py\", line 128, in <module>\n ports = list_ports(token, public_url)\n File \"/usr/local/bin/neutron_port_update.py\", line 82, in list_ports\n auth_url = public_url + \"v2.0/ports\"\nTypeError: unsupported operand type(s) for +: 'NoneType' and 'str'\n```\n\nIt seems that public_url is unset. I manually ran neutron_port_update.py, and tried to debug it. public_url gets set by neutron_public_url, and it appears that is looking for an endpoint named neutron. I have attached the json input and there is no endpoint with that name.\n\n[parsed_json.txt](https://github.com/CiscoCloud/mantl/files/317249/parsed_json.txt)\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# This script updates the allowed address pairs in Neutron with the\n# 'neutron port-update' command. This is required by Calico in OpenStack,\n# otherwise BGP will not be working. 
We query OpenStack API directly to prevent\n# installing any dependencies such as python-neutronclient.\n#\n# USAGE: script_name arg1 arg2...argN\n# arg1 - Calico network, i.e. 192.168.0.0/24\n# arg2...argN - VMs MAC addresses\n#\n# Script exit codes (for Ansible)\n# 0 - port has been updated\n# 1 - error\n# 2 - no update to port [default]\n\nimport json\nimport os\nimport requests\nimport sys\n\ndef credentials():\n \"\"\"Retrieves credentials\"\"\"\n\n username = os.environ.get('OS_USERNAME')\n password = os.environ.get('OS_PASSWORD')\n tenant_name = os.environ.get('OS_TENANT_NAME')\n auth_url = os.environ.get('OS_AUTH_URL')\n\n if not all((username, password, tenant_name, auth_url)):\n sys.stderr.write(\"ERROR: Unable to get Keystone credentials\\n\")\n exit(1)\n\n return {\n 'username': username,\n 'password': password,\n 'tenant_name': tenant_name,\n 'auth_url': auth_url\n }\n\ndef get_catalog():\n \"\"\"Get service catalog from Keystone with token and all endpoints\"\"\"\n\n creds = credentials()\n headers = {'Content-Type': 'application/json'}\n payload = {\n \"auth\":\n {\n \"tenantName\": creds['tenant_name'],\n \"passwordCredentials\": {\n \"username\": creds['username'],\n \"password\": creds['password']\n }\n }\n }\n auth_url = creds['auth_url'] + \"/tokens\"\n r = requests.post(auth_url, headers=headers, data=json.dumps(payload))\n\n parsed_json = json.loads(r.text)\n if not parsed_json or 'error' in parsed_json:\n sys.stderr.write(\"ERROR: Unable to get authentication token\\n\")\n exit(1)\n\n return parsed_json\n\ndef get_token(catalog):\n \"\"\"Get Keystone authentication token\"\"\"\n\n return catalog['access']['token']['id']\n\ndef neutron_public_url(catalog):\n \"\"\"Get Neutron publicURL\"\"\"\n\n for i in catalog['access']['serviceCatalog']:\n if i['name'] == 'neutron':\n for endpoint in i['endpoints']:\n return endpoint['publicURL']\n\ndef list_ports(token, public_url):\n \"\"\"List Neutron ports\"\"\"\n\n headers = {'X-Auth-Token': token}\n auth_url = public_url + \"v2.0/ports\"\n r = requests.get(auth_url, headers=headers)\n\n if r.text:\n parsed_json = json.loads(r.text)\n return parsed_json['ports']\n else:\n sys.stderr.write(\"ERROR: Unable to retrieve Neutron ports list\\n\")\n exit(1)\n\ndef update_port(token, public_url, port_id, mac_address, calico_network):\n \"\"\"Update Neutron port with the allowed address pairs\"\"\"\n\n headers = {'Content-Type': 'application/json', 'X-Auth-Token': token}\n payload = {\n \"port\": {\n \"allowed_address_pairs\": [\n {\n \"ip_address\": calico_network,\n \"mac_address\": mac_address\n }\n ]\n }\n }\n auth_url = public_url + \"v2.0/ports/\" + port_id\n r = requests.put(auth_url, headers=headers, data=json.dumps(payload))\n\n parsed_json = json.loads(r.text)\n if r.status_code != 200 or 'NeutronError' in parsed_json:\n sys.stderr.write(\"ERROR: Unable to update port: %s\\n\" % parsed_json['NeutronError'])\n exit(1)\n else:\n return r.status_code\n\nif __name__ == \"__main__\":\n\n if len(sys.argv) < 3:\n sys.stderr.write(\"ERROR: Please run script with the correct arguments\\n\")\n exit(1)\n\n calico_network = sys.argv[1]\n vms_mac_addresses = sys.argv[2:]\n\n catalog = get_catalog()\n token = get_token(catalog)\n public_url = neutron_public_url(catalog)\n ports = list_ports(token, public_url)\n\n exit_code = 0 # no update to port\n\n for port in ports:\n port_id = port['id']\n mac_address = port['mac_address']\n if mac_address in vms_mac_addresses and not port['allowed_address_pairs']:\n status_code = update_port(token, 
public_url, port_id, mac_address, calico_network)\n if status_code == 200:\n exit_code = 2 # port has been updated\n\n exit(exit_code)\n", "path": "roles/calico/files/neutron_port_update.py"}]}
| 2,780 | 123 |
gh_patches_debug_21691
|
rasdani/github-patches
|
git_diff
|
yt-project__yt-4016
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
REG: fail to configure yt from the command line if $HOME/.config/ doesn't exist
### Bug report
**Bug summary**
This was discovered downstream thanks to yt_astro_analysis' CI
**Code for reproduction**
```shell
yt config set --global yt suppress_stream_logging True
```
**Actual outcome**
```python-traceback
Traceback (most recent call last):
File "/Users/robcleme/dev/yt-project/yt/yt/utilities/configure.py", line 106, in write
file_handler.write(config_as_str)
AttributeError: 'str' object has no attribute 'write'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/robcleme/.pyenv/versions/38-yt-dev/bin/yt", line 33, in <module>
sys.exit(load_entry_point('yt', 'console_scripts', 'yt')())
File "/Users/robcleme/dev/yt-project/yt/yt/utilities/command_line.py", line 1632, in run_main
args.func(args)
File "/Users/robcleme/dev/yt-project/yt/yt/utilities/command_line.py", line 224, in run
self(args)
File "/Users/robcleme/dev/yt-project/yt/yt/utilities/command_line.py", line 1418, in __call__
set_config(args.section, args.option, args.value, self.config_file)
File "/Users/robcleme/dev/yt-project/yt/yt/utilities/configure.py", line 173, in set_config
write_config(config_file)
File "/Users/robcleme/dev/yt-project/yt/yt/utilities/configure.py", line 177, in write_config
CONFIG.write(config_file)
File "/Users/robcleme/dev/yt-project/yt/yt/utilities/configure.py", line 109, in write
with open(file_handler, mode="w") as fh:
FileNotFoundError: [Errno 2] No such file or directory: '/Users/robcleme/.config/yt/yt.toml'
```
**Expected outcome**
The configuration dir should be created.
I bet this regression is caused by #3626
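
A sketch of what the path-handling branch of `YTConfig.write` could do instead (assuming it is acceptable to create the missing parent directory on the fly):

```python
import os
from pathlib import Path

def write_config_text(target_path, config_as_str):
    # sketch: ensure the parent directory exists before writing the TOML text
    path = Path(target_path)
    if not path.parent.exists():
        os.makedirs(path.parent)
    path.write_text(config_as_str)
```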
</issue>
<code>
[start of yt/utilities/configure.py]
1 import os
2 import sys
3 import warnings
4 from typing import Callable, List
5
6 import tomli_w
7 from more_itertools import always_iterable
8
9 from yt.utilities.configuration_tree import ConfigLeaf, ConfigNode
10
11 if sys.version_info >= (3, 11):
12 import tomllib
13 else:
14 import tomli as tomllib
15
16 configuration_callbacks: List[Callable[["YTConfig"], None]] = []
17
18
19 def config_dir():
20 config_root = os.environ.get(
21 "XDG_CONFIG_HOME", os.path.join(os.path.expanduser("~"), ".config")
22 )
23 conf_dir = os.path.join(config_root, "yt")
24 return conf_dir
25
26
27 class YTConfig:
28 def __init__(self, defaults=None):
29 if defaults is None:
30 defaults = {}
31 self.config_root = ConfigNode(None)
32
33 def get(self, section, *keys, callback=None):
34 node_or_leaf = self.config_root.get(section, *keys)
35 if isinstance(node_or_leaf, ConfigLeaf):
36 if callback is not None:
37 return callback(node_or_leaf)
38 return node_or_leaf.value
39 return node_or_leaf
40
41 def get_most_specific(self, section, *keys, **kwargs):
42 use_fallback = "fallback" in kwargs
43 fallback = kwargs.pop("fallback", None)
44 try:
45 return self.config_root.get_deepest_leaf(section, *keys)
46 except KeyError as err:
47 if use_fallback:
48 return fallback
49 else:
50 raise err
51
52 def update(self, new_values, metadata=None):
53 if metadata is None:
54 metadata = {}
55 self.config_root.update(new_values, metadata)
56
57 def has_section(self, section):
58 try:
59 self.config_root.get_child(section)
60 return True
61 except KeyError:
62 return False
63
64 def add_section(self, section):
65 self.config_root.add_child(section)
66
67 def remove_section(self, section):
68 if self.has_section(section):
69 self.config_root.remove_child(section)
70 return True
71 else:
72 return False
73
74 def set(self, *args, metadata=None):
75 section, *keys, value = args
76 if metadata is None:
77 metadata = {"source": "runtime"}
78 self.config_root.upsert_from_list(
79 [section] + list(keys), value, extra_data=metadata
80 )
81
82 def remove(self, *args):
83 self.config_root.pop_leaf(args)
84
85 def read(self, file_names):
86 file_names_read = []
87 for fname in always_iterable(file_names):
88 if not os.path.exists(fname):
89 continue
90 metadata = {"source": f"file: {fname}"}
91 try:
92 with open(fname, "rb") as fh:
93 data = tomllib.load(fh)
94 except tomllib.TOMLDecodeError as exc:
95 warnings.warn(
96 f"Could not load configuration file {fname} (invalid TOML: {exc})"
97 )
98 else:
99 self.update(data, metadata=metadata)
100 file_names_read.append(fname)
101
102 return file_names_read
103
104 def write(self, file_handler):
105 value = self.config_root.as_dict()
106 config_as_str = tomli_w.dumps(value)
107
108 try:
109 # Assuming file_handler has a write attribute
110 file_handler.write(config_as_str)
111 except AttributeError:
112 # Otherwise we expect a path to a file
113 with open(file_handler, mode="w") as fh:
114 fh.write(config_as_str)
115
116 @staticmethod
117 def get_global_config_file():
118 return os.path.join(config_dir(), "yt.toml")
119
120 @staticmethod
121 def get_local_config_file():
122 return os.path.join(os.path.abspath(os.curdir), "yt.toml")
123
124 def __setitem__(self, args, value):
125 section, *keys = always_iterable(args)
126 self.set(section, *keys, value, metadata=None)
127
128 def __getitem__(self, key):
129 section, *keys = always_iterable(key)
130 return self.get(section, *keys)
131
132 def __contains__(self, item):
133 return item in self.config_root
134
135 # Add support for IPython rich display
136 # see https://ipython.readthedocs.io/en/stable/config/integrating.html
137 def _repr_json_(self):
138 return self.config_root._repr_json_()
139
140
141 CONFIG = YTConfig()
142
143
144 def _cast_bool_helper(value):
145 if value == "True":
146 return True
147 elif value == "False":
148 return False
149 else:
150 raise ValueError("Cannot safely cast to bool")
151
152
153 def _expand_all(s):
154 return os.path.expandvars(os.path.expanduser(s))
155
156
157 def _cast_value_helper(value, types=(_cast_bool_helper, int, float, _expand_all)):
158 for t in types:
159 try:
160 retval = t(value)
161 return retval
162 except ValueError:
163 pass
164
165
166 def get_config(section, option):
167 *option_path, option_name = option.split(".")
168 return CONFIG.get(section, *option_path, option_name)
169
170
171 def set_config(section, option, value, config_file):
172 if not CONFIG.has_section(section):
173 CONFIG.add_section(section)
174
175 option_path = option.split(".")
176 CONFIG.set(section, *option_path, _cast_value_helper(value))
177 write_config(config_file)
178
179
180 def write_config(config_file):
181 CONFIG.write(config_file)
182
183
184 def rm_config(section, option, config_file):
185 option_path = option.split(".")
186 CONFIG.remove(section, *option_path)
187 write_config(config_file)
188
[end of yt/utilities/configure.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/yt/utilities/configure.py b/yt/utilities/configure.py
--- a/yt/utilities/configure.py
+++ b/yt/utilities/configure.py
@@ -1,6 +1,7 @@
import os
import sys
import warnings
+from pathlib import Path
from typing import Callable, List
import tomli_w
@@ -106,12 +107,19 @@
config_as_str = tomli_w.dumps(value)
try:
- # Assuming file_handler has a write attribute
+ file_path = Path(file_handler)
+ except TypeError:
+ if not hasattr(file_handler, "write"):
+ raise TypeError(
+ f"Expected a path to a file, or a writable object, got {file_handler}"
+ ) from None
file_handler.write(config_as_str)
- except AttributeError:
- # Otherwise we expect a path to a file
- with open(file_handler, mode="w") as fh:
- fh.write(config_as_str)
+ else:
+ pdir = file_path.parent
+ if not pdir.exists():
+ warnings.warn(f"{pdir!s} does not exist, creating it (recursively)")
+ os.makedirs(pdir)
+ file_path.write_text(config_as_str)
@staticmethod
def get_global_config_file():
|
{"golden_diff": "diff --git a/yt/utilities/configure.py b/yt/utilities/configure.py\n--- a/yt/utilities/configure.py\n+++ b/yt/utilities/configure.py\n@@ -1,6 +1,7 @@\n import os\n import sys\n import warnings\n+from pathlib import Path\n from typing import Callable, List\n \n import tomli_w\n@@ -106,12 +107,19 @@\n config_as_str = tomli_w.dumps(value)\n \n try:\n- # Assuming file_handler has a write attribute\n+ file_path = Path(file_handler)\n+ except TypeError:\n+ if not hasattr(file_handler, \"write\"):\n+ raise TypeError(\n+ f\"Expected a path to a file, or a writable object, got {file_handler}\"\n+ ) from None\n file_handler.write(config_as_str)\n- except AttributeError:\n- # Otherwise we expect a path to a file\n- with open(file_handler, mode=\"w\") as fh:\n- fh.write(config_as_str)\n+ else:\n+ pdir = file_path.parent\n+ if not pdir.exists():\n+ warnings.warn(f\"{pdir!s} does not exist, creating it (recursively)\")\n+ os.makedirs(pdir)\n+ file_path.write_text(config_as_str)\n \n @staticmethod\n def get_global_config_file():\n", "issue": "REG: fail to configure yt from the command line if $HOME/.config/ doesn't exist\n### Bug report\r\n\r\n**Bug summary**\r\n\r\nThis was discovered downstream thanks to yt_astro_analysis' CI\r\n\r\n**Code for reproduction**\r\n```shell\r\nyt config set --global yt suppress_stream_logging True\r\n```\r\n\r\n**Actual outcome**\r\n\r\n```python-traceback\r\nTraceback (most recent call last):\r\n File \"/Users/robcleme/dev/yt-project/yt/yt/utilities/configure.py\", line 106, in write\r\n file_handler.write(config_as_str)\r\nAttributeError: 'str' object has no attribute 'write'\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/Users/robcleme/.pyenv/versions/38-yt-dev/bin/yt\", line 33, in <module>\r\n sys.exit(load_entry_point('yt', 'console_scripts', 'yt')())\r\n File \"/Users/robcleme/dev/yt-project/yt/yt/utilities/command_line.py\", line 1632, in run_main\r\n args.func(args)\r\n File \"/Users/robcleme/dev/yt-project/yt/yt/utilities/command_line.py\", line 224, in run\r\n self(args)\r\n File \"/Users/robcleme/dev/yt-project/yt/yt/utilities/command_line.py\", line 1418, in __call__\r\n set_config(args.section, args.option, args.value, self.config_file)\r\n File \"/Users/robcleme/dev/yt-project/yt/yt/utilities/configure.py\", line 173, in set_config\r\n write_config(config_file)\r\n File \"/Users/robcleme/dev/yt-project/yt/yt/utilities/configure.py\", line 177, in write_config\r\n CONFIG.write(config_file)\r\n File \"/Users/robcleme/dev/yt-project/yt/yt/utilities/configure.py\", line 109, in write\r\n with open(file_handler, mode=\"w\") as fh:\r\nFileNotFoundError: [Errno 2] No such file or directory: '/Users/robcleme/.config/yt/yt.toml'\r\n```\r\n\r\n**Expected outcome**\r\n\r\nThe configuration dir should be created.\r\nI bet this regression is caused by #3626\n", "before_files": [{"content": "import os\nimport sys\nimport warnings\nfrom typing import Callable, List\n\nimport tomli_w\nfrom more_itertools import always_iterable\n\nfrom yt.utilities.configuration_tree import ConfigLeaf, ConfigNode\n\nif sys.version_info >= (3, 11):\n import tomllib\nelse:\n import tomli as tomllib\n\nconfiguration_callbacks: List[Callable[[\"YTConfig\"], None]] = []\n\n\ndef config_dir():\n config_root = os.environ.get(\n \"XDG_CONFIG_HOME\", os.path.join(os.path.expanduser(\"~\"), \".config\")\n )\n conf_dir = os.path.join(config_root, \"yt\")\n return conf_dir\n\n\nclass YTConfig:\n def 
__init__(self, defaults=None):\n if defaults is None:\n defaults = {}\n self.config_root = ConfigNode(None)\n\n def get(self, section, *keys, callback=None):\n node_or_leaf = self.config_root.get(section, *keys)\n if isinstance(node_or_leaf, ConfigLeaf):\n if callback is not None:\n return callback(node_or_leaf)\n return node_or_leaf.value\n return node_or_leaf\n\n def get_most_specific(self, section, *keys, **kwargs):\n use_fallback = \"fallback\" in kwargs\n fallback = kwargs.pop(\"fallback\", None)\n try:\n return self.config_root.get_deepest_leaf(section, *keys)\n except KeyError as err:\n if use_fallback:\n return fallback\n else:\n raise err\n\n def update(self, new_values, metadata=None):\n if metadata is None:\n metadata = {}\n self.config_root.update(new_values, metadata)\n\n def has_section(self, section):\n try:\n self.config_root.get_child(section)\n return True\n except KeyError:\n return False\n\n def add_section(self, section):\n self.config_root.add_child(section)\n\n def remove_section(self, section):\n if self.has_section(section):\n self.config_root.remove_child(section)\n return True\n else:\n return False\n\n def set(self, *args, metadata=None):\n section, *keys, value = args\n if metadata is None:\n metadata = {\"source\": \"runtime\"}\n self.config_root.upsert_from_list(\n [section] + list(keys), value, extra_data=metadata\n )\n\n def remove(self, *args):\n self.config_root.pop_leaf(args)\n\n def read(self, file_names):\n file_names_read = []\n for fname in always_iterable(file_names):\n if not os.path.exists(fname):\n continue\n metadata = {\"source\": f\"file: {fname}\"}\n try:\n with open(fname, \"rb\") as fh:\n data = tomllib.load(fh)\n except tomllib.TOMLDecodeError as exc:\n warnings.warn(\n f\"Could not load configuration file {fname} (invalid TOML: {exc})\"\n )\n else:\n self.update(data, metadata=metadata)\n file_names_read.append(fname)\n\n return file_names_read\n\n def write(self, file_handler):\n value = self.config_root.as_dict()\n config_as_str = tomli_w.dumps(value)\n\n try:\n # Assuming file_handler has a write attribute\n file_handler.write(config_as_str)\n except AttributeError:\n # Otherwise we expect a path to a file\n with open(file_handler, mode=\"w\") as fh:\n fh.write(config_as_str)\n\n @staticmethod\n def get_global_config_file():\n return os.path.join(config_dir(), \"yt.toml\")\n\n @staticmethod\n def get_local_config_file():\n return os.path.join(os.path.abspath(os.curdir), \"yt.toml\")\n\n def __setitem__(self, args, value):\n section, *keys = always_iterable(args)\n self.set(section, *keys, value, metadata=None)\n\n def __getitem__(self, key):\n section, *keys = always_iterable(key)\n return self.get(section, *keys)\n\n def __contains__(self, item):\n return item in self.config_root\n\n # Add support for IPython rich display\n # see https://ipython.readthedocs.io/en/stable/config/integrating.html\n def _repr_json_(self):\n return self.config_root._repr_json_()\n\n\nCONFIG = YTConfig()\n\n\ndef _cast_bool_helper(value):\n if value == \"True\":\n return True\n elif value == \"False\":\n return False\n else:\n raise ValueError(\"Cannot safely cast to bool\")\n\n\ndef _expand_all(s):\n return os.path.expandvars(os.path.expanduser(s))\n\n\ndef _cast_value_helper(value, types=(_cast_bool_helper, int, float, _expand_all)):\n for t in types:\n try:\n retval = t(value)\n return retval\n except ValueError:\n pass\n\n\ndef get_config(section, option):\n *option_path, option_name = option.split(\".\")\n return CONFIG.get(section, *option_path, 
option_name)\n\n\ndef set_config(section, option, value, config_file):\n if not CONFIG.has_section(section):\n CONFIG.add_section(section)\n\n option_path = option.split(\".\")\n CONFIG.set(section, *option_path, _cast_value_helper(value))\n write_config(config_file)\n\n\ndef write_config(config_file):\n CONFIG.write(config_file)\n\n\ndef rm_config(section, option, config_file):\n option_path = option.split(\".\")\n CONFIG.remove(section, *option_path)\n write_config(config_file)\n", "path": "yt/utilities/configure.py"}]}
| 2,688 | 294 |
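The yt record above boils down to one behaviour change: `YTConfig.write` must accept either a writable object or a filesystem path, and for paths it has to create the missing parent directory (such as `~/.config/yt`) before writing. The sketch below paraphrases the record's golden diff as a standalone function; the name `write_config_text` is illustrative and not part of yt's API.

```python
import os
import warnings
from pathlib import Path


def write_config_text(file_handler, config_as_str: str) -> None:
    # Accept either a path-like target or an object with a write() method,
    # mirroring the patched YTConfig.write in yt/utilities/configure.py.
    try:
        file_path = Path(file_handler)
    except TypeError:
        if not hasattr(file_handler, "write"):
            raise TypeError(
                f"Expected a path to a file, or a writable object, got {file_handler}"
            ) from None
        file_handler.write(config_as_str)
    else:
        pdir = file_path.parent
        if not pdir.exists():
            warnings.warn(f"{pdir!s} does not exist, creating it (recursively)")
            os.makedirs(pdir)
        file_path.write_text(config_as_str)
```

Calling it with a string path whose directory does not yet exist now creates that directory instead of raising `FileNotFoundError`, which is exactly the regression the record describes.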
gh_patches_debug_11475
|
rasdani/github-patches
|
git_diff
|
huggingface__transformers-4477
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
❓ Warning : This overload of addcdiv_ is deprecated
# ❓ Questions & Help
When running the [official Colab example of GLUE](https://colab.research.google.com/github/huggingface/blog/blob/master/notebooks/trainer/01_text_classification.ipynb), during training I receive a `UserWarning` :
```
/pytorch/torch/csrc/utils/python_arg_parser.cpp:756: UserWarning: This overload of addcdiv_ is deprecated:
addcdiv_(Number value, Tensor tensor1, Tensor tensor2)
Consider using one of the following signatures instead:
addcdiv_(Tensor tensor1, Tensor tensor2, *, Number value)
```
---
**Is it expected ?**
</issue>
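The warning quoted above is purely about argument order: the positional `Number value` overload of `addcdiv_` is deprecated in favour of passing the scalar as a keyword argument after the two tensors. A minimal sketch of the two call styles, with made-up tensor values, follows; it matches the one-line change that this record's golden diff applies inside `AdamW.step`.

```python
import torch

param = torch.zeros(3)
exp_avg = torch.tensor([0.1, 0.2, 0.3])
denom = torch.tensor([1.0, 2.0, 4.0])
step_size = 0.01

# Deprecated overload: addcdiv_(Number value, Tensor tensor1, Tensor tensor2)
# param.addcdiv_(-step_size, exp_avg, denom)

# Preferred overload: addcdiv_(Tensor tensor1, Tensor tensor2, *, Number value)
param.addcdiv_(exp_avg, denom, value=-step_size)  # param += -step_size * exp_avg / denom
```

Both forms compute the same update; only the deprecated one emits the `UserWarning`.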
<code>
[start of src/transformers/optimization.py]
1 # coding=utf-8
2 # Copyright 2018 The Google AI Language Team Authors and The HuggingFace Inc. team.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15 """PyTorch optimization for BERT model."""
16
17 import logging
18 import math
19
20 import torch
21 from torch.optim import Optimizer
22 from torch.optim.lr_scheduler import LambdaLR
23
24
25 logger = logging.getLogger(__name__)
26
27
28 def get_constant_schedule(optimizer, last_epoch=-1):
29 """ Create a schedule with a constant learning rate.
30 """
31 return LambdaLR(optimizer, lambda _: 1, last_epoch=last_epoch)
32
33
34 def get_constant_schedule_with_warmup(optimizer, num_warmup_steps, last_epoch=-1):
35 """ Create a schedule with a constant learning rate preceded by a warmup
36 period during which the learning rate increases linearly between 0 and 1.
37 """
38
39 def lr_lambda(current_step):
40 if current_step < num_warmup_steps:
41 return float(current_step) / float(max(1.0, num_warmup_steps))
42 return 1.0
43
44 return LambdaLR(optimizer, lr_lambda, last_epoch=last_epoch)
45
46
47 def get_linear_schedule_with_warmup(optimizer, num_warmup_steps, num_training_steps, last_epoch=-1):
48 """ Create a schedule with a learning rate that decreases linearly after
49 linearly increasing during a warmup period.
50 """
51
52 def lr_lambda(current_step):
53 if current_step < num_warmup_steps:
54 return float(current_step) / float(max(1, num_warmup_steps))
55 return max(
56 0.0, float(num_training_steps - current_step) / float(max(1, num_training_steps - num_warmup_steps))
57 )
58
59 return LambdaLR(optimizer, lr_lambda, last_epoch)
60
61
62 def get_cosine_schedule_with_warmup(optimizer, num_warmup_steps, num_training_steps, num_cycles=0.5, last_epoch=-1):
63 """ Create a schedule with a learning rate that decreases following the
64 values of the cosine function between 0 and `pi * cycles` after a warmup
65 period during which it increases linearly between 0 and 1.
66 """
67
68 def lr_lambda(current_step):
69 if current_step < num_warmup_steps:
70 return float(current_step) / float(max(1, num_warmup_steps))
71 progress = float(current_step - num_warmup_steps) / float(max(1, num_training_steps - num_warmup_steps))
72 return max(0.0, 0.5 * (1.0 + math.cos(math.pi * float(num_cycles) * 2.0 * progress)))
73
74 return LambdaLR(optimizer, lr_lambda, last_epoch)
75
76
77 def get_cosine_with_hard_restarts_schedule_with_warmup(
78 optimizer, num_warmup_steps, num_training_steps, num_cycles=1.0, last_epoch=-1
79 ):
80 """ Create a schedule with a learning rate that decreases following the
81 values of the cosine function with several hard restarts, after a warmup
82 period during which it increases linearly between 0 and 1.
83 """
84
85 def lr_lambda(current_step):
86 if current_step < num_warmup_steps:
87 return float(current_step) / float(max(1, num_warmup_steps))
88 progress = float(current_step - num_warmup_steps) / float(max(1, num_training_steps - num_warmup_steps))
89 if progress >= 1.0:
90 return 0.0
91 return max(0.0, 0.5 * (1.0 + math.cos(math.pi * ((float(num_cycles) * progress) % 1.0))))
92
93 return LambdaLR(optimizer, lr_lambda, last_epoch)
94
95
96 class AdamW(Optimizer):
97 """ Implements Adam algorithm with weight decay fix.
98
99 Parameters:
100 lr (float): learning rate. Default 1e-3.
101 betas (tuple of 2 floats): Adams beta parameters (b1, b2). Default: (0.9, 0.999)
102 eps (float): Adams epsilon. Default: 1e-6
103 weight_decay (float): Weight decay. Default: 0.0
104 correct_bias (bool): can be set to False to avoid correcting bias in Adam (e.g. like in Bert TF repository). Default True.
105 """
106
107 def __init__(self, params, lr=1e-3, betas=(0.9, 0.999), eps=1e-6, weight_decay=0.0, correct_bias=True):
108 if lr < 0.0:
109 raise ValueError("Invalid learning rate: {} - should be >= 0.0".format(lr))
110 if not 0.0 <= betas[0] < 1.0:
111 raise ValueError("Invalid beta parameter: {} - should be in [0.0, 1.0[".format(betas[0]))
112 if not 0.0 <= betas[1] < 1.0:
113 raise ValueError("Invalid beta parameter: {} - should be in [0.0, 1.0[".format(betas[1]))
114 if not 0.0 <= eps:
115 raise ValueError("Invalid epsilon value: {} - should be >= 0.0".format(eps))
116 defaults = dict(lr=lr, betas=betas, eps=eps, weight_decay=weight_decay, correct_bias=correct_bias)
117 super().__init__(params, defaults)
118
119 def step(self, closure=None):
120 """Performs a single optimization step.
121
122 Arguments:
123 closure (callable, optional): A closure that reevaluates the model
124 and returns the loss.
125 """
126 loss = None
127 if closure is not None:
128 loss = closure()
129
130 for group in self.param_groups:
131 for p in group["params"]:
132 if p.grad is None:
133 continue
134 grad = p.grad.data
135 if grad.is_sparse:
136 raise RuntimeError("Adam does not support sparse gradients, please consider SparseAdam instead")
137
138 state = self.state[p]
139
140 # State initialization
141 if len(state) == 0:
142 state["step"] = 0
143 # Exponential moving average of gradient values
144 state["exp_avg"] = torch.zeros_like(p.data)
145 # Exponential moving average of squared gradient values
146 state["exp_avg_sq"] = torch.zeros_like(p.data)
147
148 exp_avg, exp_avg_sq = state["exp_avg"], state["exp_avg_sq"]
149 beta1, beta2 = group["betas"]
150
151 state["step"] += 1
152
153 # Decay the first and second moment running average coefficient
154 # In-place operations to update the averages at the same time
155 exp_avg.mul_(beta1).add_(grad, alpha=1.0 - beta1)
156 exp_avg_sq.mul_(beta2).addcmul_(grad, grad, value=1.0 - beta2)
157 denom = exp_avg_sq.sqrt().add_(group["eps"])
158
159 step_size = group["lr"]
160 if group["correct_bias"]: # No bias correction for Bert
161 bias_correction1 = 1.0 - beta1 ** state["step"]
162 bias_correction2 = 1.0 - beta2 ** state["step"]
163 step_size = step_size * math.sqrt(bias_correction2) / bias_correction1
164
165 p.data.addcdiv_(-step_size, exp_avg, denom)
166
167 # Just adding the square of the weights to the loss function is *not*
168 # the correct way of using L2 regularization/weight decay with Adam,
169 # since that will interact with the m and v parameters in strange ways.
170 #
171 # Instead we want to decay the weights in a manner that doesn't interact
172 # with the m/v parameters. This is equivalent to adding the square
173 # of the weights to the loss with plain (non-momentum) SGD.
174 # Add weight decay at the end (fixed version)
175 if group["weight_decay"] > 0.0:
176 p.data.add_(p.data, alpha=-group["lr"] * group["weight_decay"])
177
178 return loss
179
[end of src/transformers/optimization.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/transformers/optimization.py b/src/transformers/optimization.py
--- a/src/transformers/optimization.py
+++ b/src/transformers/optimization.py
@@ -162,7 +162,7 @@
bias_correction2 = 1.0 - beta2 ** state["step"]
step_size = step_size * math.sqrt(bias_correction2) / bias_correction1
- p.data.addcdiv_(-step_size, exp_avg, denom)
+ p.data.addcdiv_(exp_avg, denom, value=-step_size)
# Just adding the square of the weights to the loss function is *not*
# the correct way of using L2 regularization/weight decay with Adam,
|
{"golden_diff": "diff --git a/src/transformers/optimization.py b/src/transformers/optimization.py\n--- a/src/transformers/optimization.py\n+++ b/src/transformers/optimization.py\n@@ -162,7 +162,7 @@\n bias_correction2 = 1.0 - beta2 ** state[\"step\"]\n step_size = step_size * math.sqrt(bias_correction2) / bias_correction1\n \n- p.data.addcdiv_(-step_size, exp_avg, denom)\n+ p.data.addcdiv_(exp_avg, denom, value=-step_size)\n \n # Just adding the square of the weights to the loss function is *not*\n # the correct way of using L2 regularization/weight decay with Adam,\n", "issue": "\u2753 Warning : This overload of addcdiv_ is deprecated\n# \u2753 Questions & Help\r\n\r\nWhen running the [official Colab example of GLUE](https://colab.research.google.com/github/huggingface/blog/blob/master/notebooks/trainer/01_text_classification.ipynb), during training I receive a `UserWarning` :\r\n\r\n```\r\n/pytorch/torch/csrc/utils/python_arg_parser.cpp:756: UserWarning: This overload of addcdiv_ is deprecated:\r\n\taddcdiv_(Number value, Tensor tensor1, Tensor tensor2)\r\nConsider using one of the following signatures instead:\r\n\taddcdiv_(Tensor tensor1, Tensor tensor2, *, Number value)\r\n```\r\n\r\n---\r\n\r\n**Is it expected ?**\r\n\r\n\n", "before_files": [{"content": "# coding=utf-8\n# Copyright 2018 The Google AI Language Team Authors and The HuggingFace Inc. team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"PyTorch optimization for BERT model.\"\"\"\n\nimport logging\nimport math\n\nimport torch\nfrom torch.optim import Optimizer\nfrom torch.optim.lr_scheduler import LambdaLR\n\n\nlogger = logging.getLogger(__name__)\n\n\ndef get_constant_schedule(optimizer, last_epoch=-1):\n \"\"\" Create a schedule with a constant learning rate.\n \"\"\"\n return LambdaLR(optimizer, lambda _: 1, last_epoch=last_epoch)\n\n\ndef get_constant_schedule_with_warmup(optimizer, num_warmup_steps, last_epoch=-1):\n \"\"\" Create a schedule with a constant learning rate preceded by a warmup\n period during which the learning rate increases linearly between 0 and 1.\n \"\"\"\n\n def lr_lambda(current_step):\n if current_step < num_warmup_steps:\n return float(current_step) / float(max(1.0, num_warmup_steps))\n return 1.0\n\n return LambdaLR(optimizer, lr_lambda, last_epoch=last_epoch)\n\n\ndef get_linear_schedule_with_warmup(optimizer, num_warmup_steps, num_training_steps, last_epoch=-1):\n \"\"\" Create a schedule with a learning rate that decreases linearly after\n linearly increasing during a warmup period.\n \"\"\"\n\n def lr_lambda(current_step):\n if current_step < num_warmup_steps:\n return float(current_step) / float(max(1, num_warmup_steps))\n return max(\n 0.0, float(num_training_steps - current_step) / float(max(1, num_training_steps - num_warmup_steps))\n )\n\n return LambdaLR(optimizer, lr_lambda, last_epoch)\n\n\ndef get_cosine_schedule_with_warmup(optimizer, num_warmup_steps, num_training_steps, num_cycles=0.5, last_epoch=-1):\n \"\"\" Create a schedule with a learning rate that decreases 
following the\n values of the cosine function between 0 and `pi * cycles` after a warmup\n period during which it increases linearly between 0 and 1.\n \"\"\"\n\n def lr_lambda(current_step):\n if current_step < num_warmup_steps:\n return float(current_step) / float(max(1, num_warmup_steps))\n progress = float(current_step - num_warmup_steps) / float(max(1, num_training_steps - num_warmup_steps))\n return max(0.0, 0.5 * (1.0 + math.cos(math.pi * float(num_cycles) * 2.0 * progress)))\n\n return LambdaLR(optimizer, lr_lambda, last_epoch)\n\n\ndef get_cosine_with_hard_restarts_schedule_with_warmup(\n optimizer, num_warmup_steps, num_training_steps, num_cycles=1.0, last_epoch=-1\n):\n \"\"\" Create a schedule with a learning rate that decreases following the\n values of the cosine function with several hard restarts, after a warmup\n period during which it increases linearly between 0 and 1.\n \"\"\"\n\n def lr_lambda(current_step):\n if current_step < num_warmup_steps:\n return float(current_step) / float(max(1, num_warmup_steps))\n progress = float(current_step - num_warmup_steps) / float(max(1, num_training_steps - num_warmup_steps))\n if progress >= 1.0:\n return 0.0\n return max(0.0, 0.5 * (1.0 + math.cos(math.pi * ((float(num_cycles) * progress) % 1.0))))\n\n return LambdaLR(optimizer, lr_lambda, last_epoch)\n\n\nclass AdamW(Optimizer):\n \"\"\" Implements Adam algorithm with weight decay fix.\n\n Parameters:\n lr (float): learning rate. Default 1e-3.\n betas (tuple of 2 floats): Adams beta parameters (b1, b2). Default: (0.9, 0.999)\n eps (float): Adams epsilon. Default: 1e-6\n weight_decay (float): Weight decay. Default: 0.0\n correct_bias (bool): can be set to False to avoid correcting bias in Adam (e.g. like in Bert TF repository). Default True.\n \"\"\"\n\n def __init__(self, params, lr=1e-3, betas=(0.9, 0.999), eps=1e-6, weight_decay=0.0, correct_bias=True):\n if lr < 0.0:\n raise ValueError(\"Invalid learning rate: {} - should be >= 0.0\".format(lr))\n if not 0.0 <= betas[0] < 1.0:\n raise ValueError(\"Invalid beta parameter: {} - should be in [0.0, 1.0[\".format(betas[0]))\n if not 0.0 <= betas[1] < 1.0:\n raise ValueError(\"Invalid beta parameter: {} - should be in [0.0, 1.0[\".format(betas[1]))\n if not 0.0 <= eps:\n raise ValueError(\"Invalid epsilon value: {} - should be >= 0.0\".format(eps))\n defaults = dict(lr=lr, betas=betas, eps=eps, weight_decay=weight_decay, correct_bias=correct_bias)\n super().__init__(params, defaults)\n\n def step(self, closure=None):\n \"\"\"Performs a single optimization step.\n\n Arguments:\n closure (callable, optional): A closure that reevaluates the model\n and returns the loss.\n \"\"\"\n loss = None\n if closure is not None:\n loss = closure()\n\n for group in self.param_groups:\n for p in group[\"params\"]:\n if p.grad is None:\n continue\n grad = p.grad.data\n if grad.is_sparse:\n raise RuntimeError(\"Adam does not support sparse gradients, please consider SparseAdam instead\")\n\n state = self.state[p]\n\n # State initialization\n if len(state) == 0:\n state[\"step\"] = 0\n # Exponential moving average of gradient values\n state[\"exp_avg\"] = torch.zeros_like(p.data)\n # Exponential moving average of squared gradient values\n state[\"exp_avg_sq\"] = torch.zeros_like(p.data)\n\n exp_avg, exp_avg_sq = state[\"exp_avg\"], state[\"exp_avg_sq\"]\n beta1, beta2 = group[\"betas\"]\n\n state[\"step\"] += 1\n\n # Decay the first and second moment running average coefficient\n # In-place operations to update the averages at the same time\n 
exp_avg.mul_(beta1).add_(grad, alpha=1.0 - beta1)\n exp_avg_sq.mul_(beta2).addcmul_(grad, grad, value=1.0 - beta2)\n denom = exp_avg_sq.sqrt().add_(group[\"eps\"])\n\n step_size = group[\"lr\"]\n if group[\"correct_bias\"]: # No bias correction for Bert\n bias_correction1 = 1.0 - beta1 ** state[\"step\"]\n bias_correction2 = 1.0 - beta2 ** state[\"step\"]\n step_size = step_size * math.sqrt(bias_correction2) / bias_correction1\n\n p.data.addcdiv_(-step_size, exp_avg, denom)\n\n # Just adding the square of the weights to the loss function is *not*\n # the correct way of using L2 regularization/weight decay with Adam,\n # since that will interact with the m and v parameters in strange ways.\n #\n # Instead we want to decay the weights in a manner that doesn't interact\n # with the m/v parameters. This is equivalent to adding the square\n # of the weights to the loss with plain (non-momentum) SGD.\n # Add weight decay at the end (fixed version)\n if group[\"weight_decay\"] > 0.0:\n p.data.add_(p.data, alpha=-group[\"lr\"] * group[\"weight_decay\"])\n\n return loss\n", "path": "src/transformers/optimization.py"}]}
| 3,001 | 163 |
gh_patches_debug_15174
|
rasdani/github-patches
|
git_diff
|
wagtail__wagtail-6389
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
PathOverflow: Path Overflow from: '**********ZZZZ'
### Issue Summary
Creating a new page from the administration area raises a PathOverflow error.
```
PathOverflow: Path Overflow from: '000100020003ZZZZ'
File "django/core/handlers/exception.py", line 34, in inner
response = get_response(request)
File "django/core/handlers/base.py", line 126, in _get_response
response = self.process_exception_by_middleware(e, request)
File "django/core/handlers/base.py", line 124, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "django/views/decorators/cache.py", line 44, in _wrapped_view_func
response = view_func(request, *args, **kwargs)
File "wagtail/admin/urls/__init__.py", line 102, in wrapper
return view_func(request, *args, **kwargs)
File "wagtail/admin/decorators.py", line 34, in decorated_view
return view_func(request, *args, **kwargs)
File "wagtail/admin/views/pages.py", line 224, in create
parent_page.add_child(instance=page)
File "treebeard/mp_tree.py", line 1013, in add_child
return MP_AddChildHandler(self, **kwargs).process()
File "treebeard/mp_tree.py", line 387, in process
newobj.path = self.node.get_last_child()._inc_path()
File "treebeard/mp_tree.py", line 1114, in _inc_path
raise PathOverflow(_("Path Overflow from: '%s'" % (self.path, )))
```
The only information I could find potentially regarding the issue was on SO at the following link: https://stackoverflow.com/questions/54166821/wagtail-pathoverflow-on-adding-new-child-page
There are only a handful of pages, so I'm not sure that particular issue applies though.
### Steps to Reproduce
I haven't been able to reproduce this myself, however a client has been seeing this behavior, which at the time I've just resolved by modifying the ZZZZ at the end to be numeric. I'm in the process of gathering some steps they may have taken to replicate it from this end (where they tried to add the page from)
* I have confirmed that this issue can be reproduced as described on a fresh Wagtail project: no
### Technical details
* Python version: 3.6.8
* Django version: 2.1.4
* Wagtail version: 2.4
* Browser version: N/A
</issue>
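For intuition on the traceback: treebeard stores tree position as fixed-width base-36 path segments, so a node whose last segment is already `ZZZZ` has no next value, and `_inc_path()` raises `PathOverflow`. The snippet below imitates that arithmetic only to show the failure mode; it is a simplified stand-in, not treebeard's or Wagtail's actual code.

```python
ALPHABET = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"


def inc_segment(segment: str) -> str:
    # Increment one fixed-width base-36 segment, e.g. '0003' -> '0004'.
    value = int(segment, 36) + 1
    digits = ""
    while value:
        value, rem = divmod(value, 36)
        digits = ALPHABET[rem] + digits
    if len(digits) > len(segment):
        raise OverflowError(f"Path Overflow from: '{segment}'")
    return digits.rjust(len(segment), "0")


print(inc_segment("0003"))  # '0004'
try:
    inc_segment("ZZZZ")
except OverflowError as exc:
    print(exc)  # analogue of treebeard's PathOverflow
```

The golden diff further down changes the preview view to assign the next available child path rather than the maximal `...ZZZZ` interval value, so a later real child can still be added without overflowing.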
<code>
[start of wagtail/admin/views/pages/preview.py]
1 from time import time
2
3 from django.contrib.contenttypes.models import ContentType
4 from django.core.exceptions import PermissionDenied
5 from django.http import Http404, JsonResponse
6 from django.http.request import QueryDict
7 from django.shortcuts import get_object_or_404
8 from django.template.response import TemplateResponse
9 from django.views.generic import View
10
11 from wagtail.core.models import Page
12
13
14 def view_draft(request, page_id):
15 page = get_object_or_404(Page, id=page_id).get_latest_revision_as_page()
16 perms = page.permissions_for_user(request.user)
17 if not (perms.can_publish() or perms.can_edit()):
18 raise PermissionDenied
19
20 try:
21 preview_mode = page.default_preview_mode
22 except IndexError:
23 raise PermissionDenied
24
25 return page.make_preview_request(request, preview_mode)
26
27
28 class PreviewOnEdit(View):
29 http_method_names = ('post', 'get')
30 preview_expiration_timeout = 60 * 60 * 24 # seconds
31 session_key_prefix = 'wagtail-preview-'
32
33 def remove_old_preview_data(self):
34 expiration = time() - self.preview_expiration_timeout
35 expired_keys = [
36 k for k, v in self.request.session.items()
37 if k.startswith(self.session_key_prefix) and v[1] < expiration]
38 # Removes the session key gracefully
39 for k in expired_keys:
40 self.request.session.pop(k)
41
42 @property
43 def session_key(self):
44 return self.session_key_prefix + ','.join(self.args)
45
46 def get_page(self):
47 return get_object_or_404(Page,
48 id=self.kwargs["page_id"]).get_latest_revision_as_page()
49
50 def get_form(self, page, query_dict):
51 form_class = page.get_edit_handler().get_form_class()
52 parent_page = page.get_parent().specific
53
54 if self.session_key not in self.request.session:
55 # Session key not in session, returning null form
56 return form_class(instance=page, parent_page=parent_page)
57
58 return form_class(query_dict, instance=page, parent_page=parent_page)
59
60 def post(self, request, *args, **kwargs):
61 # TODO: Handle request.FILES.
62 request.session[self.session_key] = request.POST.urlencode(), time()
63 self.remove_old_preview_data()
64 form = self.get_form(self.get_page(), request.POST)
65 return JsonResponse({'is_valid': form.is_valid()})
66
67 def error_response(self, page):
68 return TemplateResponse(
69 self.request, 'wagtailadmin/pages/preview_error.html',
70 {'page': page}
71 )
72
73 def get(self, request, *args, **kwargs):
74 page = self.get_page()
75
76 post_data, timestamp = self.request.session.get(self.session_key,
77 (None, None))
78 if not isinstance(post_data, str):
79 post_data = ''
80 form = self.get_form(page, QueryDict(post_data))
81
82 if not form.is_valid():
83 return self.error_response(page)
84
85 form.save(commit=False)
86
87 try:
88 preview_mode = request.GET.get('mode', page.default_preview_mode)
89 except IndexError:
90 raise PermissionDenied
91
92 return page.make_preview_request(request, preview_mode)
93
94
95 class PreviewOnCreate(PreviewOnEdit):
96 def get_page(self):
97 content_type_app_name = self.kwargs["content_type_app_name"]
98 content_type_model_name = self.kwargs["content_type_model_name"]
99 parent_page_id = self.kwargs["parent_page_id"]
100 try:
101 content_type = ContentType.objects.get_by_natural_key(
102 content_type_app_name, content_type_model_name)
103 except ContentType.DoesNotExist:
104 raise Http404
105
106 page = content_type.model_class()()
107 parent_page = get_object_or_404(Page, id=parent_page_id).specific
108 # We need to populate treebeard's path / depth fields in order to
109 # pass validation. We can't make these 100% consistent with the rest
110 # of the tree without making actual database changes (such as
111 # incrementing the parent's numchild field), but by calling treebeard's
112 # internal _get_path method, we can set a 'realistic' value that will
113 # hopefully enable tree traversal operations
114 # to at least partially work.
115 page.depth = parent_page.depth + 1
116 # Puts the page at the maximum possible path
117 # for a child of `parent_page`.
118 page.path = Page._get_children_path_interval(parent_page.path)[1]
119 return page
120
121 def get_form(self, page, query_dict):
122 form = super().get_form(page, query_dict)
123 if form.is_valid():
124 # Ensures our unsaved page has a suitable url.
125 form.instance.set_url_path(form.parent_page)
126
127 form.instance.full_clean()
128 return form
129
[end of wagtail/admin/views/pages/preview.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/wagtail/admin/views/pages/preview.py b/wagtail/admin/views/pages/preview.py
--- a/wagtail/admin/views/pages/preview.py
+++ b/wagtail/admin/views/pages/preview.py
@@ -113,9 +113,15 @@
# hopefully enable tree traversal operations
# to at least partially work.
page.depth = parent_page.depth + 1
- # Puts the page at the maximum possible path
+ # Puts the page at the next available path
# for a child of `parent_page`.
- page.path = Page._get_children_path_interval(parent_page.path)[1]
+ if parent_page.is_leaf():
+ # set the path as the first child of parent_page
+ page.path = page._get_path(parent_page.path, page.depth, 1)
+ else:
+ # add the new page after the last child of parent_page
+ page.path = parent_page.get_last_child()._inc_path()
+
return page
def get_form(self, page, query_dict):
|
{"golden_diff": "diff --git a/wagtail/admin/views/pages/preview.py b/wagtail/admin/views/pages/preview.py\n--- a/wagtail/admin/views/pages/preview.py\n+++ b/wagtail/admin/views/pages/preview.py\n@@ -113,9 +113,15 @@\n # hopefully enable tree traversal operations\n # to at least partially work.\n page.depth = parent_page.depth + 1\n- # Puts the page at the maximum possible path\n+ # Puts the page at the next available path\n # for a child of `parent_page`.\n- page.path = Page._get_children_path_interval(parent_page.path)[1]\n+ if parent_page.is_leaf():\n+ # set the path as the first child of parent_page\n+ page.path = page._get_path(parent_page.path, page.depth, 1)\n+ else:\n+ # add the new page after the last child of parent_page\n+ page.path = parent_page.get_last_child()._inc_path()\n+\n return page\n \n def get_form(self, page, query_dict):\n", "issue": "PathOverflow: Path Overflow from: '**********ZZZZ'\n### Issue Summary\r\n\r\nCreating a new page from the administration area raises a PathOverflow error.\r\n\r\n```\r\nPathOverflow: Path Overflow from: '000100020003ZZZZ'\r\n File \"django/core/handlers/exception.py\", line 34, in inner\r\n response = get_response(request)\r\n File \"django/core/handlers/base.py\", line 126, in _get_response\r\n response = self.process_exception_by_middleware(e, request)\r\n File \"django/core/handlers/base.py\", line 124, in _get_response\r\n response = wrapped_callback(request, *callback_args, **callback_kwargs)\r\n File \"django/views/decorators/cache.py\", line 44, in _wrapped_view_func\r\n response = view_func(request, *args, **kwargs)\r\n File \"wagtail/admin/urls/__init__.py\", line 102, in wrapper\r\n return view_func(request, *args, **kwargs)\r\n File \"wagtail/admin/decorators.py\", line 34, in decorated_view\r\n return view_func(request, *args, **kwargs)\r\n File \"wagtail/admin/views/pages.py\", line 224, in create\r\n parent_page.add_child(instance=page)\r\n File \"treebeard/mp_tree.py\", line 1013, in add_child\r\n return MP_AddChildHandler(self, **kwargs).process()\r\n File \"treebeard/mp_tree.py\", line 387, in process\r\n newobj.path = self.node.get_last_child()._inc_path()\r\n File \"treebeard/mp_tree.py\", line 1114, in _inc_path\r\n raise PathOverflow(_(\"Path Overflow from: '%s'\" % (self.path, )))\r\n```\r\n\r\nThe only information I could find potentially regarding the issue was on SO at the following link: https://stackoverflow.com/questions/54166821/wagtail-pathoverflow-on-adding-new-child-page\r\n\r\nThere are only a handful of pages, so I'm not sure that particular issue applies though.\r\n\r\n### Steps to Reproduce\r\n\r\nI haven't been able to reproduce this myself, however a client has been seeing this behavior, which at the time I've just resolved by modifying the ZZZZ at the end to be numeric. 
I'm in the process of gathering some steps they may have taken to replicate it from this end (where they tried to add the page from)\r\n\r\n* I have confirmed that this issue can be reproduced as described on a fresh Wagtail project: no\r\n\r\n\r\n### Technical details\r\n\r\n* Python version: 3.6.8\r\n* Django version: 2.1.4\r\n* Wagtail version: 2.4\r\n* Browser version: N/A\r\n\n", "before_files": [{"content": "from time import time\n\nfrom django.contrib.contenttypes.models import ContentType\nfrom django.core.exceptions import PermissionDenied\nfrom django.http import Http404, JsonResponse\nfrom django.http.request import QueryDict\nfrom django.shortcuts import get_object_or_404\nfrom django.template.response import TemplateResponse\nfrom django.views.generic import View\n\nfrom wagtail.core.models import Page\n\n\ndef view_draft(request, page_id):\n page = get_object_or_404(Page, id=page_id).get_latest_revision_as_page()\n perms = page.permissions_for_user(request.user)\n if not (perms.can_publish() or perms.can_edit()):\n raise PermissionDenied\n\n try:\n preview_mode = page.default_preview_mode\n except IndexError:\n raise PermissionDenied\n\n return page.make_preview_request(request, preview_mode)\n\n\nclass PreviewOnEdit(View):\n http_method_names = ('post', 'get')\n preview_expiration_timeout = 60 * 60 * 24 # seconds\n session_key_prefix = 'wagtail-preview-'\n\n def remove_old_preview_data(self):\n expiration = time() - self.preview_expiration_timeout\n expired_keys = [\n k for k, v in self.request.session.items()\n if k.startswith(self.session_key_prefix) and v[1] < expiration]\n # Removes the session key gracefully\n for k in expired_keys:\n self.request.session.pop(k)\n\n @property\n def session_key(self):\n return self.session_key_prefix + ','.join(self.args)\n\n def get_page(self):\n return get_object_or_404(Page,\n id=self.kwargs[\"page_id\"]).get_latest_revision_as_page()\n\n def get_form(self, page, query_dict):\n form_class = page.get_edit_handler().get_form_class()\n parent_page = page.get_parent().specific\n\n if self.session_key not in self.request.session:\n # Session key not in session, returning null form\n return form_class(instance=page, parent_page=parent_page)\n\n return form_class(query_dict, instance=page, parent_page=parent_page)\n\n def post(self, request, *args, **kwargs):\n # TODO: Handle request.FILES.\n request.session[self.session_key] = request.POST.urlencode(), time()\n self.remove_old_preview_data()\n form = self.get_form(self.get_page(), request.POST)\n return JsonResponse({'is_valid': form.is_valid()})\n\n def error_response(self, page):\n return TemplateResponse(\n self.request, 'wagtailadmin/pages/preview_error.html',\n {'page': page}\n )\n\n def get(self, request, *args, **kwargs):\n page = self.get_page()\n\n post_data, timestamp = self.request.session.get(self.session_key,\n (None, None))\n if not isinstance(post_data, str):\n post_data = ''\n form = self.get_form(page, QueryDict(post_data))\n\n if not form.is_valid():\n return self.error_response(page)\n\n form.save(commit=False)\n\n try:\n preview_mode = request.GET.get('mode', page.default_preview_mode)\n except IndexError:\n raise PermissionDenied\n\n return page.make_preview_request(request, preview_mode)\n\n\nclass PreviewOnCreate(PreviewOnEdit):\n def get_page(self):\n content_type_app_name = self.kwargs[\"content_type_app_name\"]\n content_type_model_name = self.kwargs[\"content_type_model_name\"]\n parent_page_id = self.kwargs[\"parent_page_id\"]\n try:\n content_type = 
ContentType.objects.get_by_natural_key(\n content_type_app_name, content_type_model_name)\n except ContentType.DoesNotExist:\n raise Http404\n\n page = content_type.model_class()()\n parent_page = get_object_or_404(Page, id=parent_page_id).specific\n # We need to populate treebeard's path / depth fields in order to\n # pass validation. We can't make these 100% consistent with the rest\n # of the tree without making actual database changes (such as\n # incrementing the parent's numchild field), but by calling treebeard's\n # internal _get_path method, we can set a 'realistic' value that will\n # hopefully enable tree traversal operations\n # to at least partially work.\n page.depth = parent_page.depth + 1\n # Puts the page at the maximum possible path\n # for a child of `parent_page`.\n page.path = Page._get_children_path_interval(parent_page.path)[1]\n return page\n\n def get_form(self, page, query_dict):\n form = super().get_form(page, query_dict)\n if form.is_valid():\n # Ensures our unsaved page has a suitable url.\n form.instance.set_url_path(form.parent_page)\n\n form.instance.full_clean()\n return form\n", "path": "wagtail/admin/views/pages/preview.py"}]}
| 2,454 | 233 |
gh_patches_debug_27864
|
rasdani/github-patches
|
git_diff
|
liqd__a4-meinberlin-5158
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
notification sent for modules in de/unpublished project
**URL:** (depending on follow status, so URL does not help here)
**user:** follower
**expected behaviour:** I should not get a notification for modules in unpublished projects
**behaviour:** If I follow an unpublished or depublished project (e.g. because a project had been published and after a participation was depublished or because the initiator followed in preview before publishing) I get a notification eg for start of participation. The Module has to be added to project.
**important screensize:**
**device & browser:**
**Comment/Question:** it is an edge case but was topic in two support questions
Screenshot?
</issue>
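A minimal sketch of the guard this report asks for: skip draft projects when collecting the objects that trigger follower notifications. The `is_draft` flag and the `starts_within` manager method both appear in code quoted later in this record; the exact placement of the filter here is illustrative rather than a quote of the eventual fix.

```python
from meinberlin.apps.offlineevents.models import OfflineEvent

# Only schedule "event starting" actions for events whose project is published;
# followers of a draft (or re-unpublished) project then never get the email.
upcoming_events = OfflineEvent.objects.starts_within(hours=72).exclude(
    project__is_draft=True
)
```

The same `project__is_draft` exclusion could be applied to any of the follower querysets in `emails.py` if notifications for other action types need the guard as well.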
<code>
[start of meinberlin/apps/notifications/emails.py]
1 from django.contrib import auth
2
3 from meinberlin.apps.contrib.emails import Email
4
5 User = auth.get_user_model()
6
7
8 def _exclude_actor(receivers, actor):
9 if not actor:
10 return receivers
11
12 if hasattr(receivers, "exclude"):
13 return receivers.exclude(id=actor.id)
14
15 return [receiver for receiver in receivers if not receiver == actor]
16
17
18 def _exclude_moderators(receivers, action):
19 if hasattr(action, "project"):
20 moderator_ids = action.project.moderators.values_list("id", flat=True)
21
22 if hasattr(receivers, "exclude"):
23 return receivers.exclude(id__in=moderator_ids)
24
25 return [user for user in receivers if user.id not in moderator_ids]
26
27 return receivers
28
29
30 def _exclude_notifications_disabled(receivers):
31 if hasattr(receivers, "filter"):
32 return receivers.filter(get_notifications=True)
33
34 return [user for user in receivers if user.get_notifications]
35
36
37 class NotifyCreatorEmail(Email):
38 template_name = "meinberlin_notifications/emails/notify_creator"
39
40 def get_receivers(self):
41 action = self.object
42 if hasattr(action.target, "creator"):
43 receivers = [action.target.creator]
44 receivers = _exclude_notifications_disabled(receivers)
45 receivers = _exclude_actor(receivers, action.actor)
46 receivers = _exclude_moderators(receivers, action)
47 return receivers
48 return []
49
50
51 class NotifyCreatorOrContactOnModeratorFeedback(Email):
52 template_name = (
53 "meinberlin_notifications/emails/notify_creator_on_moderator_feedback"
54 )
55
56 def get_receivers(self):
57 if hasattr(self.object, "contact_email"):
58 # send to contact
59 receivers = [self.object.contact_email]
60 else:
61 # send to creator
62 receivers = [self.object.creator]
63 receivers = _exclude_notifications_disabled(receivers)
64 return receivers
65
66 def get_context(self):
67 context = super().get_context()
68 context["object"] = self.object
69 if not hasattr(self.object, "contact_email"):
70 # send to creator
71 context["send_to_creator"] = True
72 return context
73
74
75 class NotifyModeratorsEmail(Email):
76 template_name = "meinberlin_notifications/emails/notify_moderator"
77
78 def get_receivers(self):
79 action = self.object
80 receivers = action.project.moderators.all()
81 receivers = _exclude_actor(receivers, action.actor)
82 receivers = _exclude_notifications_disabled(receivers)
83 return receivers
84
85
86 class NotifyInitiatorsOnProjectCreatedEmail(Email):
87 template_name = "meinberlin_notifications/emails/notify_initiators_project_created"
88
89 def get_receivers(self):
90 project = self.object
91 creator = User.objects.get(pk=self.kwargs["creator_pk"])
92 receivers = project.organisation.initiators.all()
93 receivers = _exclude_actor(receivers, creator)
94 receivers = _exclude_notifications_disabled(receivers)
95 return receivers
96
97 def get_context(self):
98 context = super().get_context()
99 creator = User.objects.get(pk=self.kwargs["creator_pk"])
100 context["creator"] = creator
101 context["project"] = self.object
102 return context
103
104
105 class NotifyFollowersOnPhaseStartedEmail(Email):
106 template_name = "meinberlin_notifications/emails" "/notify_followers_phase_started"
107
108 def get_receivers(self):
109 action = self.object
110 receivers = User.objects.filter(
111 follow__project=action.project,
112 follow__enabled=True,
113 )
114 receivers = _exclude_notifications_disabled(receivers)
115 return receivers
116
117
118 class NotifyFollowersOnPhaseIsOverSoonEmail(Email):
119 template_name = (
120 "meinberlin_notifications/emails" "/notify_followers_phase_over_soon"
121 )
122
123 def get_receivers(self):
124 action = self.object
125 receivers = User.objects.filter(
126 follow__project=action.project,
127 follow__enabled=True,
128 )
129 receivers = _exclude_notifications_disabled(receivers)
130 return receivers
131
132
133 class NotifyFollowersOnUpcommingEventEmail(Email):
134 template_name = (
135 "meinberlin_notifications/emails" "/notify_followers_event_upcomming"
136 )
137
138 def get_receivers(self):
139 action = self.object
140 receivers = User.objects.filter(
141 follow__project=action.project,
142 follow__enabled=True,
143 )
144 receivers = _exclude_notifications_disabled(receivers)
145 return receivers
146
[end of meinberlin/apps/notifications/emails.py]
[start of meinberlin/apps/notifications/signals.py]
1 from django.contrib.auth import get_user_model
2 from django.db.models import signals
3 from django.dispatch import receiver
4
5 from adhocracy4.actions.models import Action
6 from adhocracy4.actions.verbs import Verbs
7 from adhocracy4.dashboard import signals as dashboard_signals
8 from adhocracy4.follows.models import Follow
9 from adhocracy4.projects.models import Project
10
11 from . import emails
12
13 User = get_user_model()
14
15
16 @receiver(signals.post_save, sender=Action)
17 def send_notifications(instance, created, **kwargs):
18 action = instance
19 verb = Verbs(action.verb)
20
21 if action.type in ("item", "comment") and verb in (Verbs.CREATE, Verbs.ADD):
22 emails.NotifyCreatorEmail.send(action)
23
24 if action.project:
25 emails.NotifyModeratorsEmail.send(action)
26
27 elif action.type == "phase" and action.project.project_type == "a4projects.Project":
28 if verb == Verbs.START:
29 emails.NotifyFollowersOnPhaseStartedEmail.send(action)
30 elif verb == Verbs.SCHEDULE:
31 emails.NotifyFollowersOnPhaseIsOverSoonEmail.send(action)
32
33 elif action.type == "offlineevent" and verb == Verbs.START:
34 emails.NotifyFollowersOnUpcommingEventEmail.send(action)
35
36
37 @receiver(dashboard_signals.project_created)
38 def send_project_created_notifications(**kwargs):
39 project = kwargs.get("project")
40 creator = kwargs.get("user")
41 emails.NotifyInitiatorsOnProjectCreatedEmail.send(project, creator_pk=creator.pk)
42
43
44 @receiver(signals.m2m_changed, sender=Project.moderators.through)
45 def autofollow_project_moderators(instance, action, pk_set, reverse, **kwargs):
46 if action == "post_add":
47 autofollow_project(instance, pk_set, reverse)
48
49
50 def autofollow_project(instance, pk_set, reverse):
51 if not reverse:
52 project = instance
53 users_pks = pk_set
54
55 for user_pk in users_pks:
56 Follow.objects.update_or_create(
57 project=project, creator_id=user_pk, defaults={"enabled": True}
58 )
59 else:
60 user = instance
61 project_pks = pk_set
62
63 for project_pk in project_pks:
64 Follow.objects.update_or_create(
65 project_id=project_pk, creator=user, defaults={"enabled": True}
66 )
67
[end of meinberlin/apps/notifications/signals.py]
[start of meinberlin/apps/offlineevents/management/commands/create_offlineevent_system_actions.py]
1 from datetime import timedelta
2
3 from django.conf import settings
4 from django.contrib.contenttypes.models import ContentType
5 from django.core.management.base import BaseCommand
6
7 from adhocracy4.actions.models import Action
8 from adhocracy4.actions.verbs import Verbs
9 from meinberlin.apps.offlineevents.models import OfflineEvent
10
11
12 class Command(BaseCommand):
13 help = "Create offlineevent system actions."
14
15 def __init__(self):
16 if hasattr(settings, "ACTIONS_OFFLINE_EVENT_STARTING_HOURS"):
17 self.event_starting_hours = settings.ACTIONS_OFFLINE_EVENT_STARTING_HOURS
18 else:
19 self.event_starting_hours = 72
20
21 def handle(self, *args, **options):
22 self._event_starting()
23
24 def _event_starting(self):
25 event_ct = ContentType.objects.get_for_model(OfflineEvent)
26
27 events = OfflineEvent.objects.starts_within(hours=self.event_starting_hours)
28 for event in events:
29 existing_action = Action.objects.filter(
30 project=event.project,
31 verb=Verbs.START.value,
32 obj_content_type=event_ct,
33 obj_object_id=event.id,
34 ).first()
35
36 # If the event date has been modified and moved more than
37 # event_starting_hours ahead, schedule a new action
38 if (
39 not existing_action
40 or (
41 existing_action.timestamp
42 + timedelta(hours=self.event_starting_hours)
43 )
44 < event.date
45 ):
46 Action.objects.create(
47 project=event.project,
48 verb=Verbs.START.value,
49 obj=event,
50 timestamp=event.date,
51 )
52 elif existing_action.timestamp != event.date:
53 existing_action.timestamp = event.date
54 existing_action.save()
55
[end of meinberlin/apps/offlineevents/management/commands/create_offlineevent_system_actions.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/meinberlin/apps/notifications/emails.py b/meinberlin/apps/notifications/emails.py
--- a/meinberlin/apps/notifications/emails.py
+++ b/meinberlin/apps/notifications/emails.py
@@ -130,10 +130,8 @@
return receivers
-class NotifyFollowersOnUpcommingEventEmail(Email):
- template_name = (
- "meinberlin_notifications/emails" "/notify_followers_event_upcomming"
- )
+class NotifyFollowersOnUpcomingEventEmail(Email):
+ template_name = "meinberlin_notifications/emails/notify_followers_event_upcoming"
def get_receivers(self):
action = self.object
diff --git a/meinberlin/apps/notifications/signals.py b/meinberlin/apps/notifications/signals.py
--- a/meinberlin/apps/notifications/signals.py
+++ b/meinberlin/apps/notifications/signals.py
@@ -31,7 +31,7 @@
emails.NotifyFollowersOnPhaseIsOverSoonEmail.send(action)
elif action.type == "offlineevent" and verb == Verbs.START:
- emails.NotifyFollowersOnUpcommingEventEmail.send(action)
+ emails.NotifyFollowersOnUpcomingEventEmail.send(action)
@receiver(dashboard_signals.project_created)
diff --git a/meinberlin/apps/offlineevents/management/commands/create_offlineevent_system_actions.py b/meinberlin/apps/offlineevents/management/commands/create_offlineevent_system_actions.py
--- a/meinberlin/apps/offlineevents/management/commands/create_offlineevent_system_actions.py
+++ b/meinberlin/apps/offlineevents/management/commands/create_offlineevent_system_actions.py
@@ -24,7 +24,9 @@
def _event_starting(self):
event_ct = ContentType.objects.get_for_model(OfflineEvent)
- events = OfflineEvent.objects.starts_within(hours=self.event_starting_hours)
+ events = OfflineEvent.objects.starts_within(
+ hours=self.event_starting_hours
+ ).exclude(project__is_draft=True)
for event in events:
existing_action = Action.objects.filter(
project=event.project,
|
{"golden_diff": "diff --git a/meinberlin/apps/notifications/emails.py b/meinberlin/apps/notifications/emails.py\n--- a/meinberlin/apps/notifications/emails.py\n+++ b/meinberlin/apps/notifications/emails.py\n@@ -130,10 +130,8 @@\n return receivers\n \n \n-class NotifyFollowersOnUpcommingEventEmail(Email):\n- template_name = (\n- \"meinberlin_notifications/emails\" \"/notify_followers_event_upcomming\"\n- )\n+class NotifyFollowersOnUpcomingEventEmail(Email):\n+ template_name = \"meinberlin_notifications/emails/notify_followers_event_upcoming\"\n \n def get_receivers(self):\n action = self.object\ndiff --git a/meinberlin/apps/notifications/signals.py b/meinberlin/apps/notifications/signals.py\n--- a/meinberlin/apps/notifications/signals.py\n+++ b/meinberlin/apps/notifications/signals.py\n@@ -31,7 +31,7 @@\n emails.NotifyFollowersOnPhaseIsOverSoonEmail.send(action)\n \n elif action.type == \"offlineevent\" and verb == Verbs.START:\n- emails.NotifyFollowersOnUpcommingEventEmail.send(action)\n+ emails.NotifyFollowersOnUpcomingEventEmail.send(action)\n \n \n @receiver(dashboard_signals.project_created)\ndiff --git a/meinberlin/apps/offlineevents/management/commands/create_offlineevent_system_actions.py b/meinberlin/apps/offlineevents/management/commands/create_offlineevent_system_actions.py\n--- a/meinberlin/apps/offlineevents/management/commands/create_offlineevent_system_actions.py\n+++ b/meinberlin/apps/offlineevents/management/commands/create_offlineevent_system_actions.py\n@@ -24,7 +24,9 @@\n def _event_starting(self):\n event_ct = ContentType.objects.get_for_model(OfflineEvent)\n \n- events = OfflineEvent.objects.starts_within(hours=self.event_starting_hours)\n+ events = OfflineEvent.objects.starts_within(\n+ hours=self.event_starting_hours\n+ ).exclude(project__is_draft=True)\n for event in events:\n existing_action = Action.objects.filter(\n project=event.project,\n", "issue": "notification sent for modules in de/unpublished project\n**URL:** (depending on follow status, so URL does not help here)\r\n**user:** follower\r\n**expected behaviour:** I should not get a notification for modules in unpublished projects\r\n**behaviour:** If I follow an unpublished or depublished project (e.g. because a project had been published and after a participation was depublished or because the initiator followed in preview before publishing) I get a notification eg for start of participation. 
The Module has to be added to project.\r\n**important screensize:**\r\n**device & browser:** \r\n**Comment/Question:** it is an edge case but was topic in two support questions\r\n\r\nScreenshot?\r\n\n", "before_files": [{"content": "from django.contrib import auth\n\nfrom meinberlin.apps.contrib.emails import Email\n\nUser = auth.get_user_model()\n\n\ndef _exclude_actor(receivers, actor):\n if not actor:\n return receivers\n\n if hasattr(receivers, \"exclude\"):\n return receivers.exclude(id=actor.id)\n\n return [receiver for receiver in receivers if not receiver == actor]\n\n\ndef _exclude_moderators(receivers, action):\n if hasattr(action, \"project\"):\n moderator_ids = action.project.moderators.values_list(\"id\", flat=True)\n\n if hasattr(receivers, \"exclude\"):\n return receivers.exclude(id__in=moderator_ids)\n\n return [user for user in receivers if user.id not in moderator_ids]\n\n return receivers\n\n\ndef _exclude_notifications_disabled(receivers):\n if hasattr(receivers, \"filter\"):\n return receivers.filter(get_notifications=True)\n\n return [user for user in receivers if user.get_notifications]\n\n\nclass NotifyCreatorEmail(Email):\n template_name = \"meinberlin_notifications/emails/notify_creator\"\n\n def get_receivers(self):\n action = self.object\n if hasattr(action.target, \"creator\"):\n receivers = [action.target.creator]\n receivers = _exclude_notifications_disabled(receivers)\n receivers = _exclude_actor(receivers, action.actor)\n receivers = _exclude_moderators(receivers, action)\n return receivers\n return []\n\n\nclass NotifyCreatorOrContactOnModeratorFeedback(Email):\n template_name = (\n \"meinberlin_notifications/emails/notify_creator_on_moderator_feedback\"\n )\n\n def get_receivers(self):\n if hasattr(self.object, \"contact_email\"):\n # send to contact\n receivers = [self.object.contact_email]\n else:\n # send to creator\n receivers = [self.object.creator]\n receivers = _exclude_notifications_disabled(receivers)\n return receivers\n\n def get_context(self):\n context = super().get_context()\n context[\"object\"] = self.object\n if not hasattr(self.object, \"contact_email\"):\n # send to creator\n context[\"send_to_creator\"] = True\n return context\n\n\nclass NotifyModeratorsEmail(Email):\n template_name = \"meinberlin_notifications/emails/notify_moderator\"\n\n def get_receivers(self):\n action = self.object\n receivers = action.project.moderators.all()\n receivers = _exclude_actor(receivers, action.actor)\n receivers = _exclude_notifications_disabled(receivers)\n return receivers\n\n\nclass NotifyInitiatorsOnProjectCreatedEmail(Email):\n template_name = \"meinberlin_notifications/emails/notify_initiators_project_created\"\n\n def get_receivers(self):\n project = self.object\n creator = User.objects.get(pk=self.kwargs[\"creator_pk\"])\n receivers = project.organisation.initiators.all()\n receivers = _exclude_actor(receivers, creator)\n receivers = _exclude_notifications_disabled(receivers)\n return receivers\n\n def get_context(self):\n context = super().get_context()\n creator = User.objects.get(pk=self.kwargs[\"creator_pk\"])\n context[\"creator\"] = creator\n context[\"project\"] = self.object\n return context\n\n\nclass NotifyFollowersOnPhaseStartedEmail(Email):\n template_name = \"meinberlin_notifications/emails\" \"/notify_followers_phase_started\"\n\n def get_receivers(self):\n action = self.object\n receivers = User.objects.filter(\n follow__project=action.project,\n follow__enabled=True,\n )\n receivers = 
_exclude_notifications_disabled(receivers)\n return receivers\n\n\nclass NotifyFollowersOnPhaseIsOverSoonEmail(Email):\n template_name = (\n \"meinberlin_notifications/emails\" \"/notify_followers_phase_over_soon\"\n )\n\n def get_receivers(self):\n action = self.object\n receivers = User.objects.filter(\n follow__project=action.project,\n follow__enabled=True,\n )\n receivers = _exclude_notifications_disabled(receivers)\n return receivers\n\n\nclass NotifyFollowersOnUpcommingEventEmail(Email):\n template_name = (\n \"meinberlin_notifications/emails\" \"/notify_followers_event_upcomming\"\n )\n\n def get_receivers(self):\n action = self.object\n receivers = User.objects.filter(\n follow__project=action.project,\n follow__enabled=True,\n )\n receivers = _exclude_notifications_disabled(receivers)\n return receivers\n", "path": "meinberlin/apps/notifications/emails.py"}, {"content": "from django.contrib.auth import get_user_model\nfrom django.db.models import signals\nfrom django.dispatch import receiver\n\nfrom adhocracy4.actions.models import Action\nfrom adhocracy4.actions.verbs import Verbs\nfrom adhocracy4.dashboard import signals as dashboard_signals\nfrom adhocracy4.follows.models import Follow\nfrom adhocracy4.projects.models import Project\n\nfrom . import emails\n\nUser = get_user_model()\n\n\n@receiver(signals.post_save, sender=Action)\ndef send_notifications(instance, created, **kwargs):\n action = instance\n verb = Verbs(action.verb)\n\n if action.type in (\"item\", \"comment\") and verb in (Verbs.CREATE, Verbs.ADD):\n emails.NotifyCreatorEmail.send(action)\n\n if action.project:\n emails.NotifyModeratorsEmail.send(action)\n\n elif action.type == \"phase\" and action.project.project_type == \"a4projects.Project\":\n if verb == Verbs.START:\n emails.NotifyFollowersOnPhaseStartedEmail.send(action)\n elif verb == Verbs.SCHEDULE:\n emails.NotifyFollowersOnPhaseIsOverSoonEmail.send(action)\n\n elif action.type == \"offlineevent\" and verb == Verbs.START:\n emails.NotifyFollowersOnUpcommingEventEmail.send(action)\n\n\n@receiver(dashboard_signals.project_created)\ndef send_project_created_notifications(**kwargs):\n project = kwargs.get(\"project\")\n creator = kwargs.get(\"user\")\n emails.NotifyInitiatorsOnProjectCreatedEmail.send(project, creator_pk=creator.pk)\n\n\n@receiver(signals.m2m_changed, sender=Project.moderators.through)\ndef autofollow_project_moderators(instance, action, pk_set, reverse, **kwargs):\n if action == \"post_add\":\n autofollow_project(instance, pk_set, reverse)\n\n\ndef autofollow_project(instance, pk_set, reverse):\n if not reverse:\n project = instance\n users_pks = pk_set\n\n for user_pk in users_pks:\n Follow.objects.update_or_create(\n project=project, creator_id=user_pk, defaults={\"enabled\": True}\n )\n else:\n user = instance\n project_pks = pk_set\n\n for project_pk in project_pks:\n Follow.objects.update_or_create(\n project_id=project_pk, creator=user, defaults={\"enabled\": True}\n )\n", "path": "meinberlin/apps/notifications/signals.py"}, {"content": "from datetime import timedelta\n\nfrom django.conf import settings\nfrom django.contrib.contenttypes.models import ContentType\nfrom django.core.management.base import BaseCommand\n\nfrom adhocracy4.actions.models import Action\nfrom adhocracy4.actions.verbs import Verbs\nfrom meinberlin.apps.offlineevents.models import OfflineEvent\n\n\nclass Command(BaseCommand):\n help = \"Create offlineevent system actions.\"\n\n def __init__(self):\n if hasattr(settings, 
\"ACTIONS_OFFLINE_EVENT_STARTING_HOURS\"):\n self.event_starting_hours = settings.ACTIONS_OFFLINE_EVENT_STARTING_HOURS\n else:\n self.event_starting_hours = 72\n\n def handle(self, *args, **options):\n self._event_starting()\n\n def _event_starting(self):\n event_ct = ContentType.objects.get_for_model(OfflineEvent)\n\n events = OfflineEvent.objects.starts_within(hours=self.event_starting_hours)\n for event in events:\n existing_action = Action.objects.filter(\n project=event.project,\n verb=Verbs.START.value,\n obj_content_type=event_ct,\n obj_object_id=event.id,\n ).first()\n\n # If the event date has been modified and moved more than\n # event_starting_hours ahead, schedule a new action\n if (\n not existing_action\n or (\n existing_action.timestamp\n + timedelta(hours=self.event_starting_hours)\n )\n < event.date\n ):\n Action.objects.create(\n project=event.project,\n verb=Verbs.START.value,\n obj=event,\n timestamp=event.date,\n )\n elif existing_action.timestamp != event.date:\n existing_action.timestamp = event.date\n existing_action.save()\n", "path": "meinberlin/apps/offlineevents/management/commands/create_offlineevent_system_actions.py"}]}
| 3,135 | 484 |
gh_patches_debug_18757
|
rasdani/github-patches
|
git_diff
|
opensearch-project__opensearch-build-2672
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[WIN M1] WIN (ZIP/MSI) Build/Assemble Process
|Tasks(ZIP) |**Tasks(MSI)** |Estimate |Status(ZIP) |Status(MSI) |Notes |
|--- |--- |--- |--- |--- |--- |
|Re-use the existing build process to generate the OpenSearch/Dashboards min + all of the plugins artifacts for WIN package to use |Same |0 |Completed |Completed | |
|The artifacts should be built with ~~LINUX~~ Windows platform specified, ~~as we will cross-compile WIN binary on LINUX then package with WINDOWS JDK~~. |Same |1 |Completed |Completed |~~This is still in debate as we can technically build WIN on Windows machine, but there are a lot of things to set up just so Jenkins can run Python on Windows Agent.~~ We are able to run shell scripts natively on Windows agent |
|We already have "zip" supported for "—distribution“ parameter, but needs to check whether it is already combined with `--platform windows`. |We do not have "exe" support for "—distribution“ yet. However, this is different from "RPM" as we do not need min artifact to be a exe. The min artifact can be zip and the final product in assemble can be exe. |2 |Completed |Completed |As for "exe" we need to discuss whether a standard exe is enough, or do we want to invest into Windows official installer "msi". |
|We already have "--distribution" param available in assemble workflow, just need to verify existing functions of "ZIP". |We already have "--distribution" param available in assemble workflow, but no support for "EXE" redirection. Need to add a child class supporting the new distribution. |2 |Completed |Completed | |
|The generation code should pull the artifacts from the build workflow to a temporary location |Same |1 |Completed |Completed | |
|The code will compile the components and also call existing install function to install plugins on min artifacts |Same |1 |Completed |Completed | ETA: 2022/09/16 |
|After installation, the code will execute a tool or utility to wrap all the content into corresponding distribution format |Same |1 |Completed |Completed | **20220819 Note:** Plugin compilation currently has some issues with the build scripts; the compilation itself seems ok at least on things like common-utils. <br/><br/> ETA: 2022/09/16|
|The code will move the final distribution artifact from the temp location to dist folder |Same |1 |Completed |Completed | ETA: 2022/09/07 |
### Note: MSI section in this milestone is obsolete as MSI is just a wrapper of the content in ZIP. So as long as ZIP is completed here MSI is considered complete as well.
- [ ] do not remove
* PRs:
20220715:
* https://github.com/opensearch-project/opensearch-ci/pull/155
20220721:
* https://github.com/opensearch-project/opensearch-ci/pull/167
20220722:
* https://github.com/opensearch-project/opensearch-ci/pull/169
20220819:
* https://github.com/opensearch-project/opensearch-build/pull/2483
* https://github.com/opensearch-project/opensearch-ci/pull/187
20220824:
* https://github.com/opensearch-project/opensearch-ci/pull/190
20220902:
* https://github.com/opensearch-project/opensearch-ci/pull/197
* https://github.com/opensearch-project/opensearch-build/pull/2550
20220907:
* common-utils: https://github.com/opensearch-project/common-utils/issues/238
* https://github.com/opensearch-project/common-utils/pull/258
* https://github.com/opensearch-project/opensearch-ci/pull/199
20220914:
* alerting: https://github.com/opensearch-project/alerting/issues/557
* https://github.com/opensearch-project/alerting/issues/573
20220915:
* https://github.com/opensearch-project/opensearch-ci/pull/201
20220916:
* https://github.com/opensearch-project/opensearch-ci/pull/202
20220927:
* https://github.com/opensearch-project/alerting/pull/570
* https://github.com/opensearch-project/alerting/pull/571
* https://github.com/opensearch-project/alerting/pull/572
20220928:
* https://github.com/opensearch-project/opensearch-build/pull/2672
* security: https://github.com/opensearch-project/security/issues/2122
* https://github.com/opensearch-project/common-utils/pull/258
20221004:
* https://github.com/opensearch-project/opensearch-build/pull/2678
20221006:
* Security: https://github.com/opensearch-project/security/issues/2148
* https://github.com/opensearch-project/opensearch-build/pull/2704
20221007:
* OSD: https://github.com/opensearch-project/OpenSearch-Dashboards/issues/2535
20221010:
* https://github.com/opensearch-project/opensearch-ci/pull/208
* https://github.com/opensearch-project/opensearch-ci/pull/209
20221011:
* https://github.com/opensearch-project/opensearch-build-libraries/pull/14
* https://github.com/opensearch-project/opensearch-build/pull/2721
* OSD: https://github.com/opensearch-project/OpenSearch-Dashboards/issues/2554
20221012:
* https://github.com/opensearch-project/OpenSearch-Dashboards/issues/2026
20221013:
* https://github.com/opensearch-project/opensearch-build/pull/2730
20221018:
* https://github.com/opensearch-project/opensearch-build/pull/2756
* SQL: https://github.com/opensearch-project/sql/issues/928
* https://github.com/opensearch-project/opensearch-build/pull/2761
20221019:
* https://github.com/opensearch-project/OpenSearch/issues/4817
20221024:
* https://github.com/opensearch-project/opensearch-build/pull/2788
20221025:
* OSD: https://github.com/opensearch-project/OpenSearch-Dashboards/issues/2669
* https://github.com/opensearch-project/k-NN/pull/595
* https://github.com/opensearch-project/opensearch-ci/pull/212
20221027:
* https://github.com/opensearch-project/opensearch-ci/pull/213
* https://github.com/opensearch-project/opensearch-build/pull/2812
* https://github.com/opensearch-project/opensearch-build/pull/2817
* https://github.com/opensearch-project/opensearch-ci/pull/214
20221028:
* https://github.com/opensearch-project/opensearch-build/pull/2818
* https://github.com/opensearch-project/OpenSearch-Dashboards/pull/2686
20221101:
* https://github.com/opensearch-project/opensearch-build/pull/2840
* https://github.com/opensearch-project/opensearch-build/pull/2841
* https://github.com/opensearch-project/opensearch-ci/pull/215
20221103:
* https://github.com/opensearch-project/opensearch-build/pull/2845
* https://github.com/opensearch-project/opensearch-ci/pull/219
~~* OSD: https://github.com/opensearch-project/OpenSearch-Dashboards/issues/2764~~
* OSD ftrepo: https://github.com/opensearch-project/opensearch-dashboards-functional-test/issues/370
20221104:
* https://github.com/opensearch-project/opensearch-build/pull/2848
* https://github.com/opensearch-project/opensearch-ci/pull/224
* https://github.com/opensearch-project/opensearch-build-libraries/pull/28
20221107:
* https://github.com/opensearch-project/opensearch-build/pull/2860
* https://github.com/opensearch-project/opensearch-ci/pull/228
20221108:
* OSD ftrepo: https://github.com/opensearch-project/opensearch-dashboards-functional-test/issues/370
* https://github.com/opensearch-project/opensearch-build-libraries/pull/31
* https://github.com/opensearch-project/opensearch-build/pull/2869
20221114:
* #2892
20221116:
* https://github.com/opensearch-project/opensearch-build/pull/2914
</issue>
<code>
[start of src/assemble_workflow/dist.py]
1 # Copyright OpenSearch Contributors
2 # SPDX-License-Identifier: Apache-2.0
3 #
4 # The OpenSearch Contributors require contributions made to
5 # this file be licensed under the Apache-2.0 license or a
6 # compatible open source license.
7
8 import errno
9 import logging
10 import os
11 import shutil
12 import tarfile
13 import zipfile
14 from abc import ABC, abstractmethod
15
16 from assemble_workflow.bundle_rpm import BundleRpm
17 from manifests.build_manifest import BuildManifest
18 from system.zip_file import ZipFile
19
20
21 class Dist(ABC):
22 def __init__(self, name: str, path: str, min_path: str, build_cls: BuildManifest.Build) -> None:
23 self.build_cls = build_cls
24 self.name = name
25 self.filename = name.lower()
26 self.path = path
27 self.min_path = min_path
28
29 @abstractmethod
30 def __extract__(self, dest: str) -> None:
31 pass
32
33 @abstractmethod
34 def __build__(self, name: str, dest: str) -> None:
35 pass
36
37 def find_min_archive_path(self, dest: str) -> str:
38 '''
39 Return the single folder that contains the main files of {name}.
40 This folder is normally in the format of {filename}-{exact or bc version}.
41
42 Ex: opensearch-1.3.0 or opensearch-dashboards-1.3.0
43
44 Adding a check of whether {filename} is in folder name is to ensure
45 that only folders in above format are returned.
46
47 In tar there is only 1 top level folders after extraction.
48 But in rpm there are multiple folders such as var / usr / opensearch-1.3.0 ......
49
50 This is to ensure corrent folder is found, instead of simply choosing the 1st in the list.
51 '''
52
53 for file in os.scandir(dest):
54 if self.filename in file.name and file.is_dir():
55 self.archive_path = file.path
56 return self.archive_path
57
58 raise FileNotFoundError(errno.ENOENT, os.strerror(errno.ENOENT), os.path.join(dest, "*"))
59
60 def rename_archive_path(self, path: str) -> str:
61 '''
62 Rename the single folder at the top level of the tar that contains the min distribution to match current version.
63 For example, when OpenSearch 1.1.1 is built using the 1.1.0 artifact, we rename opensearch-1.1.0 to opensearch-1.1.1.
64 '''
65 current_name = os.path.basename(path)
66 target_name = self.min_path
67 if current_name != target_name:
68 logging.info(f"Renaming {path} to {target_name}.")
69 target_path = os.path.join(os.path.dirname(path), target_name)
70 os.rename(path, target_path)
71 return target_path
72 else:
73 return path
74
75 def extract(self, dest: str) -> str:
76 self.__extract__(dest)
77 self.archive_path = self.rename_archive_path(
78 self.find_min_archive_path(dest)
79 )
80 return self.archive_path
81
82 def build(self, name: str, dest: str) -> None:
83 self.__build__(name, dest)
84 path = os.path.join(dest, name)
85 shutil.copyfile(name, path)
86 logging.info(f"Published {path}.")
87
88
89 class DistTar(Dist):
90 def __extract__(self, dest: str) -> None:
91 with tarfile.open(self.path, "r:gz") as tar:
92 tar.extractall(dest)
93
94 def __build__(self, name: str, dest: str) -> None:
95 with tarfile.open(name, "w:gz") as tar:
96 tar.add(self.archive_path, arcname=os.path.basename(self.archive_path))
97
98
99 class DistZip(Dist):
100 def __extract__(self, dest: str) -> None:
101 with ZipFile(self.path, "r") as zip:
102 zip.extractall(dest)
103
104 def __build__(self, name: str, dest: str) -> None:
105 with ZipFile(name, "w", zipfile.ZIP_DEFLATED) as zip:
106 rootlen = len(self.archive_path) + 1
107 for base, _, files in os.walk(self.archive_path):
108 for file in files:
109 fn = os.path.join(base, file)
110 zip.write(fn, fn[rootlen:])
111
112
113 class DistRpm(Dist):
114
115 def __extract__(self, dest: str) -> None:
116 BundleRpm(self.filename, self.path, self.min_path).extract(dest)
117
118 def __build__(self, name: str, dest: str) -> None:
119 BundleRpm(self.filename, self.path, self.min_path).build(name, dest, self.archive_path, self.build_cls)
120
[end of src/assemble_workflow/dist.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/assemble_workflow/dist.py b/src/assemble_workflow/dist.py
--- a/src/assemble_workflow/dist.py
+++ b/src/assemble_workflow/dist.py
@@ -103,11 +103,17 @@
def __build__(self, name: str, dest: str) -> None:
with ZipFile(name, "w", zipfile.ZIP_DEFLATED) as zip:
- rootlen = len(self.archive_path) + 1
+ # root : /tmp/tmp********/opensearch-<version+qualifier>
+ # leadingdir : opensearch-<version+qualifier>
+ # root no leading dir: /tmp/tmp********/
+ # This is to preserve the leading directory `opensearch-<version+qualifier>` in zip
+ rootlen = len(self.archive_path)
+ leadingdirlen = len(os.path.basename(self.archive_path))
+ noleadingdirlen = rootlen - leadingdirlen
for base, _, files in os.walk(self.archive_path):
for file in files:
fn = os.path.join(base, file)
- zip.write(fn, fn[rootlen:])
+ zip.write(fn, fn[noleadingdirlen:])
class DistRpm(Dist):
|
{"golden_diff": "diff --git a/src/assemble_workflow/dist.py b/src/assemble_workflow/dist.py\n--- a/src/assemble_workflow/dist.py\n+++ b/src/assemble_workflow/dist.py\n@@ -103,11 +103,17 @@\n \n def __build__(self, name: str, dest: str) -> None:\n with ZipFile(name, \"w\", zipfile.ZIP_DEFLATED) as zip:\n- rootlen = len(self.archive_path) + 1\n+ # root : /tmp/tmp********/opensearch-<version+qualifier>\n+ # leadingdir : opensearch-<version+qualifier>\n+ # root no leading dir: /tmp/tmp********/\n+ # This is to preserve the leading directory `opensearch-<version+qualifier>` in zip\n+ rootlen = len(self.archive_path)\n+ leadingdirlen = len(os.path.basename(self.archive_path))\n+ noleadingdirlen = rootlen - leadingdirlen\n for base, _, files in os.walk(self.archive_path):\n for file in files:\n fn = os.path.join(base, file)\n- zip.write(fn, fn[rootlen:])\n+ zip.write(fn, fn[noleadingdirlen:])\n \n \n class DistRpm(Dist):\n", "issue": "[WIN M1] WIN (ZIP/MSI) Build/Assemble Process \n|Tasks(ZIP) |**Tasks(MSI)** |Estimate |Status(ZIP) |Status(MSI) |Notes |\r\n|--- |--- |--- |--- |--- |--- |\r\n|Re-use the existing build process to generate the OpenSearch/Dashboards min + all of the plugins artifacts for WIN package to use |Same |0 |Completed |Completed | |\r\n|The artifacts should be built with ~~LINUX~~ Windows platform specified, ~~as we will cross-compile WIN binary on LINUX then package with WINDOWS JDK~~. |Same |1 |Completed |Completed |~~This is still in debate as we can techinically build WIN on Windows machine, but there are a lot of things to setup just so Jenkins can run Python on Windows Agent.~~ We are able to run shell scripts natively on Windows agent |\r\n|We already have \"zip\" supported for \"\u2014distribution\u201c parameter, but needs to check whether it is already combined with `--platform windows`. |We do not have \"exe\" support for \"\u2014distribution\u201c yet. However, this is different from \"RPM\" as we do not need min artifact to be a exe. The min artifact can be zip and the final product in assemble can be exe. |2 |Completed |Completed |As for \"exe\" we need to discuss whether a standard exe is enough, or do we want to invest into Windows official installer \"msi\". |\r\n|We already have \"--distribution\" param available in assemble workflow, just need to verify existing functions of \"ZIP\". |We already have \"--distribution\" param available in assemble workflow, but no support for \"EXE\" redirection. Need to add a child class supporting the new distribution. |2 |Completed |Completed | |\r\n|The generation code should pull the artifacts from the build workflow to a temporary location |Same |1 |Completed |Completed | |\r\n|The code will compile the components and also call existing install function to install plugins on min artifacts |Same |1 |Completed |Completed | ETA: 2022/09/16 |\r\n|After installation, the code will execute a tool or utility to wrap all the content into corresponding distribution format |Same |1 |Completed |Completed | **20220819 Note:** Plugin compilation currently have some issues with the build scripts, the compilation itself seems ok at least on things like common-utils. <br/><br/> ETA: 2022/09/16|\r\n|The code will move the final distribution artifact from the temp location to dist folder |Same |1 |Completed |Completed | ETA: 2022/09/07 |\r\n\r\n\r\n### Note: MSI section in this milestone is obsolete as MSI is just a wrapper of the content in ZIP. 
So as long as ZIP is completed here MSI is considered complete as well.\r\n\r\n- [ ] do not remove\r\n\r\n* PRs:\r\n\r\n20220715:\r\n* https://github.com/opensearch-project/opensearch-ci/pull/155\r\n\r\n20220721:\r\n* https://github.com/opensearch-project/opensearch-ci/pull/167\r\n\r\n20220722:\r\n* https://github.com/opensearch-project/opensearch-ci/pull/169\r\n\r\n20220819:\r\n* https://github.com/opensearch-project/opensearch-build/pull/2483\r\n* https://github.com/opensearch-project/opensearch-ci/pull/187\r\n\r\n20220824:\r\n* https://github.com/opensearch-project/opensearch-ci/pull/190\r\n\r\n20220902:\r\n* https://github.com/opensearch-project/opensearch-ci/pull/197\r\n* https://github.com/opensearch-project/opensearch-build/pull/2550\r\n\r\n20220907:\r\n* common-utils: https://github.com/opensearch-project/common-utils/issues/238\r\n * https://github.com/opensearch-project/common-utils/pull/258\r\n* https://github.com/opensearch-project/opensearch-ci/pull/199\r\n\r\n20220914:\r\n* alerting: https://github.com/opensearch-project/alerting/issues/557\r\n * https://github.com/opensearch-project/alerting/issues/573\r\n\r\n20220915:\r\n* https://github.com/opensearch-project/opensearch-ci/pull/201\r\n\r\n20220916:\r\n* https://github.com/opensearch-project/opensearch-ci/pull/202\r\n\r\n20220927:\r\n* https://github.com/opensearch-project/alerting/pull/570\r\n* https://github.com/opensearch-project/alerting/pull/571\r\n* https://github.com/opensearch-project/alerting/pull/572\r\n\r\n20220928:\r\n* https://github.com/opensearch-project/opensearch-build/pull/2672\r\n* security: https://github.com/opensearch-project/security/issues/2122\r\n* https://github.com/opensearch-project/common-utils/pull/258\r\n\r\n20221004:\r\n* https://github.com/opensearch-project/opensearch-build/pull/2678\r\n\r\n20221006:\r\n* Security: https://github.com/opensearch-project/security/issues/2148\r\n* https://github.com/opensearch-project/opensearch-build/pull/2704\r\n\r\n20221007:\r\n* OSD: https://github.com/opensearch-project/OpenSearch-Dashboards/issues/2535\r\n\r\n20221010:\r\n* https://github.com/opensearch-project/opensearch-ci/pull/208\r\n* https://github.com/opensearch-project/opensearch-ci/pull/209\r\n\r\n20221011:\r\n* https://github.com/opensearch-project/opensearch-build-libraries/pull/14\r\n* https://github.com/opensearch-project/opensearch-build/pull/2721\r\n* OSD: https://github.com/opensearch-project/OpenSearch-Dashboards/issues/2554\r\n\r\n20221012:\r\n* https://github.com/opensearch-project/OpenSearch-Dashboards/issues/2026\r\n\r\n20221013:\r\n* https://github.com/opensearch-project/opensearch-build/pull/2730\r\n\r\n20221018:\r\n* https://github.com/opensearch-project/opensearch-build/pull/2756\r\n* SQL: https://github.com/opensearch-project/sql/issues/928\r\n* https://github.com/opensearch-project/opensearch-build/pull/2761\r\n\r\n20221019:\r\n* https://github.com/opensearch-project/OpenSearch/issues/4817\r\n\r\n20221024:\r\n* https://github.com/opensearch-project/opensearch-build/pull/2788\r\n\r\n20221025:\r\n* OSD: https://github.com/opensearch-project/OpenSearch-Dashboards/issues/2669\r\n* https://github.com/opensearch-project/k-NN/pull/595\r\n* https://github.com/opensearch-project/opensearch-ci/pull/212\r\n\r\n20221027:\r\n* https://github.com/opensearch-project/opensearch-ci/pull/213\r\n* https://github.com/opensearch-project/opensearch-build/pull/2812\r\n* https://github.com/opensearch-project/opensearch-build/pull/2817\r\n* 
https://github.com/opensearch-project/opensearch-ci/pull/214\r\n\r\n20221028:\r\n* https://github.com/opensearch-project/opensearch-build/pull/2818\r\n* https://github.com/opensearch-project/OpenSearch-Dashboards/pull/2686\r\n\r\n20221101:\r\n* https://github.com/opensearch-project/opensearch-build/pull/2840\r\n* https://github.com/opensearch-project/opensearch-build/pull/2841\r\n* https://github.com/opensearch-project/opensearch-ci/pull/215\r\n\r\n20221103:\r\n* https://github.com/opensearch-project/opensearch-build/pull/2845\r\n* https://github.com/opensearch-project/opensearch-ci/pull/219\r\n~~* OSD: https://github.com/opensearch-project/OpenSearch-Dashboards/issues/2764~~\r\n* OSD ftrepo: https://github.com/opensearch-project/opensearch-dashboards-functional-test/issues/370\r\n\r\n20221104:\r\n* https://github.com/opensearch-project/opensearch-build/pull/2848\r\n* https://github.com/opensearch-project/opensearch-ci/pull/224\r\n* https://github.com/opensearch-project/opensearch-build-libraries/pull/28\r\n\r\n20221107:\r\n* https://github.com/opensearch-project/opensearch-build/pull/2860\r\n* https://github.com/opensearch-project/opensearch-ci/pull/228\r\n\r\n20221108:\r\n* OSD ftrepo: https://github.com/opensearch-project/opensearch-dashboards-functional-test/issues/370\r\n* https://github.com/opensearch-project/opensearch-build-libraries/pull/31\r\n* https://github.com/opensearch-project/opensearch-build/pull/2869\r\n\r\n20221114:\r\n* #2892\r\n\r\n20221116:\r\n* https://github.com/opensearch-project/opensearch-build/pull/2914\n", "before_files": [{"content": "# Copyright OpenSearch Contributors\n# SPDX-License-Identifier: Apache-2.0\n#\n# The OpenSearch Contributors require contributions made to\n# this file be licensed under the Apache-2.0 license or a\n# compatible open source license.\n\nimport errno\nimport logging\nimport os\nimport shutil\nimport tarfile\nimport zipfile\nfrom abc import ABC, abstractmethod\n\nfrom assemble_workflow.bundle_rpm import BundleRpm\nfrom manifests.build_manifest import BuildManifest\nfrom system.zip_file import ZipFile\n\n\nclass Dist(ABC):\n def __init__(self, name: str, path: str, min_path: str, build_cls: BuildManifest.Build) -> None:\n self.build_cls = build_cls\n self.name = name\n self.filename = name.lower()\n self.path = path\n self.min_path = min_path\n\n @abstractmethod\n def __extract__(self, dest: str) -> None:\n pass\n\n @abstractmethod\n def __build__(self, name: str, dest: str) -> None:\n pass\n\n def find_min_archive_path(self, dest: str) -> str:\n '''\n Return the single folder that contains the main files of {name}.\n This folder is normally in the format of {filename}-{exact or bc version}.\n\n Ex: opensearch-1.3.0 or opensearch-dashboards-1.3.0\n\n Adding a check of whether {filename} is in folder name is to ensure\n that only folders in above format are returned.\n\n In tar there is only 1 top level folders after extraction.\n But in rpm there are multiple folders such as var / usr / opensearch-1.3.0 ......\n\n This is to ensure corrent folder is found, instead of simply choosing the 1st in the list.\n '''\n\n for file in os.scandir(dest):\n if self.filename in file.name and file.is_dir():\n self.archive_path = file.path\n return self.archive_path\n\n raise FileNotFoundError(errno.ENOENT, os.strerror(errno.ENOENT), os.path.join(dest, \"*\"))\n\n def rename_archive_path(self, path: str) -> str:\n '''\n Rename the single folder at the top level of the tar that contains the min distribution to match current version.\n For example, 
when OpenSearch 1.1.1 is built using the 1.1.0 artifact, we rename opensearch-1.1.0 to opensearch-1.1.1.\n '''\n current_name = os.path.basename(path)\n target_name = self.min_path\n if current_name != target_name:\n logging.info(f\"Renaming {path} to {target_name}.\")\n target_path = os.path.join(os.path.dirname(path), target_name)\n os.rename(path, target_path)\n return target_path\n else:\n return path\n\n def extract(self, dest: str) -> str:\n self.__extract__(dest)\n self.archive_path = self.rename_archive_path(\n self.find_min_archive_path(dest)\n )\n return self.archive_path\n\n def build(self, name: str, dest: str) -> None:\n self.__build__(name, dest)\n path = os.path.join(dest, name)\n shutil.copyfile(name, path)\n logging.info(f\"Published {path}.\")\n\n\nclass DistTar(Dist):\n def __extract__(self, dest: str) -> None:\n with tarfile.open(self.path, \"r:gz\") as tar:\n tar.extractall(dest)\n\n def __build__(self, name: str, dest: str) -> None:\n with tarfile.open(name, \"w:gz\") as tar:\n tar.add(self.archive_path, arcname=os.path.basename(self.archive_path))\n\n\nclass DistZip(Dist):\n def __extract__(self, dest: str) -> None:\n with ZipFile(self.path, \"r\") as zip:\n zip.extractall(dest)\n\n def __build__(self, name: str, dest: str) -> None:\n with ZipFile(name, \"w\", zipfile.ZIP_DEFLATED) as zip:\n rootlen = len(self.archive_path) + 1\n for base, _, files in os.walk(self.archive_path):\n for file in files:\n fn = os.path.join(base, file)\n zip.write(fn, fn[rootlen:])\n\n\nclass DistRpm(Dist):\n\n def __extract__(self, dest: str) -> None:\n BundleRpm(self.filename, self.path, self.min_path).extract(dest)\n\n def __build__(self, name: str, dest: str) -> None:\n BundleRpm(self.filename, self.path, self.min_path).build(name, dest, self.archive_path, self.build_cls)\n", "path": "src/assemble_workflow/dist.py"}]}
| 4,075 | 281 |
gh_patches_debug_92
|
rasdani/github-patches
|
git_diff
|
cocotb__cocotb-1179
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add Scheduler to Library Reference
The scheduler module is not at all listed in the Library Reference.
</issue>
<code>
[start of cocotb/__init__.py]
1 # Copyright (c) 2013 Potential Ventures Ltd
2 # Copyright (c) 2013 SolarFlare Communications Inc
3 # All rights reserved.
4
5 # Redistribution and use in source and binary forms, with or without
6 # modification, are permitted provided that the following conditions are met:
7 # * Redistributions of source code must retain the above copyright
8 # notice, this list of conditions and the following disclaimer.
9 # * Redistributions in binary form must reproduce the above copyright
10 # notice, this list of conditions and the following disclaimer in the
11 # documentation and/or other materials provided with the distribution.
12 # * Neither the name of Potential Ventures Ltd,
13 # SolarFlare Communications Inc nor the
14 # names of its contributors may be used to endorse or promote products
15 # derived from this software without specific prior written permission.
16
17 # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
18 # ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
19 # WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
20 # DISCLAIMED. IN NO EVENT SHALL POTENTIAL VENTURES LTD BE LIABLE FOR ANY
21 # DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
22 # (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
23 # LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
24 # ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
25 # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
26 # SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
27
28 """
29 Cocotb is a coroutine, cosimulation framework for writing testbenches in Python.
30
31 See http://cocotb.readthedocs.org for full documentation
32 """
33 import os
34 import sys
35 import logging
36 import threading
37 import random
38 import time
39
40 import cocotb.handle
41 from cocotb.scheduler import Scheduler
42 from cocotb.log import SimBaseLog, SimLog
43 from cocotb.regression import RegressionManager
44
45
46 # Things we want in the cocotb namespace
47 from cocotb.decorators import test, coroutine, hook, function, external # noqa: F401
48
49 # Singleton scheduler instance
50 # NB this cheekily ensures a singleton since we're replacing the reference
51 # so that cocotb.scheduler gives you the singleton instance and not the
52 # scheduler package
53
54 # GPI logging instance
55 if "COCOTB_SIM" in os.environ:
56 import simulator
57 logging.basicConfig()
58 logging.setLoggerClass(SimBaseLog)
59 log = SimLog('cocotb')
60 level = os.getenv("COCOTB_LOG_LEVEL", "INFO")
61 try:
62 _default_log = getattr(logging, level)
63 except AttributeError as e:
64 log.error("Unable to set loging level to %s" % level)
65 _default_log = logging.INFO
66 log.setLevel(_default_log)
67 loggpi = SimLog('cocotb.gpi')
68 # Notify GPI of log level
69 simulator.log_level(_default_log)
70
71 # If stdout/stderr are not TTYs, Python may not have opened them with line
72 # buffering. In that case, try to reopen them with line buffering
73 # explicitly enabled. This ensures that prints such as stack traces always
74 # appear. Continue silently if this fails.
75 try:
76 if not sys.stdout.isatty():
77 sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', 1)
78 log.debug("Reopened stdout with line buffering")
79 if not sys.stderr.isatty():
80 sys.stderr = os.fdopen(sys.stderr.fileno(), 'w', 1)
81 log.debug("Reopened stderr with line buffering")
82 except Exception as e:
83 log.warning("Failed to ensure that stdout/stderr are line buffered: %s", e)
84 log.warning("Some stack traces may not appear because of this.")
85
86
87 scheduler = Scheduler()
88 regression_manager = None
89
90 plusargs = {}
91
92 # To save typing provide an alias to scheduler.add
93 fork = scheduler.add
94
95 # FIXME is this really required?
96 _rlock = threading.RLock()
97
98
99 def mem_debug(port):
100 import cocotb.memdebug
101 cocotb.memdebug.start(port)
102
103
104 def _initialise_testbench(root_name):
105 """
106 This function is called after the simulator has elaborated all
107 entities and is ready to run the test.
108
109 The test must be defined by the environment variables
110 MODULE
111 TESTCASE
112
113 The environment variable COCOTB_HOOKS contains a comma-separated list of
114 modules that should be executed before the first test.
115 """
116 _rlock.acquire()
117
118 memcheck_port = os.getenv('MEMCHECK')
119 if memcheck_port is not None:
120 mem_debug(int(memcheck_port))
121
122 exec_path = os.getenv('COCOTB_PY_DIR')
123 if exec_path is None:
124 exec_path = 'Unknown'
125
126 version = os.getenv('VERSION')
127 if version is None:
128 log.info("Unable to determine Cocotb version from %s" % exec_path)
129 else:
130 log.info("Running tests with Cocotb v%s from %s" %
131 (version, exec_path))
132
133 # Create the base handle type
134
135 process_plusargs()
136
137 # Seed the Python random number generator to make this repeatable
138 global RANDOM_SEED
139 RANDOM_SEED = os.getenv('RANDOM_SEED')
140
141 if RANDOM_SEED is None:
142 if 'ntb_random_seed' in plusargs:
143 RANDOM_SEED = eval(plusargs['ntb_random_seed'])
144 elif 'seed' in plusargs:
145 RANDOM_SEED = eval(plusargs['seed'])
146 else:
147 RANDOM_SEED = int(time.time())
148 log.info("Seeding Python random module with %d" % (RANDOM_SEED))
149 else:
150 RANDOM_SEED = int(RANDOM_SEED)
151 log.info("Seeding Python random module with supplied seed %d" % (RANDOM_SEED))
152 random.seed(RANDOM_SEED)
153
154 module_str = os.getenv('MODULE')
155 test_str = os.getenv('TESTCASE')
156 hooks_str = os.getenv('COCOTB_HOOKS', '')
157
158 if not module_str:
159 raise ImportError("Environment variables defining the module(s) to " +
160 "execute not defined. MODULE=\"%s\"" % (module_str))
161
162 modules = module_str.split(',')
163 hooks = hooks_str.split(',') if hooks_str else []
164
165 global regression_manager
166
167 regression_manager = RegressionManager(root_name, modules, tests=test_str, seed=RANDOM_SEED, hooks=hooks)
168 regression_manager.initialise()
169 regression_manager.execute()
170
171 _rlock.release()
172 return True
173
174
175 def _sim_event(level, message):
176 """Function that can be called externally to signal an event"""
177 SIM_INFO = 0
178 SIM_TEST_FAIL = 1
179 SIM_FAIL = 2
180 from cocotb.result import TestFailure, SimFailure
181
182 if level is SIM_TEST_FAIL:
183 scheduler.log.error("Failing test at simulator request")
184 scheduler.finish_test(TestFailure("Failure from external source: %s" %
185 message))
186 elif level is SIM_FAIL:
187 # We simply return here as the simulator will exit
188 # so no cleanup is needed
189 msg = ("Failing test at simulator request before test run completion: "
190 "%s" % message)
191 scheduler.log.error(msg)
192 scheduler.finish_scheduler(SimFailure(msg))
193 else:
194 scheduler.log.error("Unsupported sim event")
195
196 return True
197
198
199 def process_plusargs():
200
201 global plusargs
202
203 plusargs = {}
204
205 for option in cocotb.argv:
206 if option.startswith('+'):
207 if option.find('=') != -1:
208 (name, value) = option[1:].split('=')
209 plusargs[name] = value
210 else:
211 plusargs[option[1:]] = True
212
[end of cocotb/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/cocotb/__init__.py b/cocotb/__init__.py
--- a/cocotb/__init__.py
+++ b/cocotb/__init__.py
@@ -85,6 +85,8 @@
scheduler = Scheduler()
+"""The global scheduler instance."""
+
regression_manager = None
plusargs = {}
|
{"golden_diff": "diff --git a/cocotb/__init__.py b/cocotb/__init__.py\n--- a/cocotb/__init__.py\n+++ b/cocotb/__init__.py\n@@ -85,6 +85,8 @@\n \n \n scheduler = Scheduler()\n+\"\"\"The global scheduler instance.\"\"\"\n+\n regression_manager = None\n \n plusargs = {}\n", "issue": "Add Scheduler to Library Reference\nThe scheduler module is not at all listed in the Library Reference.\n", "before_files": [{"content": "# Copyright (c) 2013 Potential Ventures Ltd\n# Copyright (c) 2013 SolarFlare Communications Inc\n# All rights reserved.\n\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions are met:\n# * Redistributions of source code must retain the above copyright\n# notice, this list of conditions and the following disclaimer.\n# * Redistributions in binary form must reproduce the above copyright\n# notice, this list of conditions and the following disclaimer in the\n# documentation and/or other materials provided with the distribution.\n# * Neither the name of Potential Ventures Ltd,\n# SolarFlare Communications Inc nor the\n# names of its contributors may be used to endorse or promote products\n# derived from this software without specific prior written permission.\n\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\" AND\n# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED\n# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\n# DISCLAIMED. IN NO EVENT SHALL POTENTIAL VENTURES LTD BE LIABLE FOR ANY\n# DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES\n# (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;\n# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND\n# ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS\n# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\n\"\"\"\nCocotb is a coroutine, cosimulation framework for writing testbenches in Python.\n\nSee http://cocotb.readthedocs.org for full documentation\n\"\"\"\nimport os\nimport sys\nimport logging\nimport threading\nimport random\nimport time\n\nimport cocotb.handle\nfrom cocotb.scheduler import Scheduler\nfrom cocotb.log import SimBaseLog, SimLog\nfrom cocotb.regression import RegressionManager\n\n\n# Things we want in the cocotb namespace\nfrom cocotb.decorators import test, coroutine, hook, function, external # noqa: F401\n\n# Singleton scheduler instance\n# NB this cheekily ensures a singleton since we're replacing the reference\n# so that cocotb.scheduler gives you the singleton instance and not the\n# scheduler package\n\n# GPI logging instance\nif \"COCOTB_SIM\" in os.environ:\n import simulator\n logging.basicConfig()\n logging.setLoggerClass(SimBaseLog)\n log = SimLog('cocotb')\n level = os.getenv(\"COCOTB_LOG_LEVEL\", \"INFO\")\n try:\n _default_log = getattr(logging, level)\n except AttributeError as e:\n log.error(\"Unable to set loging level to %s\" % level)\n _default_log = logging.INFO\n log.setLevel(_default_log)\n loggpi = SimLog('cocotb.gpi')\n # Notify GPI of log level\n simulator.log_level(_default_log)\n\n # If stdout/stderr are not TTYs, Python may not have opened them with line\n # buffering. In that case, try to reopen them with line buffering\n # explicitly enabled. This ensures that prints such as stack traces always\n # appear. 
Continue silently if this fails.\n try:\n if not sys.stdout.isatty():\n sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', 1)\n log.debug(\"Reopened stdout with line buffering\")\n if not sys.stderr.isatty():\n sys.stderr = os.fdopen(sys.stderr.fileno(), 'w', 1)\n log.debug(\"Reopened stderr with line buffering\")\n except Exception as e:\n log.warning(\"Failed to ensure that stdout/stderr are line buffered: %s\", e)\n log.warning(\"Some stack traces may not appear because of this.\")\n\n\nscheduler = Scheduler()\nregression_manager = None\n\nplusargs = {}\n\n# To save typing provide an alias to scheduler.add\nfork = scheduler.add\n\n# FIXME is this really required?\n_rlock = threading.RLock()\n\n\ndef mem_debug(port):\n import cocotb.memdebug\n cocotb.memdebug.start(port)\n\n\ndef _initialise_testbench(root_name):\n \"\"\"\n This function is called after the simulator has elaborated all\n entities and is ready to run the test.\n\n The test must be defined by the environment variables\n MODULE\n TESTCASE\n\n The environment variable COCOTB_HOOKS contains a comma-separated list of\n modules that should be executed before the first test.\n \"\"\"\n _rlock.acquire()\n\n memcheck_port = os.getenv('MEMCHECK')\n if memcheck_port is not None:\n mem_debug(int(memcheck_port))\n\n exec_path = os.getenv('COCOTB_PY_DIR')\n if exec_path is None:\n exec_path = 'Unknown'\n\n version = os.getenv('VERSION')\n if version is None:\n log.info(\"Unable to determine Cocotb version from %s\" % exec_path)\n else:\n log.info(\"Running tests with Cocotb v%s from %s\" %\n (version, exec_path))\n\n # Create the base handle type\n\n process_plusargs()\n\n # Seed the Python random number generator to make this repeatable\n global RANDOM_SEED\n RANDOM_SEED = os.getenv('RANDOM_SEED')\n\n if RANDOM_SEED is None:\n if 'ntb_random_seed' in plusargs:\n RANDOM_SEED = eval(plusargs['ntb_random_seed'])\n elif 'seed' in plusargs:\n RANDOM_SEED = eval(plusargs['seed'])\n else:\n RANDOM_SEED = int(time.time())\n log.info(\"Seeding Python random module with %d\" % (RANDOM_SEED))\n else:\n RANDOM_SEED = int(RANDOM_SEED)\n log.info(\"Seeding Python random module with supplied seed %d\" % (RANDOM_SEED))\n random.seed(RANDOM_SEED)\n\n module_str = os.getenv('MODULE')\n test_str = os.getenv('TESTCASE')\n hooks_str = os.getenv('COCOTB_HOOKS', '')\n\n if not module_str:\n raise ImportError(\"Environment variables defining the module(s) to \" +\n \"execute not defined. 
MODULE=\\\"%s\\\"\" % (module_str))\n\n modules = module_str.split(',')\n hooks = hooks_str.split(',') if hooks_str else []\n\n global regression_manager\n\n regression_manager = RegressionManager(root_name, modules, tests=test_str, seed=RANDOM_SEED, hooks=hooks)\n regression_manager.initialise()\n regression_manager.execute()\n\n _rlock.release()\n return True\n\n\ndef _sim_event(level, message):\n \"\"\"Function that can be called externally to signal an event\"\"\"\n SIM_INFO = 0\n SIM_TEST_FAIL = 1\n SIM_FAIL = 2\n from cocotb.result import TestFailure, SimFailure\n\n if level is SIM_TEST_FAIL:\n scheduler.log.error(\"Failing test at simulator request\")\n scheduler.finish_test(TestFailure(\"Failure from external source: %s\" %\n message))\n elif level is SIM_FAIL:\n # We simply return here as the simulator will exit\n # so no cleanup is needed\n msg = (\"Failing test at simulator request before test run completion: \"\n \"%s\" % message)\n scheduler.log.error(msg)\n scheduler.finish_scheduler(SimFailure(msg))\n else:\n scheduler.log.error(\"Unsupported sim event\")\n\n return True\n\n\ndef process_plusargs():\n\n global plusargs\n\n plusargs = {}\n\n for option in cocotb.argv:\n if option.startswith('+'):\n if option.find('=') != -1:\n (name, value) = option[1:].split('=')\n plusargs[name] = value\n else:\n plusargs[option[1:]] = True\n", "path": "cocotb/__init__.py"}]}
| 2,789 | 78 |
gh_patches_debug_1648
|
rasdani/github-patches
|
git_diff
|
benoitc__gunicorn-1806
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
I get error in this package AttributeError: 'NoneType' object has no attribute 'add_extra_file'
hi every one ..
when i try to deploy keras model into google cloud i get this error ...
```py
File "/home/falahgs07/keras/env/lib/python3.5/site-packages/gunicorn/workers/base.py", line 126, in init_process
self.load_wsgi()
File "/home/falahgs07/keras/env/lib/python3.5/site-packages/gunicorn/workers/base.py", line 148, in load_wsgi
self.reloader.add_extra_file(exc_val.filename)
AttributeError: 'NoneType' object has no attribute 'add_extra_file'
```
</issue>
<code>
[start of gunicorn/workers/base.py]
1 # -*- coding: utf-8 -
2 #
3 # This file is part of gunicorn released under the MIT license.
4 # See the NOTICE for more information.
5
6 from datetime import datetime
7 import os
8 from random import randint
9 import signal
10 from ssl import SSLError
11 import sys
12 import time
13 import traceback
14
15 from gunicorn import six
16 from gunicorn import util
17 from gunicorn.workers.workertmp import WorkerTmp
18 from gunicorn.reloader import reloader_engines
19 from gunicorn.http.errors import (
20 InvalidHeader, InvalidHeaderName, InvalidRequestLine, InvalidRequestMethod,
21 InvalidHTTPVersion, LimitRequestLine, LimitRequestHeaders,
22 )
23 from gunicorn.http.errors import InvalidProxyLine, ForbiddenProxyRequest
24 from gunicorn.http.errors import InvalidSchemeHeaders
25 from gunicorn.http.wsgi import default_environ, Response
26 from gunicorn.six import MAXSIZE
27
28
29 class Worker(object):
30
31 SIGNALS = [getattr(signal, "SIG%s" % x)
32 for x in "ABRT HUP QUIT INT TERM USR1 USR2 WINCH CHLD".split()]
33
34 PIPE = []
35
36 def __init__(self, age, ppid, sockets, app, timeout, cfg, log):
37 """\
38 This is called pre-fork so it shouldn't do anything to the
39 current process. If there's a need to make process wide
40 changes you'll want to do that in ``self.init_process()``.
41 """
42 self.age = age
43 self.pid = "[booting]"
44 self.ppid = ppid
45 self.sockets = sockets
46 self.app = app
47 self.timeout = timeout
48 self.cfg = cfg
49 self.booted = False
50 self.aborted = False
51 self.reloader = None
52
53 self.nr = 0
54 jitter = randint(0, cfg.max_requests_jitter)
55 self.max_requests = cfg.max_requests + jitter or MAXSIZE
56 self.alive = True
57 self.log = log
58 self.tmp = WorkerTmp(cfg)
59
60 def __str__(self):
61 return "<Worker %s>" % self.pid
62
63 def notify(self):
64 """\
65 Your worker subclass must arrange to have this method called
66 once every ``self.timeout`` seconds. If you fail in accomplishing
67 this task, the master process will murder your workers.
68 """
69 self.tmp.notify()
70
71 def run(self):
72 """\
73 This is the mainloop of a worker process. You should override
74 this method in a subclass to provide the intended behaviour
75 for your particular evil schemes.
76 """
77 raise NotImplementedError()
78
79 def init_process(self):
80 """\
81 If you override this method in a subclass, the last statement
82 in the function should be to call this method with
83 super(MyWorkerClass, self).init_process() so that the ``run()``
84 loop is initiated.
85 """
86
87 # set environment' variables
88 if self.cfg.env:
89 for k, v in self.cfg.env.items():
90 os.environ[k] = v
91
92 util.set_owner_process(self.cfg.uid, self.cfg.gid,
93 initgroups=self.cfg.initgroups)
94
95 # Reseed the random number generator
96 util.seed()
97
98 # For waking ourselves up
99 self.PIPE = os.pipe()
100 for p in self.PIPE:
101 util.set_non_blocking(p)
102 util.close_on_exec(p)
103
104 # Prevent fd inheritance
105 for s in self.sockets:
106 util.close_on_exec(s)
107 util.close_on_exec(self.tmp.fileno())
108
109 self.wait_fds = self.sockets + [self.PIPE[0]]
110
111 self.log.close_on_exec()
112
113 self.init_signals()
114
115 # start the reloader
116 if self.cfg.reload:
117 def changed(fname):
118 self.log.info("Worker reloading: %s modified", fname)
119 self.alive = False
120 self.cfg.worker_int(self)
121 time.sleep(0.1)
122 sys.exit(0)
123
124 reloader_cls = reloader_engines[self.cfg.reload_engine]
125 self.reloader = reloader_cls(extra_files=self.cfg.reload_extra_files,
126 callback=changed)
127 self.reloader.start()
128
129 self.load_wsgi()
130 self.cfg.post_worker_init(self)
131
132 # Enter main run loop
133 self.booted = True
134 self.run()
135
136 def load_wsgi(self):
137 try:
138 self.wsgi = self.app.wsgi()
139 except SyntaxError as e:
140 if self.cfg.reload == 'off':
141 raise
142
143 self.log.exception(e)
144
145 # fix from PR #1228
146 # storing the traceback into exc_tb will create a circular reference.
147 # per https://docs.python.org/2/library/sys.html#sys.exc_info warning,
148 # delete the traceback after use.
149 try:
150 _, exc_val, exc_tb = sys.exc_info()
151 self.reloader.add_extra_file(exc_val.filename)
152
153 tb_string = six.StringIO()
154 traceback.print_tb(exc_tb, file=tb_string)
155 self.wsgi = util.make_fail_app(tb_string.getvalue())
156 finally:
157 del exc_tb
158
159 def init_signals(self):
160 # reset signaling
161 for s in self.SIGNALS:
162 signal.signal(s, signal.SIG_DFL)
163 # init new signaling
164 signal.signal(signal.SIGQUIT, self.handle_quit)
165 signal.signal(signal.SIGTERM, self.handle_exit)
166 signal.signal(signal.SIGINT, self.handle_quit)
167 signal.signal(signal.SIGWINCH, self.handle_winch)
168 signal.signal(signal.SIGUSR1, self.handle_usr1)
169 signal.signal(signal.SIGABRT, self.handle_abort)
170
171 # Don't let SIGTERM and SIGUSR1 disturb active requests
172 # by interrupting system calls
173 if hasattr(signal, 'siginterrupt'): # python >= 2.6
174 signal.siginterrupt(signal.SIGTERM, False)
175 signal.siginterrupt(signal.SIGUSR1, False)
176
177 if hasattr(signal, 'set_wakeup_fd'):
178 signal.set_wakeup_fd(self.PIPE[1])
179
180 def handle_usr1(self, sig, frame):
181 self.log.reopen_files()
182
183 def handle_exit(self, sig, frame):
184 self.alive = False
185
186 def handle_quit(self, sig, frame):
187 self.alive = False
188 # worker_int callback
189 self.cfg.worker_int(self)
190 time.sleep(0.1)
191 sys.exit(0)
192
193 def handle_abort(self, sig, frame):
194 self.alive = False
195 self.cfg.worker_abort(self)
196 sys.exit(1)
197
198 def handle_error(self, req, client, addr, exc):
199 request_start = datetime.now()
200 addr = addr or ('', -1) # unix socket case
201 if isinstance(exc, (InvalidRequestLine, InvalidRequestMethod,
202 InvalidHTTPVersion, InvalidHeader, InvalidHeaderName,
203 LimitRequestLine, LimitRequestHeaders,
204 InvalidProxyLine, ForbiddenProxyRequest,
205 InvalidSchemeHeaders,
206 SSLError)):
207
208 status_int = 400
209 reason = "Bad Request"
210
211 if isinstance(exc, InvalidRequestLine):
212 mesg = "Invalid Request Line '%s'" % str(exc)
213 elif isinstance(exc, InvalidRequestMethod):
214 mesg = "Invalid Method '%s'" % str(exc)
215 elif isinstance(exc, InvalidHTTPVersion):
216 mesg = "Invalid HTTP Version '%s'" % str(exc)
217 elif isinstance(exc, (InvalidHeaderName, InvalidHeader,)):
218 mesg = "%s" % str(exc)
219 if not req and hasattr(exc, "req"):
220 req = exc.req # for access log
221 elif isinstance(exc, LimitRequestLine):
222 mesg = "%s" % str(exc)
223 elif isinstance(exc, LimitRequestHeaders):
224 mesg = "Error parsing headers: '%s'" % str(exc)
225 elif isinstance(exc, InvalidProxyLine):
226 mesg = "'%s'" % str(exc)
227 elif isinstance(exc, ForbiddenProxyRequest):
228 reason = "Forbidden"
229 mesg = "Request forbidden"
230 status_int = 403
231 elif isinstance(exc, InvalidSchemeHeaders):
232 mesg = "%s" % str(exc)
233 elif isinstance(exc, SSLError):
234 reason = "Forbidden"
235 mesg = "'%s'" % str(exc)
236 status_int = 403
237
238 msg = "Invalid request from ip={ip}: {error}"
239 self.log.debug(msg.format(ip=addr[0], error=str(exc)))
240 else:
241 if hasattr(req, "uri"):
242 self.log.exception("Error handling request %s", req.uri)
243 status_int = 500
244 reason = "Internal Server Error"
245 mesg = ""
246
247 if req is not None:
248 request_time = datetime.now() - request_start
249 environ = default_environ(req, client, self.cfg)
250 environ['REMOTE_ADDR'] = addr[0]
251 environ['REMOTE_PORT'] = str(addr[1])
252 resp = Response(req, client, self.cfg)
253 resp.status = "%s %s" % (status_int, reason)
254 resp.response_length = len(mesg)
255 self.log.access(resp, req, environ, request_time)
256
257 try:
258 util.write_error(client, status_int, reason, mesg)
259 except:
260 self.log.debug("Failed to send error message.")
261
262 def handle_winch(self, sig, fname):
263 # Ignore SIGWINCH in worker. Fixes a crash on OpenBSD.
264 self.log.debug("worker: SIGWINCH ignored.")
265
[end of gunicorn/workers/base.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/gunicorn/workers/base.py b/gunicorn/workers/base.py
--- a/gunicorn/workers/base.py
+++ b/gunicorn/workers/base.py
@@ -137,7 +137,7 @@
try:
self.wsgi = self.app.wsgi()
except SyntaxError as e:
- if self.cfg.reload == 'off':
+ if not self.cfg.reload:
raise
self.log.exception(e)
|
{"golden_diff": "diff --git a/gunicorn/workers/base.py b/gunicorn/workers/base.py\n--- a/gunicorn/workers/base.py\n+++ b/gunicorn/workers/base.py\n@@ -137,7 +137,7 @@\n try:\n self.wsgi = self.app.wsgi()\n except SyntaxError as e:\n- if self.cfg.reload == 'off':\n+ if not self.cfg.reload:\n raise\n \n self.log.exception(e)\n", "issue": "I get error in this package AttributeError: 'NoneType' object has no attribute 'add_extra_file'\nhi every one ..\r\nwhen i try to deploy keras model into google cloud i get this error ...\r\n\r\n```py\r\nFile \"/home/falahgs07/keras/env/lib/python3.5/site-packages/gunicorn/workers/base.py\", line 126, in init_process\r\n self.load_wsgi()\r\n File \"/home/falahgs07/keras/env/lib/python3.5/site-packages/gunicorn/workers/base.py\", line 148, in load_wsgi\r\n self.reloader.add_extra_file(exc_val.filename)\r\nAttributeError: 'NoneType' object has no attribute 'add_extra_file'\r\n```\n", "before_files": [{"content": "# -*- coding: utf-8 -\n#\n# This file is part of gunicorn released under the MIT license.\n# See the NOTICE for more information.\n\nfrom datetime import datetime\nimport os\nfrom random import randint\nimport signal\nfrom ssl import SSLError\nimport sys\nimport time\nimport traceback\n\nfrom gunicorn import six\nfrom gunicorn import util\nfrom gunicorn.workers.workertmp import WorkerTmp\nfrom gunicorn.reloader import reloader_engines\nfrom gunicorn.http.errors import (\n InvalidHeader, InvalidHeaderName, InvalidRequestLine, InvalidRequestMethod,\n InvalidHTTPVersion, LimitRequestLine, LimitRequestHeaders,\n)\nfrom gunicorn.http.errors import InvalidProxyLine, ForbiddenProxyRequest\nfrom gunicorn.http.errors import InvalidSchemeHeaders\nfrom gunicorn.http.wsgi import default_environ, Response\nfrom gunicorn.six import MAXSIZE\n\n\nclass Worker(object):\n\n SIGNALS = [getattr(signal, \"SIG%s\" % x)\n for x in \"ABRT HUP QUIT INT TERM USR1 USR2 WINCH CHLD\".split()]\n\n PIPE = []\n\n def __init__(self, age, ppid, sockets, app, timeout, cfg, log):\n \"\"\"\\\n This is called pre-fork so it shouldn't do anything to the\n current process. If there's a need to make process wide\n changes you'll want to do that in ``self.init_process()``.\n \"\"\"\n self.age = age\n self.pid = \"[booting]\"\n self.ppid = ppid\n self.sockets = sockets\n self.app = app\n self.timeout = timeout\n self.cfg = cfg\n self.booted = False\n self.aborted = False\n self.reloader = None\n\n self.nr = 0\n jitter = randint(0, cfg.max_requests_jitter)\n self.max_requests = cfg.max_requests + jitter or MAXSIZE\n self.alive = True\n self.log = log\n self.tmp = WorkerTmp(cfg)\n\n def __str__(self):\n return \"<Worker %s>\" % self.pid\n\n def notify(self):\n \"\"\"\\\n Your worker subclass must arrange to have this method called\n once every ``self.timeout`` seconds. If you fail in accomplishing\n this task, the master process will murder your workers.\n \"\"\"\n self.tmp.notify()\n\n def run(self):\n \"\"\"\\\n This is the mainloop of a worker process. 
You should override\n this method in a subclass to provide the intended behaviour\n for your particular evil schemes.\n \"\"\"\n raise NotImplementedError()\n\n def init_process(self):\n \"\"\"\\\n If you override this method in a subclass, the last statement\n in the function should be to call this method with\n super(MyWorkerClass, self).init_process() so that the ``run()``\n loop is initiated.\n \"\"\"\n\n # set environment' variables\n if self.cfg.env:\n for k, v in self.cfg.env.items():\n os.environ[k] = v\n\n util.set_owner_process(self.cfg.uid, self.cfg.gid,\n initgroups=self.cfg.initgroups)\n\n # Reseed the random number generator\n util.seed()\n\n # For waking ourselves up\n self.PIPE = os.pipe()\n for p in self.PIPE:\n util.set_non_blocking(p)\n util.close_on_exec(p)\n\n # Prevent fd inheritance\n for s in self.sockets:\n util.close_on_exec(s)\n util.close_on_exec(self.tmp.fileno())\n\n self.wait_fds = self.sockets + [self.PIPE[0]]\n\n self.log.close_on_exec()\n\n self.init_signals()\n\n # start the reloader\n if self.cfg.reload:\n def changed(fname):\n self.log.info(\"Worker reloading: %s modified\", fname)\n self.alive = False\n self.cfg.worker_int(self)\n time.sleep(0.1)\n sys.exit(0)\n\n reloader_cls = reloader_engines[self.cfg.reload_engine]\n self.reloader = reloader_cls(extra_files=self.cfg.reload_extra_files,\n callback=changed)\n self.reloader.start()\n\n self.load_wsgi()\n self.cfg.post_worker_init(self)\n\n # Enter main run loop\n self.booted = True\n self.run()\n\n def load_wsgi(self):\n try:\n self.wsgi = self.app.wsgi()\n except SyntaxError as e:\n if self.cfg.reload == 'off':\n raise\n\n self.log.exception(e)\n\n # fix from PR #1228\n # storing the traceback into exc_tb will create a circular reference.\n # per https://docs.python.org/2/library/sys.html#sys.exc_info warning,\n # delete the traceback after use.\n try:\n _, exc_val, exc_tb = sys.exc_info()\n self.reloader.add_extra_file(exc_val.filename)\n\n tb_string = six.StringIO()\n traceback.print_tb(exc_tb, file=tb_string)\n self.wsgi = util.make_fail_app(tb_string.getvalue())\n finally:\n del exc_tb\n\n def init_signals(self):\n # reset signaling\n for s in self.SIGNALS:\n signal.signal(s, signal.SIG_DFL)\n # init new signaling\n signal.signal(signal.SIGQUIT, self.handle_quit)\n signal.signal(signal.SIGTERM, self.handle_exit)\n signal.signal(signal.SIGINT, self.handle_quit)\n signal.signal(signal.SIGWINCH, self.handle_winch)\n signal.signal(signal.SIGUSR1, self.handle_usr1)\n signal.signal(signal.SIGABRT, self.handle_abort)\n\n # Don't let SIGTERM and SIGUSR1 disturb active requests\n # by interrupting system calls\n if hasattr(signal, 'siginterrupt'): # python >= 2.6\n signal.siginterrupt(signal.SIGTERM, False)\n signal.siginterrupt(signal.SIGUSR1, False)\n\n if hasattr(signal, 'set_wakeup_fd'):\n signal.set_wakeup_fd(self.PIPE[1])\n\n def handle_usr1(self, sig, frame):\n self.log.reopen_files()\n\n def handle_exit(self, sig, frame):\n self.alive = False\n\n def handle_quit(self, sig, frame):\n self.alive = False\n # worker_int callback\n self.cfg.worker_int(self)\n time.sleep(0.1)\n sys.exit(0)\n\n def handle_abort(self, sig, frame):\n self.alive = False\n self.cfg.worker_abort(self)\n sys.exit(1)\n\n def handle_error(self, req, client, addr, exc):\n request_start = datetime.now()\n addr = addr or ('', -1) # unix socket case\n if isinstance(exc, (InvalidRequestLine, InvalidRequestMethod,\n InvalidHTTPVersion, InvalidHeader, InvalidHeaderName,\n LimitRequestLine, LimitRequestHeaders,\n InvalidProxyLine, 
ForbiddenProxyRequest,\n InvalidSchemeHeaders,\n SSLError)):\n\n status_int = 400\n reason = \"Bad Request\"\n\n if isinstance(exc, InvalidRequestLine):\n mesg = \"Invalid Request Line '%s'\" % str(exc)\n elif isinstance(exc, InvalidRequestMethod):\n mesg = \"Invalid Method '%s'\" % str(exc)\n elif isinstance(exc, InvalidHTTPVersion):\n mesg = \"Invalid HTTP Version '%s'\" % str(exc)\n elif isinstance(exc, (InvalidHeaderName, InvalidHeader,)):\n mesg = \"%s\" % str(exc)\n if not req and hasattr(exc, \"req\"):\n req = exc.req # for access log\n elif isinstance(exc, LimitRequestLine):\n mesg = \"%s\" % str(exc)\n elif isinstance(exc, LimitRequestHeaders):\n mesg = \"Error parsing headers: '%s'\" % str(exc)\n elif isinstance(exc, InvalidProxyLine):\n mesg = \"'%s'\" % str(exc)\n elif isinstance(exc, ForbiddenProxyRequest):\n reason = \"Forbidden\"\n mesg = \"Request forbidden\"\n status_int = 403\n elif isinstance(exc, InvalidSchemeHeaders):\n mesg = \"%s\" % str(exc)\n elif isinstance(exc, SSLError):\n reason = \"Forbidden\"\n mesg = \"'%s'\" % str(exc)\n status_int = 403\n\n msg = \"Invalid request from ip={ip}: {error}\"\n self.log.debug(msg.format(ip=addr[0], error=str(exc)))\n else:\n if hasattr(req, \"uri\"):\n self.log.exception(\"Error handling request %s\", req.uri)\n status_int = 500\n reason = \"Internal Server Error\"\n mesg = \"\"\n\n if req is not None:\n request_time = datetime.now() - request_start\n environ = default_environ(req, client, self.cfg)\n environ['REMOTE_ADDR'] = addr[0]\n environ['REMOTE_PORT'] = str(addr[1])\n resp = Response(req, client, self.cfg)\n resp.status = \"%s %s\" % (status_int, reason)\n resp.response_length = len(mesg)\n self.log.access(resp, req, environ, request_time)\n\n try:\n util.write_error(client, status_int, reason, mesg)\n except:\n self.log.debug(\"Failed to send error message.\")\n\n def handle_winch(self, sig, fname):\n # Ignore SIGWINCH in worker. Fixes a crash on OpenBSD.\n self.log.debug(\"worker: SIGWINCH ignored.\")\n", "path": "gunicorn/workers/base.py"}]}
| 3,408 | 97 |
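The gunicorn fix recorded above swaps the string comparison `self.cfg.reload == 'off'` for a truthiness check. A minimal sketch of why that matters, using a hypothetical `Cfg` stand-in rather than gunicorn's real config object:

```python
# gunicorn's `reload` option is a boolean, so it never equals the string 'off';
# the old guard therefore falls through and later calls add_extra_file() on
# self.reloader, which is None when reloading is disabled.
class Cfg:
    reload = False  # hypothetical stand-in for the parsed config

cfg = Cfg()
print(cfg.reload == 'off')  # False -> old guard does not re-raise the SyntaxError
print(not cfg.reload)       # True  -> patched guard re-raises, avoiding the AttributeError
```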
gh_patches_debug_19410
|
rasdani/github-patches
|
git_diff
|
pyload__pyload-1418
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Downloading from Oboom.com without premium ERROR
It can't download the file. I get the Error "recaptcha html not found".
Everything is up2date...:(
</issue>
<code>
[start of module/plugins/hoster/OboomCom.py]
1 # -*- coding: utf-8 -*-
2 #
3 # Test links:
4 # https://www.oboom.com/B7CYZIEB/10Mio.dat
5
6 import re
7
8 from module.common.json_layer import json_loads
9 from module.plugins.Hoster import Hoster
10 from module.plugins.internal.CaptchaService import ReCaptcha
11
12
13 class OboomCom(Hoster):
14 __name__ = "OboomCom"
15 __type__ = "hoster"
16 __version__ = "0.31"
17
18 __pattern__ = r'https?://(?:www\.)?oboom\.com/(#(id=|/)?)?(?P<ID>\w{8})'
19
20 __description__ = """oboom.com hoster plugin"""
21 __license__ = "GPLv3"
22 __authors__ = [("stanley", "[email protected]")]
23
24
25 RECAPTCHA_KEY = "6LdqpO0SAAAAAJGHXo63HyalP7H4qlRs_vff0kJX"
26
27
28 def setup(self):
29 self.chunkLimit = 1
30 self.multiDL = self.resumeDownload = self.premium
31
32
33 def process(self, pyfile):
34 self.pyfile.url.replace(".com/#id=", ".com/#")
35 self.pyfile.url.replace(".com/#/", ".com/#")
36 self.getFileId(self.pyfile.url)
37 self.getSessionToken()
38 self.getFileInfo(self.sessionToken, self.fileId)
39 self.pyfile.name = self.fileName
40 self.pyfile.size = self.fileSize
41 if not self.premium:
42 self.solveCaptcha()
43 self.getDownloadTicket()
44 self.download("https://%s/1.0/dlh" % self.downloadDomain, get={"ticket": self.downloadTicket, "http_errors": 0})
45
46
47 def loadUrl(self, url, get=None):
48 if get is None:
49 get = dict()
50 return json_loads(self.load(url, get, decode=True))
51
52
53 def getFileId(self, url):
54 self.fileId = re.match(OboomCom.__pattern__, url).group('ID')
55
56
57 def getSessionToken(self):
58 if self.premium:
59 accountInfo = self.account.getAccountInfo(self.user, True)
60 if "session" in accountInfo:
61 self.sessionToken = accountInfo['session']
62 else:
63 self.fail(_("Could not retrieve premium session"))
64 else:
65 apiUrl = "https://www.oboom.com/1.0/guestsession"
66 result = self.loadUrl(apiUrl)
67 if result[0] == 200:
68 self.sessionToken = result[1]
69 else:
70 self.fail(_("Could not retrieve token for guest session. Error code: %s") % result[0])
71
72
73 def solveCaptcha(self):
74 recaptcha = ReCaptcha(self)
75
76 for _i in xrange(5):
77 response, challenge = recaptcha.challenge(self.RECAPTCHA_KEY)
78 apiUrl = "https://www.oboom.com/1.0/download/ticket"
79 params = {"recaptcha_challenge_field": challenge,
80 "recaptcha_response_field": response,
81 "download_id": self.fileId,
82 "token": self.sessionToken}
83 result = self.loadUrl(apiUrl, params)
84
85 if result[0] == 200:
86 self.downloadToken = result[1]
87 self.downloadAuth = result[2]
88 self.correctCaptcha()
89 self.setWait(30)
90 self.wait()
91 break
92
93 elif result[0] == 400:
94 if result[1] == "incorrect-captcha-sol":
95 self.invalidCaptcha()
96 elif result[1] == "captcha-timeout":
97 self.invalidCaptcha()
98 elif result[1] == "forbidden":
99 self.retry(5, 15 * 60, _("Service unavailable"))
100
101 elif result[0] == 403:
102 if result[1] == -1: # another download is running
103 self.setWait(15 * 60)
104 else:
105 self.setWait(result[1], True)
106 self.wait()
107 self.retry(5)
108 else:
109 self.invalidCaptcha()
110 self.fail(_("Received invalid captcha 5 times"))
111
112
113 def getFileInfo(self, token, fileId):
114 apiUrl = "https://api.oboom.com/1.0/info"
115 params = {"token": token, "items": fileId, "http_errors": 0}
116
117 result = self.loadUrl(apiUrl, params)
118 if result[0] == 200:
119 item = result[1][0]
120 if item['state'] == "online":
121 self.fileSize = item['size']
122 self.fileName = item['name']
123 else:
124 self.offline()
125 else:
126 self.fail(_("Could not retrieve file info. Error code %s: %s") % (result[0], result[1]))
127
128
129 def getDownloadTicket(self):
130 apiUrl = "https://api.oboom.com/1/dl"
131 params = {"item": self.fileId, "http_errors": 0}
132 if self.premium:
133 params['token'] = self.sessionToken
134 else:
135 params['token'] = self.downloadToken
136 params['auth'] = self.downloadAuth
137
138 result = self.loadUrl(apiUrl, params)
139 if result[0] == 200:
140 self.downloadDomain = result[1]
141 self.downloadTicket = result[2]
142 elif result[0] == 421:
143 self.retry(wait_time=result[2] + 60, reason=_("Connection limit exceeded"))
144 else:
145 self.fail(_("Could not retrieve download ticket. Error code: %s") % result[0])
146
[end of module/plugins/hoster/OboomCom.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/module/plugins/hoster/OboomCom.py b/module/plugins/hoster/OboomCom.py
--- a/module/plugins/hoster/OboomCom.py
+++ b/module/plugins/hoster/OboomCom.py
@@ -13,9 +13,9 @@
class OboomCom(Hoster):
__name__ = "OboomCom"
__type__ = "hoster"
- __version__ = "0.31"
+ __version__ = "0.32"
- __pattern__ = r'https?://(?:www\.)?oboom\.com/(#(id=|/)?)?(?P<ID>\w{8})'
+ __pattern__ = r'https?://(?:www\.)?oboom\.com/(?:#(?:id=|/)?)?(?P<ID>\w{8})'
__description__ = """oboom.com hoster plugin"""
__license__ = "GPLv3"
@@ -33,6 +33,7 @@
def process(self, pyfile):
self.pyfile.url.replace(".com/#id=", ".com/#")
self.pyfile.url.replace(".com/#/", ".com/#")
+ self.html = self.load(pyfile.url)
self.getFileId(self.pyfile.url)
self.getSessionToken()
self.getFileInfo(self.sessionToken, self.fileId)
|
{"golden_diff": "diff --git a/module/plugins/hoster/OboomCom.py b/module/plugins/hoster/OboomCom.py\n--- a/module/plugins/hoster/OboomCom.py\n+++ b/module/plugins/hoster/OboomCom.py\n@@ -13,9 +13,9 @@\n class OboomCom(Hoster):\n __name__ = \"OboomCom\"\n __type__ = \"hoster\"\n- __version__ = \"0.31\"\n+ __version__ = \"0.32\"\n \n- __pattern__ = r'https?://(?:www\\.)?oboom\\.com/(#(id=|/)?)?(?P<ID>\\w{8})'\n+ __pattern__ = r'https?://(?:www\\.)?oboom\\.com/(?:#(?:id=|/)?)?(?P<ID>\\w{8})'\n \n __description__ = \"\"\"oboom.com hoster plugin\"\"\"\n __license__ = \"GPLv3\"\n@@ -33,6 +33,7 @@\n def process(self, pyfile):\n self.pyfile.url.replace(\".com/#id=\", \".com/#\")\n self.pyfile.url.replace(\".com/#/\", \".com/#\")\n+ self.html = self.load(pyfile.url)\n self.getFileId(self.pyfile.url)\n self.getSessionToken()\n self.getFileInfo(self.sessionToken, self.fileId)\n", "issue": "Downloading from Oboom.com without premium ERROR\nIt can't download the file. I get the Error \"recaptcha html not found\".\nEverything is up2date...:(\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Test links:\n# https://www.oboom.com/B7CYZIEB/10Mio.dat\n\nimport re\n\nfrom module.common.json_layer import json_loads\nfrom module.plugins.Hoster import Hoster\nfrom module.plugins.internal.CaptchaService import ReCaptcha\n\n\nclass OboomCom(Hoster):\n __name__ = \"OboomCom\"\n __type__ = \"hoster\"\n __version__ = \"0.31\"\n\n __pattern__ = r'https?://(?:www\\.)?oboom\\.com/(#(id=|/)?)?(?P<ID>\\w{8})'\n\n __description__ = \"\"\"oboom.com hoster plugin\"\"\"\n __license__ = \"GPLv3\"\n __authors__ = [(\"stanley\", \"[email protected]\")]\n\n\n RECAPTCHA_KEY = \"6LdqpO0SAAAAAJGHXo63HyalP7H4qlRs_vff0kJX\"\n\n\n def setup(self):\n self.chunkLimit = 1\n self.multiDL = self.resumeDownload = self.premium\n\n\n def process(self, pyfile):\n self.pyfile.url.replace(\".com/#id=\", \".com/#\")\n self.pyfile.url.replace(\".com/#/\", \".com/#\")\n self.getFileId(self.pyfile.url)\n self.getSessionToken()\n self.getFileInfo(self.sessionToken, self.fileId)\n self.pyfile.name = self.fileName\n self.pyfile.size = self.fileSize\n if not self.premium:\n self.solveCaptcha()\n self.getDownloadTicket()\n self.download(\"https://%s/1.0/dlh\" % self.downloadDomain, get={\"ticket\": self.downloadTicket, \"http_errors\": 0})\n\n\n def loadUrl(self, url, get=None):\n if get is None:\n get = dict()\n return json_loads(self.load(url, get, decode=True))\n\n\n def getFileId(self, url):\n self.fileId = re.match(OboomCom.__pattern__, url).group('ID')\n\n\n def getSessionToken(self):\n if self.premium:\n accountInfo = self.account.getAccountInfo(self.user, True)\n if \"session\" in accountInfo:\n self.sessionToken = accountInfo['session']\n else:\n self.fail(_(\"Could not retrieve premium session\"))\n else:\n apiUrl = \"https://www.oboom.com/1.0/guestsession\"\n result = self.loadUrl(apiUrl)\n if result[0] == 200:\n self.sessionToken = result[1]\n else:\n self.fail(_(\"Could not retrieve token for guest session. 
Error code: %s\") % result[0])\n\n\n def solveCaptcha(self):\n recaptcha = ReCaptcha(self)\n\n for _i in xrange(5):\n response, challenge = recaptcha.challenge(self.RECAPTCHA_KEY)\n apiUrl = \"https://www.oboom.com/1.0/download/ticket\"\n params = {\"recaptcha_challenge_field\": challenge,\n \"recaptcha_response_field\": response,\n \"download_id\": self.fileId,\n \"token\": self.sessionToken}\n result = self.loadUrl(apiUrl, params)\n\n if result[0] == 200:\n self.downloadToken = result[1]\n self.downloadAuth = result[2]\n self.correctCaptcha()\n self.setWait(30)\n self.wait()\n break\n\n elif result[0] == 400:\n if result[1] == \"incorrect-captcha-sol\":\n self.invalidCaptcha()\n elif result[1] == \"captcha-timeout\":\n self.invalidCaptcha()\n elif result[1] == \"forbidden\":\n self.retry(5, 15 * 60, _(\"Service unavailable\"))\n\n elif result[0] == 403:\n if result[1] == -1: # another download is running\n self.setWait(15 * 60)\n else:\n self.setWait(result[1], True)\n self.wait()\n self.retry(5)\n else:\n self.invalidCaptcha()\n self.fail(_(\"Received invalid captcha 5 times\"))\n\n\n def getFileInfo(self, token, fileId):\n apiUrl = \"https://api.oboom.com/1.0/info\"\n params = {\"token\": token, \"items\": fileId, \"http_errors\": 0}\n\n result = self.loadUrl(apiUrl, params)\n if result[0] == 200:\n item = result[1][0]\n if item['state'] == \"online\":\n self.fileSize = item['size']\n self.fileName = item['name']\n else:\n self.offline()\n else:\n self.fail(_(\"Could not retrieve file info. Error code %s: %s\") % (result[0], result[1]))\n\n\n def getDownloadTicket(self):\n apiUrl = \"https://api.oboom.com/1/dl\"\n params = {\"item\": self.fileId, \"http_errors\": 0}\n if self.premium:\n params['token'] = self.sessionToken\n else:\n params['token'] = self.downloadToken\n params['auth'] = self.downloadAuth\n\n result = self.loadUrl(apiUrl, params)\n if result[0] == 200:\n self.downloadDomain = result[1]\n self.downloadTicket = result[2]\n elif result[0] == 421:\n self.retry(wait_time=result[2] + 60, reason=_(\"Connection limit exceeded\"))\n else:\n self.fail(_(\"Could not retrieve download ticket. Error code: %s\") % result[0])\n", "path": "module/plugins/hoster/OboomCom.py"}]}
| 2,155 | 296 |
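Besides loading the page HTML before captcha handling, the OboomCom patch above rewrites `__pattern__` with non-capturing groups. A quick sketch (reusing the test link from the plugin header) showing the ID extraction is unchanged:

```python
import re

old = r'https?://(?:www\.)?oboom\.com/(#(id=|/)?)?(?P<ID>\w{8})'
new = r'https?://(?:www\.)?oboom\.com/(?:#(?:id=|/)?)?(?P<ID>\w{8})'

url = "https://www.oboom.com/B7CYZIEB/10Mio.dat"
print(re.match(old, url).group('ID'))  # B7CYZIEB
print(re.match(new, url).group('ID'))  # B7CYZIEB -- only the group bookkeeping differs
```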
gh_patches_debug_7405
|
rasdani/github-patches
|
git_diff
|
open-telemetry__opentelemetry-python-contrib-823
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
urllib instrumentation fails for local file access
When reading local files the status code is not specified and is None. This isn't handled by the instrumentation and causes an exception.
https://github.com/open-telemetry/opentelemetry-python-contrib/blob/444e0a13127304d3a04ccd44445b2e6caed3f770/instrumentation/opentelemetry-instrumentation-urllib/src/opentelemetry/instrumentation/urllib/__init__.py#L212-L217
urllib instrumentation fails for local file access
When reading local files the status code is not specified and is None. This isn't handled by the instrumentation and causes an exception.
https://github.com/open-telemetry/opentelemetry-python-contrib/blob/444e0a13127304d3a04ccd44445b2e6caed3f770/instrumentation/opentelemetry-instrumentation-urllib/src/opentelemetry/instrumentation/urllib/__init__.py#L212-L217
</issue>
<code>
[start of opentelemetry-instrumentation/src/opentelemetry/instrumentation/utils.py]
1 # Copyright The OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from typing import Dict, Sequence
16
17 from wrapt import ObjectProxy
18
19 from opentelemetry import context, trace
20
21 # pylint: disable=unused-import
22 # pylint: disable=E0611
23 from opentelemetry.context import _SUPPRESS_INSTRUMENTATION_KEY # noqa: F401
24 from opentelemetry.propagate import extract
25 from opentelemetry.trace import StatusCode
26
27
28 def extract_attributes_from_object(
29 obj: any, attributes: Sequence[str], existing: Dict[str, str] = None
30 ) -> Dict[str, str]:
31 extracted = {}
32 if existing:
33 extracted.update(existing)
34 for attr in attributes:
35 value = getattr(obj, attr, None)
36 if value is not None:
37 extracted[attr] = str(value)
38 return extracted
39
40
41 def http_status_to_status_code(
42 status: int,
43 allow_redirect: bool = True,
44 server_span: bool = False,
45 ) -> StatusCode:
46 """Converts an HTTP status code to an OpenTelemetry canonical status code
47
48 Args:
49 status (int): HTTP status code
50 """
51 # See: https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/trace/semantic_conventions/http.md#status
52 if status < 100:
53 return StatusCode.ERROR
54 if status <= 299:
55 return StatusCode.UNSET
56 if status <= 399 and allow_redirect:
57 return StatusCode.UNSET
58 if status <= 499 and server_span:
59 return StatusCode.UNSET
60 return StatusCode.ERROR
61
62
63 def unwrap(obj, attr: str):
64 """Given a function that was wrapped by wrapt.wrap_function_wrapper, unwrap it
65
66 Args:
67 obj: Object that holds a reference to the wrapped function
68 attr (str): Name of the wrapped function
69 """
70 func = getattr(obj, attr, None)
71 if func and isinstance(func, ObjectProxy) and hasattr(func, "__wrapped__"):
72 setattr(obj, attr, func.__wrapped__)
73
74
75 def _start_internal_or_server_span(
76 tracer, span_name, start_time, context_carrier, context_getter
77 ):
78 """Returns internal or server span along with the token which can be used by caller to reset context
79
80
81 Args:
82 tracer : tracer in use by given instrumentation library
83 name (string): name of the span
84 start_time : start time of the span
85 context_carrier : object which contains values that are
86 used to construct a Context. This object
87 must be paired with an appropriate getter
88 which understands how to extract a value from it.
89 context_getter : an object which contains a get function that can retrieve zero
90 or more values from the carrier and a keys function that can get all the keys
91 from carrier.
92 """
93
94 token = ctx = span_kind = None
95 if trace.get_current_span() is trace.INVALID_SPAN:
96 ctx = extract(context_carrier, getter=context_getter)
97 token = context.attach(ctx)
98 span_kind = trace.SpanKind.SERVER
99 else:
100 ctx = context.get_current()
101 span_kind = trace.SpanKind.INTERNAL
102 span = tracer.start_span(
103 name=span_name,
104 context=ctx,
105 kind=span_kind,
106 start_time=start_time,
107 )
108 return span, token
109
[end of opentelemetry-instrumentation/src/opentelemetry/instrumentation/utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/opentelemetry-instrumentation/src/opentelemetry/instrumentation/utils.py b/opentelemetry-instrumentation/src/opentelemetry/instrumentation/utils.py
--- a/opentelemetry-instrumentation/src/opentelemetry/instrumentation/utils.py
+++ b/opentelemetry-instrumentation/src/opentelemetry/instrumentation/utils.py
@@ -49,6 +49,9 @@
status (int): HTTP status code
"""
# See: https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/trace/semantic_conventions/http.md#status
+ if not isinstance(status, int):
+ return StatusCode.UNSET
+
if status < 100:
return StatusCode.ERROR
if status <= 299:
|
{"golden_diff": "diff --git a/opentelemetry-instrumentation/src/opentelemetry/instrumentation/utils.py b/opentelemetry-instrumentation/src/opentelemetry/instrumentation/utils.py\n--- a/opentelemetry-instrumentation/src/opentelemetry/instrumentation/utils.py\n+++ b/opentelemetry-instrumentation/src/opentelemetry/instrumentation/utils.py\n@@ -49,6 +49,9 @@\n status (int): HTTP status code\n \"\"\"\n # See: https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/trace/semantic_conventions/http.md#status\n+ if not isinstance(status, int):\n+ return StatusCode.UNSET\n+\n if status < 100:\n return StatusCode.ERROR\n if status <= 299:\n", "issue": "urllib instrumentation fails for local file access\nWhen reading local files the status code is not specified and is None. This isn't handled by the instrumentation and causes an exception.\r\n\r\nhttps://github.com/open-telemetry/opentelemetry-python-contrib/blob/444e0a13127304d3a04ccd44445b2e6caed3f770/instrumentation/opentelemetry-instrumentation-urllib/src/opentelemetry/instrumentation/urllib/__init__.py#L212-L217\nurllib instrumentation fails for local file access\nWhen reading local files the status code is not specified and is None. This isn't handled by the instrumentation and causes an exception.\r\n\r\nhttps://github.com/open-telemetry/opentelemetry-python-contrib/blob/444e0a13127304d3a04ccd44445b2e6caed3f770/instrumentation/opentelemetry-instrumentation-urllib/src/opentelemetry/instrumentation/urllib/__init__.py#L212-L217\n", "before_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom typing import Dict, Sequence\n\nfrom wrapt import ObjectProxy\n\nfrom opentelemetry import context, trace\n\n# pylint: disable=unused-import\n# pylint: disable=E0611\nfrom opentelemetry.context import _SUPPRESS_INSTRUMENTATION_KEY # noqa: F401\nfrom opentelemetry.propagate import extract\nfrom opentelemetry.trace import StatusCode\n\n\ndef extract_attributes_from_object(\n obj: any, attributes: Sequence[str], existing: Dict[str, str] = None\n) -> Dict[str, str]:\n extracted = {}\n if existing:\n extracted.update(existing)\n for attr in attributes:\n value = getattr(obj, attr, None)\n if value is not None:\n extracted[attr] = str(value)\n return extracted\n\n\ndef http_status_to_status_code(\n status: int,\n allow_redirect: bool = True,\n server_span: bool = False,\n) -> StatusCode:\n \"\"\"Converts an HTTP status code to an OpenTelemetry canonical status code\n\n Args:\n status (int): HTTP status code\n \"\"\"\n # See: https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/trace/semantic_conventions/http.md#status\n if status < 100:\n return StatusCode.ERROR\n if status <= 299:\n return StatusCode.UNSET\n if status <= 399 and allow_redirect:\n return StatusCode.UNSET\n if status <= 499 and server_span:\n return StatusCode.UNSET\n return StatusCode.ERROR\n\n\ndef unwrap(obj, attr: str):\n \"\"\"Given a function that was wrapped by 
wrapt.wrap_function_wrapper, unwrap it\n\n Args:\n obj: Object that holds a reference to the wrapped function\n attr (str): Name of the wrapped function\n \"\"\"\n func = getattr(obj, attr, None)\n if func and isinstance(func, ObjectProxy) and hasattr(func, \"__wrapped__\"):\n setattr(obj, attr, func.__wrapped__)\n\n\ndef _start_internal_or_server_span(\n tracer, span_name, start_time, context_carrier, context_getter\n):\n \"\"\"Returns internal or server span along with the token which can be used by caller to reset context\n\n\n Args:\n tracer : tracer in use by given instrumentation library\n name (string): name of the span\n start_time : start time of the span\n context_carrier : object which contains values that are\n used to construct a Context. This object\n must be paired with an appropriate getter\n which understands how to extract a value from it.\n context_getter : an object which contains a get function that can retrieve zero\n or more values from the carrier and a keys function that can get all the keys\n from carrier.\n \"\"\"\n\n token = ctx = span_kind = None\n if trace.get_current_span() is trace.INVALID_SPAN:\n ctx = extract(context_carrier, getter=context_getter)\n token = context.attach(ctx)\n span_kind = trace.SpanKind.SERVER\n else:\n ctx = context.get_current()\n span_kind = trace.SpanKind.INTERNAL\n span = tracer.start_span(\n name=span_name,\n context=ctx,\n kind=span_kind,\n start_time=start_time,\n )\n return span, token\n", "path": "opentelemetry-instrumentation/src/opentelemetry/instrumentation/utils.py"}]}
| 1,853 | 166 |
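The opentelemetry failure above boils down to comparing `None` with an integer. A minimal sketch of the crash and of the guard the patch adds, in plain Python with no OpenTelemetry imports:

```python
status = None  # urllib reports no status code for file:// responses

try:
    status < 100  # the unguarded comparison inside http_status_to_status_code
except TypeError as exc:
    print("unguarded:", exc)

if not isinstance(status, int):
    print("guarded: return StatusCode.UNSET instead of comparing")
```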
gh_patches_debug_8569
|
rasdani/github-patches
|
git_diff
|
mne-tools__mne-bids-1091
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
CI: Problem with `gen_cli.py`
see: https://app.circleci.com/pipelines/github/mne-tools/mne-bids/4785/workflows/21ad6804-1cc2-42dd-9133-f24de2ea3db5/jobs/6923
```
Traceback (most recent call last):
File "/home/circleci/mne_bids_env/lib/python3.9/site-packages/sphinx/events.py", line 94, in emit
results.append(listener.handler(self.app, *args))
File "/home/circleci/project/doc/sphinxext/gen_cli.py", line 84, in generate_cli_rst
output[0], output[2] = output[2], output[0]
IndexError: list index out of range
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/circleci/mne_bids_env/lib/python3.9/site-packages/sphinx/cmd/build.py", line 276, in build_main
app = Sphinx(args.sourcedir, args.confdir, args.outputdir,
File "/home/circleci/mne_bids_env/lib/python3.9/site-packages/sphinx/application.py", line 262, in __init__
self._init_builder()
File "/home/circleci/mne_bids_env/lib/python3.9/site-packages/sphinx/application.py", line 335, in _init_builder
self.events.emit('builder-inited')
File "/home/circleci/mne_bids_env/lib/python3.9/site-packages/sphinx/events.py", line 105, in emit
raise ExtensionError(__("Handler %r for event %r threw an exception") %
sphinx.errors.ExtensionError: Handler <function generate_cli_rst at 0x7fe9bf90c160> for event 'builder-inited' threw an exception (exception: list index out of range)
```
https://github.com/mne-tools/mne-bids/blob/46b0a5300ed5c17ca93b8bbf1d9542069597ef62/doc/sphinxext/gen_cli.py#L1
</issue>
<code>
[start of doc/sphinxext/gen_cli.py]
1 """Custom sphinx extension to generate docs for the command line interface.
2
3 Inspired by MNE-Python's `gen_commands.py`
4 see: github.com/mne-tools/mne-python/blob/main/doc/sphinxext/gen_commands.py
5 """
6 # Authors: Eric Larson <[email protected]>
7 # Alexandre Gramfort <[email protected]>
8 # Stefan Appelhoff <[email protected]>
9 #
10 # License: BSD-3-Clause
11 import os
12 import glob
13 from os import path as op
14 import subprocess
15 import sys
16
17 import sphinx.util
18 from mne.utils import run_subprocess, _replace_md5
19
20
21 def setup(app):
22 """Set up the app."""
23 app.connect('builder-inited', generate_cli_rst)
24
25
26 # Header markings go:
27 # 1. =/= : Page title
28 # 2. = : Command name
29 # 3. -/- : Command description
30 # 4. - : Command sections (Examples, Notes)
31
32 header = """\
33 :orphan:
34
35 .. _python_cli:
36
37 =====================================
38 MNE-BIDS Command Line Interface (CLI)
39 =====================================
40
41 Here we list the MNE-BIDS tools that you can use from the command line.
42
43 """
44
45 command_rst = """
46
47 .. _gen_%s:
48
49 %s
50 %s
51
52 .. rst-class:: callout
53
54 %s
55
56 """
57
58
59 def generate_cli_rst(app=None):
60 """Generate the command line interface docs."""
61 out_dir = op.abspath(op.join(op.dirname(__file__), '..', 'generated'))
62 if not op.isdir(out_dir):
63 os.mkdir(out_dir)
64 out_fname = op.join(out_dir, 'cli.rst.new')
65
66 cli_path = op.abspath(
67 op.join(os.path.dirname(__file__), '..', '..', 'mne_bids', 'commands'))
68 fnames = sorted([
69 op.basename(fname)
70 for fname in glob.glob(op.join(cli_path, 'mne_bids*.py'))])
71 iterator = sphinx.util.status_iterator(
72 fnames, 'generating MNE-BIDS cli help ... ', length=len(fnames))
73 with open(out_fname, 'w', encoding='utf-8') as f:
74 f.write(header)
75 for fname in iterator:
76 cmd_name = fname[:-3]
77 run_name = op.join(cli_path, fname)
78 output, _ = run_subprocess([sys.executable, run_name, '--help'],
79 stdout=subprocess.PIPE,
80 stderr=subprocess.PIPE, verbose=False)
81 output = output.splitlines()
82
83 # Swap usage and title lines
84 output[0], output[2] = output[2], output[0]
85
86 # Add header marking
87 for idx in (1, 0):
88 output.insert(idx, '-' * len(output[0]))
89
90 # Add code styling for the "Usage: " line
91 for li, line in enumerate(output):
92 if line.startswith('Usage: mne_bids '):
93 output[li] = 'Usage: ``%s``' % line[7:]
94 break
95
96 # Turn "Options:" into field list
97 if 'Options:' in output:
98 ii = output.index('Options:')
99 output[ii] = 'Options'
100 output.insert(ii + 1, '-------')
101 output.insert(ii + 2, '')
102 output.insert(ii + 3, '.. rst-class:: field-list cmd-list')
103 output.insert(ii + 4, '')
104 output = '\n'.join(output)
105 f.write(command_rst % (cmd_name,
106 cmd_name.replace('mne_bids_', 'mne_bids '),
107 '=' * len(cmd_name),
108 output))
109 _replace_md5(out_fname)
110 print('[Done]')
111
112
113 # This is useful for testing/iterating to see what the result looks like
114 if __name__ == '__main__':
115 generate_cli_rst()
116
[end of doc/sphinxext/gen_cli.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/doc/sphinxext/gen_cli.py b/doc/sphinxext/gen_cli.py
--- a/doc/sphinxext/gen_cli.py
+++ b/doc/sphinxext/gen_cli.py
@@ -76,8 +76,7 @@
cmd_name = fname[:-3]
run_name = op.join(cli_path, fname)
output, _ = run_subprocess([sys.executable, run_name, '--help'],
- stdout=subprocess.PIPE,
- stderr=subprocess.PIPE, verbose=False)
+ verbose=False)
output = output.splitlines()
# Swap usage and title lines
|
{"golden_diff": "diff --git a/doc/sphinxext/gen_cli.py b/doc/sphinxext/gen_cli.py\n--- a/doc/sphinxext/gen_cli.py\n+++ b/doc/sphinxext/gen_cli.py\n@@ -76,8 +76,7 @@\n cmd_name = fname[:-3]\n run_name = op.join(cli_path, fname)\n output, _ = run_subprocess([sys.executable, run_name, '--help'],\n- stdout=subprocess.PIPE,\n- stderr=subprocess.PIPE, verbose=False)\n+ verbose=False)\n output = output.splitlines()\n \n # Swap usage and title lines\n", "issue": "CI: Problem with `gen_cli.py`\nsee: https://app.circleci.com/pipelines/github/mne-tools/mne-bids/4785/workflows/21ad6804-1cc2-42dd-9133-f24de2ea3db5/jobs/6923\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/circleci/mne_bids_env/lib/python3.9/site-packages/sphinx/events.py\", line 94, in emit\r\n results.append(listener.handler(self.app, *args))\r\n File \"/home/circleci/project/doc/sphinxext/gen_cli.py\", line 84, in generate_cli_rst\r\n output[0], output[2] = output[2], output[0]\r\nIndexError: list index out of range\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"/home/circleci/mne_bids_env/lib/python3.9/site-packages/sphinx/cmd/build.py\", line 276, in build_main\r\n app = Sphinx(args.sourcedir, args.confdir, args.outputdir,\r\n File \"/home/circleci/mne_bids_env/lib/python3.9/site-packages/sphinx/application.py\", line 262, in __init__\r\n self._init_builder()\r\n File \"/home/circleci/mne_bids_env/lib/python3.9/site-packages/sphinx/application.py\", line 335, in _init_builder\r\n self.events.emit('builder-inited')\r\n File \"/home/circleci/mne_bids_env/lib/python3.9/site-packages/sphinx/events.py\", line 105, in emit\r\n raise ExtensionError(__(\"Handler %r for event %r threw an exception\") %\r\nsphinx.errors.ExtensionError: Handler <function generate_cli_rst at 0x7fe9bf90c160> for event 'builder-inited' threw an exception (exception: list index out of range)\r\n```\r\n\r\nhttps://github.com/mne-tools/mne-bids/blob/46b0a5300ed5c17ca93b8bbf1d9542069597ef62/doc/sphinxext/gen_cli.py#L1\n", "before_files": [{"content": "\"\"\"Custom sphinx extension to generate docs for the command line interface.\n\nInspired by MNE-Python's `gen_commands.py`\nsee: github.com/mne-tools/mne-python/blob/main/doc/sphinxext/gen_commands.py\n\"\"\"\n# Authors: Eric Larson <[email protected]>\n# Alexandre Gramfort <[email protected]>\n# Stefan Appelhoff <[email protected]>\n#\n# License: BSD-3-Clause\nimport os\nimport glob\nfrom os import path as op\nimport subprocess\nimport sys\n\nimport sphinx.util\nfrom mne.utils import run_subprocess, _replace_md5\n\n\ndef setup(app):\n \"\"\"Set up the app.\"\"\"\n app.connect('builder-inited', generate_cli_rst)\n\n\n# Header markings go:\n# 1. =/= : Page title\n# 2. = : Command name\n# 3. -/- : Command description\n# 4. - : Command sections (Examples, Notes)\n\nheader = \"\"\"\\\n:orphan:\n\n.. _python_cli:\n\n=====================================\nMNE-BIDS Command Line Interface (CLI)\n=====================================\n\nHere we list the MNE-BIDS tools that you can use from the command line.\n\n\"\"\"\n\ncommand_rst = \"\"\"\n\n.. _gen_%s:\n\n%s\n%s\n\n.. 
rst-class:: callout\n\n%s\n\n\"\"\"\n\n\ndef generate_cli_rst(app=None):\n \"\"\"Generate the command line interface docs.\"\"\"\n out_dir = op.abspath(op.join(op.dirname(__file__), '..', 'generated'))\n if not op.isdir(out_dir):\n os.mkdir(out_dir)\n out_fname = op.join(out_dir, 'cli.rst.new')\n\n cli_path = op.abspath(\n op.join(os.path.dirname(__file__), '..', '..', 'mne_bids', 'commands'))\n fnames = sorted([\n op.basename(fname)\n for fname in glob.glob(op.join(cli_path, 'mne_bids*.py'))])\n iterator = sphinx.util.status_iterator(\n fnames, 'generating MNE-BIDS cli help ... ', length=len(fnames))\n with open(out_fname, 'w', encoding='utf-8') as f:\n f.write(header)\n for fname in iterator:\n cmd_name = fname[:-3]\n run_name = op.join(cli_path, fname)\n output, _ = run_subprocess([sys.executable, run_name, '--help'],\n stdout=subprocess.PIPE,\n stderr=subprocess.PIPE, verbose=False)\n output = output.splitlines()\n\n # Swap usage and title lines\n output[0], output[2] = output[2], output[0]\n\n # Add header marking\n for idx in (1, 0):\n output.insert(idx, '-' * len(output[0]))\n\n # Add code styling for the \"Usage: \" line\n for li, line in enumerate(output):\n if line.startswith('Usage: mne_bids '):\n output[li] = 'Usage: ``%s``' % line[7:]\n break\n\n # Turn \"Options:\" into field list\n if 'Options:' in output:\n ii = output.index('Options:')\n output[ii] = 'Options'\n output.insert(ii + 1, '-------')\n output.insert(ii + 2, '')\n output.insert(ii + 3, '.. rst-class:: field-list cmd-list')\n output.insert(ii + 4, '')\n output = '\\n'.join(output)\n f.write(command_rst % (cmd_name,\n cmd_name.replace('mne_bids_', 'mne_bids '),\n '=' * len(cmd_name),\n output))\n _replace_md5(out_fname)\n print('[Done]')\n\n\n# This is useful for testing/iterating to see what the result looks like\nif __name__ == '__main__':\n generate_cli_rst()\n", "path": "doc/sphinxext/gen_cli.py"}]}
| 2,111 | 126 |
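The mne-bids traceback above is what an empty capture produces: if `run_subprocess` returns no text (presumably because the extra stdout/stderr pipes conflict with how newer MNE already captures output), `splitlines()` yields too few lines to swap. A tiny reproduction of just that failure, independent of Sphinx and MNE:

```python
output = "".splitlines()  # empty capture -> []
try:
    output[0], output[2] = output[2], output[0]
except IndexError as exc:
    print(exc)  # list index out of range, matching the CI traceback
```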
gh_patches_debug_4621
|
rasdani/github-patches
|
git_diff
|
cupy__cupy-379
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
UnicodeDecodeError in compiler.py
Reported here: https://stackoverflow.com/questions/45473903/unicodedecodeerror-when-i-use-cuda-to-train-dataset
Versions:
- chainer (2.0.2)
- cupy (1.0.2)
It seems `nvcc` generated non-UTF8 output.
</issue>
<code>
[start of cupy/cuda/compiler.py]
1 import hashlib
2 import os
3 import re
4 import shutil
5 import subprocess
6 import sys
7 import tempfile
8
9 import six
10
11 from cupy.cuda import device
12 from cupy.cuda import function
13
14
15 _nvcc_version = None
16
17
18 def _get_nvcc_version():
19 global _nvcc_version
20 if _nvcc_version is None:
21 cmd = ['nvcc', '--version']
22 _nvcc_version = _run_nvcc(cmd, '.')
23
24 return _nvcc_version
25
26
27 def _get_arch():
28 cc = device.Device().compute_capability
29 return 'sm_%s' % cc
30
31
32 class TemporaryDirectory(object):
33
34 def __enter__(self):
35 self.path = tempfile.mkdtemp()
36 return self.path
37
38 def __exit__(self, exc_type, exc_value, traceback):
39 if exc_value is not None:
40 return
41
42 for name in os.listdir(self.path):
43 os.unlink(os.path.join(self.path, name))
44 os.rmdir(self.path)
45
46
47 def _run_nvcc(cmd, cwd):
48 try:
49 return subprocess.check_output(cmd, cwd=cwd, stderr=subprocess.STDOUT)
50 except subprocess.CalledProcessError as e:
51 msg = ('`nvcc` command returns non-zero exit status. \n'
52 'command: {0}\n'
53 'return-code: {1}\n'
54 'stdout/stderr: \n'
55 '{2}'.format(e.cmd, e.returncode, e.output))
56 raise RuntimeError(msg)
57 except OSError as e:
58 msg = 'Failed to run `nvcc` command. ' \
59 'Check PATH environment variable: ' \
60 + str(e)
61 raise OSError(msg)
62
63
64 def nvcc(source, options=(), arch=None):
65 if not arch:
66 arch = _get_arch()
67 cmd = ['nvcc', '--cubin', '-arch', arch] + list(options)
68
69 with TemporaryDirectory() as root_dir:
70 path = os.path.join(root_dir, 'kern')
71 cu_path = '%s.cu' % path
72 cubin_path = '%s.cubin' % path
73
74 with open(cu_path, 'w') as cu_file:
75 cu_file.write(source)
76
77 cmd.append(cu_path)
78 _run_nvcc(cmd, root_dir)
79
80 with open(cubin_path, 'rb') as bin_file:
81 return bin_file.read()
82
83
84 def preprocess(source, options=()):
85 cmd = ['nvcc', '--preprocess'] + list(options)
86 with TemporaryDirectory() as root_dir:
87 path = os.path.join(root_dir, 'kern')
88 cu_path = '%s.cu' % path
89
90 with open(cu_path, 'w') as cu_file:
91 cu_file.write(source)
92
93 cmd.append(cu_path)
94 pp_src = _run_nvcc(cmd, root_dir)
95
96 if isinstance(pp_src, six.binary_type):
97 pp_src = pp_src.decode('utf-8')
98 return re.sub('(?m)^#.*$', '', pp_src)
99
100
101 _default_cache_dir = os.path.expanduser('~/.cupy/kernel_cache')
102
103
104 def get_cache_dir():
105 return os.environ.get('CUPY_CACHE_DIR', _default_cache_dir)
106
107
108 _empty_file_preprocess_cache = {}
109
110
111 def compile_with_cache(source, options=(), arch=None, cache_dir=None):
112 global _empty_file_preprocess_cache
113 if cache_dir is None:
114 cache_dir = get_cache_dir()
115 if arch is None:
116 arch = _get_arch()
117
118 if 'win32' == sys.platform:
119 options += ('-Xcompiler', '/wd 4819')
120 if sys.maxsize == 9223372036854775807:
121 options += '-m64',
122 elif sys.maxsize == 2147483647:
123 options += '-m32',
124
125 env = (arch, options, _get_nvcc_version())
126 if '#include' in source:
127 pp_src = '%s %s' % (env, preprocess(source, options))
128 else:
129 base = _empty_file_preprocess_cache.get(env, None)
130 if base is None:
131 base = _empty_file_preprocess_cache[env] = preprocess('', options)
132 pp_src = '%s %s %s' % (env, base, source)
133
134 if isinstance(pp_src, six.text_type):
135 pp_src = pp_src.encode('utf-8')
136 name = '%s_2.cubin' % hashlib.md5(pp_src).hexdigest()
137
138 if not os.path.isdir(cache_dir):
139 try:
140 os.makedirs(cache_dir)
141 except OSError:
142 if not os.path.isdir(cache_dir):
143 raise
144
145 mod = function.Module()
146 # To handle conflicts in concurrent situation, we adopt lock-free method
147 # to avoid performance degradation.
148 path = os.path.join(cache_dir, name)
149 if os.path.exists(path):
150 with open(path, 'rb') as file:
151 data = file.read()
152 if len(data) >= 32:
153 hash = data[:32]
154 cubin = data[32:]
155 cubin_hash = six.b(hashlib.md5(cubin).hexdigest())
156 if hash == cubin_hash:
157 mod.load(cubin)
158 return mod
159
160 cubin = nvcc(source, options, arch)
161 cubin_hash = six.b(hashlib.md5(cubin).hexdigest())
162
163 # shutil.move is not atomic operation, so it could result in a corrupted
164 # file. We detect it by appending md5 hash at the beginning of each cache
165 # file. If the file is corrupted, it will be ignored next time it is read.
166 with tempfile.NamedTemporaryFile(dir=cache_dir, delete=False) as tf:
167 tf.write(cubin_hash)
168 tf.write(cubin)
169 temp_path = tf.name
170 shutil.move(temp_path, path)
171
172 mod.load(cubin)
173 return mod
174
[end of cupy/cuda/compiler.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/cupy/cuda/compiler.py b/cupy/cuda/compiler.py
--- a/cupy/cuda/compiler.py
+++ b/cupy/cuda/compiler.py
@@ -93,9 +93,8 @@
cmd.append(cu_path)
pp_src = _run_nvcc(cmd, root_dir)
- if isinstance(pp_src, six.binary_type):
- pp_src = pp_src.decode('utf-8')
- return re.sub('(?m)^#.*$', '', pp_src)
+ assert isinstance(pp_src, six.binary_type)
+ return re.sub(b'(?m)^#.*$', b'', pp_src)
_default_cache_dir = os.path.expanduser('~/.cupy/kernel_cache')
|
{"golden_diff": "diff --git a/cupy/cuda/compiler.py b/cupy/cuda/compiler.py\n--- a/cupy/cuda/compiler.py\n+++ b/cupy/cuda/compiler.py\n@@ -93,9 +93,8 @@\n cmd.append(cu_path)\n pp_src = _run_nvcc(cmd, root_dir)\n \n- if isinstance(pp_src, six.binary_type):\n- pp_src = pp_src.decode('utf-8')\n- return re.sub('(?m)^#.*$', '', pp_src)\n+ assert isinstance(pp_src, six.binary_type)\n+ return re.sub(b'(?m)^#.*$', b'', pp_src)\n \n \n _default_cache_dir = os.path.expanduser('~/.cupy/kernel_cache')\n", "issue": "UnicodeDecodeError in compiler.py\nReported here: https://stackoverflow.com/questions/45473903/unicodedecodeerror-when-i-use-cuda-to-train-dataset\r\n\r\nVersions:\r\n- chainer (2.0.2)\r\n- cupy (1.0.2)\r\n\r\nIt seems `nvcc` generated non-UTF8 output.\n", "before_files": [{"content": "import hashlib\nimport os\nimport re\nimport shutil\nimport subprocess\nimport sys\nimport tempfile\n\nimport six\n\nfrom cupy.cuda import device\nfrom cupy.cuda import function\n\n\n_nvcc_version = None\n\n\ndef _get_nvcc_version():\n global _nvcc_version\n if _nvcc_version is None:\n cmd = ['nvcc', '--version']\n _nvcc_version = _run_nvcc(cmd, '.')\n\n return _nvcc_version\n\n\ndef _get_arch():\n cc = device.Device().compute_capability\n return 'sm_%s' % cc\n\n\nclass TemporaryDirectory(object):\n\n def __enter__(self):\n self.path = tempfile.mkdtemp()\n return self.path\n\n def __exit__(self, exc_type, exc_value, traceback):\n if exc_value is not None:\n return\n\n for name in os.listdir(self.path):\n os.unlink(os.path.join(self.path, name))\n os.rmdir(self.path)\n\n\ndef _run_nvcc(cmd, cwd):\n try:\n return subprocess.check_output(cmd, cwd=cwd, stderr=subprocess.STDOUT)\n except subprocess.CalledProcessError as e:\n msg = ('`nvcc` command returns non-zero exit status. \\n'\n 'command: {0}\\n'\n 'return-code: {1}\\n'\n 'stdout/stderr: \\n'\n '{2}'.format(e.cmd, e.returncode, e.output))\n raise RuntimeError(msg)\n except OSError as e:\n msg = 'Failed to run `nvcc` command. 
' \\\n 'Check PATH environment variable: ' \\\n + str(e)\n raise OSError(msg)\n\n\ndef nvcc(source, options=(), arch=None):\n if not arch:\n arch = _get_arch()\n cmd = ['nvcc', '--cubin', '-arch', arch] + list(options)\n\n with TemporaryDirectory() as root_dir:\n path = os.path.join(root_dir, 'kern')\n cu_path = '%s.cu' % path\n cubin_path = '%s.cubin' % path\n\n with open(cu_path, 'w') as cu_file:\n cu_file.write(source)\n\n cmd.append(cu_path)\n _run_nvcc(cmd, root_dir)\n\n with open(cubin_path, 'rb') as bin_file:\n return bin_file.read()\n\n\ndef preprocess(source, options=()):\n cmd = ['nvcc', '--preprocess'] + list(options)\n with TemporaryDirectory() as root_dir:\n path = os.path.join(root_dir, 'kern')\n cu_path = '%s.cu' % path\n\n with open(cu_path, 'w') as cu_file:\n cu_file.write(source)\n\n cmd.append(cu_path)\n pp_src = _run_nvcc(cmd, root_dir)\n\n if isinstance(pp_src, six.binary_type):\n pp_src = pp_src.decode('utf-8')\n return re.sub('(?m)^#.*$', '', pp_src)\n\n\n_default_cache_dir = os.path.expanduser('~/.cupy/kernel_cache')\n\n\ndef get_cache_dir():\n return os.environ.get('CUPY_CACHE_DIR', _default_cache_dir)\n\n\n_empty_file_preprocess_cache = {}\n\n\ndef compile_with_cache(source, options=(), arch=None, cache_dir=None):\n global _empty_file_preprocess_cache\n if cache_dir is None:\n cache_dir = get_cache_dir()\n if arch is None:\n arch = _get_arch()\n\n if 'win32' == sys.platform:\n options += ('-Xcompiler', '/wd 4819')\n if sys.maxsize == 9223372036854775807:\n options += '-m64',\n elif sys.maxsize == 2147483647:\n options += '-m32',\n\n env = (arch, options, _get_nvcc_version())\n if '#include' in source:\n pp_src = '%s %s' % (env, preprocess(source, options))\n else:\n base = _empty_file_preprocess_cache.get(env, None)\n if base is None:\n base = _empty_file_preprocess_cache[env] = preprocess('', options)\n pp_src = '%s %s %s' % (env, base, source)\n\n if isinstance(pp_src, six.text_type):\n pp_src = pp_src.encode('utf-8')\n name = '%s_2.cubin' % hashlib.md5(pp_src).hexdigest()\n\n if not os.path.isdir(cache_dir):\n try:\n os.makedirs(cache_dir)\n except OSError:\n if not os.path.isdir(cache_dir):\n raise\n\n mod = function.Module()\n # To handle conflicts in concurrent situation, we adopt lock-free method\n # to avoid performance degradation.\n path = os.path.join(cache_dir, name)\n if os.path.exists(path):\n with open(path, 'rb') as file:\n data = file.read()\n if len(data) >= 32:\n hash = data[:32]\n cubin = data[32:]\n cubin_hash = six.b(hashlib.md5(cubin).hexdigest())\n if hash == cubin_hash:\n mod.load(cubin)\n return mod\n\n cubin = nvcc(source, options, arch)\n cubin_hash = six.b(hashlib.md5(cubin).hexdigest())\n\n # shutil.move is not atomic operation, so it could result in a corrupted\n # file. We detect it by appending md5 hash at the beginning of each cache\n # file. If the file is corrupted, it will be ignored next time it is read.\n with tempfile.NamedTemporaryFile(dir=cache_dir, delete=False) as tf:\n tf.write(cubin_hash)\n tf.write(cubin)\n temp_path = tf.name\n shutil.move(temp_path, path)\n\n mod.load(cubin)\n return mod\n", "path": "cupy/cuda/compiler.py"}]}
| 2,333 | 155 |
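The cupy patch above keeps the preprocessed source as bytes because nvcc output is not guaranteed to be UTF-8. A short sketch with a hypothetical invalid byte standing in for real nvcc output:

```python
import re

pp_src = b'# 1 "kern.cu"\nint x;\n\xff'  # hypothetical non-UTF-8 byte at the end

try:
    pp_src.decode('utf-8')  # the old code path raises here
except UnicodeDecodeError as exc:
    print(exc)

print(re.sub(b'(?m)^#.*$', b'', pp_src))  # patched path: regex over bytes, no decode
```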
gh_patches_debug_19589
|
rasdani/github-patches
|
git_diff
|
cloudtools__troposphere-839
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Use the PlatformArn property to specify a custom platform for Elastic Beanstalk.
[AWS::ElasticBeanstalk::ConfigurationTemplate](http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-beanstalk-configurationtemplate.html) and [AWS::ElasticBeanstalk::Environment](http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-beanstalk-environment.html)
Use the PlatformArn property to specify a custom platform for Elastic Beanstalk.
</issue>
<code>
[start of troposphere/elasticbeanstalk.py]
1 # Copyright (c) 2013, Mark Peek <[email protected]>
2 # All rights reserved.
3 #
4 # See LICENSE file for full license.
5
6 from . import AWSObject, AWSProperty, Tags
7
8
9 WebServer = "WebServer"
10 Worker = "Worker"
11 WebServerType = "Standard"
12 WorkerType = "SQS/HTTP"
13
14
15 class SourceBundle(AWSProperty):
16 props = {
17 'S3Bucket': (basestring, True),
18 'S3Key': (basestring, True),
19 }
20
21
22 class SourceConfiguration(AWSProperty):
23 props = {
24 'ApplicationName': (basestring, True),
25 'TemplateName': (basestring, True),
26 }
27
28
29 class OptionSettings(AWSProperty):
30 props = {
31 'Namespace': (basestring, True),
32 'OptionName': (basestring, True),
33 'Value': (basestring, True),
34 }
35
36
37 class Application(AWSObject):
38 resource_type = "AWS::ElasticBeanstalk::Application"
39
40 props = {
41 'ApplicationName': (basestring, False),
42 'Description': (basestring, False),
43 }
44
45
46 class ApplicationVersion(AWSObject):
47 resource_type = "AWS::ElasticBeanstalk::ApplicationVersion"
48
49 props = {
50 'ApplicationName': (basestring, True),
51 'Description': (basestring, False),
52 'SourceBundle': (SourceBundle, False),
53 }
54
55
56 class ConfigurationTemplate(AWSObject):
57 resource_type = "AWS::ElasticBeanstalk::ConfigurationTemplate"
58
59 props = {
60 'ApplicationName': (basestring, True),
61 'Description': (basestring, False),
62 'EnvironmentId': (basestring, False),
63 'OptionSettings': ([OptionSettings], False),
64 'SolutionStackName': (basestring, False),
65 'SourceConfiguration': (SourceConfiguration, False),
66 }
67
68
69 def validate_tier_name(name):
70 valid_names = [WebServer, Worker]
71 if name not in valid_names:
72 raise ValueError('Tier name needs to be one of %r' % valid_names)
73 return name
74
75
76 def validate_tier_type(tier_type):
77 valid_types = [WebServerType, WorkerType]
78 if tier_type not in valid_types:
79 raise ValueError('Tier type needs to be one of %r' % valid_types)
80 return tier_type
81
82
83 class Tier(AWSProperty):
84 props = {
85 'Name': (validate_tier_name, False),
86 'Type': (validate_tier_type, False),
87 'Version': (basestring, False),
88 }
89
90
91 class Environment(AWSObject):
92 resource_type = "AWS::ElasticBeanstalk::Environment"
93
94 props = {
95 'ApplicationName': (basestring, True),
96 'CNAMEPrefix': (basestring, False),
97 'Description': (basestring, False),
98 'EnvironmentName': (basestring, False),
99 'OptionSettings': ([OptionSettings], False),
100 'SolutionStackName': (basestring, False),
101 'Tags': (Tags, False),
102 'TemplateName': (basestring, False),
103 'Tier': (Tier, False),
104 'VersionLabel': (basestring, False),
105 }
106
[end of troposphere/elasticbeanstalk.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/troposphere/elasticbeanstalk.py b/troposphere/elasticbeanstalk.py
--- a/troposphere/elasticbeanstalk.py
+++ b/troposphere/elasticbeanstalk.py
@@ -61,6 +61,7 @@
'Description': (basestring, False),
'EnvironmentId': (basestring, False),
'OptionSettings': ([OptionSettings], False),
+ 'PlatformArn': (basestring, False),
'SolutionStackName': (basestring, False),
'SourceConfiguration': (SourceConfiguration, False),
}
@@ -97,6 +98,7 @@
'Description': (basestring, False),
'EnvironmentName': (basestring, False),
'OptionSettings': ([OptionSettings], False),
+ 'PlatformArn': (basestring, False),
'SolutionStackName': (basestring, False),
'Tags': (Tags, False),
'TemplateName': (basestring, False),
|
{"golden_diff": "diff --git a/troposphere/elasticbeanstalk.py b/troposphere/elasticbeanstalk.py\n--- a/troposphere/elasticbeanstalk.py\n+++ b/troposphere/elasticbeanstalk.py\n@@ -61,6 +61,7 @@\n 'Description': (basestring, False),\n 'EnvironmentId': (basestring, False),\n 'OptionSettings': ([OptionSettings], False),\n+ 'PlatformArn': (basestring, False),\n 'SolutionStackName': (basestring, False),\n 'SourceConfiguration': (SourceConfiguration, False),\n }\n@@ -97,6 +98,7 @@\n 'Description': (basestring, False),\n 'EnvironmentName': (basestring, False),\n 'OptionSettings': ([OptionSettings], False),\n+ 'PlatformArn': (basestring, False),\n 'SolutionStackName': (basestring, False),\n 'Tags': (Tags, False),\n 'TemplateName': (basestring, False),\n", "issue": "Use the PlatformArn property to specify a custom platform for Elastic Beanstalk.\n[AWS::ElasticBeanstalk::ConfigurationTemplate](http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-beanstalk-configurationtemplate.html) and [AWS::ElasticBeanstalk::Environment](http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-beanstalk-environment.html)\r\nUse the PlatformArn property to specify a custom platform for Elastic Beanstalk.\n", "before_files": [{"content": "# Copyright (c) 2013, Mark Peek <[email protected]>\n# All rights reserved.\n#\n# See LICENSE file for full license.\n\nfrom . import AWSObject, AWSProperty, Tags\n\n\nWebServer = \"WebServer\"\nWorker = \"Worker\"\nWebServerType = \"Standard\"\nWorkerType = \"SQS/HTTP\"\n\n\nclass SourceBundle(AWSProperty):\n props = {\n 'S3Bucket': (basestring, True),\n 'S3Key': (basestring, True),\n }\n\n\nclass SourceConfiguration(AWSProperty):\n props = {\n 'ApplicationName': (basestring, True),\n 'TemplateName': (basestring, True),\n }\n\n\nclass OptionSettings(AWSProperty):\n props = {\n 'Namespace': (basestring, True),\n 'OptionName': (basestring, True),\n 'Value': (basestring, True),\n }\n\n\nclass Application(AWSObject):\n resource_type = \"AWS::ElasticBeanstalk::Application\"\n\n props = {\n 'ApplicationName': (basestring, False),\n 'Description': (basestring, False),\n }\n\n\nclass ApplicationVersion(AWSObject):\n resource_type = \"AWS::ElasticBeanstalk::ApplicationVersion\"\n\n props = {\n 'ApplicationName': (basestring, True),\n 'Description': (basestring, False),\n 'SourceBundle': (SourceBundle, False),\n }\n\n\nclass ConfigurationTemplate(AWSObject):\n resource_type = \"AWS::ElasticBeanstalk::ConfigurationTemplate\"\n\n props = {\n 'ApplicationName': (basestring, True),\n 'Description': (basestring, False),\n 'EnvironmentId': (basestring, False),\n 'OptionSettings': ([OptionSettings], False),\n 'SolutionStackName': (basestring, False),\n 'SourceConfiguration': (SourceConfiguration, False),\n }\n\n\ndef validate_tier_name(name):\n valid_names = [WebServer, Worker]\n if name not in valid_names:\n raise ValueError('Tier name needs to be one of %r' % valid_names)\n return name\n\n\ndef validate_tier_type(tier_type):\n valid_types = [WebServerType, WorkerType]\n if tier_type not in valid_types:\n raise ValueError('Tier type needs to be one of %r' % valid_types)\n return tier_type\n\n\nclass Tier(AWSProperty):\n props = {\n 'Name': (validate_tier_name, False),\n 'Type': (validate_tier_type, False),\n 'Version': (basestring, False),\n }\n\n\nclass Environment(AWSObject):\n resource_type = \"AWS::ElasticBeanstalk::Environment\"\n\n props = {\n 'ApplicationName': (basestring, True),\n 'CNAMEPrefix': (basestring, False),\n 'Description': (basestring, 
False),\n 'EnvironmentName': (basestring, False),\n 'OptionSettings': ([OptionSettings], False),\n 'SolutionStackName': (basestring, False),\n 'Tags': (Tags, False),\n 'TemplateName': (basestring, False),\n 'Tier': (Tier, False),\n 'VersionLabel': (basestring, False),\n }\n", "path": "troposphere/elasticbeanstalk.py"}]}
| 1,539 | 212 |
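A minimal usage sketch for the `PlatformArn` property that the diff in the record above adds to `troposphere.elasticbeanstalk`. It assumes the patched module is installed; the ARN, resource titles and region are placeholders for illustration, not values taken from the issue.

```python
from troposphere import Ref, Template
from troposphere.elasticbeanstalk import Application, Environment

# Placeholder ARN for a custom Elastic Beanstalk platform (hypothetical value).
CUSTOM_PLATFORM_ARN = (
    "arn:aws:elasticbeanstalk:us-east-1:123456789012:"
    "platform/MyCustomPlatform/1.0.0"
)

template = Template()

app = template.add_resource(Application(
    "SampleApplication",
    Description="Demo application",
))

template.add_resource(Environment(
    "SampleEnvironment",
    ApplicationName=Ref(app),
    Description="Environment running on a custom platform",
    # Optional property added by the patch above.
    PlatformArn=CUSTOM_PLATFORM_ARN,
))

print(template.to_json())
```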
gh_patches_debug_26636
|
rasdani/github-patches
|
git_diff
|
liqd__a4-meinberlin-456
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Filter archived and draft projects from the wagtail frontpage selection element
</issue>
<code>
[start of apps/cms/models.py]
1 from django.db import models
2 from django.forms import widgets
3 from modelcluster.fields import ParentalKey
4 from modelcluster.models import ClusterableModel
5 from wagtail.wagtailadmin import edit_handlers
6 from wagtail.wagtailcore import blocks
7 from wagtail.wagtailcore import fields
8 from wagtail.wagtailcore.models import Orderable
9 from wagtail.wagtailcore.models import Page
10 from wagtail.wagtailforms.models import AbstractEmailForm
11 from wagtail.wagtailforms.models import AbstractFormField
12 from wagtail.wagtailimages.edit_handlers import ImageChooserPanel
13 from wagtail.wagtailsnippets.models import register_snippet
14
15 from adhocracy4.projects.models import Project
16
17 from . import emails
18
19
20 class SimplePage(Page):
21 body = fields.RichTextField(blank=True)
22
23 content_panels = [
24 edit_handlers.FieldPanel('title'),
25 edit_handlers.FieldPanel('body'),
26 ]
27
28 subpage_types = []
29
30
31 class ProjectSelectionBlock(blocks.ChooserBlock):
32 target_model = Project
33 widget = widgets.Select
34
35 def value_for_form(self, value):
36 if isinstance(value, Project):
37 return value.pk
38 return value
39
40
41 class ProjectsWrapperBlock(blocks.StructBlock):
42 title = blocks.CharBlock(max_length=80)
43 projects = blocks.ListBlock(
44 ProjectSelectionBlock(label='Project'),
45 )
46
47 class Meta:
48 template = 'meinberlin_cms/blocks/projects_block.html'
49
50
51 class CallToActionBlock(blocks.StructBlock):
52 body = blocks.RichTextBlock()
53 link = blocks.CharBlock()
54 link_text = blocks.CharBlock(max_length=50, label='Link Text')
55
56 class Meta:
57 template = 'meinberlin_cms/blocks/cta_block.html'
58
59
60 class ColumnsBlock(blocks.StructBlock):
61 columns_count = blocks.ChoiceBlock(choices=[
62 (2, 'Two columns'),
63 (3, 'Three columns'),
64 (4, 'Four columns'),
65 ], default=2)
66
67 columns = blocks.ListBlock(
68 blocks.RichTextBlock(label='Column body'),
69 )
70
71 class Meta:
72 template = 'meinberlin_cms/blocks/columns_block.html'
73
74
75 class HomePage(Page):
76 body = fields.StreamField([
77 ('paragraph', blocks.RichTextBlock(
78 template='meinberlin_cms/blocks/richtext_block.html'
79 )),
80 ('call_to_action', CallToActionBlock()),
81 ('columns_text', ColumnsBlock()),
82 ('projects', ProjectsWrapperBlock()),
83 ])
84
85 subtitle = models.CharField(max_length=120)
86
87 header_image = models.ForeignKey(
88 'wagtailimages.Image',
89 null=True,
90 blank=False,
91 on_delete=models.SET_NULL,
92 related_name='+'
93 )
94
95 content_panels = Page.content_panels + [
96 edit_handlers.FieldPanel('subtitle'),
97 ImageChooserPanel('header_image'),
98 edit_handlers.StreamFieldPanel('body'),
99 ]
100
101
102 class MenuItem(models.Model):
103 title = models.CharField(max_length=255)
104 link_page = models.ForeignKey('wagtailcore.Page')
105
106 @property
107 def url(self):
108 return self.link_page.url
109
110 def __str__(self):
111 return self.title
112
113 panels = [
114 edit_handlers.FieldPanel('title'),
115 edit_handlers.PageChooserPanel('link_page')
116 ]
117
118
119 @register_snippet
120 class NavigationMenu(ClusterableModel):
121 title = models.CharField(max_length=255, null=False, blank=False)
122
123 def __str__(self):
124 return self.title
125
126 panels = [
127 edit_handlers.FieldPanel('title'),
128 edit_handlers.InlinePanel('items')
129 ]
130
131
132 class NavigationMenuItem(Orderable, MenuItem):
133 parent = ParentalKey('meinberlin_cms.NavigationMenu', related_name='items')
134
135
136 class EmailFormField(AbstractFormField):
137 page = ParentalKey('EmailFormPage', related_name='form_fields')
138
139
140 class EmailFormPage(AbstractEmailForm):
141 intro = fields.RichTextField(
142 help_text='Introduction text shown above the form'
143 )
144 thank_you = fields.RichTextField(
145 help_text='Text shown after form submission',
146 )
147 email_content = models.CharField(
148 max_length=200,
149 help_text='Email content message',
150 )
151 attach_as = models.CharField(
152 max_length=3,
153 choices=(
154 ('csv', 'CSV Document'),
155 ('txt', 'Text'),
156 ),
157 default='csv',
158 help_text='Form results are send in this document format',
159 )
160
161 content_panels = AbstractEmailForm.content_panels + [
162 edit_handlers.MultiFieldPanel([
163 edit_handlers.FieldPanel('intro', classname='full'),
164 edit_handlers.FieldPanel('thank_you', classname='full'),
165 ], 'Page'),
166 edit_handlers.MultiFieldPanel([
167 edit_handlers.FieldPanel('to_address'),
168 edit_handlers.FieldPanel('subject'),
169 edit_handlers.FieldPanel('email_content', classname='full'),
170 edit_handlers.FieldPanel('attach_as'),
171 ], 'Email'),
172 edit_handlers.InlinePanel('form_fields', label='Form fields'),
173 ]
174
175 def send_mail(self, form):
176 self.form = form
177 if self.attach_as == 'csv':
178 emails.CsvFormEmail.send(self)
179 elif self.attach_as == 'txt':
180 emails.TextFormEmail.send(self)
181
182 @property
183 def field_values(self):
184 fields = {}
185 for field in self.form:
186 value = field.value()
187 if isinstance(value, list):
188 value = ', '.join(value)
189 fields[field.label] = value
190 return fields
191
[end of apps/cms/models.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/apps/cms/models.py b/apps/cms/models.py
--- a/apps/cms/models.py
+++ b/apps/cms/models.py
@@ -1,5 +1,6 @@
+from django import forms
from django.db import models
-from django.forms import widgets
+from django.utils.functional import cached_property
from modelcluster.fields import ParentalKey
from modelcluster.models import ClusterableModel
from wagtail.wagtailadmin import edit_handlers
@@ -30,13 +31,30 @@
class ProjectSelectionBlock(blocks.ChooserBlock):
target_model = Project
- widget = widgets.Select
+ widget = forms.widgets.Select
+
+ @cached_property
+ def field(self):
+ return forms.ModelChoiceField(
+ queryset=self.target_model.objects.filter(
+ is_draft=False,
+ is_archived=False,
+ is_public=True),
+ widget=self.widget,
+ required=self._required,
+ help_text=self._help_text)
def value_for_form(self, value):
if isinstance(value, Project):
return value.pk
return value
+ def value_from_form(self, value):
+ # if project became unavailable (unpublished), selection will become an
+ # empty string and cause a server error on save, so we give a fallback
+ value = value or None
+ return super().value_from_form(value)
+
class ProjectsWrapperBlock(blocks.StructBlock):
title = blocks.CharBlock(max_length=80)
|
{"golden_diff": "diff --git a/apps/cms/models.py b/apps/cms/models.py\n--- a/apps/cms/models.py\n+++ b/apps/cms/models.py\n@@ -1,5 +1,6 @@\n+from django import forms\n from django.db import models\n-from django.forms import widgets\n+from django.utils.functional import cached_property\n from modelcluster.fields import ParentalKey\n from modelcluster.models import ClusterableModel\n from wagtail.wagtailadmin import edit_handlers\n@@ -30,13 +31,30 @@\n \n class ProjectSelectionBlock(blocks.ChooserBlock):\n target_model = Project\n- widget = widgets.Select\n+ widget = forms.widgets.Select\n+\n+ @cached_property\n+ def field(self):\n+ return forms.ModelChoiceField(\n+ queryset=self.target_model.objects.filter(\n+ is_draft=False,\n+ is_archived=False,\n+ is_public=True),\n+ widget=self.widget,\n+ required=self._required,\n+ help_text=self._help_text)\n \n def value_for_form(self, value):\n if isinstance(value, Project):\n return value.pk\n return value\n \n+ def value_from_form(self, value):\n+ # if project became unavailable (unpublished), selection will become an\n+ # empty string and cause a server error on save, so we give a fallback\n+ value = value or None\n+ return super().value_from_form(value)\n+\n \n class ProjectsWrapperBlock(blocks.StructBlock):\n title = blocks.CharBlock(max_length=80)\n", "issue": "Filter archived and draft projects from the wagtail frontpage selection element\n\n", "before_files": [{"content": "from django.db import models\nfrom django.forms import widgets\nfrom modelcluster.fields import ParentalKey\nfrom modelcluster.models import ClusterableModel\nfrom wagtail.wagtailadmin import edit_handlers\nfrom wagtail.wagtailcore import blocks\nfrom wagtail.wagtailcore import fields\nfrom wagtail.wagtailcore.models import Orderable\nfrom wagtail.wagtailcore.models import Page\nfrom wagtail.wagtailforms.models import AbstractEmailForm\nfrom wagtail.wagtailforms.models import AbstractFormField\nfrom wagtail.wagtailimages.edit_handlers import ImageChooserPanel\nfrom wagtail.wagtailsnippets.models import register_snippet\n\nfrom adhocracy4.projects.models import Project\n\nfrom . 
import emails\n\n\nclass SimplePage(Page):\n body = fields.RichTextField(blank=True)\n\n content_panels = [\n edit_handlers.FieldPanel('title'),\n edit_handlers.FieldPanel('body'),\n ]\n\n subpage_types = []\n\n\nclass ProjectSelectionBlock(blocks.ChooserBlock):\n target_model = Project\n widget = widgets.Select\n\n def value_for_form(self, value):\n if isinstance(value, Project):\n return value.pk\n return value\n\n\nclass ProjectsWrapperBlock(blocks.StructBlock):\n title = blocks.CharBlock(max_length=80)\n projects = blocks.ListBlock(\n ProjectSelectionBlock(label='Project'),\n )\n\n class Meta:\n template = 'meinberlin_cms/blocks/projects_block.html'\n\n\nclass CallToActionBlock(blocks.StructBlock):\n body = blocks.RichTextBlock()\n link = blocks.CharBlock()\n link_text = blocks.CharBlock(max_length=50, label='Link Text')\n\n class Meta:\n template = 'meinberlin_cms/blocks/cta_block.html'\n\n\nclass ColumnsBlock(blocks.StructBlock):\n columns_count = blocks.ChoiceBlock(choices=[\n (2, 'Two columns'),\n (3, 'Three columns'),\n (4, 'Four columns'),\n ], default=2)\n\n columns = blocks.ListBlock(\n blocks.RichTextBlock(label='Column body'),\n )\n\n class Meta:\n template = 'meinberlin_cms/blocks/columns_block.html'\n\n\nclass HomePage(Page):\n body = fields.StreamField([\n ('paragraph', blocks.RichTextBlock(\n template='meinberlin_cms/blocks/richtext_block.html'\n )),\n ('call_to_action', CallToActionBlock()),\n ('columns_text', ColumnsBlock()),\n ('projects', ProjectsWrapperBlock()),\n ])\n\n subtitle = models.CharField(max_length=120)\n\n header_image = models.ForeignKey(\n 'wagtailimages.Image',\n null=True,\n blank=False,\n on_delete=models.SET_NULL,\n related_name='+'\n )\n\n content_panels = Page.content_panels + [\n edit_handlers.FieldPanel('subtitle'),\n ImageChooserPanel('header_image'),\n edit_handlers.StreamFieldPanel('body'),\n ]\n\n\nclass MenuItem(models.Model):\n title = models.CharField(max_length=255)\n link_page = models.ForeignKey('wagtailcore.Page')\n\n @property\n def url(self):\n return self.link_page.url\n\n def __str__(self):\n return self.title\n\n panels = [\n edit_handlers.FieldPanel('title'),\n edit_handlers.PageChooserPanel('link_page')\n ]\n\n\n@register_snippet\nclass NavigationMenu(ClusterableModel):\n title = models.CharField(max_length=255, null=False, blank=False)\n\n def __str__(self):\n return self.title\n\n panels = [\n edit_handlers.FieldPanel('title'),\n edit_handlers.InlinePanel('items')\n ]\n\n\nclass NavigationMenuItem(Orderable, MenuItem):\n parent = ParentalKey('meinberlin_cms.NavigationMenu', related_name='items')\n\n\nclass EmailFormField(AbstractFormField):\n page = ParentalKey('EmailFormPage', related_name='form_fields')\n\n\nclass EmailFormPage(AbstractEmailForm):\n intro = fields.RichTextField(\n help_text='Introduction text shown above the form'\n )\n thank_you = fields.RichTextField(\n help_text='Text shown after form submission',\n )\n email_content = models.CharField(\n max_length=200,\n help_text='Email content message',\n )\n attach_as = models.CharField(\n max_length=3,\n choices=(\n ('csv', 'CSV Document'),\n ('txt', 'Text'),\n ),\n default='csv',\n help_text='Form results are send in this document format',\n )\n\n content_panels = AbstractEmailForm.content_panels + [\n edit_handlers.MultiFieldPanel([\n edit_handlers.FieldPanel('intro', classname='full'),\n edit_handlers.FieldPanel('thank_you', classname='full'),\n ], 'Page'),\n edit_handlers.MultiFieldPanel([\n edit_handlers.FieldPanel('to_address'),\n 
edit_handlers.FieldPanel('subject'),\n edit_handlers.FieldPanel('email_content', classname='full'),\n edit_handlers.FieldPanel('attach_as'),\n ], 'Email'),\n edit_handlers.InlinePanel('form_fields', label='Form fields'),\n ]\n\n def send_mail(self, form):\n self.form = form\n if self.attach_as == 'csv':\n emails.CsvFormEmail.send(self)\n elif self.attach_as == 'txt':\n emails.TextFormEmail.send(self)\n\n @property\n def field_values(self):\n fields = {}\n for field in self.form:\n value = field.value()\n if isinstance(value, list):\n value = ', '.join(value)\n fields[field.label] = value\n return fields\n", "path": "apps/cms/models.py"}]}
| 2,211 | 321 |
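For reference, the patched `ProjectSelectionBlock` from the record above, assembled from the golden diff with explanatory comments. This is a sketch that assumes the project's Django/Wagtail 1.x and adhocracy4 environment, so it is meant to be read alongside the diff rather than dropped into another codebase.

```python
from django import forms
from django.utils.functional import cached_property
from wagtail.wagtailcore import blocks

from adhocracy4.projects.models import Project


class ProjectSelectionBlock(blocks.ChooserBlock):
    target_model = Project
    widget = forms.widgets.Select

    @cached_property
    def field(self):
        # Only offer projects that are published: not drafts, not archived.
        # This keeps unwanted projects out of the frontpage chooser.
        return forms.ModelChoiceField(
            queryset=self.target_model.objects.filter(
                is_draft=False,
                is_archived=False,
                is_public=True),
            widget=self.widget,
            required=self._required,
            help_text=self._help_text)

    def value_for_form(self, value):
        # The select widget works with primary keys, not model instances.
        if isinstance(value, Project):
            return value.pk
        return value

    def value_from_form(self, value):
        # A project that became unavailable (e.g. unpublished) posts back as
        # an empty string; fall back to None so saving does not error out.
        value = value or None
        return super().value_from_form(value)
```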
gh_patches_debug_8055
|
rasdani/github-patches
|
git_diff
|
urllib3__urllib3-1497
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
setup.py lacks appropriate metadata for differing python versions
Please see this issue for the full context: https://github.com/NixOS/nixpkgs/issues/46318
Basically, it appears the METADATA in the resulting installation differs depending on the installation method.
I've done some minimal patching to `setup.py` to include the same `python_version` constraints that appear in `setup.cfg`, and it appears to fix the issues with regard to METADATA.
However, I'm not very experienced in python packaging and am surprised that no one else has run into this issue before me.
Can anyone confirm that there is a mismatch here and that adding additional constraints to `setup.py` would be appropriate? I'll go ahead and get a PR together in the meantime.
</issue>
<code>
[start of setup.py]
1 #!/usr/bin/env python
2
3 from setuptools import setup
4
5 import os
6 import re
7 import codecs
8
9 base_path = os.path.dirname(__file__)
10
11 # Get the version (borrowed from SQLAlchemy)
12 with open(os.path.join(base_path, 'src', 'urllib3', '__init__.py')) as fp:
13 VERSION = re.compile(r".*__version__ = '(.*?)'",
14 re.S).match(fp.read()).group(1)
15
16 with codecs.open('README.rst', encoding='utf-8') as fp:
17 readme = fp.read()
18 with codecs.open('CHANGES.rst', encoding='utf-8') as fp:
19 changes = fp.read()
20 version = VERSION
21
22 setup(name='urllib3',
23 version=version,
24 description="HTTP library with thread-safe connection pooling, file post, and more.",
25 long_description=u'\n\n'.join([readme, changes]),
26 classifiers=[
27 'Environment :: Web Environment',
28 'Intended Audience :: Developers',
29 'License :: OSI Approved :: MIT License',
30 'Operating System :: OS Independent',
31 'Programming Language :: Python',
32 'Programming Language :: Python :: 2',
33 'Programming Language :: Python :: 2.7',
34 'Programming Language :: Python :: 3',
35 'Programming Language :: Python :: 3.4',
36 'Programming Language :: Python :: 3.5',
37 'Programming Language :: Python :: 3.6',
38 'Programming Language :: Python :: 3.7',
39 'Programming Language :: Python :: 3.8',
40 'Programming Language :: Python :: Implementation :: CPython',
41 'Programming Language :: Python :: Implementation :: PyPy',
42 'Topic :: Internet :: WWW/HTTP',
43 'Topic :: Software Development :: Libraries',
44 ],
45 keywords='urllib httplib threadsafe filepost http https ssl pooling',
46 author='Andrey Petrov',
47 author_email='[email protected]',
48 url='https://urllib3.readthedocs.io/',
49 license='MIT',
50 packages=['urllib3',
51 'urllib3.packages', 'urllib3.packages.ssl_match_hostname',
52 'urllib3.packages.backports', 'urllib3.packages.rfc3986',
53 'urllib3.contrib', 'urllib3.contrib._securetransport',
54 'urllib3.util'],
55 package_dir={'': 'src'},
56 requires=[],
57 python_requires=">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, <4",
58 tests_require=[
59 # These are a less-specific subset of dev-requirements.txt, for the
60 # convenience of distro package maintainers.
61 'pytest',
62 'mock',
63 'tornado',
64 ],
65 test_suite='test',
66 extras_require={
67 'secure': [
68 'pyOpenSSL >= 0.14',
69 'cryptography>=1.3.4',
70 'idna>=2.0.0',
71 'certifi',
72 "ipaddress",
73 ],
74 'socks': [
75 'PySocks>=1.5.6,<2.0,!=1.5.7',
76 ]
77 },
78 )
79
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -65,11 +65,11 @@
test_suite='test',
extras_require={
'secure': [
- 'pyOpenSSL >= 0.14',
+ 'pyOpenSSL>=0.14',
'cryptography>=1.3.4',
'idna>=2.0.0',
'certifi',
- "ipaddress",
+ "ipaddress; python_version=='2.7'",
],
'socks': [
'PySocks>=1.5.6,<2.0,!=1.5.7',
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -65,11 +65,11 @@\n test_suite='test',\n extras_require={\n 'secure': [\n- 'pyOpenSSL >= 0.14',\n+ 'pyOpenSSL>=0.14',\n 'cryptography>=1.3.4',\n 'idna>=2.0.0',\n 'certifi',\n- \"ipaddress\",\n+ \"ipaddress; python_version=='2.7'\",\n ],\n 'socks': [\n 'PySocks>=1.5.6,<2.0,!=1.5.7',\n", "issue": "setup.py lacks appropriate metadata for differing python versions\nPlease see this issue for the full context: https://github.com/NixOS/nixpkgs/issues/46318\r\n\r\nBasically, it appears the METADATA in the resulting installation differs depending on the installation method.\r\n\r\nI've done some minimal patching to `setup.py` to include the same `python_version` constraints that appear in `setup.cfg` and it appears to fix the issues with regards to METADATA.\r\n\r\nHowever, I'm not very experienced in python packaging and am surprised that no one else has run into this issue before me.\r\n\r\nCan anyone confirm that there is a mismatch here and that adding additional constraints to `setup.py` would be appropriate? I'll go ahead and get a PR together in the meantime.\n", "before_files": [{"content": "#!/usr/bin/env python\n\nfrom setuptools import setup\n\nimport os\nimport re\nimport codecs\n\nbase_path = os.path.dirname(__file__)\n\n# Get the version (borrowed from SQLAlchemy)\nwith open(os.path.join(base_path, 'src', 'urllib3', '__init__.py')) as fp:\n VERSION = re.compile(r\".*__version__ = '(.*?)'\",\n re.S).match(fp.read()).group(1)\n\nwith codecs.open('README.rst', encoding='utf-8') as fp:\n readme = fp.read()\nwith codecs.open('CHANGES.rst', encoding='utf-8') as fp:\n changes = fp.read()\nversion = VERSION\n\nsetup(name='urllib3',\n version=version,\n description=\"HTTP library with thread-safe connection pooling, file post, and more.\",\n long_description=u'\\n\\n'.join([readme, changes]),\n classifiers=[\n 'Environment :: Web Environment',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: MIT License',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n 'Programming Language :: Python :: Implementation :: CPython',\n 'Programming Language :: Python :: Implementation :: PyPy',\n 'Topic :: Internet :: WWW/HTTP',\n 'Topic :: Software Development :: Libraries',\n ],\n keywords='urllib httplib threadsafe filepost http https ssl pooling',\n author='Andrey Petrov',\n author_email='[email protected]',\n url='https://urllib3.readthedocs.io/',\n license='MIT',\n packages=['urllib3',\n 'urllib3.packages', 'urllib3.packages.ssl_match_hostname',\n 'urllib3.packages.backports', 'urllib3.packages.rfc3986',\n 'urllib3.contrib', 'urllib3.contrib._securetransport',\n 'urllib3.util'],\n package_dir={'': 'src'},\n requires=[],\n python_requires=\">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, <4\",\n tests_require=[\n # These are a less-specific subset of dev-requirements.txt, for the\n # convenience of distro package maintainers.\n 'pytest',\n 'mock',\n 'tornado',\n ],\n test_suite='test',\n extras_require={\n 'secure': [\n 'pyOpenSSL >= 0.14',\n 'cryptography>=1.3.4',\n 'idna>=2.0.0',\n 'certifi',\n \"ipaddress\",\n ],\n 'socks': [\n 
'PySocks>=1.5.6,<2.0,!=1.5.7',\n ]\n },\n )\n", "path": "setup.py"}]}
| 1,522 | 151 |
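The fix in the record above relies on a PEP 508 environment marker so that `ipaddress` only appears as a dependency on Python 2.7, which is what makes the generated METADATA consistent across install methods. A stripped-down `setup.py` sketch of the same pattern (the package name and module list are placeholders, not urllib3's real metadata):

```python
from setuptools import setup

setup(
    name="example-package",      # placeholder project, not urllib3 itself
    version="0.1.0",
    packages=["example_package"],
    python_requires=">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, <4",
    extras_require={
        "secure": [
            "cryptography>=1.3.4",
            # PEP 508 environment marker: the backport is only required on
            # Python 2.7, and the condition is recorded in METADATA instead
            # of being resolved at build time.
            "ipaddress; python_version=='2.7'",
        ],
    },
)
```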
gh_patches_debug_3178
|
rasdani/github-patches
|
git_diff
|
e-valuation__EvaP-1810
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Replace reward point redemption dropdown with number input field
If a user selects an option, a new line is added and the selection spans two rows. This looks wrong.
A user can insert custom options. If the user inputs something invalid like "abcdef" or an empty string, only part of the "Please select" placeholder is visible. This also looks wrong.
</issue>
<code>
[start of evap/rewards/views.py]
1 from datetime import datetime
2
3 from django.contrib import messages
4 from django.core.exceptions import BadRequest, SuspiciousOperation
5 from django.http import HttpResponse
6 from django.shortcuts import get_object_or_404, redirect, render
7 from django.utils.translation import get_language
8 from django.utils.translation import gettext as _
9 from django.views.decorators.http import require_POST
10
11 from evap.evaluation.auth import manager_required, reward_user_required
12 from evap.evaluation.models import Semester
13 from evap.evaluation.tools import AttachmentResponse, get_object_from_dict_pk_entry_or_logged_40x
14 from evap.rewards.exporters import RewardsExporter
15 from evap.rewards.forms import RewardPointRedemptionEventForm
16 from evap.rewards.models import (
17 NoPointsSelected,
18 NotEnoughPoints,
19 RedemptionEventExpired,
20 RewardPointGranting,
21 RewardPointRedemption,
22 RewardPointRedemptionEvent,
23 SemesterActivation,
24 )
25 from evap.rewards.tools import grant_eligible_reward_points_for_semester, reward_points_of_user, save_redemptions
26 from evap.staff.views import semester_view
27
28
29 @reward_user_required
30 def index(request):
31 if request.method == "POST":
32 redemptions = {}
33 try:
34 for key, value in request.POST.items():
35 if key.startswith("points-"):
36 event_id = int(key.rpartition("-")[2])
37 redemptions[event_id] = int(value)
38 except ValueError as e:
39 raise BadRequest from e
40
41 try:
42 save_redemptions(request, redemptions)
43 messages.success(request, _("You successfully redeemed your points."))
44 except (NoPointsSelected, NotEnoughPoints, RedemptionEventExpired) as error:
45 messages.warning(request, error)
46
47 total_points_available = reward_points_of_user(request.user)
48 reward_point_grantings = RewardPointGranting.objects.filter(user_profile=request.user)
49 reward_point_redemptions = RewardPointRedemption.objects.filter(user_profile=request.user)
50 events = RewardPointRedemptionEvent.objects.filter(redeem_end_date__gte=datetime.now()).order_by("date")
51
52 reward_point_actions = []
53 for granting in reward_point_grantings:
54 reward_point_actions.append(
55 (granting.granting_time, _("Reward for") + " " + granting.semester.name, granting.value, "")
56 )
57 for redemption in reward_point_redemptions:
58 reward_point_actions.append((redemption.redemption_time, redemption.event.name, "", redemption.value))
59
60 reward_point_actions.sort(key=lambda action: action[0], reverse=True)
61
62 template_data = dict(
63 reward_point_actions=reward_point_actions,
64 total_points_available=total_points_available,
65 events=events,
66 point_selection=range(0, total_points_available + 1),
67 )
68 return render(request, "rewards_index.html", template_data)
69
70
71 @manager_required
72 def reward_point_redemption_events(request):
73 upcoming_events = RewardPointRedemptionEvent.objects.filter(redeem_end_date__gte=datetime.now()).order_by("date")
74 past_events = RewardPointRedemptionEvent.objects.filter(redeem_end_date__lt=datetime.now()).order_by("-date")
75 template_data = dict(upcoming_events=upcoming_events, past_events=past_events)
76 return render(request, "rewards_reward_point_redemption_events.html", template_data)
77
78
79 @manager_required
80 def reward_point_redemption_event_create(request):
81 event = RewardPointRedemptionEvent()
82 form = RewardPointRedemptionEventForm(request.POST or None, instance=event)
83
84 if form.is_valid():
85 form.save()
86 messages.success(request, _("Successfully created event."))
87 return redirect("rewards:reward_point_redemption_events")
88
89 return render(request, "rewards_reward_point_redemption_event_form.html", dict(form=form))
90
91
92 @manager_required
93 def reward_point_redemption_event_edit(request, event_id):
94 event = get_object_or_404(RewardPointRedemptionEvent, id=event_id)
95 form = RewardPointRedemptionEventForm(request.POST or None, instance=event)
96
97 if form.is_valid():
98 event = form.save()
99
100 messages.success(request, _("Successfully updated event."))
101 return redirect("rewards:reward_point_redemption_events")
102
103 return render(request, "rewards_reward_point_redemption_event_form.html", dict(event=event, form=form))
104
105
106 @require_POST
107 @manager_required
108 def reward_point_redemption_event_delete(request):
109 event = get_object_from_dict_pk_entry_or_logged_40x(RewardPointRedemptionEvent, request.POST, "event_id")
110
111 if not event.can_delete:
112 raise SuspiciousOperation("Deleting redemption event not allowed")
113 event.delete()
114 return HttpResponse() # 200 OK
115
116
117 @manager_required
118 def reward_point_redemption_event_export(request, event_id):
119 event = get_object_or_404(RewardPointRedemptionEvent, id=event_id)
120
121 filename = _("RewardPoints") + f"-{event.date}-{event.name}-{get_language()}.xls"
122 response = AttachmentResponse(filename, content_type="application/vnd.ms-excel")
123
124 RewardsExporter().export(response, event.redemptions_by_user())
125
126 return response
127
128
129 @manager_required
130 def semester_activation(request, semester_id, active):
131 semester = get_object_or_404(Semester, id=semester_id)
132 active = active == "on"
133
134 SemesterActivation.objects.update_or_create(semester=semester, defaults={"is_active": active})
135 if active:
136 grant_eligible_reward_points_for_semester(request, semester)
137
138 return semester_view(request=request, semester_id=semester_id)
139
[end of evap/rewards/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/evap/rewards/views.py b/evap/rewards/views.py
--- a/evap/rewards/views.py
+++ b/evap/rewards/views.py
@@ -63,7 +63,6 @@
reward_point_actions=reward_point_actions,
total_points_available=total_points_available,
events=events,
- point_selection=range(0, total_points_available + 1),
)
return render(request, "rewards_index.html", template_data)
|
{"golden_diff": "diff --git a/evap/rewards/views.py b/evap/rewards/views.py\n--- a/evap/rewards/views.py\n+++ b/evap/rewards/views.py\n@@ -63,7 +63,6 @@\n reward_point_actions=reward_point_actions,\n total_points_available=total_points_available,\n events=events,\n- point_selection=range(0, total_points_available + 1),\n )\n return render(request, \"rewards_index.html\", template_data)\n", "issue": "Replace reward point redemption dropdown with number input field\nIf a user selects an option, a new line is added and the selection spans two rows. This looks wrong.\r\n\r\nA user can insert custom options. If the user inputs something invalid like \"abcdef\" or an empty string, only parts of \"Please select\"-placeholder is visible. This looks wrong as well.\nReplace reward point redemption dropdown with number input field\nIf a user selects an option, a new line is added and the selection spans two rows. This looks wrong.\r\n\r\nA user can insert custom options. If the user inputs something invalid like \"abcdef\" or an empty string, only parts of \"Please select\"-placeholder is visible. This looks wrong as well.\n", "before_files": [{"content": "from datetime import datetime\n\nfrom django.contrib import messages\nfrom django.core.exceptions import BadRequest, SuspiciousOperation\nfrom django.http import HttpResponse\nfrom django.shortcuts import get_object_or_404, redirect, render\nfrom django.utils.translation import get_language\nfrom django.utils.translation import gettext as _\nfrom django.views.decorators.http import require_POST\n\nfrom evap.evaluation.auth import manager_required, reward_user_required\nfrom evap.evaluation.models import Semester\nfrom evap.evaluation.tools import AttachmentResponse, get_object_from_dict_pk_entry_or_logged_40x\nfrom evap.rewards.exporters import RewardsExporter\nfrom evap.rewards.forms import RewardPointRedemptionEventForm\nfrom evap.rewards.models import (\n NoPointsSelected,\n NotEnoughPoints,\n RedemptionEventExpired,\n RewardPointGranting,\n RewardPointRedemption,\n RewardPointRedemptionEvent,\n SemesterActivation,\n)\nfrom evap.rewards.tools import grant_eligible_reward_points_for_semester, reward_points_of_user, save_redemptions\nfrom evap.staff.views import semester_view\n\n\n@reward_user_required\ndef index(request):\n if request.method == \"POST\":\n redemptions = {}\n try:\n for key, value in request.POST.items():\n if key.startswith(\"points-\"):\n event_id = int(key.rpartition(\"-\")[2])\n redemptions[event_id] = int(value)\n except ValueError as e:\n raise BadRequest from e\n\n try:\n save_redemptions(request, redemptions)\n messages.success(request, _(\"You successfully redeemed your points.\"))\n except (NoPointsSelected, NotEnoughPoints, RedemptionEventExpired) as error:\n messages.warning(request, error)\n\n total_points_available = reward_points_of_user(request.user)\n reward_point_grantings = RewardPointGranting.objects.filter(user_profile=request.user)\n reward_point_redemptions = RewardPointRedemption.objects.filter(user_profile=request.user)\n events = RewardPointRedemptionEvent.objects.filter(redeem_end_date__gte=datetime.now()).order_by(\"date\")\n\n reward_point_actions = []\n for granting in reward_point_grantings:\n reward_point_actions.append(\n (granting.granting_time, _(\"Reward for\") + \" \" + granting.semester.name, granting.value, \"\")\n )\n for redemption in reward_point_redemptions:\n reward_point_actions.append((redemption.redemption_time, redemption.event.name, \"\", redemption.value))\n\n 
reward_point_actions.sort(key=lambda action: action[0], reverse=True)\n\n template_data = dict(\n reward_point_actions=reward_point_actions,\n total_points_available=total_points_available,\n events=events,\n point_selection=range(0, total_points_available + 1),\n )\n return render(request, \"rewards_index.html\", template_data)\n\n\n@manager_required\ndef reward_point_redemption_events(request):\n upcoming_events = RewardPointRedemptionEvent.objects.filter(redeem_end_date__gte=datetime.now()).order_by(\"date\")\n past_events = RewardPointRedemptionEvent.objects.filter(redeem_end_date__lt=datetime.now()).order_by(\"-date\")\n template_data = dict(upcoming_events=upcoming_events, past_events=past_events)\n return render(request, \"rewards_reward_point_redemption_events.html\", template_data)\n\n\n@manager_required\ndef reward_point_redemption_event_create(request):\n event = RewardPointRedemptionEvent()\n form = RewardPointRedemptionEventForm(request.POST or None, instance=event)\n\n if form.is_valid():\n form.save()\n messages.success(request, _(\"Successfully created event.\"))\n return redirect(\"rewards:reward_point_redemption_events\")\n\n return render(request, \"rewards_reward_point_redemption_event_form.html\", dict(form=form))\n\n\n@manager_required\ndef reward_point_redemption_event_edit(request, event_id):\n event = get_object_or_404(RewardPointRedemptionEvent, id=event_id)\n form = RewardPointRedemptionEventForm(request.POST or None, instance=event)\n\n if form.is_valid():\n event = form.save()\n\n messages.success(request, _(\"Successfully updated event.\"))\n return redirect(\"rewards:reward_point_redemption_events\")\n\n return render(request, \"rewards_reward_point_redemption_event_form.html\", dict(event=event, form=form))\n\n\n@require_POST\n@manager_required\ndef reward_point_redemption_event_delete(request):\n event = get_object_from_dict_pk_entry_or_logged_40x(RewardPointRedemptionEvent, request.POST, \"event_id\")\n\n if not event.can_delete:\n raise SuspiciousOperation(\"Deleting redemption event not allowed\")\n event.delete()\n return HttpResponse() # 200 OK\n\n\n@manager_required\ndef reward_point_redemption_event_export(request, event_id):\n event = get_object_or_404(RewardPointRedemptionEvent, id=event_id)\n\n filename = _(\"RewardPoints\") + f\"-{event.date}-{event.name}-{get_language()}.xls\"\n response = AttachmentResponse(filename, content_type=\"application/vnd.ms-excel\")\n\n RewardsExporter().export(response, event.redemptions_by_user())\n\n return response\n\n\n@manager_required\ndef semester_activation(request, semester_id, active):\n semester = get_object_or_404(Semester, id=semester_id)\n active = active == \"on\"\n\n SemesterActivation.objects.update_or_create(semester=semester, defaults={\"is_active\": active})\n if active:\n grant_eligible_reward_points_for_semester(request, semester)\n\n return semester_view(request=request, semester_id=semester_id)\n", "path": "evap/rewards/views.py"}]}
| 2,178 | 107 |
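The diff in the record above only drops the `point_selection` range from the view, since a number input needs no list of options. The parsing of the posted values stays the same; a standalone sketch of that loop is below, with an invented POST dict purely for illustration.

```python
def parse_redemptions(post_data):
    """Collect ``points-<event_id>`` fields into an {event_id: points} dict."""
    redemptions = {}
    for key, value in post_data.items():
        if key.startswith("points-"):
            event_id = int(key.rpartition("-")[2])
            # Number inputs still submit strings; int() raises ValueError on
            # junk like "abcdef", which the view turns into a BadRequest.
            redemptions[event_id] = int(value)
    return redemptions


print(parse_redemptions({"points-3": "2", "points-7": "0", "csrfmiddlewaretoken": "x"}))
# -> {3: 2, 7: 0}
```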
gh_patches_debug_9175
|
rasdani/github-patches
|
git_diff
|
scrapy__scrapy-5692
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
DeprecationWarning: ANTIALIAS is deprecated
> scrapy/pipelines/images.py:163: DeprecationWarning: ANTIALIAS is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.LANCZOS instead.
> image.thumbnail(size, self._Image.ANTIALIAS)
This is deprecated since Pillow 9.1.0, released in April. We should check if `Resampling.LANCZOS` is already available in the earliest version we support (7.1.0 as far as I can see), and use it if it's available there. If it was added later, I think we need to decide how to proceed.
</issue>
<code>
[start of scrapy/pipelines/images.py]
1 """
2 Images Pipeline
3
4 See documentation in topics/media-pipeline.rst
5 """
6 import functools
7 import hashlib
8 from contextlib import suppress
9 from io import BytesIO
10
11 from itemadapter import ItemAdapter
12
13 from scrapy.exceptions import DropItem, NotConfigured
14 from scrapy.http import Request
15 from scrapy.pipelines.files import FileException, FilesPipeline
16 # TODO: from scrapy.pipelines.media import MediaPipeline
17 from scrapy.settings import Settings
18 from scrapy.utils.misc import md5sum
19 from scrapy.utils.python import to_bytes
20
21
22 class NoimagesDrop(DropItem):
23 """Product with no images exception"""
24
25
26 class ImageException(FileException):
27 """General image error exception"""
28
29
30 class ImagesPipeline(FilesPipeline):
31 """Abstract pipeline that implement the image thumbnail generation logic
32
33 """
34
35 MEDIA_NAME = 'image'
36
37 # Uppercase attributes kept for backward compatibility with code that subclasses
38 # ImagesPipeline. They may be overridden by settings.
39 MIN_WIDTH = 0
40 MIN_HEIGHT = 0
41 EXPIRES = 90
42 THUMBS = {}
43 DEFAULT_IMAGES_URLS_FIELD = 'image_urls'
44 DEFAULT_IMAGES_RESULT_FIELD = 'images'
45
46 def __init__(self, store_uri, download_func=None, settings=None):
47 try:
48 from PIL import Image
49 self._Image = Image
50 except ImportError:
51 raise NotConfigured(
52 'ImagesPipeline requires installing Pillow 4.0.0 or later'
53 )
54
55 super().__init__(store_uri, settings=settings, download_func=download_func)
56
57 if isinstance(settings, dict) or settings is None:
58 settings = Settings(settings)
59
60 resolve = functools.partial(self._key_for_pipe,
61 base_class_name="ImagesPipeline",
62 settings=settings)
63 self.expires = settings.getint(
64 resolve("IMAGES_EXPIRES"), self.EXPIRES
65 )
66
67 if not hasattr(self, "IMAGES_RESULT_FIELD"):
68 self.IMAGES_RESULT_FIELD = self.DEFAULT_IMAGES_RESULT_FIELD
69 if not hasattr(self, "IMAGES_URLS_FIELD"):
70 self.IMAGES_URLS_FIELD = self.DEFAULT_IMAGES_URLS_FIELD
71
72 self.images_urls_field = settings.get(
73 resolve('IMAGES_URLS_FIELD'),
74 self.IMAGES_URLS_FIELD
75 )
76 self.images_result_field = settings.get(
77 resolve('IMAGES_RESULT_FIELD'),
78 self.IMAGES_RESULT_FIELD
79 )
80 self.min_width = settings.getint(
81 resolve('IMAGES_MIN_WIDTH'), self.MIN_WIDTH
82 )
83 self.min_height = settings.getint(
84 resolve('IMAGES_MIN_HEIGHT'), self.MIN_HEIGHT
85 )
86 self.thumbs = settings.get(
87 resolve('IMAGES_THUMBS'), self.THUMBS
88 )
89
90 @classmethod
91 def from_settings(cls, settings):
92 s3store = cls.STORE_SCHEMES['s3']
93 s3store.AWS_ACCESS_KEY_ID = settings['AWS_ACCESS_KEY_ID']
94 s3store.AWS_SECRET_ACCESS_KEY = settings['AWS_SECRET_ACCESS_KEY']
95 s3store.AWS_SESSION_TOKEN = settings['AWS_SESSION_TOKEN']
96 s3store.AWS_ENDPOINT_URL = settings['AWS_ENDPOINT_URL']
97 s3store.AWS_REGION_NAME = settings['AWS_REGION_NAME']
98 s3store.AWS_USE_SSL = settings['AWS_USE_SSL']
99 s3store.AWS_VERIFY = settings['AWS_VERIFY']
100 s3store.POLICY = settings['IMAGES_STORE_S3_ACL']
101
102 gcs_store = cls.STORE_SCHEMES['gs']
103 gcs_store.GCS_PROJECT_ID = settings['GCS_PROJECT_ID']
104 gcs_store.POLICY = settings['IMAGES_STORE_GCS_ACL'] or None
105
106 ftp_store = cls.STORE_SCHEMES['ftp']
107 ftp_store.FTP_USERNAME = settings['FTP_USER']
108 ftp_store.FTP_PASSWORD = settings['FTP_PASSWORD']
109 ftp_store.USE_ACTIVE_MODE = settings.getbool('FEED_STORAGE_FTP_ACTIVE')
110
111 store_uri = settings['IMAGES_STORE']
112 return cls(store_uri, settings=settings)
113
114 def file_downloaded(self, response, request, info, *, item=None):
115 return self.image_downloaded(response, request, info, item=item)
116
117 def image_downloaded(self, response, request, info, *, item=None):
118 checksum = None
119 for path, image, buf in self.get_images(response, request, info, item=item):
120 if checksum is None:
121 buf.seek(0)
122 checksum = md5sum(buf)
123 width, height = image.size
124 self.store.persist_file(
125 path, buf, info,
126 meta={'width': width, 'height': height},
127 headers={'Content-Type': 'image/jpeg'})
128 return checksum
129
130 def get_images(self, response, request, info, *, item=None):
131 path = self.file_path(request, response=response, info=info, item=item)
132 orig_image = self._Image.open(BytesIO(response.body))
133
134 width, height = orig_image.size
135 if width < self.min_width or height < self.min_height:
136 raise ImageException("Image too small "
137 f"({width}x{height} < "
138 f"{self.min_width}x{self.min_height})")
139
140 image, buf = self.convert_image(orig_image)
141 yield path, image, buf
142
143 for thumb_id, size in self.thumbs.items():
144 thumb_path = self.thumb_path(request, thumb_id, response=response, info=info, item=item)
145 thumb_image, thumb_buf = self.convert_image(image, size)
146 yield thumb_path, thumb_image, thumb_buf
147
148 def convert_image(self, image, size=None):
149 if image.format == 'PNG' and image.mode == 'RGBA':
150 background = self._Image.new('RGBA', image.size, (255, 255, 255))
151 background.paste(image, image)
152 image = background.convert('RGB')
153 elif image.mode == 'P':
154 image = image.convert("RGBA")
155 background = self._Image.new('RGBA', image.size, (255, 255, 255))
156 background.paste(image, image)
157 image = background.convert('RGB')
158 elif image.mode != 'RGB':
159 image = image.convert('RGB')
160
161 if size:
162 image = image.copy()
163 image.thumbnail(size, self._Image.ANTIALIAS)
164
165 buf = BytesIO()
166 image.save(buf, 'JPEG')
167 return image, buf
168
169 def get_media_requests(self, item, info):
170 urls = ItemAdapter(item).get(self.images_urls_field, [])
171 return [Request(u) for u in urls]
172
173 def item_completed(self, results, item, info):
174 with suppress(KeyError):
175 ItemAdapter(item)[self.images_result_field] = [x for ok, x in results if ok]
176 return item
177
178 def file_path(self, request, response=None, info=None, *, item=None):
179 image_guid = hashlib.sha1(to_bytes(request.url)).hexdigest()
180 return f'full/{image_guid}.jpg'
181
182 def thumb_path(self, request, thumb_id, response=None, info=None, *, item=None):
183 thumb_guid = hashlib.sha1(to_bytes(request.url)).hexdigest()
184 return f'thumbs/{thumb_id}/{thumb_guid}.jpg'
185
[end of scrapy/pipelines/images.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/scrapy/pipelines/images.py b/scrapy/pipelines/images.py
--- a/scrapy/pipelines/images.py
+++ b/scrapy/pipelines/images.py
@@ -160,7 +160,14 @@
if size:
image = image.copy()
- image.thumbnail(size, self._Image.ANTIALIAS)
+ try:
+ # Image.Resampling.LANCZOS was added in Pillow 9.1.0
+ # remove this try except block,
+ # when updating the minimum requirements for Pillow.
+ resampling_filter = self._Image.Resampling.LANCZOS
+ except AttributeError:
+ resampling_filter = self._Image.ANTIALIAS
+ image.thumbnail(size, resampling_filter)
buf = BytesIO()
image.save(buf, 'JPEG')
|
{"golden_diff": "diff --git a/scrapy/pipelines/images.py b/scrapy/pipelines/images.py\n--- a/scrapy/pipelines/images.py\n+++ b/scrapy/pipelines/images.py\n@@ -160,7 +160,14 @@\n \n if size:\n image = image.copy()\n- image.thumbnail(size, self._Image.ANTIALIAS)\n+ try:\n+ # Image.Resampling.LANCZOS was added in Pillow 9.1.0\n+ # remove this try except block,\n+ # when updating the minimum requirements for Pillow.\n+ resampling_filter = self._Image.Resampling.LANCZOS\n+ except AttributeError:\n+ resampling_filter = self._Image.ANTIALIAS\n+ image.thumbnail(size, resampling_filter)\n \n buf = BytesIO()\n image.save(buf, 'JPEG')\n", "issue": "DeprecationWarning: ANTIALIAS is deprecated\n> scrapy/pipelines/images.py:163: DeprecationWarning: ANTIALIAS is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.LANCZOS instead.\r\n> image.thumbnail(size, self._Image.ANTIALIAS)\r\n\r\nThis is deprecated since Pillow 9.1.0, released in April. We should check if `Resampling.LANCZOS` is already available in the earliest version we support (7.1.0 as far as I can see), and use it if it's available there. If it was added later, I think we need to decide how to proceed.\n", "before_files": [{"content": "\"\"\"\nImages Pipeline\n\nSee documentation in topics/media-pipeline.rst\n\"\"\"\nimport functools\nimport hashlib\nfrom contextlib import suppress\nfrom io import BytesIO\n\nfrom itemadapter import ItemAdapter\n\nfrom scrapy.exceptions import DropItem, NotConfigured\nfrom scrapy.http import Request\nfrom scrapy.pipelines.files import FileException, FilesPipeline\n# TODO: from scrapy.pipelines.media import MediaPipeline\nfrom scrapy.settings import Settings\nfrom scrapy.utils.misc import md5sum\nfrom scrapy.utils.python import to_bytes\n\n\nclass NoimagesDrop(DropItem):\n \"\"\"Product with no images exception\"\"\"\n\n\nclass ImageException(FileException):\n \"\"\"General image error exception\"\"\"\n\n\nclass ImagesPipeline(FilesPipeline):\n \"\"\"Abstract pipeline that implement the image thumbnail generation logic\n\n \"\"\"\n\n MEDIA_NAME = 'image'\n\n # Uppercase attributes kept for backward compatibility with code that subclasses\n # ImagesPipeline. 
They may be overridden by settings.\n MIN_WIDTH = 0\n MIN_HEIGHT = 0\n EXPIRES = 90\n THUMBS = {}\n DEFAULT_IMAGES_URLS_FIELD = 'image_urls'\n DEFAULT_IMAGES_RESULT_FIELD = 'images'\n\n def __init__(self, store_uri, download_func=None, settings=None):\n try:\n from PIL import Image\n self._Image = Image\n except ImportError:\n raise NotConfigured(\n 'ImagesPipeline requires installing Pillow 4.0.0 or later'\n )\n\n super().__init__(store_uri, settings=settings, download_func=download_func)\n\n if isinstance(settings, dict) or settings is None:\n settings = Settings(settings)\n\n resolve = functools.partial(self._key_for_pipe,\n base_class_name=\"ImagesPipeline\",\n settings=settings)\n self.expires = settings.getint(\n resolve(\"IMAGES_EXPIRES\"), self.EXPIRES\n )\n\n if not hasattr(self, \"IMAGES_RESULT_FIELD\"):\n self.IMAGES_RESULT_FIELD = self.DEFAULT_IMAGES_RESULT_FIELD\n if not hasattr(self, \"IMAGES_URLS_FIELD\"):\n self.IMAGES_URLS_FIELD = self.DEFAULT_IMAGES_URLS_FIELD\n\n self.images_urls_field = settings.get(\n resolve('IMAGES_URLS_FIELD'),\n self.IMAGES_URLS_FIELD\n )\n self.images_result_field = settings.get(\n resolve('IMAGES_RESULT_FIELD'),\n self.IMAGES_RESULT_FIELD\n )\n self.min_width = settings.getint(\n resolve('IMAGES_MIN_WIDTH'), self.MIN_WIDTH\n )\n self.min_height = settings.getint(\n resolve('IMAGES_MIN_HEIGHT'), self.MIN_HEIGHT\n )\n self.thumbs = settings.get(\n resolve('IMAGES_THUMBS'), self.THUMBS\n )\n\n @classmethod\n def from_settings(cls, settings):\n s3store = cls.STORE_SCHEMES['s3']\n s3store.AWS_ACCESS_KEY_ID = settings['AWS_ACCESS_KEY_ID']\n s3store.AWS_SECRET_ACCESS_KEY = settings['AWS_SECRET_ACCESS_KEY']\n s3store.AWS_SESSION_TOKEN = settings['AWS_SESSION_TOKEN']\n s3store.AWS_ENDPOINT_URL = settings['AWS_ENDPOINT_URL']\n s3store.AWS_REGION_NAME = settings['AWS_REGION_NAME']\n s3store.AWS_USE_SSL = settings['AWS_USE_SSL']\n s3store.AWS_VERIFY = settings['AWS_VERIFY']\n s3store.POLICY = settings['IMAGES_STORE_S3_ACL']\n\n gcs_store = cls.STORE_SCHEMES['gs']\n gcs_store.GCS_PROJECT_ID = settings['GCS_PROJECT_ID']\n gcs_store.POLICY = settings['IMAGES_STORE_GCS_ACL'] or None\n\n ftp_store = cls.STORE_SCHEMES['ftp']\n ftp_store.FTP_USERNAME = settings['FTP_USER']\n ftp_store.FTP_PASSWORD = settings['FTP_PASSWORD']\n ftp_store.USE_ACTIVE_MODE = settings.getbool('FEED_STORAGE_FTP_ACTIVE')\n\n store_uri = settings['IMAGES_STORE']\n return cls(store_uri, settings=settings)\n\n def file_downloaded(self, response, request, info, *, item=None):\n return self.image_downloaded(response, request, info, item=item)\n\n def image_downloaded(self, response, request, info, *, item=None):\n checksum = None\n for path, image, buf in self.get_images(response, request, info, item=item):\n if checksum is None:\n buf.seek(0)\n checksum = md5sum(buf)\n width, height = image.size\n self.store.persist_file(\n path, buf, info,\n meta={'width': width, 'height': height},\n headers={'Content-Type': 'image/jpeg'})\n return checksum\n\n def get_images(self, response, request, info, *, item=None):\n path = self.file_path(request, response=response, info=info, item=item)\n orig_image = self._Image.open(BytesIO(response.body))\n\n width, height = orig_image.size\n if width < self.min_width or height < self.min_height:\n raise ImageException(\"Image too small \"\n f\"({width}x{height} < \"\n f\"{self.min_width}x{self.min_height})\")\n\n image, buf = self.convert_image(orig_image)\n yield path, image, buf\n\n for thumb_id, size in self.thumbs.items():\n thumb_path = 
self.thumb_path(request, thumb_id, response=response, info=info, item=item)\n thumb_image, thumb_buf = self.convert_image(image, size)\n yield thumb_path, thumb_image, thumb_buf\n\n def convert_image(self, image, size=None):\n if image.format == 'PNG' and image.mode == 'RGBA':\n background = self._Image.new('RGBA', image.size, (255, 255, 255))\n background.paste(image, image)\n image = background.convert('RGB')\n elif image.mode == 'P':\n image = image.convert(\"RGBA\")\n background = self._Image.new('RGBA', image.size, (255, 255, 255))\n background.paste(image, image)\n image = background.convert('RGB')\n elif image.mode != 'RGB':\n image = image.convert('RGB')\n\n if size:\n image = image.copy()\n image.thumbnail(size, self._Image.ANTIALIAS)\n\n buf = BytesIO()\n image.save(buf, 'JPEG')\n return image, buf\n\n def get_media_requests(self, item, info):\n urls = ItemAdapter(item).get(self.images_urls_field, [])\n return [Request(u) for u in urls]\n\n def item_completed(self, results, item, info):\n with suppress(KeyError):\n ItemAdapter(item)[self.images_result_field] = [x for ok, x in results if ok]\n return item\n\n def file_path(self, request, response=None, info=None, *, item=None):\n image_guid = hashlib.sha1(to_bytes(request.url)).hexdigest()\n return f'full/{image_guid}.jpg'\n\n def thumb_path(self, request, thumb_id, response=None, info=None, *, item=None):\n thumb_guid = hashlib.sha1(to_bytes(request.url)).hexdigest()\n return f'thumbs/{thumb_id}/{thumb_guid}.jpg'\n", "path": "scrapy/pipelines/images.py"}]}
| 2,680 | 181 |
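The approach used in the record above — prefer `Image.Resampling.LANCZOS` where it exists and fall back to `ANTIALIAS` on older Pillow releases — can be exercised in isolation with a small script; the image and thumbnail sizes here are arbitrary.

```python
from io import BytesIO

from PIL import Image

# Image.Resampling was added in Pillow 9.1.0; earlier releases only expose the
# module-level ANTIALIAS constant, which 9.1+ deprecates and 10 removes.
try:
    RESAMPLING_FILTER = Image.Resampling.LANCZOS
except AttributeError:
    RESAMPLING_FILTER = Image.ANTIALIAS

image = Image.new("RGB", (256, 256), (255, 0, 0))
thumb = image.copy()
thumb.thumbnail((64, 64), RESAMPLING_FILTER)

buf = BytesIO()
thumb.save(buf, "JPEG")
print(thumb.size, len(buf.getvalue()), "bytes")
```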
gh_patches_debug_40636
|
rasdani/github-patches
|
git_diff
|
rasterio__rasterio-158
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
shapes from data types other than uint8
The companion to #136.
</issue>
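Until `shapes()` accepts dtypes beyond uint8 (the change this issue asks for), callers have to cast their arrays down first. A rough sketch of that workaround, with an invented label array; note that values above 255 would be lost by the cast, which is exactly the limitation at stake:

```python
import numpy as np

from rasterio.features import shapes

# Hypothetical int32 label image; shapes() below only accepts uint8 today.
labels = np.zeros((10, 10), dtype=np.int32)
labels[2:5, 2:5] = 7

for geom, value in shapes(labels.astype(np.uint8)):
    print(value, geom["type"])
```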
<code>
[start of rasterio/features.py]
1 """Functions for working with features in a raster dataset."""
2
3 import json
4 import logging
5 import time
6 import warnings
7
8 import numpy as np
9
10 import rasterio
11 from rasterio._features import _shapes, _sieve, _rasterize
12 from rasterio.transform import IDENTITY, guard_transform
13 from rasterio.dtypes import get_minimum_int_dtype
14
15
16 log = logging.getLogger('rasterio')
17 class NullHandler(logging.Handler):
18 def emit(self, record):
19 pass
20 log.addHandler(NullHandler())
21
22
23 def shapes(image, mask=None, connectivity=4, transform=IDENTITY):
24 """Yields a (shape, image_value) pair for each feature in the image.
25
26 The shapes are GeoJSON-like dicts and the image values are ints.
27
28 Features are found using a connected-component labeling algorithm.
29
30 The image must be of unsigned 8-bit integer (rasterio.byte or
31 numpy.uint8) data type. If a mask is provided, pixels for which the
32 mask is `False` will be excluded from feature generation.
33 """
34 if np.dtype(image.dtype) != np.dtype(rasterio.ubyte):
35 raise ValueError("Image must be dtype uint8/ubyte")
36
37 if mask is not None and np.dtype(mask.dtype) != np.dtype(rasterio.bool_):
38 raise ValueError("Mask must be dtype rasterio.bool_")
39
40 if connectivity not in (4, 8):
41 raise ValueError("Connectivity Option must be 4 or 8")
42
43 transform = guard_transform(transform)
44
45 with rasterio.drivers():
46 for s, v in _shapes(image, mask, connectivity, transform.to_gdal()):
47 yield s, v
48
49
50 def sieve(image, size, connectivity=4, output=None):
51 """Returns a copy of the image, but with smaller features removed.
52
53 Features smaller than the specified size have their pixel value
54 replaced by that of the largest neighboring features.
55
56 The image must be of unsigned 8-bit integer (rasterio.byte or
57 numpy.uint8) data type.
58 """
59 if np.dtype(image.dtype) != np.dtype(rasterio.ubyte):
60 raise ValueError("Image must be dtype uint8/ubyte")
61
62 if output is not None and (
63 np.dtype(output.dtype) != np.dtype(rasterio.ubyte)):
64 raise ValueError("Output must be dtype uint8/ubyte")
65
66 with rasterio.drivers():
67 return _sieve(image, size, connectivity)
68
69
70 def rasterize(
71 shapes,
72 out_shape=None,
73 fill=0,
74 output=None,
75 transform=IDENTITY,
76 all_touched=False,
77 default_value=1,
78 dtype=None):
79 """Returns an image array with points, lines, or polygons burned in.
80
81 A different value may be specified for each shape. The shapes may
82 be georeferenced or may have image coordinates. An existing image
83 array may be provided, or one may be created. By default, the center
84 of image elements determines whether they are updated, but all
85 touched elements may be optionally updated.
86
87 Valid data types are: int16, int32, uint8, uint16, uint32, float32, float64
88
89 :param shapes: an iterator over Fiona style geometry objects (with a default
90 value of default_value) or an iterator over (geometry, value) pairs.
91
92 :param transform: GDAL style geotransform to be applied to the
93 image.
94
95 :param out_shape: shape of created image array
96 :param fill: fill value for created image array
97 :param output: alternatively, an existing image array
98
99 :param all_touched: if True, will rasterize all pixels touched,
100 otherwise will use GDAL default method.
101 :param default_value: value burned in for shapes if not provided as part
102 of shapes.
103 """
104
105 valid_dtypes = ('int16', 'int32', 'uint8', 'uint16', 'uint32', 'float32',
106 'float64')
107
108 def get_valid_dtype(values):
109 values_dtype = values.dtype
110 if values_dtype.kind == 'i':
111 values_dtype = np.dtype(get_minimum_int_dtype(values))
112 if values_dtype.name in valid_dtypes:
113 return values_dtype
114 return None
115
116 def can_cast_dtype(values, dtype):
117 if values.dtype.name == np.dtype(dtype).name:
118 return True
119 elif values.dtype.kind == 'f':
120 return np.allclose(values, values.astype(dtype))
121 else:
122 return np.array_equal(values, values.astype(dtype))
123
124 if fill != 0:
125 fill_array = np.array([fill])
126 if get_valid_dtype(fill_array) is None:
127 raise ValueError('fill must be one of these types: %s'
128 % (', '.join(valid_dtypes)))
129 elif dtype is not None and not can_cast_dtype(fill_array, dtype):
130 raise ValueError('fill value cannot be cast to specified dtype')
131
132
133 if default_value != 1:
134 default_value_array = np.array([default_value])
135 if get_valid_dtype(default_value_array) is None:
136 raise ValueError('default_value must be one of these types: %s'
137 % (', '.join(valid_dtypes)))
138 elif dtype is not None and not can_cast_dtype(default_value_array,
139 dtype):
140 raise ValueError('default_value cannot be cast to specified dtype')
141
142 valid_shapes = []
143 shape_values = []
144 for index, item in enumerate(shapes):
145 try:
146 if isinstance(item, (tuple, list)):
147 geom, value = item
148 else:
149 geom = item
150 value = default_value
151 geom = getattr(geom, '__geo_interface__', None) or geom
152 if (not isinstance(geom, dict) or
153 'type' not in geom or 'coordinates' not in geom):
154 raise ValueError(
155 'Object %r at index %d is not a geometry object' %
156 (geom, index))
157 valid_shapes.append((geom, value))
158 shape_values.append(value)
159 except Exception:
160 log.exception('Exception caught, skipping shape %d', index)
161
162 if not valid_shapes:
163 raise ValueError('No valid shapes found for rasterize. Shapes must be '
164 'valid geometry objects')
165
166 shape_values = np.array(shape_values)
167 values_dtype = get_valid_dtype(shape_values)
168 if values_dtype is None:
169 raise ValueError('shape values must be one of these dtypes: %s' %
170 (', '.join(valid_dtypes)))
171
172 if dtype is None:
173 dtype = values_dtype
174 elif np.dtype(dtype).name not in valid_dtypes:
175 raise ValueError('dtype must be one of: %s' % (', '.join(valid_dtypes)))
176 elif not can_cast_dtype(shape_values, dtype):
177 raise ValueError('shape values could not be cast to specified dtype')
178
179 if output is not None:
180 if np.dtype(output.dtype).name not in valid_dtypes:
181 raise ValueError('Output image dtype must be one of: %s'
182 % (', '.join(valid_dtypes)))
183 if not can_cast_dtype(shape_values, output.dtype):
184 raise ValueError('shape values cannot be cast to dtype of output '
185 'image')
186
187 elif out_shape is not None:
188 output = np.empty(out_shape, dtype=dtype)
189 output.fill(fill)
190 else:
191 raise ValueError('Either an output shape or image must be provided')
192
193 transform = guard_transform(transform)
194
195 with rasterio.drivers():
196 _rasterize(valid_shapes, output, transform.to_gdal(), all_touched)
197
198 return output
199
200
[end of rasterio/features.py]
</code>
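For readers skimming the listing, a small hypothetical usage of `shapes()` as it is documented above (still uint8-only at this point in the file) might look like the following; the toy array is invented for illustration.

```python
import numpy as np
import rasterio
from rasterio import features

# A tiny 4x4 uint8 "image" with a 2x2 block of value 1 in one corner.
image = np.zeros((4, 4), dtype=rasterio.ubyte)
image[:2, :2] = 1

# Each yielded pair is a (GeoJSON-like geometry dict, pixel value).
for geom, value in features.shapes(image):
    print(value, geom["type"])
```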
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/rasterio/features.py b/rasterio/features.py
--- a/rasterio/features.py
+++ b/rasterio/features.py
@@ -22,17 +22,26 @@
def shapes(image, mask=None, connectivity=4, transform=IDENTITY):
"""Yields a (shape, image_value) pair for each feature in the image.
-
- The shapes are GeoJSON-like dicts and the image values are ints.
-
+
+ The shapes are GeoJSON-like dicts and the image values are ints or floats
+ depending on the data type of the image.
+
Features are found using a connected-component labeling algorithm.
- The image must be of unsigned 8-bit integer (rasterio.byte or
- numpy.uint8) data type. If a mask is provided, pixels for which the
- mask is `False` will be excluded from feature generation.
+ The image must be one of int16, int32, uint8, uint16, float32 data types.
+ Note: due to floating point precision issues, the floating point values
+ returned from a floating point image may not exactly match the original
+ values.
+
+ If a mask is provided, pixels for which the mask is `False` will be
+ excluded from feature generation.
"""
- if np.dtype(image.dtype) != np.dtype(rasterio.ubyte):
- raise ValueError("Image must be dtype uint8/ubyte")
+
+ valid_dtypes = ('int16', 'int32', 'uint8', 'uint16', 'float32')
+
+ if np.dtype(image.dtype).name not in valid_dtypes:
+ raise ValueError('image dtype must be one of: %s'
+ % (', '.join(valid_dtypes)))
if mask is not None and np.dtype(mask.dtype) != np.dtype(rasterio.bool_):
raise ValueError("Mask must be dtype rasterio.bool_")
@@ -52,7 +61,7 @@
Features smaller than the specified size have their pixel value
replaced by that of the largest neighboring features.
-
+
The image must be of unsigned 8-bit integer (rasterio.byte or
numpy.uint8) data type.
"""
@@ -68,7 +77,7 @@
def rasterize(
- shapes,
+ shapes,
out_shape=None,
fill=0,
output=None,
@@ -96,7 +105,7 @@
:param fill: fill value for created image array
:param output: alternatively, an existing image array
- :param all_touched: if True, will rasterize all pixels touched,
+ :param all_touched: if True, will rasterize all pixels touched,
otherwise will use GDAL default method.
:param default_value: value burned in for shapes if not provided as part
of shapes.
@@ -178,7 +187,7 @@
if output is not None:
if np.dtype(output.dtype).name not in valid_dtypes:
- raise ValueError('Output image dtype must be one of: %s'
+ raise ValueError('Output image dtype must be one of: %s'
% (', '.join(valid_dtypes)))
if not can_cast_dtype(shape_values, output.dtype):
raise ValueError('shape values cannot be cast to dtype of output '
|
{"golden_diff": "diff --git a/rasterio/features.py b/rasterio/features.py\n--- a/rasterio/features.py\n+++ b/rasterio/features.py\n@@ -22,17 +22,26 @@\n \n def shapes(image, mask=None, connectivity=4, transform=IDENTITY):\n \"\"\"Yields a (shape, image_value) pair for each feature in the image.\n- \n- The shapes are GeoJSON-like dicts and the image values are ints.\n- \n+\n+ The shapes are GeoJSON-like dicts and the image values are ints or floats\n+ depending on the data type of the image.\n+\n Features are found using a connected-component labeling algorithm.\n \n- The image must be of unsigned 8-bit integer (rasterio.byte or\n- numpy.uint8) data type. If a mask is provided, pixels for which the\n- mask is `False` will be excluded from feature generation.\n+ The image must be one of int16, int32, uint8, uint16, float32 data types.\n+ Note: due to floating point precision issues, the floating point values\n+ returned from a floating point image may not exactly match the original\n+ values.\n+\n+ If a mask is provided, pixels for which the mask is `False` will be\n+ excluded from feature generation.\n \"\"\"\n- if np.dtype(image.dtype) != np.dtype(rasterio.ubyte):\n- raise ValueError(\"Image must be dtype uint8/ubyte\")\n+\n+ valid_dtypes = ('int16', 'int32', 'uint8', 'uint16', 'float32')\n+\n+ if np.dtype(image.dtype).name not in valid_dtypes:\n+ raise ValueError('image dtype must be one of: %s'\n+ % (', '.join(valid_dtypes)))\n \n if mask is not None and np.dtype(mask.dtype) != np.dtype(rasterio.bool_):\n raise ValueError(\"Mask must be dtype rasterio.bool_\")\n@@ -52,7 +61,7 @@\n \n Features smaller than the specified size have their pixel value\n replaced by that of the largest neighboring features.\n- \n+\n The image must be of unsigned 8-bit integer (rasterio.byte or\n numpy.uint8) data type.\n \"\"\"\n@@ -68,7 +77,7 @@\n \n \n def rasterize(\n- shapes, \n+ shapes,\n out_shape=None,\n fill=0,\n output=None,\n@@ -96,7 +105,7 @@\n :param fill: fill value for created image array\n :param output: alternatively, an existing image array\n \n- :param all_touched: if True, will rasterize all pixels touched, \n+ :param all_touched: if True, will rasterize all pixels touched,\n otherwise will use GDAL default method.\n :param default_value: value burned in for shapes if not provided as part\n of shapes.\n@@ -178,7 +187,7 @@\n \n if output is not None:\n if np.dtype(output.dtype).name not in valid_dtypes:\n- raise ValueError('Output image dtype must be one of: %s' \n+ raise ValueError('Output image dtype must be one of: %s'\n % (', '.join(valid_dtypes)))\n if not can_cast_dtype(shape_values, output.dtype):\n raise ValueError('shape values cannot be cast to dtype of output '\n", "issue": "shapes from data types other than uint8\nThe companion to #136.\n\n", "before_files": [{"content": "\"\"\"Functions for working with features in a raster dataset.\"\"\"\n\nimport json\nimport logging\nimport time\nimport warnings\n\nimport numpy as np\n\nimport rasterio\nfrom rasterio._features import _shapes, _sieve, _rasterize\nfrom rasterio.transform import IDENTITY, guard_transform\nfrom rasterio.dtypes import get_minimum_int_dtype\n\n\nlog = logging.getLogger('rasterio')\nclass NullHandler(logging.Handler):\n def emit(self, record):\n pass\nlog.addHandler(NullHandler())\n\n\ndef shapes(image, mask=None, connectivity=4, transform=IDENTITY):\n \"\"\"Yields a (shape, image_value) pair for each feature in the image.\n \n The shapes are GeoJSON-like dicts and the image values are ints.\n \n Features are found 
using a connected-component labeling algorithm.\n\n The image must be of unsigned 8-bit integer (rasterio.byte or\n numpy.uint8) data type. If a mask is provided, pixels for which the\n mask is `False` will be excluded from feature generation.\n \"\"\"\n if np.dtype(image.dtype) != np.dtype(rasterio.ubyte):\n raise ValueError(\"Image must be dtype uint8/ubyte\")\n\n if mask is not None and np.dtype(mask.dtype) != np.dtype(rasterio.bool_):\n raise ValueError(\"Mask must be dtype rasterio.bool_\")\n\n if connectivity not in (4, 8):\n raise ValueError(\"Connectivity Option must be 4 or 8\")\n\n transform = guard_transform(transform)\n\n with rasterio.drivers():\n for s, v in _shapes(image, mask, connectivity, transform.to_gdal()):\n yield s, v\n\n\ndef sieve(image, size, connectivity=4, output=None):\n \"\"\"Returns a copy of the image, but with smaller features removed.\n\n Features smaller than the specified size have their pixel value\n replaced by that of the largest neighboring features.\n \n The image must be of unsigned 8-bit integer (rasterio.byte or\n numpy.uint8) data type.\n \"\"\"\n if np.dtype(image.dtype) != np.dtype(rasterio.ubyte):\n raise ValueError(\"Image must be dtype uint8/ubyte\")\n\n if output is not None and (\n np.dtype(output.dtype) != np.dtype(rasterio.ubyte)):\n raise ValueError(\"Output must be dtype uint8/ubyte\")\n\n with rasterio.drivers():\n return _sieve(image, size, connectivity)\n\n\ndef rasterize(\n shapes, \n out_shape=None,\n fill=0,\n output=None,\n transform=IDENTITY,\n all_touched=False,\n default_value=1,\n dtype=None):\n \"\"\"Returns an image array with points, lines, or polygons burned in.\n\n A different value may be specified for each shape. The shapes may\n be georeferenced or may have image coordinates. An existing image\n array may be provided, or one may be created. 
By default, the center\n of image elements determines whether they are updated, but all\n touched elements may be optionally updated.\n\n Valid data types are: int16, int32, uint8, uint16, uint32, float32, float64\n\n :param shapes: an iterator over Fiona style geometry objects (with a default\n value of default_value) or an iterator over (geometry, value) pairs.\n\n :param transform: GDAL style geotransform to be applied to the\n image.\n\n :param out_shape: shape of created image array\n :param fill: fill value for created image array\n :param output: alternatively, an existing image array\n\n :param all_touched: if True, will rasterize all pixels touched, \n otherwise will use GDAL default method.\n :param default_value: value burned in for shapes if not provided as part\n of shapes.\n \"\"\"\n\n valid_dtypes = ('int16', 'int32', 'uint8', 'uint16', 'uint32', 'float32',\n 'float64')\n\n def get_valid_dtype(values):\n values_dtype = values.dtype\n if values_dtype.kind == 'i':\n values_dtype = np.dtype(get_minimum_int_dtype(values))\n if values_dtype.name in valid_dtypes:\n return values_dtype\n return None\n\n def can_cast_dtype(values, dtype):\n if values.dtype.name == np.dtype(dtype).name:\n return True\n elif values.dtype.kind == 'f':\n return np.allclose(values, values.astype(dtype))\n else:\n return np.array_equal(values, values.astype(dtype))\n\n if fill != 0:\n fill_array = np.array([fill])\n if get_valid_dtype(fill_array) is None:\n raise ValueError('fill must be one of these types: %s'\n % (', '.join(valid_dtypes)))\n elif dtype is not None and not can_cast_dtype(fill_array, dtype):\n raise ValueError('fill value cannot be cast to specified dtype')\n\n\n if default_value != 1:\n default_value_array = np.array([default_value])\n if get_valid_dtype(default_value_array) is None:\n raise ValueError('default_value must be one of these types: %s'\n % (', '.join(valid_dtypes)))\n elif dtype is not None and not can_cast_dtype(default_value_array,\n dtype):\n raise ValueError('default_value cannot be cast to specified dtype')\n\n valid_shapes = []\n shape_values = []\n for index, item in enumerate(shapes):\n try:\n if isinstance(item, (tuple, list)):\n geom, value = item\n else:\n geom = item\n value = default_value\n geom = getattr(geom, '__geo_interface__', None) or geom\n if (not isinstance(geom, dict) or\n 'type' not in geom or 'coordinates' not in geom):\n raise ValueError(\n 'Object %r at index %d is not a geometry object' %\n (geom, index))\n valid_shapes.append((geom, value))\n shape_values.append(value)\n except Exception:\n log.exception('Exception caught, skipping shape %d', index)\n\n if not valid_shapes:\n raise ValueError('No valid shapes found for rasterize. 
Shapes must be '\n 'valid geometry objects')\n\n shape_values = np.array(shape_values)\n values_dtype = get_valid_dtype(shape_values)\n if values_dtype is None:\n raise ValueError('shape values must be one of these dtypes: %s' %\n (', '.join(valid_dtypes)))\n\n if dtype is None:\n dtype = values_dtype\n elif np.dtype(dtype).name not in valid_dtypes:\n raise ValueError('dtype must be one of: %s' % (', '.join(valid_dtypes)))\n elif not can_cast_dtype(shape_values, dtype):\n raise ValueError('shape values could not be cast to specified dtype')\n\n if output is not None:\n if np.dtype(output.dtype).name not in valid_dtypes:\n raise ValueError('Output image dtype must be one of: %s' \n % (', '.join(valid_dtypes)))\n if not can_cast_dtype(shape_values, output.dtype):\n raise ValueError('shape values cannot be cast to dtype of output '\n 'image')\n\n elif out_shape is not None:\n output = np.empty(out_shape, dtype=dtype)\n output.fill(fill)\n else:\n raise ValueError('Either an output shape or image must be provided')\n \n transform = guard_transform(transform)\n\n with rasterio.drivers():\n _rasterize(valid_shapes, output, transform.to_gdal(), all_touched)\n \n return output\n\n", "path": "rasterio/features.py"}]}
| 2,684 | 746 |
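The golden diff above swaps the hard uint8 requirement for a whitelist of dtype names. A self-contained illustration of that validation pattern, using only NumPy, could look like this; the `check_shapes_dtype` helper name is invented here and the tuple simply mirrors the one in the patch.

```python
import numpy as np

VALID_DTYPES = ("int16", "int32", "uint8", "uint16", "float32")


def check_shapes_dtype(array):
    # np.dtype(...).name normalises aliases (e.g. rasterio.ubyte -> "uint8").
    name = np.dtype(array.dtype).name
    if name not in VALID_DTYPES:
        raise ValueError("image dtype must be one of: %s" % ", ".join(VALID_DTYPES))
    return name


print(check_shapes_dtype(np.zeros((2, 2), dtype="float32")))  # float32
try:
    check_shapes_dtype(np.zeros((2, 2), dtype="float64"))     # not whitelisted
except ValueError as exc:
    print(exc)
```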
gh_patches_debug_15518
|
rasdani/github-patches
|
git_diff
|
coala__coala-5935
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Typo in docstring
In `coala/coalib/settings/Setting.py`, line 174, the word `of` should be replaced with `off`.
</issue>
<code>
[start of coalib/settings/Setting.py]
1 import os
2 from collections import Iterable, OrderedDict
3
4 from coala_utils.decorators import (
5 enforce_signature,
6 generate_repr,
7 )
8 from coala_utils.string_processing.StringConverter import StringConverter
9 from coalib.bearlib.languages.Language import Language, UnknownLanguageError
10 from coalib.parsing.Globbing import glob_escape
11 from coalib.results.SourcePosition import SourcePosition
12
13
14 def path(obj, *args, **kwargs):
15 return obj.__path__(*args, **kwargs)
16
17
18 def path_list(obj, *args, **kwargs):
19 return obj.__path_list__(*args, **kwargs)
20
21
22 def url(obj, *args, **kwargs):
23 return obj.__url__(*args, **kwargs)
24
25
26 def glob(obj, *args, **kwargs):
27 """
28 Creates a path in which all special glob characters in all the
29 parent directories in the given setting are properly escaped.
30
31 :param obj: The ``Setting`` object from which the key is obtained.
32 :return: Returns a path in which special glob characters are escaped.
33 """
34 return obj.__glob__(*args, **kwargs)
35
36
37 def glob_list(obj, *args, **kwargs):
38 """
39 Creates a list of paths in which all special glob characters in all the
40 parent directories of all paths in the given setting are properly escaped.
41
42 :param obj: The ``Setting`` object from which the key is obtained.
43 :return: Returns a list of paths in which special glob characters are
44 escaped.
45 """
46 return obj.__glob_list__(*args, **kwargs)
47
48
49 def language(name):
50 """
51 Convert a string into ``Language`` object.
52
53 :param name: String containing language name.
54 :return: ``Language`` object.
55 :raises ValueError: If the ``name`` contain invalid language name.
56 """
57 try:
58 return Language[name]
59 except UnknownLanguageError as e:
60 raise ValueError(e)
61
62
63 def typed_list(conversion_func):
64 """
65 Creates a class that converts a setting into a list of elements each
66 converted with the given conversion function.
67
68 :param conversion_func: The conversion function that converts a string into
69 your desired list item object.
70 :return: An instance of the created conversion class.
71 """
72
73 class Converter:
74
75 def __call__(self, setting):
76 return [conversion_func(StringConverter(elem))
77 for elem in setting]
78
79 def __repr__(self):
80 return 'typed_list(%s)' % conversion_func.__name__
81
82 return Converter()
83
84
85 str_list = typed_list(str)
86
87
88 int_list = typed_list(int)
89
90
91 float_list = typed_list(float)
92
93
94 bool_list = typed_list(bool)
95
96
97 def typed_dict(key_type, value_type, default):
98 """
99 Creates a class that converts a setting into a dict with the given types.
100
101 :param key_type: The type conversion function for the keys.
102 :param value_type: The type conversion function for the values.
103 :param default: The default value to use if no one is given by the user.
104 :return: An instance of the created conversion class.
105 """
106
107 class Converter:
108
109 def __call__(self, setting):
110 return {key_type(StringConverter(key)):
111 value_type(StringConverter(value))
112 if value != '' else default
113 for key, value in dict(setting).items()}
114
115 def __repr__(self):
116 return 'typed_dict(%s, %s, default=%s)' % (
117 key_type.__name__, value_type.__name__, default)
118
119 return Converter()
120
121
122 def typed_ordered_dict(key_type, value_type, default):
123 """
124 Creates a class that converts a setting into an ordered dict with the
125 given types.
126
127 :param key_type: The type conversion function for the keys.
128 :param value_type: The type conversion function for the values.
129 :param default: The default value to use if no one is given by the user.
130 :return: An instance of the created conversion class.
131 """
132
133 class Converter:
134
135 def __call__(self, setting):
136 return OrderedDict((key_type(StringConverter(key)),
137 value_type(StringConverter(value))
138 if value != '' else default)
139 for key, value in OrderedDict(setting).items())
140
141 def __repr__(self):
142 return 'typed_ordered_dict(%s, %s, default=%s)' % (
143 key_type.__name__, value_type.__name__, default)
144
145 return Converter()
146
147
148 @generate_repr('key', 'value', 'origin', 'from_cli', 'to_append')
149 class Setting(StringConverter):
150 """
151 A Setting consists mainly of a key and a value. It mainly offers many
152 conversions into common data types.
153 """
154
155 @enforce_signature
156 def __init__(self,
157 key,
158 value,
159 origin: (str, SourcePosition) = '',
160 strip_whitespaces: bool = True,
161 list_delimiters: Iterable = (',', ';'),
162 from_cli: bool = False,
163 remove_empty_iter_elements: bool = True,
164 to_append: bool = False):
165 """
166 Initializes a new Setting,
167
168 :param key: The key of the Setting.
169 :param value: The value, if you apply conversions
170 to this object these will be applied
171 to this value.
172 :param origin: The originating file. This will be
173 used for path conversions and the
174 last part will be stripped of. If
175 you want to specify a directory as
176 origin be sure to end it with a
177 directory separator.
178 :param strip_whitespaces: Whether to strip whitespaces from
179 the value or not
180 :param list_delimiters: Delimiters for list conversion
181 :param from_cli: True if this setting was read by the
182 CliParser.
183 :param remove_empty_iter_elements: Whether to remove empty elements in
184 iterable values.
185 :param to_append: The boolean value if setting value
186 needs to be appended to a setting in
187 the defaults of a section.
188 """
189 self.to_append = to_append
190
191 StringConverter.__init__(
192 self,
193 value,
194 strip_whitespaces=strip_whitespaces,
195 list_delimiters=list_delimiters,
196 remove_empty_iter_elements=remove_empty_iter_elements)
197
198 self.from_cli = from_cli
199 self.key = key
200 self._origin = origin
201
202 def __path__(self, origin=None, glob_escape_origin=False):
203 """
204 Determines the path of this setting.
205
206 Note: You can also use this function on strings, in that case the
207 origin argument will be taken in every case.
208
209 :param origin: The origin file to take if no origin is
210 specified for the given setting. If you
211 want to provide a directory, make sure it
212 ends with a directory separator.
213 :param glob_escape_origin: When this is set to true, the origin of
214 this setting will be escaped with
215 ``glob_escape``.
216 :return: An absolute path.
217 :raises ValueError: If no origin is specified in the setting
218 nor the given origin parameter.
219 """
220 strrep = str(self).strip()
221 if os.path.isabs(strrep):
222 return strrep
223
224 if hasattr(self, 'origin') and self.origin != '':
225 origin = self.origin
226
227 if origin is None:
228 raise ValueError('Cannot determine path without origin.')
229
230 # We need to get full path before escaping since the full path
231 # may introduce unintended glob characters
232 origin = os.path.abspath(os.path.dirname(origin))
233
234 if glob_escape_origin:
235 origin = glob_escape(origin)
236
237 return os.path.normpath(os.path.join(origin, strrep))
238
239 def __glob__(self, origin=None):
240 """
241 Determines the path of this setting with proper escaping of its
242 parent directories.
243
244 :param origin: The origin file to take if no origin is specified
245 for the given setting. If you want to provide a
246 directory, make sure it ends with a directory
247 separator.
248 :return: An absolute path in which the parent directories
249 are escaped.
250 :raises ValueError: If no origin is specified in the setting nor the
251 given origin parameter.
252 """
253 return Setting.__path__(self, origin, glob_escape_origin=True)
254
255 def __path_list__(self):
256 """
257 Splits the value into a list and creates a path out of each item taking
258 the origin of the setting into account.
259
260 :return: A list of absolute paths.
261 """
262 return [Setting.__path__(elem, self.origin) for elem in self]
263
264 def __glob_list__(self):
265 """
266 Splits the value into a list and creates a path out of each item in
267 which the special glob characters in origin are escaped.
268
269 :return: A list of absolute paths in which the special characters in
270 the parent directories of the setting are escaped.
271 """
272 return [Setting.__glob__(elem, self.origin) for elem in self]
273
274 def __iter__(self, remove_backslashes=True):
275 if self.to_append:
276 raise ValueError('Iteration on this object is invalid because the '
277 'value is incomplete. Please access the value of '
278 'the setting in a section to iterate through it.')
279 return StringConverter.__iter__(self, remove_backslashes)
280
281 @property
282 def key(self):
283 return self._key
284
285 @key.setter
286 def key(self, key):
287 newkey = str(key)
288 if newkey == '':
289 raise ValueError('An empty key is not allowed for a setting.')
290
291 self._key = newkey
292
293 @StringConverter.value.getter
294 def value(self):
295 if self.to_append:
296 raise ValueError('This property is invalid because the value is '
297 'incomplete. Please access the value of the '
298 'setting in a section to get the complete value.')
299 return self._value
300
301 @property
302 def origin(self):
303 """
304 Returns the filename.
305 """
306 if isinstance(self._origin, SourcePosition):
307 return self._origin.filename
308 else:
309 return self._origin
310
311 @property
312 def line_number(self):
313 if isinstance(self._origin, SourcePosition):
314 return self._origin.line
315 else:
316 raise TypeError("Instantiated with str 'origin' "
317 'which does not have line numbers. '
318 'Use SourcePosition for line numbers.')
319
[end of coalib/settings/Setting.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/coalib/settings/Setting.py b/coalib/settings/Setting.py
--- a/coalib/settings/Setting.py
+++ b/coalib/settings/Setting.py
@@ -171,7 +171,7 @@
to this value.
:param origin: The originating file. This will be
used for path conversions and the
- last part will be stripped of. If
+ last part will be stripped off. If
you want to specify a directory as
origin be sure to end it with a
directory separator.
|
{"golden_diff": "diff --git a/coalib/settings/Setting.py b/coalib/settings/Setting.py\n--- a/coalib/settings/Setting.py\n+++ b/coalib/settings/Setting.py\n@@ -171,7 +171,7 @@\n to this value.\n :param origin: The originating file. This will be\n used for path conversions and the\n- last part will be stripped of. If\n+ last part will be stripped off. If\n you want to specify a directory as\n origin be sure to end it with a\n directory separator.\n", "issue": "Typo in docstring\nIn `coala/coalib/settings/Setting.py`,\r\nIn line 174, the word `of` should be replaced with `off`. \n", "before_files": [{"content": "import os\nfrom collections import Iterable, OrderedDict\n\nfrom coala_utils.decorators import (\n enforce_signature,\n generate_repr,\n)\nfrom coala_utils.string_processing.StringConverter import StringConverter\nfrom coalib.bearlib.languages.Language import Language, UnknownLanguageError\nfrom coalib.parsing.Globbing import glob_escape\nfrom coalib.results.SourcePosition import SourcePosition\n\n\ndef path(obj, *args, **kwargs):\n return obj.__path__(*args, **kwargs)\n\n\ndef path_list(obj, *args, **kwargs):\n return obj.__path_list__(*args, **kwargs)\n\n\ndef url(obj, *args, **kwargs):\n return obj.__url__(*args, **kwargs)\n\n\ndef glob(obj, *args, **kwargs):\n \"\"\"\n Creates a path in which all special glob characters in all the\n parent directories in the given setting are properly escaped.\n\n :param obj: The ``Setting`` object from which the key is obtained.\n :return: Returns a path in which special glob characters are escaped.\n \"\"\"\n return obj.__glob__(*args, **kwargs)\n\n\ndef glob_list(obj, *args, **kwargs):\n \"\"\"\n Creates a list of paths in which all special glob characters in all the\n parent directories of all paths in the given setting are properly escaped.\n\n :param obj: The ``Setting`` object from which the key is obtained.\n :return: Returns a list of paths in which special glob characters are\n escaped.\n \"\"\"\n return obj.__glob_list__(*args, **kwargs)\n\n\ndef language(name):\n \"\"\"\n Convert a string into ``Language`` object.\n\n :param name: String containing language name.\n :return: ``Language`` object.\n :raises ValueError: If the ``name`` contain invalid language name.\n \"\"\"\n try:\n return Language[name]\n except UnknownLanguageError as e:\n raise ValueError(e)\n\n\ndef typed_list(conversion_func):\n \"\"\"\n Creates a class that converts a setting into a list of elements each\n converted with the given conversion function.\n\n :param conversion_func: The conversion function that converts a string into\n your desired list item object.\n :return: An instance of the created conversion class.\n \"\"\"\n\n class Converter:\n\n def __call__(self, setting):\n return [conversion_func(StringConverter(elem))\n for elem in setting]\n\n def __repr__(self):\n return 'typed_list(%s)' % conversion_func.__name__\n\n return Converter()\n\n\nstr_list = typed_list(str)\n\n\nint_list = typed_list(int)\n\n\nfloat_list = typed_list(float)\n\n\nbool_list = typed_list(bool)\n\n\ndef typed_dict(key_type, value_type, default):\n \"\"\"\n Creates a class that converts a setting into a dict with the given types.\n\n :param key_type: The type conversion function for the keys.\n :param value_type: The type conversion function for the values.\n :param default: The default value to use if no one is given by the user.\n :return: An instance of the created conversion class.\n \"\"\"\n\n class Converter:\n\n def __call__(self, setting):\n return 
{key_type(StringConverter(key)):\n value_type(StringConverter(value))\n if value != '' else default\n for key, value in dict(setting).items()}\n\n def __repr__(self):\n return 'typed_dict(%s, %s, default=%s)' % (\n key_type.__name__, value_type.__name__, default)\n\n return Converter()\n\n\ndef typed_ordered_dict(key_type, value_type, default):\n \"\"\"\n Creates a class that converts a setting into an ordered dict with the\n given types.\n\n :param key_type: The type conversion function for the keys.\n :param value_type: The type conversion function for the values.\n :param default: The default value to use if no one is given by the user.\n :return: An instance of the created conversion class.\n \"\"\"\n\n class Converter:\n\n def __call__(self, setting):\n return OrderedDict((key_type(StringConverter(key)),\n value_type(StringConverter(value))\n if value != '' else default)\n for key, value in OrderedDict(setting).items())\n\n def __repr__(self):\n return 'typed_ordered_dict(%s, %s, default=%s)' % (\n key_type.__name__, value_type.__name__, default)\n\n return Converter()\n\n\n@generate_repr('key', 'value', 'origin', 'from_cli', 'to_append')\nclass Setting(StringConverter):\n \"\"\"\n A Setting consists mainly of a key and a value. It mainly offers many\n conversions into common data types.\n \"\"\"\n\n @enforce_signature\n def __init__(self,\n key,\n value,\n origin: (str, SourcePosition) = '',\n strip_whitespaces: bool = True,\n list_delimiters: Iterable = (',', ';'),\n from_cli: bool = False,\n remove_empty_iter_elements: bool = True,\n to_append: bool = False):\n \"\"\"\n Initializes a new Setting,\n\n :param key: The key of the Setting.\n :param value: The value, if you apply conversions\n to this object these will be applied\n to this value.\n :param origin: The originating file. This will be\n used for path conversions and the\n last part will be stripped of. If\n you want to specify a directory as\n origin be sure to end it with a\n directory separator.\n :param strip_whitespaces: Whether to strip whitespaces from\n the value or not\n :param list_delimiters: Delimiters for list conversion\n :param from_cli: True if this setting was read by the\n CliParser.\n :param remove_empty_iter_elements: Whether to remove empty elements in\n iterable values.\n :param to_append: The boolean value if setting value\n needs to be appended to a setting in\n the defaults of a section.\n \"\"\"\n self.to_append = to_append\n\n StringConverter.__init__(\n self,\n value,\n strip_whitespaces=strip_whitespaces,\n list_delimiters=list_delimiters,\n remove_empty_iter_elements=remove_empty_iter_elements)\n\n self.from_cli = from_cli\n self.key = key\n self._origin = origin\n\n def __path__(self, origin=None, glob_escape_origin=False):\n \"\"\"\n Determines the path of this setting.\n\n Note: You can also use this function on strings, in that case the\n origin argument will be taken in every case.\n\n :param origin: The origin file to take if no origin is\n specified for the given setting. 
If you\n want to provide a directory, make sure it\n ends with a directory separator.\n :param glob_escape_origin: When this is set to true, the origin of\n this setting will be escaped with\n ``glob_escape``.\n :return: An absolute path.\n :raises ValueError: If no origin is specified in the setting\n nor the given origin parameter.\n \"\"\"\n strrep = str(self).strip()\n if os.path.isabs(strrep):\n return strrep\n\n if hasattr(self, 'origin') and self.origin != '':\n origin = self.origin\n\n if origin is None:\n raise ValueError('Cannot determine path without origin.')\n\n # We need to get full path before escaping since the full path\n # may introduce unintended glob characters\n origin = os.path.abspath(os.path.dirname(origin))\n\n if glob_escape_origin:\n origin = glob_escape(origin)\n\n return os.path.normpath(os.path.join(origin, strrep))\n\n def __glob__(self, origin=None):\n \"\"\"\n Determines the path of this setting with proper escaping of its\n parent directories.\n\n :param origin: The origin file to take if no origin is specified\n for the given setting. If you want to provide a\n directory, make sure it ends with a directory\n separator.\n :return: An absolute path in which the parent directories\n are escaped.\n :raises ValueError: If no origin is specified in the setting nor the\n given origin parameter.\n \"\"\"\n return Setting.__path__(self, origin, glob_escape_origin=True)\n\n def __path_list__(self):\n \"\"\"\n Splits the value into a list and creates a path out of each item taking\n the origin of the setting into account.\n\n :return: A list of absolute paths.\n \"\"\"\n return [Setting.__path__(elem, self.origin) for elem in self]\n\n def __glob_list__(self):\n \"\"\"\n Splits the value into a list and creates a path out of each item in\n which the special glob characters in origin are escaped.\n\n :return: A list of absolute paths in which the special characters in\n the parent directories of the setting are escaped.\n \"\"\"\n return [Setting.__glob__(elem, self.origin) for elem in self]\n\n def __iter__(self, remove_backslashes=True):\n if self.to_append:\n raise ValueError('Iteration on this object is invalid because the '\n 'value is incomplete. Please access the value of '\n 'the setting in a section to iterate through it.')\n return StringConverter.__iter__(self, remove_backslashes)\n\n @property\n def key(self):\n return self._key\n\n @key.setter\n def key(self, key):\n newkey = str(key)\n if newkey == '':\n raise ValueError('An empty key is not allowed for a setting.')\n\n self._key = newkey\n\n @StringConverter.value.getter\n def value(self):\n if self.to_append:\n raise ValueError('This property is invalid because the value is '\n 'incomplete. Please access the value of the '\n 'setting in a section to get the complete value.')\n return self._value\n\n @property\n def origin(self):\n \"\"\"\n Returns the filename.\n \"\"\"\n if isinstance(self._origin, SourcePosition):\n return self._origin.filename\n else:\n return self._origin\n\n @property\n def line_number(self):\n if isinstance(self._origin, SourcePosition):\n return self._origin.line\n else:\n raise TypeError(\"Instantiated with str 'origin' \"\n 'which does not have line numbers. '\n 'Use SourcePosition for line numbers.')\n", "path": "coalib/settings/Setting.py"}]}
| 3,678 | 126 |
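Beyond the one-character typo fix, the Setting.py listing in the record above consists largely of converter factories such as `typed_list` and `typed_dict`. The sketch below mimics only the shape of that pattern without coala's `StringConverter`, so element conversion is plain `int()`/`float()` here; it is an illustration, not coala's actual behaviour.

```python
def typed_list(conversion_func):
    """Build a converter that applies conversion_func to every element."""

    class Converter:
        def __call__(self, setting):
            # `setting` is anything iterable whose items can be converted.
            return [conversion_func(elem) for elem in setting]

        def __repr__(self):
            return "typed_list(%s)" % conversion_func.__name__

    return Converter()


int_list = typed_list(int)
float_list = typed_list(float)

print(int_list(["1", "2", "3"]))    # [1, 2, 3]
print(float_list(("0.5", "2.25")))  # [0.5, 2.25]
print(repr(int_list))               # typed_list(int)
```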
gh_patches_debug_15663
|
rasdani/github-patches
|
git_diff
|
huggingface__peft-653
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Deepcopy not copying the LoraConfig
I'm trying to make a deepcopy (using copy.deepcopy) of a LoraModel on version '0.2.0', but the values in PeftConfig remain the default ones (e.g. r = 8) rather than those of the copied model. Is this normal? Am I supposed to do a `model_copy = get_peft_model(model_copy, peft_config)`?
It is possible to implement `__deepcopy__` and `__copy__` if necessary.
Here is a reproduction example :
``` Python
import copy
from transformers import AutoModelForCausalLM
from peft import get_peft_config, get_peft_model, LoraConfig, TaskType
model_name_or_path = "gpt2"
tokenizer_name_or_path = "gpt2"
peft_config = LoraConfig(task_type=TaskType.CAUSAL_LM, inference_mode=False, r=87, lora_alpha=32, lora_dropout=0.1)
model = AutoModelForCausalLM.from_pretrained(model_name_or_path)
model = get_peft_model(model, peft_config)
model_copy = copy.deepcopy(model)
assert(model.peft_config.r == model_copy.peft_config.r)
```
Moreover, I also get an AssertionError if I continue with:
``` Python
model_copy = get_peft_model(model_copy, peft_config)
assert(model == model_copy)
```
</issue>
<code>
[start of src/peft/utils/config.py]
1 # coding=utf-8
2 # Copyright 2023-present the HuggingFace Inc. team.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15 import enum
16 import inspect
17 import json
18 import os
19 from dataclasses import asdict, dataclass, field
20 from typing import Optional, Union
21
22 from huggingface_hub import hf_hub_download
23 from transformers.utils import PushToHubMixin
24
25 from .other import CONFIG_NAME
26
27
28 class PeftType(str, enum.Enum):
29 PROMPT_TUNING = "PROMPT_TUNING"
30 P_TUNING = "P_TUNING"
31 PREFIX_TUNING = "PREFIX_TUNING"
32 LORA = "LORA"
33 ADALORA = "ADALORA"
34 ADAPTION_PROMPT = "ADAPTION_PROMPT"
35
36
37 class TaskType(str, enum.Enum):
38 SEQ_CLS = "SEQ_CLS"
39 SEQ_2_SEQ_LM = "SEQ_2_SEQ_LM"
40 CAUSAL_LM = "CAUSAL_LM"
41 TOKEN_CLS = "TOKEN_CLS"
42 QUESTION_ANS = "QUESTION_ANS"
43
44
45 @dataclass
46 class PeftConfigMixin(PushToHubMixin):
47 r"""
48 This is the base configuration class for PEFT adapter models. It contains all the methods that are common to all
49 PEFT adapter models. This class inherits from [`~transformers.utils.PushToHubMixin`] which contains the methods to
50 push your model to the Hub. The method `save_pretrained` will save the configuration of your adapter model in a
51 directory. The method `from_pretrained` will load the configuration of your adapter model from a directory.
52
53 Args:
54 peft_type (Union[[`~peft.utils.config.PeftType`], `str`]): The type of Peft method to use.
55 """
56 peft_type: Optional[PeftType] = field(default=None, metadata={"help": "The type of PEFT model."})
57
58 @property
59 def __dict__(self):
60 return asdict(self)
61
62 def to_dict(self):
63 return self.__dict__
64
65 def save_pretrained(self, save_directory, **kwargs):
66 r"""
67 This method saves the configuration of your adapter model in a directory.
68
69 Args:
70 save_directory (`str`):
71 The directory where the configuration will be saved.
72 kwargs (additional keyword arguments, *optional*):
73 Additional keyword arguments passed along to the [`~transformers.utils.PushToHubMixin.push_to_hub`]
74 method.
75 """
76 if os.path.isfile(save_directory):
77 raise AssertionError(f"Provided path ({save_directory}) should be a directory, not a file")
78
79 os.makedirs(save_directory, exist_ok=True)
80
81 output_dict = self.__dict__
82 output_path = os.path.join(save_directory, CONFIG_NAME)
83
84 # save it
85 with open(output_path, "w") as writer:
86 writer.write(json.dumps(output_dict, indent=2, sort_keys=True))
87
88 @classmethod
89 def from_pretrained(cls, pretrained_model_name_or_path, subfolder=None, **kwargs):
90 r"""
91 This method loads the configuration of your adapter model from a directory.
92
93 Args:
94 pretrained_model_name_or_path (`str`):
95 The directory or the Hub repository id where the configuration is saved.
96 kwargs (additional keyword arguments, *optional*):
97 Additional keyword arguments passed along to the child class initialization.
98 """
99 path = (
100 os.path.join(pretrained_model_name_or_path, subfolder)
101 if subfolder is not None
102 else pretrained_model_name_or_path
103 )
104
105 hf_hub_download_kwargs, class_kwargs, other_kwargs = cls._split_kwargs(kwargs)
106
107 if os.path.isfile(os.path.join(path, CONFIG_NAME)):
108 config_file = os.path.join(path, CONFIG_NAME)
109 else:
110 try:
111 config_file = hf_hub_download(
112 pretrained_model_name_or_path, CONFIG_NAME, subfolder=subfolder, **hf_hub_download_kwargs
113 )
114 except Exception:
115 raise ValueError(f"Can't find '{CONFIG_NAME}' at '{pretrained_model_name_or_path}'")
116
117 loaded_attributes = cls.from_json_file(config_file)
118
119 config = cls(**class_kwargs)
120
121 for key, value in loaded_attributes.items():
122 if hasattr(config, key):
123 setattr(config, key, value)
124
125 return config
126
127 @classmethod
128 def from_json_file(cls, path_json_file, **kwargs):
129 r"""
130 Loads a configuration file from a json file.
131
132 Args:
133 path_json_file (`str`):
134 The path to the json file.
135 """
136 with open(path_json_file, "r") as file:
137 json_object = json.load(file)
138
139 return json_object
140
141 @classmethod
142 def _split_kwargs(cls, kwargs):
143 hf_hub_download_kwargs = {}
144 class_kwargs = {}
145 other_kwargs = {}
146
147 for key, value in kwargs.items():
148 if key in inspect.signature(hf_hub_download).parameters:
149 hf_hub_download_kwargs[key] = value
150 elif key in list(cls.__annotations__):
151 class_kwargs[key] = value
152 else:
153 other_kwargs[key] = value
154
155 return hf_hub_download_kwargs, class_kwargs, other_kwargs
156
157 @classmethod
158 def _get_peft_type(
159 cls,
160 model_id,
161 subfolder: Optional[str] = None,
162 revision: Optional[str] = None,
163 cache_dir: Optional[str] = None,
164 ):
165 path = os.path.join(model_id, subfolder) if subfolder is not None else model_id
166
167 if os.path.isfile(os.path.join(path, CONFIG_NAME)):
168 config_file = os.path.join(path, CONFIG_NAME)
169 else:
170 try:
171 config_file = hf_hub_download(
172 model_id, CONFIG_NAME, subfolder=subfolder, revision=revision, cache_dir=cache_dir
173 )
174 except Exception:
175 raise ValueError(f"Can't find '{CONFIG_NAME}' at '{model_id}'")
176
177 loaded_attributes = cls.from_json_file(config_file)
178 return loaded_attributes["peft_type"]
179
180
181 @dataclass
182 class PeftConfig(PeftConfigMixin):
183 """
184 This is the base configuration class to store the configuration of a [`PeftModel`].
185
186 Args:
187 peft_type (Union[[`~peft.utils.config.PeftType`], `str`]): The type of Peft method to use.
188 task_type (Union[[`~peft.utils.config.TaskType`], `str`]): The type of task to perform.
189 inference_mode (`bool`, defaults to `False`): Whether to use the Peft model in inference mode.
190 """
191
192 base_model_name_or_path: str = field(default=None, metadata={"help": "The name of the base model to use."})
193 revision: str = field(default=None, metadata={"help": "The specific model version to use."})
194 peft_type: Union[str, PeftType] = field(default=None, metadata={"help": "Peft type"})
195 task_type: Union[str, TaskType] = field(default=None, metadata={"help": "Task type"})
196 inference_mode: bool = field(default=False, metadata={"help": "Whether to use inference mode"})
197
198
199 @dataclass
200 class PromptLearningConfig(PeftConfig):
201 """
202 This is the base configuration class to store the configuration of [`PrefixTuning`], [`PromptEncoder`], or
203 [`PromptTuning`].
204
205 Args:
206 num_virtual_tokens (`int`): The number of virtual tokens to use.
207 token_dim (`int`): The hidden embedding dimension of the base transformer model.
208 num_transformer_submodules (`int`): The number of transformer submodules in the base transformer model.
209 num_attention_heads (`int`): The number of attention heads in the base transformer model.
210 num_layers (`int`): The number of layers in the base transformer model.
211 """
212
213 num_virtual_tokens: int = field(default=None, metadata={"help": "Number of virtual tokens"})
214 token_dim: int = field(
215 default=None, metadata={"help": "The hidden embedding dimension of the base transformer model"}
216 )
217 num_transformer_submodules: Optional[int] = field(
218 default=None, metadata={"help": "Number of transformer submodules"}
219 )
220 num_attention_heads: Optional[int] = field(default=None, metadata={"help": "Number of attention heads"})
221 num_layers: Optional[int] = field(default=None, metadata={"help": "Number of transformer layers"})
222
[end of src/peft/utils/config.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/peft/utils/config.py b/src/peft/utils/config.py
--- a/src/peft/utils/config.py
+++ b/src/peft/utils/config.py
@@ -55,12 +55,8 @@
"""
peft_type: Optional[PeftType] = field(default=None, metadata={"help": "The type of PEFT model."})
- @property
- def __dict__(self):
- return asdict(self)
-
def to_dict(self):
- return self.__dict__
+ return asdict(self)
def save_pretrained(self, save_directory, **kwargs):
r"""
@@ -78,7 +74,7 @@
os.makedirs(save_directory, exist_ok=True)
- output_dict = self.__dict__
+ output_dict = asdict(self)
output_path = os.path.join(save_directory, CONFIG_NAME)
# save it
|
{"golden_diff": "diff --git a/src/peft/utils/config.py b/src/peft/utils/config.py\n--- a/src/peft/utils/config.py\n+++ b/src/peft/utils/config.py\n@@ -55,12 +55,8 @@\n \"\"\"\n peft_type: Optional[PeftType] = field(default=None, metadata={\"help\": \"The type of PEFT model.\"})\n \n- @property\n- def __dict__(self):\n- return asdict(self)\n-\n def to_dict(self):\n- return self.__dict__\n+ return asdict(self)\n \n def save_pretrained(self, save_directory, **kwargs):\n r\"\"\"\n@@ -78,7 +74,7 @@\n \n os.makedirs(save_directory, exist_ok=True)\n \n- output_dict = self.__dict__\n+ output_dict = asdict(self)\n output_path = os.path.join(save_directory, CONFIG_NAME)\n \n # save it\n", "issue": "Deepcopy not copying the LoraConfig\nI'm trying to make a deepcopy (using copy.deepcopy) of a LoraModel on the version '0.2.0'. But the values in PeftConfig remain the default ones (e.g. r = 8), not the ones of the copied model. Is it normal ? Am I supposed to do a `model_copy = get_peft_model(model_copy, peft_config)` ?\r\nIt is possible to implement `__deepcopy__` and `__copy__` if necessary.\r\n\r\nHere is a reproduction example :\r\n``` Python\r\nimport copy\r\nfrom transformers import AutoModelForCausalLM\r\nfrom peft import get_peft_config, get_peft_model, LoraConfig, TaskType\r\nmodel_name_or_path = \"gpt2\"\r\ntokenizer_name_or_path = \"gpt2\"\r\npeft_config = LoraConfig(task_type=TaskType.CAUSAL_LM, inference_mode=False, r=87, lora_alpha=32, lora_dropout=0.1)\r\nmodel = AutoModelForCausalLM.from_pretrained(model_name_or_path)\r\nmodel = get_peft_model(model, peft_config)\r\n\r\nmodel_copy = copy.deepcopy(model)\r\nassert(model.peft_config.r == model_copy.peft_config.r)\r\n```\r\n\r\nMoreover, I also get an AssertionError if I continue with :\r\n``` Python\r\nmodel_copy = get_peft_model(model_copy, peft_config)\r\nassert(model == model_copy)\r\n```\n", "before_files": [{"content": "# coding=utf-8\n# Copyright 2023-present the HuggingFace Inc. team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nimport enum\nimport inspect\nimport json\nimport os\nfrom dataclasses import asdict, dataclass, field\nfrom typing import Optional, Union\n\nfrom huggingface_hub import hf_hub_download\nfrom transformers.utils import PushToHubMixin\n\nfrom .other import CONFIG_NAME\n\n\nclass PeftType(str, enum.Enum):\n PROMPT_TUNING = \"PROMPT_TUNING\"\n P_TUNING = \"P_TUNING\"\n PREFIX_TUNING = \"PREFIX_TUNING\"\n LORA = \"LORA\"\n ADALORA = \"ADALORA\"\n ADAPTION_PROMPT = \"ADAPTION_PROMPT\"\n\n\nclass TaskType(str, enum.Enum):\n SEQ_CLS = \"SEQ_CLS\"\n SEQ_2_SEQ_LM = \"SEQ_2_SEQ_LM\"\n CAUSAL_LM = \"CAUSAL_LM\"\n TOKEN_CLS = \"TOKEN_CLS\"\n QUESTION_ANS = \"QUESTION_ANS\"\n\n\n@dataclass\nclass PeftConfigMixin(PushToHubMixin):\n r\"\"\"\n This is the base configuration class for PEFT adapter models. It contains all the methods that are common to all\n PEFT adapter models. This class inherits from [`~transformers.utils.PushToHubMixin`] which contains the methods to\n push your model to the Hub. 
The method `save_pretrained` will save the configuration of your adapter model in a\n directory. The method `from_pretrained` will load the configuration of your adapter model from a directory.\n\n Args:\n peft_type (Union[[`~peft.utils.config.PeftType`], `str`]): The type of Peft method to use.\n \"\"\"\n peft_type: Optional[PeftType] = field(default=None, metadata={\"help\": \"The type of PEFT model.\"})\n\n @property\n def __dict__(self):\n return asdict(self)\n\n def to_dict(self):\n return self.__dict__\n\n def save_pretrained(self, save_directory, **kwargs):\n r\"\"\"\n This method saves the configuration of your adapter model in a directory.\n\n Args:\n save_directory (`str`):\n The directory where the configuration will be saved.\n kwargs (additional keyword arguments, *optional*):\n Additional keyword arguments passed along to the [`~transformers.utils.PushToHubMixin.push_to_hub`]\n method.\n \"\"\"\n if os.path.isfile(save_directory):\n raise AssertionError(f\"Provided path ({save_directory}) should be a directory, not a file\")\n\n os.makedirs(save_directory, exist_ok=True)\n\n output_dict = self.__dict__\n output_path = os.path.join(save_directory, CONFIG_NAME)\n\n # save it\n with open(output_path, \"w\") as writer:\n writer.write(json.dumps(output_dict, indent=2, sort_keys=True))\n\n @classmethod\n def from_pretrained(cls, pretrained_model_name_or_path, subfolder=None, **kwargs):\n r\"\"\"\n This method loads the configuration of your adapter model from a directory.\n\n Args:\n pretrained_model_name_or_path (`str`):\n The directory or the Hub repository id where the configuration is saved.\n kwargs (additional keyword arguments, *optional*):\n Additional keyword arguments passed along to the child class initialization.\n \"\"\"\n path = (\n os.path.join(pretrained_model_name_or_path, subfolder)\n if subfolder is not None\n else pretrained_model_name_or_path\n )\n\n hf_hub_download_kwargs, class_kwargs, other_kwargs = cls._split_kwargs(kwargs)\n\n if os.path.isfile(os.path.join(path, CONFIG_NAME)):\n config_file = os.path.join(path, CONFIG_NAME)\n else:\n try:\n config_file = hf_hub_download(\n pretrained_model_name_or_path, CONFIG_NAME, subfolder=subfolder, **hf_hub_download_kwargs\n )\n except Exception:\n raise ValueError(f\"Can't find '{CONFIG_NAME}' at '{pretrained_model_name_or_path}'\")\n\n loaded_attributes = cls.from_json_file(config_file)\n\n config = cls(**class_kwargs)\n\n for key, value in loaded_attributes.items():\n if hasattr(config, key):\n setattr(config, key, value)\n\n return config\n\n @classmethod\n def from_json_file(cls, path_json_file, **kwargs):\n r\"\"\"\n Loads a configuration file from a json file.\n\n Args:\n path_json_file (`str`):\n The path to the json file.\n \"\"\"\n with open(path_json_file, \"r\") as file:\n json_object = json.load(file)\n\n return json_object\n\n @classmethod\n def _split_kwargs(cls, kwargs):\n hf_hub_download_kwargs = {}\n class_kwargs = {}\n other_kwargs = {}\n\n for key, value in kwargs.items():\n if key in inspect.signature(hf_hub_download).parameters:\n hf_hub_download_kwargs[key] = value\n elif key in list(cls.__annotations__):\n class_kwargs[key] = value\n else:\n other_kwargs[key] = value\n\n return hf_hub_download_kwargs, class_kwargs, other_kwargs\n\n @classmethod\n def _get_peft_type(\n cls,\n model_id,\n subfolder: Optional[str] = None,\n revision: Optional[str] = None,\n cache_dir: Optional[str] = None,\n ):\n path = os.path.join(model_id, subfolder) if subfolder is not None else model_id\n\n if 
os.path.isfile(os.path.join(path, CONFIG_NAME)):\n config_file = os.path.join(path, CONFIG_NAME)\n else:\n try:\n config_file = hf_hub_download(\n model_id, CONFIG_NAME, subfolder=subfolder, revision=revision, cache_dir=cache_dir\n )\n except Exception:\n raise ValueError(f\"Can't find '{CONFIG_NAME}' at '{model_id}'\")\n\n loaded_attributes = cls.from_json_file(config_file)\n return loaded_attributes[\"peft_type\"]\n\n\n@dataclass\nclass PeftConfig(PeftConfigMixin):\n \"\"\"\n This is the base configuration class to store the configuration of a [`PeftModel`].\n\n Args:\n peft_type (Union[[`~peft.utils.config.PeftType`], `str`]): The type of Peft method to use.\n task_type (Union[[`~peft.utils.config.TaskType`], `str`]): The type of task to perform.\n inference_mode (`bool`, defaults to `False`): Whether to use the Peft model in inference mode.\n \"\"\"\n\n base_model_name_or_path: str = field(default=None, metadata={\"help\": \"The name of the base model to use.\"})\n revision: str = field(default=None, metadata={\"help\": \"The specific model version to use.\"})\n peft_type: Union[str, PeftType] = field(default=None, metadata={\"help\": \"Peft type\"})\n task_type: Union[str, TaskType] = field(default=None, metadata={\"help\": \"Task type\"})\n inference_mode: bool = field(default=False, metadata={\"help\": \"Whether to use inference mode\"})\n\n\n@dataclass\nclass PromptLearningConfig(PeftConfig):\n \"\"\"\n This is the base configuration class to store the configuration of [`PrefixTuning`], [`PromptEncoder`], or\n [`PromptTuning`].\n\n Args:\n num_virtual_tokens (`int`): The number of virtual tokens to use.\n token_dim (`int`): The hidden embedding dimension of the base transformer model.\n num_transformer_submodules (`int`): The number of transformer submodules in the base transformer model.\n num_attention_heads (`int`): The number of attention heads in the base transformer model.\n num_layers (`int`): The number of layers in the base transformer model.\n \"\"\"\n\n num_virtual_tokens: int = field(default=None, metadata={\"help\": \"Number of virtual tokens\"})\n token_dim: int = field(\n default=None, metadata={\"help\": \"The hidden embedding dimension of the base transformer model\"}\n )\n num_transformer_submodules: Optional[int] = field(\n default=None, metadata={\"help\": \"Number of transformer submodules\"}\n )\n num_attention_heads: Optional[int] = field(default=None, metadata={\"help\": \"Number of attention heads\"})\n num_layers: Optional[int] = field(default=None, metadata={\"help\": \"Number of transformer layers\"})\n", "path": "src/peft/utils/config.py"}]}
| 3,322 | 204 |
gh_patches_debug_8931
|
rasdani/github-patches
|
git_diff
|
geopandas__geopandas-1306
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
DOC: missing changelog in documentation
Our changelog is available only as a .md file in the root folder. It should be part of the online documentation as well, as @StevenLi-DS correctly pointed out in https://github.com/geopandas/geopandas/issues/1076#issuecomment-590126250.
</issue>
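For context, one common way to let Sphinx render a Markdown changelog next to the existing .rst pages is sketched below. This is only an illustrative fragment of `doc/source/conf.py`, not the repository's actual patch: it assumes the `recommonmark` package is available in the docs build environment, and a docs page that includes the root `CHANGELOG.md` would still need to be added to a toctree separately.

```python
# Illustrative conf.py fragment (a sketch, assuming ``recommonmark`` is
# installed in the docs environment); not the project's actual change.
extensions = [
    'IPython.sphinxext.ipython_console_highlighting',
    'IPython.sphinxext.ipython_directive',
    'sphinx_gallery.gen_gallery',
    'sphinx.ext.autosummary',
    'sphinx.ext.intersphinx',
    'sphinx.ext.autodoc',
    'recommonmark',   # lets Sphinx parse Markdown sources such as the changelog
    'numpydoc',
]

# Accept both reStructuredText and Markdown source files.
source_suffix = ['.rst', '.md']
```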
<code>
[start of doc/source/conf.py]
1 # -*- coding: utf-8 -*-
2 #
3 # GeoPandas documentation build configuration file, created by
4 # sphinx-quickstart on Tue Oct 15 08:08:14 2013.
5 #
6 # This file is execfile()d with the current directory set to its containing dir.
7 #
8 # Note that not all possible configuration values are present in this
9 # autogenerated file.
10 #
11 # All configuration values have a default; values that are commented out
12 # serve to show the default.
13
14 import sys, os
15 import warnings
16
17 # If extensions (or modules to document with autodoc) are in another directory,
18 # add these directories to sys.path here. If the directory is relative to the
19 # documentation root, use os.path.abspath to make it absolute, like shown here.
20 #sys.path.insert(0, os.path.abspath('.'))
21
22 # -- General configuration -----------------------------------------------------
23
24 # If your documentation needs a minimal Sphinx version, state it here.
25 #needs_sphinx = '1.0'
26
27 # Add any Sphinx extension module names here, as strings. They can be extensions
28 # coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
29 extensions = ['IPython.sphinxext.ipython_console_highlighting',
30 'IPython.sphinxext.ipython_directive',
31 'sphinx_gallery.gen_gallery',
32 'sphinx.ext.autosummary',
33 'sphinx.ext.intersphinx',
34 'sphinx.ext.autodoc',
35 'numpydoc',
36 ]
37
38 # continue doc build and only print warnings/errors in examples
39 ipython_warning_is_error = False
40 ipython_exec_lines = [
41 # ensure that dataframes are not truncated in the IPython code blocks
42 'import pandas as _pd',
43 '_pd.set_option("display.max_columns", 20)',
44 '_pd.set_option("display.width", 100)'
45 ]
46
47 # Fix issue with warnings from numpydoc (see discussion in PR #534)
48 numpydoc_show_class_members = False
49
50 def setup(app):
51 app.add_stylesheet('custom.css') # may also be an URL
52
53 # Add any paths that contain templates here, relative to this directory.
54
55 templates_path = ['_templates']
56
57 autosummary_generate = True
58
59 # Sphinx gallery configuration
60 sphinx_gallery_conf = {
61 'examples_dirs': ['../../examples'],
62 'filename_pattern': '^((?!sgskip).)*$',
63 'gallery_dirs': ['gallery'],
64 'doc_module': ('geopandas',),
65 'reference_url': {'matplotlib': 'http://matplotlib.org',
66 'numpy': 'http://docs.scipy.org/doc/numpy',
67 'scipy': 'http://docs.scipy.org/doc/scipy/reference',
68 'pyproj': 'http://pyproj4.github.io/pyproj/stable/',
69 'geopandas': None},
70 'backreferences_dir': 'reference'
71 }
72
73 # suppress matplotlib warning in examples
74 warnings.filterwarnings(
75 "ignore",
76 category=UserWarning,
77 message="Matplotlib is currently using agg, which is a"
78 " non-GUI backend, so cannot show the figure.",
79 )
80
81 # The suffix of source filenames.
82 source_suffix = '.rst'
83
84 # The encoding of source files.
85 #source_encoding = 'utf-8-sig'
86
87 # The master toctree document.
88 master_doc = 'index'
89
90 # General information about the project.
91 project = u'GeoPandas'
92 copyright = u'2013–2019, GeoPandas developers'
93
94 # The version info for the project you're documenting, acts as replacement for
95 # |version| and |release|, also used in various other places throughout the
96 # built documents.
97 import geopandas
98 version = release = geopandas.__version__
99
100 # The language for content autogenerated by Sphinx. Refer to documentation
101 # for a list of supported languages.
102 #language = None
103
104 # There are two options for replacing |today|: either, you set today to some
105 # non-false value, then it is used:
106 #today = ''
107 # Else, today_fmt is used as the format for a strftime call.
108 #today_fmt = '%B %d, %Y'
109
110 # List of patterns, relative to source directory, that match files and
111 # directories to ignore when looking for source files.
112 exclude_patterns = []
113
114 # The reST default role (used for this markup: `text`) to use for all documents.
115 #default_role = None
116
117 # If true, '()' will be appended to :func: etc. cross-reference text.
118 #add_function_parentheses = True
119
120 # If true, the current module name will be prepended to all description
121 # unit titles (such as .. function::).
122 #add_module_names = True
123
124 # If true, sectionauthor and moduleauthor directives will be shown in the
125 # output. They are ignored by default.
126 #show_authors = False
127
128 # The name of the Pygments (syntax highlighting) style to use.
129 pygments_style = 'sphinx'
130
131 # A list of ignored prefixes for module index sorting.
132 #modindex_common_prefix = []
133
134
135 # -- Options for HTML output ---------------------------------------------------
136
137 # The theme to use for HTML and HTML Help pages. See the documentation for
138 # a list of builtin themes.
139 import sphinx_rtd_theme
140 html_theme = "sphinx_rtd_theme"
141 html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
142
143 # Theme options are theme-specific and customize the look and feel of a theme
144 # further. For a list of options available for each theme, see the
145 # documentation.
146 #html_theme_options = {}
147
148 # Add any paths that contain custom themes here, relative to this directory.
149 #html_theme_path = []
150
151 # The name for this set of Sphinx documents. If None, it defaults to
152 # "<project> v<release> documentation".
153 #html_title = None
154
155 # A shorter title for the navigation bar. Default is the same as html_title.
156 #html_short_title = None
157
158 # The name of an image file (relative to this directory) to place at the top
159 # of the sidebar.
160 #html_logo = None
161
162 # The name of an image file (within the static path) to use as favicon of the
163 # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
164 # pixels large.
165 #html_favicon = None
166
167 # Add any paths that contain custom static files (such as style sheets) here,
168 # relative to this directory. They are copied after the builtin static files,
169 # so a file named "default.css" will overwrite the builtin "default.css".
170 html_static_path = ['_static']
171
172 # If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
173 # using the given strftime format.
174 #html_last_updated_fmt = '%b %d, %Y'
175
176 # If true, SmartyPants will be used to convert quotes and dashes to
177 # typographically correct entities.
178 #html_use_smartypants = True
179
180 # Custom sidebar templates, maps document names to template names.
181 #html_sidebars = {}
182
183 # Additional templates that should be rendered to pages, maps page names to
184 # template names.
185 #html_additional_pages = {}
186
187 # If false, no module index is generated.
188 #html_domain_indices = True
189
190 # If false, no index is generated.
191 #html_use_index = True
192
193 # If true, the index is split into individual pages for each letter.
194 #html_split_index = False
195
196 # If true, links to the reST sources are added to the pages.
197 #html_show_sourcelink = True
198
199 # If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
200 #html_show_sphinx = True
201
202 # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
203 #html_show_copyright = True
204
205 # If true, an OpenSearch description file will be output, and all pages will
206 # contain a <link> tag referring to it. The value of this option must be the
207 # base URL from which the finished HTML is served.
208 #html_use_opensearch = ''
209
210 # This is the file name suffix for HTML files (e.g. ".xhtml").
211 #html_file_suffix = None
212
213 # Output file base name for HTML help builder.
214 htmlhelp_basename = 'GeoPandasdoc'
215
216
217 # -- Options for LaTeX output --------------------------------------------------
218
219 latex_elements = {
220 # The paper size ('letterpaper' or 'a4paper').
221 #'papersize': 'letterpaper',
222
223 # The font size ('10pt', '11pt' or '12pt').
224 #'pointsize': '10pt',
225
226 # Additional stuff for the LaTeX preamble.
227 #'preamble': '',
228 }
229
230 # Grouping the document tree into LaTeX files. List of tuples
231 # (source start file, target name, title, author, documentclass [howto/manual]).
232 latex_documents = [
233 ('index', 'GeoPandas.tex', u'GeoPandas Documentation',
234 u'Kelsey Jordahl', 'manual'),
235 ]
236
237 # The name of an image file (relative to this directory) to place at the top of
238 # the title page.
239 #latex_logo = None
240
241 # For "manual" documents, if this is true, then toplevel headings are parts,
242 # not chapters.
243 #latex_use_parts = False
244
245 # If true, show page references after internal links.
246 #latex_show_pagerefs = False
247
248 # If true, show URL addresses after external links.
249 #latex_show_urls = False
250
251 # Documents to append as an appendix to all manuals.
252 #latex_appendices = []
253
254 # If false, no module index is generated.
255 #latex_domain_indices = True
256
257
258 # -- Options for manual page output --------------------------------------------
259
260 # One entry per manual page. List of tuples
261 # (source start file, name, description, authors, manual section).
262 man_pages = [
263 ('index', 'geopandas', u'GeoPandas Documentation',
264 [u'Kelsey Jordahl'], 1)
265 ]
266
267 # If true, show URL addresses after external links.
268 #man_show_urls = False
269
270
271 # -- Options for Texinfo output ------------------------------------------------
272
273 # Grouping the document tree into Texinfo files. List of tuples
274 # (source start file, target name, title, author,
275 # dir menu entry, description, category)
276 texinfo_documents = [
277 ('index', 'GeoPandas', u'GeoPandas Documentation',
278 u'Kelsey Jordahl', 'GeoPandas', 'One line description of project.',
279 'Miscellaneous'),
280 ]
281
282 # Documents to append as an appendix to all manuals.
283 #texinfo_appendices = []
284
285 # If false, no module index is generated.
286 #texinfo_domain_indices = True
287
288 # How to display URL addresses: 'footnote', 'no', or 'inline'.
289 #texinfo_show_urls = 'footnote'
290
[end of doc/source/conf.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/doc/source/conf.py b/doc/source/conf.py
--- a/doc/source/conf.py
+++ b/doc/source/conf.py
@@ -32,6 +32,7 @@
'sphinx.ext.autosummary',
'sphinx.ext.intersphinx',
'sphinx.ext.autodoc',
+ 'recommonmark',
'numpydoc',
]
@@ -79,7 +80,7 @@
)
# The suffix of source filenames.
-source_suffix = '.rst'
+source_suffix = ['.rst', '.md']
# The encoding of source files.
#source_encoding = 'utf-8-sig'
|
{"golden_diff": "diff --git a/doc/source/conf.py b/doc/source/conf.py\n--- a/doc/source/conf.py\n+++ b/doc/source/conf.py\n@@ -32,6 +32,7 @@\n 'sphinx.ext.autosummary',\n 'sphinx.ext.intersphinx',\n 'sphinx.ext.autodoc',\n+ 'recommonmark',\n 'numpydoc',\n ]\n \n@@ -79,7 +80,7 @@\n )\n \n # The suffix of source filenames.\n-source_suffix = '.rst'\n+source_suffix = ['.rst', '.md']\n \n # The encoding of source files.\n #source_encoding = 'utf-8-sig'\n", "issue": "DOC: missing changelog in documentation\nOur changelog is available only as a .md file in the root folder. It should be part of the documentation online as well as @StevenLi-DS correctly pointed out in https://github.com/geopandas/geopandas/issues/1076#issuecomment-590126250.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# GeoPandas documentation build configuration file, created by\n# sphinx-quickstart on Tue Oct 15 08:08:14 2013.\n#\n# This file is execfile()d with the current directory set to its containing dir.\n#\n# Note that not all possible configuration values are present in this\n# autogenerated file.\n#\n# All configuration values have a default; values that are commented out\n# serve to show the default.\n\nimport sys, os\nimport warnings\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#sys.path.insert(0, os.path.abspath('.'))\n\n# -- General configuration -----------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. They can be extensions\n# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.\nextensions = ['IPython.sphinxext.ipython_console_highlighting',\n 'IPython.sphinxext.ipython_directive',\n 'sphinx_gallery.gen_gallery',\n 'sphinx.ext.autosummary',\n 'sphinx.ext.intersphinx',\n 'sphinx.ext.autodoc',\n 'numpydoc',\n]\n\n# continue doc build and only print warnings/errors in examples\nipython_warning_is_error = False\nipython_exec_lines = [\n # ensure that dataframes are not truncated in the IPython code blocks\n 'import pandas as _pd',\n '_pd.set_option(\"display.max_columns\", 20)',\n '_pd.set_option(\"display.width\", 100)'\n]\n\n# Fix issue with warnings from numpydoc (see discussion in PR #534)\nnumpydoc_show_class_members = False\n\ndef setup(app):\n app.add_stylesheet('custom.css') # may also be an URL\n\n# Add any paths that contain templates here, relative to this directory.\n\ntemplates_path = ['_templates']\n\nautosummary_generate = True\n\n# Sphinx gallery configuration\nsphinx_gallery_conf = {\n 'examples_dirs': ['../../examples'],\n 'filename_pattern': '^((?!sgskip).)*$',\n 'gallery_dirs': ['gallery'],\n 'doc_module': ('geopandas',),\n 'reference_url': {'matplotlib': 'http://matplotlib.org',\n 'numpy': 'http://docs.scipy.org/doc/numpy',\n 'scipy': 'http://docs.scipy.org/doc/scipy/reference',\n 'pyproj': 'http://pyproj4.github.io/pyproj/stable/',\n 'geopandas': None},\n 'backreferences_dir': 'reference'\n}\n\n# suppress matplotlib warning in examples\nwarnings.filterwarnings(\n \"ignore\",\n category=UserWarning,\n message=\"Matplotlib is currently using agg, which is a\"\n \" non-GUI backend, so cannot show the figure.\",\n)\n\n# The suffix of source filenames.\nsource_suffix = '.rst'\n\n# The encoding of source files.\n#source_encoding 
= 'utf-8-sig'\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# General information about the project.\nproject = u'GeoPandas'\ncopyright = u'2013\u20132019, GeoPandas developers'\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\nimport geopandas\nversion = release = geopandas.__version__\n\n# The language for content autogenerated by Sphinx. Refer to documentation\n# for a list of supported languages.\n#language = None\n\n# There are two options for replacing |today|: either, you set today to some\n# non-false value, then it is used:\n#today = ''\n# Else, today_fmt is used as the format for a strftime call.\n#today_fmt = '%B %d, %Y'\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\nexclude_patterns = []\n\n# The reST default role (used for this markup: `text`) to use for all documents.\n#default_role = None\n\n# If true, '()' will be appended to :func: etc. cross-reference text.\n#add_function_parentheses = True\n\n# If true, the current module name will be prepended to all description\n# unit titles (such as .. function::).\n#add_module_names = True\n\n# If true, sectionauthor and moduleauthor directives will be shown in the\n# output. They are ignored by default.\n#show_authors = False\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\n\n# A list of ignored prefixes for module index sorting.\n#modindex_common_prefix = []\n\n\n# -- Options for HTML output ---------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\nimport sphinx_rtd_theme\nhtml_theme = \"sphinx_rtd_theme\"\nhtml_theme_path = [sphinx_rtd_theme.get_html_theme_path()]\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n#html_theme_options = {}\n\n# Add any paths that contain custom themes here, relative to this directory.\n#html_theme_path = []\n\n# The name for this set of Sphinx documents. If None, it defaults to\n# \"<project> v<release> documentation\".\n#html_title = None\n\n# A shorter title for the navigation bar. Default is the same as html_title.\n#html_short_title = None\n\n# The name of an image file (relative to this directory) to place at the top\n# of the sidebar.\n#html_logo = None\n\n# The name of an image file (within the static path) to use as favicon of the\n# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32\n# pixels large.\n#html_favicon = None\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. 
They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['_static']\n\n# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,\n# using the given strftime format.\n#html_last_updated_fmt = '%b %d, %Y'\n\n# If true, SmartyPants will be used to convert quotes and dashes to\n# typographically correct entities.\n#html_use_smartypants = True\n\n# Custom sidebar templates, maps document names to template names.\n#html_sidebars = {}\n\n# Additional templates that should be rendered to pages, maps page names to\n# template names.\n#html_additional_pages = {}\n\n# If false, no module index is generated.\n#html_domain_indices = True\n\n# If false, no index is generated.\n#html_use_index = True\n\n# If true, the index is split into individual pages for each letter.\n#html_split_index = False\n\n# If true, links to the reST sources are added to the pages.\n#html_show_sourcelink = True\n\n# If true, \"Created using Sphinx\" is shown in the HTML footer. Default is True.\n#html_show_sphinx = True\n\n# If true, \"(C) Copyright ...\" is shown in the HTML footer. Default is True.\n#html_show_copyright = True\n\n# If true, an OpenSearch description file will be output, and all pages will\n# contain a <link> tag referring to it. The value of this option must be the\n# base URL from which the finished HTML is served.\n#html_use_opensearch = ''\n\n# This is the file name suffix for HTML files (e.g. \".xhtml\").\n#html_file_suffix = None\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'GeoPandasdoc'\n\n\n# -- Options for LaTeX output --------------------------------------------------\n\nlatex_elements = {\n# The paper size ('letterpaper' or 'a4paper').\n#'papersize': 'letterpaper',\n\n# The font size ('10pt', '11pt' or '12pt').\n#'pointsize': '10pt',\n\n# Additional stuff for the LaTeX preamble.\n#'preamble': '',\n}\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title, author, documentclass [howto/manual]).\nlatex_documents = [\n ('index', 'GeoPandas.tex', u'GeoPandas Documentation',\n u'Kelsey Jordahl', 'manual'),\n]\n\n# The name of an image file (relative to this directory) to place at the top of\n# the title page.\n#latex_logo = None\n\n# For \"manual\" documents, if this is true, then toplevel headings are parts,\n# not chapters.\n#latex_use_parts = False\n\n# If true, show page references after internal links.\n#latex_show_pagerefs = False\n\n# If true, show URL addresses after external links.\n#latex_show_urls = False\n\n# Documents to append as an appendix to all manuals.\n#latex_appendices = []\n\n# If false, no module index is generated.\n#latex_domain_indices = True\n\n\n# -- Options for manual page output --------------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [\n ('index', 'geopandas', u'GeoPandas Documentation',\n [u'Kelsey Jordahl'], 1)\n]\n\n# If true, show URL addresses after external links.\n#man_show_urls = False\n\n\n# -- Options for Texinfo output ------------------------------------------------\n\n# Grouping the document tree into Texinfo files. 
List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n ('index', 'GeoPandas', u'GeoPandas Documentation',\n u'Kelsey Jordahl', 'GeoPandas', 'One line description of project.',\n 'Miscellaneous'),\n]\n\n# Documents to append as an appendix to all manuals.\n#texinfo_appendices = []\n\n# If false, no module index is generated.\n#texinfo_domain_indices = True\n\n# How to display URL addresses: 'footnote', 'no', or 'inline'.\n#texinfo_show_urls = 'footnote'\n", "path": "doc/source/conf.py"}]}
| 3,709 | 138 |
gh_patches_debug_16485
|
rasdani/github-patches
|
git_diff
|
Textualize__rich-3192
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[BUG] `Panel.fit` forgot `height` and `highlight` parameters
- [x] I've checked [docs](https://rich.readthedocs.io/en/latest/introduction.html) and [closed issues](https://github.com/Textualize/rich/issues?q=is%3Aissue+is%3Aclosed) for possible solutions.
- [x] I can't find my issue in the [FAQ](https://github.com/Textualize/rich/blob/master/FAQ.md).
**Describe the bug**
`Panel.fit` forgot `height` and `highlight` parameters. It should be updated to synchronize with the constructor.
```python
class Panel(JupyterMixin):
def __init__(
self,
renderable: "RenderableType",
box: Box = ROUNDED,
*,
title: Optional[TextType] = None,
title_align: AlignMethod = "center",
subtitle: Optional[TextType] = None,
subtitle_align: AlignMethod = "center",
safe_box: Optional[bool] = None,
expand: bool = True,
style: StyleType = "none",
border_style: StyleType = "none",
width: Optional[int] = None,
height: Optional[int] = None,
padding: PaddingDimensions = (0, 1),
highlight: bool = False,
) -> None:
...
@classmethod
def fit(
cls,
renderable: "RenderableType",
box: Box = ROUNDED,
*,
title: Optional[TextType] = None,
title_align: AlignMethod = "center",
subtitle: Optional[TextType] = None,
subtitle_align: AlignMethod = "center",
safe_box: Optional[bool] = None,
style: StyleType = "none",
border_style: StyleType = "none",
width: Optional[int] = None,
padding: PaddingDimensions = (0, 1),
) -> "Panel":
...
```
</issue>
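A minimal sketch of what a synchronized signature could look like is shown below. It simply mirrors the constructor listed above and forwards the two missing arguments. Treat it as an illustration rather than the project's actual patch; the surrounding `Panel` class and its imports (`Box`, `ROUNDED`, `StyleType`, and so on) are assumed from `rich/panel.py`.

```python
# Sketch of a synchronized Panel.fit (a fragment of the Panel class, not a
# standalone module): it accepts and forwards height and highlight so the
# classmethod matches __init__.
@classmethod
def fit(
    cls,
    renderable: "RenderableType",
    box: Box = ROUNDED,
    *,
    title: Optional[TextType] = None,
    title_align: AlignMethod = "center",
    subtitle: Optional[TextType] = None,
    subtitle_align: AlignMethod = "center",
    safe_box: Optional[bool] = None,
    style: StyleType = "none",
    border_style: StyleType = "none",
    width: Optional[int] = None,
    height: Optional[int] = None,        # added to match __init__
    padding: PaddingDimensions = (0, 1),
    highlight: bool = False,             # added to match __init__
) -> "Panel":
    """An alternative constructor that sets expand=False."""
    return cls(
        renderable,
        box,
        title=title,
        title_align=title_align,
        subtitle=subtitle,
        subtitle_align=subtitle_align,
        safe_box=safe_box,
        style=style,
        border_style=border_style,
        width=width,
        height=height,
        padding=padding,
        highlight=highlight,
        expand=False,
    )
```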
<code>
[start of rich/panel.py]
1 from typing import TYPE_CHECKING, Optional
2
3 from .align import AlignMethod
4 from .box import ROUNDED, Box
5 from .cells import cell_len
6 from .jupyter import JupyterMixin
7 from .measure import Measurement, measure_renderables
8 from .padding import Padding, PaddingDimensions
9 from .segment import Segment
10 from .style import Style, StyleType
11 from .text import Text, TextType
12
13 if TYPE_CHECKING:
14 from .console import Console, ConsoleOptions, RenderableType, RenderResult
15
16
17 class Panel(JupyterMixin):
18 """A console renderable that draws a border around its contents.
19
20 Example:
21 >>> console.print(Panel("Hello, World!"))
22
23 Args:
24 renderable (RenderableType): A console renderable object.
25 box (Box, optional): A Box instance that defines the look of the border (see :ref:`appendix_box`.
26 Defaults to box.ROUNDED.
27 safe_box (bool, optional): Disable box characters that don't display on windows legacy terminal with *raster* fonts. Defaults to True.
28 expand (bool, optional): If True the panel will stretch to fill the console
29 width, otherwise it will be sized to fit the contents. Defaults to True.
30 style (str, optional): The style of the panel (border and contents). Defaults to "none".
31 border_style (str, optional): The style of the border. Defaults to "none".
32 width (Optional[int], optional): Optional width of panel. Defaults to None to auto-detect.
33 height (Optional[int], optional): Optional height of panel. Defaults to None to auto-detect.
34 padding (Optional[PaddingDimensions]): Optional padding around renderable. Defaults to 0.
35 highlight (bool, optional): Enable automatic highlighting of panel title (if str). Defaults to False.
36 """
37
38 def __init__(
39 self,
40 renderable: "RenderableType",
41 box: Box = ROUNDED,
42 *,
43 title: Optional[TextType] = None,
44 title_align: AlignMethod = "center",
45 subtitle: Optional[TextType] = None,
46 subtitle_align: AlignMethod = "center",
47 safe_box: Optional[bool] = None,
48 expand: bool = True,
49 style: StyleType = "none",
50 border_style: StyleType = "none",
51 width: Optional[int] = None,
52 height: Optional[int] = None,
53 padding: PaddingDimensions = (0, 1),
54 highlight: bool = False,
55 ) -> None:
56 self.renderable = renderable
57 self.box = box
58 self.title = title
59 self.title_align: AlignMethod = title_align
60 self.subtitle = subtitle
61 self.subtitle_align = subtitle_align
62 self.safe_box = safe_box
63 self.expand = expand
64 self.style = style
65 self.border_style = border_style
66 self.width = width
67 self.height = height
68 self.padding = padding
69 self.highlight = highlight
70
71 @classmethod
72 def fit(
73 cls,
74 renderable: "RenderableType",
75 box: Box = ROUNDED,
76 *,
77 title: Optional[TextType] = None,
78 title_align: AlignMethod = "center",
79 subtitle: Optional[TextType] = None,
80 subtitle_align: AlignMethod = "center",
81 safe_box: Optional[bool] = None,
82 style: StyleType = "none",
83 border_style: StyleType = "none",
84 width: Optional[int] = None,
85 padding: PaddingDimensions = (0, 1),
86 ) -> "Panel":
87 """An alternative constructor that sets expand=False."""
88 return cls(
89 renderable,
90 box,
91 title=title,
92 title_align=title_align,
93 subtitle=subtitle,
94 subtitle_align=subtitle_align,
95 safe_box=safe_box,
96 style=style,
97 border_style=border_style,
98 width=width,
99 padding=padding,
100 expand=False,
101 )
102
103 @property
104 def _title(self) -> Optional[Text]:
105 if self.title:
106 title_text = (
107 Text.from_markup(self.title)
108 if isinstance(self.title, str)
109 else self.title.copy()
110 )
111 title_text.end = ""
112 title_text.plain = title_text.plain.replace("\n", " ")
113 title_text.no_wrap = True
114 title_text.expand_tabs()
115 title_text.pad(1)
116 return title_text
117 return None
118
119 @property
120 def _subtitle(self) -> Optional[Text]:
121 if self.subtitle:
122 subtitle_text = (
123 Text.from_markup(self.subtitle)
124 if isinstance(self.subtitle, str)
125 else self.subtitle.copy()
126 )
127 subtitle_text.end = ""
128 subtitle_text.plain = subtitle_text.plain.replace("\n", " ")
129 subtitle_text.no_wrap = True
130 subtitle_text.expand_tabs()
131 subtitle_text.pad(1)
132 return subtitle_text
133 return None
134
135 def __rich_console__(
136 self, console: "Console", options: "ConsoleOptions"
137 ) -> "RenderResult":
138 _padding = Padding.unpack(self.padding)
139 renderable = (
140 Padding(self.renderable, _padding) if any(_padding) else self.renderable
141 )
142 style = console.get_style(self.style)
143 border_style = style + console.get_style(self.border_style)
144 width = (
145 options.max_width
146 if self.width is None
147 else min(options.max_width, self.width)
148 )
149
150 safe_box: bool = console.safe_box if self.safe_box is None else self.safe_box
151 box = self.box.substitute(options, safe=safe_box)
152
153 def align_text(
154 text: Text, width: int, align: str, character: str, style: Style
155 ) -> Text:
156 """Gets new aligned text.
157
158 Args:
159 text (Text): Title or subtitle text.
160 width (int): Desired width.
161 align (str): Alignment.
162 character (str): Character for alignment.
163 style (Style): Border style
164
165 Returns:
166 Text: New text instance
167 """
168 text = text.copy()
169 text.truncate(width)
170 excess_space = width - cell_len(text.plain)
171 if excess_space:
172 if align == "left":
173 return Text.assemble(
174 text,
175 (character * excess_space, style),
176 no_wrap=True,
177 end="",
178 )
179 elif align == "center":
180 left = excess_space // 2
181 return Text.assemble(
182 (character * left, style),
183 text,
184 (character * (excess_space - left), style),
185 no_wrap=True,
186 end="",
187 )
188 else:
189 return Text.assemble(
190 (character * excess_space, style),
191 text,
192 no_wrap=True,
193 end="",
194 )
195 return text
196
197 title_text = self._title
198 if title_text is not None:
199 title_text.stylize_before(border_style)
200
201 child_width = (
202 width - 2
203 if self.expand
204 else console.measure(
205 renderable, options=options.update_width(width - 2)
206 ).maximum
207 )
208 child_height = self.height or options.height or None
209 if child_height:
210 child_height -= 2
211 if title_text is not None:
212 child_width = min(
213 options.max_width - 2, max(child_width, title_text.cell_len + 2)
214 )
215
216 width = child_width + 2
217 child_options = options.update(
218 width=child_width, height=child_height, highlight=self.highlight
219 )
220 lines = console.render_lines(renderable, child_options, style=style)
221
222 line_start = Segment(box.mid_left, border_style)
223 line_end = Segment(f"{box.mid_right}", border_style)
224 new_line = Segment.line()
225 if title_text is None or width <= 4:
226 yield Segment(box.get_top([width - 2]), border_style)
227 else:
228 title_text = align_text(
229 title_text,
230 width - 4,
231 self.title_align,
232 box.top,
233 border_style,
234 )
235 yield Segment(box.top_left + box.top, border_style)
236 yield from console.render(title_text, child_options.update_width(width - 4))
237 yield Segment(box.top + box.top_right, border_style)
238
239 yield new_line
240 for line in lines:
241 yield line_start
242 yield from line
243 yield line_end
244 yield new_line
245
246 subtitle_text = self._subtitle
247 if subtitle_text is not None:
248 subtitle_text.stylize_before(border_style)
249
250 if subtitle_text is None or width <= 4:
251 yield Segment(box.get_bottom([width - 2]), border_style)
252 else:
253 subtitle_text = align_text(
254 subtitle_text,
255 width - 4,
256 self.subtitle_align,
257 box.bottom,
258 border_style,
259 )
260 yield Segment(box.bottom_left + box.bottom, border_style)
261 yield from console.render(
262 subtitle_text, child_options.update_width(width - 4)
263 )
264 yield Segment(box.bottom + box.bottom_right, border_style)
265
266 yield new_line
267
268 def __rich_measure__(
269 self, console: "Console", options: "ConsoleOptions"
270 ) -> "Measurement":
271 _title = self._title
272 _, right, _, left = Padding.unpack(self.padding)
273 padding = left + right
274 renderables = [self.renderable, _title] if _title else [self.renderable]
275
276 if self.width is None:
277 width = (
278 measure_renderables(
279 console,
280 options.update_width(options.max_width - padding - 2),
281 renderables,
282 ).maximum
283 + padding
284 + 2
285 )
286 else:
287 width = self.width
288 return Measurement(width, width)
289
290
291 if __name__ == "__main__": # pragma: no cover
292 from .console import Console
293
294 c = Console()
295
296 from .box import DOUBLE, ROUNDED
297 from .padding import Padding
298
299 p = Panel(
300 "Hello, World!",
301 title="rich.Panel",
302 style="white on blue",
303 box=DOUBLE,
304 padding=1,
305 )
306
307 c.print()
308 c.print(p)
309
[end of rich/panel.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/rich/panel.py b/rich/panel.py
--- a/rich/panel.py
+++ b/rich/panel.py
@@ -82,7 +82,9 @@
style: StyleType = "none",
border_style: StyleType = "none",
width: Optional[int] = None,
+ height: Optional[int] = None,
padding: PaddingDimensions = (0, 1),
+ highlight: bool = False,
) -> "Panel":
"""An alternative constructor that sets expand=False."""
return cls(
@@ -96,7 +98,9 @@
style=style,
border_style=border_style,
width=width,
+ height=height,
padding=padding,
+ highlight=highlight,
expand=False,
)
|
{"golden_diff": "diff --git a/rich/panel.py b/rich/panel.py\n--- a/rich/panel.py\n+++ b/rich/panel.py\n@@ -82,7 +82,9 @@\n style: StyleType = \"none\",\n border_style: StyleType = \"none\",\n width: Optional[int] = None,\n+ height: Optional[int] = None,\n padding: PaddingDimensions = (0, 1),\n+ highlight: bool = False,\n ) -> \"Panel\":\n \"\"\"An alternative constructor that sets expand=False.\"\"\"\n return cls(\n@@ -96,7 +98,9 @@\n style=style,\n border_style=border_style,\n width=width,\n+ height=height,\n padding=padding,\n+ highlight=highlight,\n expand=False,\n )\n", "issue": "[BUG] `Panel.fit` forgot `height` and `highlight` parameters\n- [x] I've checked [docs](https://rich.readthedocs.io/en/latest/introduction.html) and [closed issues](https://github.com/Textualize/rich/issues?q=is%3Aissue+is%3Aclosed) for possible solutions.\r\n- [x] I can't find my issue in the [FAQ](https://github.com/Textualize/rich/blob/master/FAQ.md).\r\n\r\n**Describe the bug**\r\n\r\n`Panel.fit` forgot `height` and `highlight` parameters. It should be updated to synchronize with the constructor.\r\n\r\n```python\r\nclass Panel(JupyterMixin):\r\n def __init__(\r\n self,\r\n renderable: \"RenderableType\",\r\n box: Box = ROUNDED,\r\n *,\r\n title: Optional[TextType] = None,\r\n title_align: AlignMethod = \"center\",\r\n subtitle: Optional[TextType] = None,\r\n subtitle_align: AlignMethod = \"center\",\r\n safe_box: Optional[bool] = None,\r\n expand: bool = True,\r\n style: StyleType = \"none\",\r\n border_style: StyleType = \"none\",\r\n width: Optional[int] = None,\r\n height: Optional[int] = None,\r\n padding: PaddingDimensions = (0, 1),\r\n highlight: bool = False,\r\n ) -> None:\r\n ...\r\n\r\n @classmethod\r\n def fit(\r\n cls,\r\n renderable: \"RenderableType\",\r\n box: Box = ROUNDED,\r\n *,\r\n title: Optional[TextType] = None,\r\n title_align: AlignMethod = \"center\",\r\n subtitle: Optional[TextType] = None,\r\n subtitle_align: AlignMethod = \"center\",\r\n safe_box: Optional[bool] = None,\r\n style: StyleType = \"none\",\r\n border_style: StyleType = \"none\",\r\n width: Optional[int] = None,\r\n padding: PaddingDimensions = (0, 1),\r\n ) -> \"Panel\":\r\n ...\r\n```\n", "before_files": [{"content": "from typing import TYPE_CHECKING, Optional\n\nfrom .align import AlignMethod\nfrom .box import ROUNDED, Box\nfrom .cells import cell_len\nfrom .jupyter import JupyterMixin\nfrom .measure import Measurement, measure_renderables\nfrom .padding import Padding, PaddingDimensions\nfrom .segment import Segment\nfrom .style import Style, StyleType\nfrom .text import Text, TextType\n\nif TYPE_CHECKING:\n from .console import Console, ConsoleOptions, RenderableType, RenderResult\n\n\nclass Panel(JupyterMixin):\n \"\"\"A console renderable that draws a border around its contents.\n\n Example:\n >>> console.print(Panel(\"Hello, World!\"))\n\n Args:\n renderable (RenderableType): A console renderable object.\n box (Box, optional): A Box instance that defines the look of the border (see :ref:`appendix_box`.\n Defaults to box.ROUNDED.\n safe_box (bool, optional): Disable box characters that don't display on windows legacy terminal with *raster* fonts. Defaults to True.\n expand (bool, optional): If True the panel will stretch to fill the console\n width, otherwise it will be sized to fit the contents. Defaults to True.\n style (str, optional): The style of the panel (border and contents). Defaults to \"none\".\n border_style (str, optional): The style of the border. 
Defaults to \"none\".\n width (Optional[int], optional): Optional width of panel. Defaults to None to auto-detect.\n height (Optional[int], optional): Optional height of panel. Defaults to None to auto-detect.\n padding (Optional[PaddingDimensions]): Optional padding around renderable. Defaults to 0.\n highlight (bool, optional): Enable automatic highlighting of panel title (if str). Defaults to False.\n \"\"\"\n\n def __init__(\n self,\n renderable: \"RenderableType\",\n box: Box = ROUNDED,\n *,\n title: Optional[TextType] = None,\n title_align: AlignMethod = \"center\",\n subtitle: Optional[TextType] = None,\n subtitle_align: AlignMethod = \"center\",\n safe_box: Optional[bool] = None,\n expand: bool = True,\n style: StyleType = \"none\",\n border_style: StyleType = \"none\",\n width: Optional[int] = None,\n height: Optional[int] = None,\n padding: PaddingDimensions = (0, 1),\n highlight: bool = False,\n ) -> None:\n self.renderable = renderable\n self.box = box\n self.title = title\n self.title_align: AlignMethod = title_align\n self.subtitle = subtitle\n self.subtitle_align = subtitle_align\n self.safe_box = safe_box\n self.expand = expand\n self.style = style\n self.border_style = border_style\n self.width = width\n self.height = height\n self.padding = padding\n self.highlight = highlight\n\n @classmethod\n def fit(\n cls,\n renderable: \"RenderableType\",\n box: Box = ROUNDED,\n *,\n title: Optional[TextType] = None,\n title_align: AlignMethod = \"center\",\n subtitle: Optional[TextType] = None,\n subtitle_align: AlignMethod = \"center\",\n safe_box: Optional[bool] = None,\n style: StyleType = \"none\",\n border_style: StyleType = \"none\",\n width: Optional[int] = None,\n padding: PaddingDimensions = (0, 1),\n ) -> \"Panel\":\n \"\"\"An alternative constructor that sets expand=False.\"\"\"\n return cls(\n renderable,\n box,\n title=title,\n title_align=title_align,\n subtitle=subtitle,\n subtitle_align=subtitle_align,\n safe_box=safe_box,\n style=style,\n border_style=border_style,\n width=width,\n padding=padding,\n expand=False,\n )\n\n @property\n def _title(self) -> Optional[Text]:\n if self.title:\n title_text = (\n Text.from_markup(self.title)\n if isinstance(self.title, str)\n else self.title.copy()\n )\n title_text.end = \"\"\n title_text.plain = title_text.plain.replace(\"\\n\", \" \")\n title_text.no_wrap = True\n title_text.expand_tabs()\n title_text.pad(1)\n return title_text\n return None\n\n @property\n def _subtitle(self) -> Optional[Text]:\n if self.subtitle:\n subtitle_text = (\n Text.from_markup(self.subtitle)\n if isinstance(self.subtitle, str)\n else self.subtitle.copy()\n )\n subtitle_text.end = \"\"\n subtitle_text.plain = subtitle_text.plain.replace(\"\\n\", \" \")\n subtitle_text.no_wrap = True\n subtitle_text.expand_tabs()\n subtitle_text.pad(1)\n return subtitle_text\n return None\n\n def __rich_console__(\n self, console: \"Console\", options: \"ConsoleOptions\"\n ) -> \"RenderResult\":\n _padding = Padding.unpack(self.padding)\n renderable = (\n Padding(self.renderable, _padding) if any(_padding) else self.renderable\n )\n style = console.get_style(self.style)\n border_style = style + console.get_style(self.border_style)\n width = (\n options.max_width\n if self.width is None\n else min(options.max_width, self.width)\n )\n\n safe_box: bool = console.safe_box if self.safe_box is None else self.safe_box\n box = self.box.substitute(options, safe=safe_box)\n\n def align_text(\n text: Text, width: int, align: str, character: str, style: Style\n ) -> Text:\n 
\"\"\"Gets new aligned text.\n\n Args:\n text (Text): Title or subtitle text.\n width (int): Desired width.\n align (str): Alignment.\n character (str): Character for alignment.\n style (Style): Border style\n\n Returns:\n Text: New text instance\n \"\"\"\n text = text.copy()\n text.truncate(width)\n excess_space = width - cell_len(text.plain)\n if excess_space:\n if align == \"left\":\n return Text.assemble(\n text,\n (character * excess_space, style),\n no_wrap=True,\n end=\"\",\n )\n elif align == \"center\":\n left = excess_space // 2\n return Text.assemble(\n (character * left, style),\n text,\n (character * (excess_space - left), style),\n no_wrap=True,\n end=\"\",\n )\n else:\n return Text.assemble(\n (character * excess_space, style),\n text,\n no_wrap=True,\n end=\"\",\n )\n return text\n\n title_text = self._title\n if title_text is not None:\n title_text.stylize_before(border_style)\n\n child_width = (\n width - 2\n if self.expand\n else console.measure(\n renderable, options=options.update_width(width - 2)\n ).maximum\n )\n child_height = self.height or options.height or None\n if child_height:\n child_height -= 2\n if title_text is not None:\n child_width = min(\n options.max_width - 2, max(child_width, title_text.cell_len + 2)\n )\n\n width = child_width + 2\n child_options = options.update(\n width=child_width, height=child_height, highlight=self.highlight\n )\n lines = console.render_lines(renderable, child_options, style=style)\n\n line_start = Segment(box.mid_left, border_style)\n line_end = Segment(f\"{box.mid_right}\", border_style)\n new_line = Segment.line()\n if title_text is None or width <= 4:\n yield Segment(box.get_top([width - 2]), border_style)\n else:\n title_text = align_text(\n title_text,\n width - 4,\n self.title_align,\n box.top,\n border_style,\n )\n yield Segment(box.top_left + box.top, border_style)\n yield from console.render(title_text, child_options.update_width(width - 4))\n yield Segment(box.top + box.top_right, border_style)\n\n yield new_line\n for line in lines:\n yield line_start\n yield from line\n yield line_end\n yield new_line\n\n subtitle_text = self._subtitle\n if subtitle_text is not None:\n subtitle_text.stylize_before(border_style)\n\n if subtitle_text is None or width <= 4:\n yield Segment(box.get_bottom([width - 2]), border_style)\n else:\n subtitle_text = align_text(\n subtitle_text,\n width - 4,\n self.subtitle_align,\n box.bottom,\n border_style,\n )\n yield Segment(box.bottom_left + box.bottom, border_style)\n yield from console.render(\n subtitle_text, child_options.update_width(width - 4)\n )\n yield Segment(box.bottom + box.bottom_right, border_style)\n\n yield new_line\n\n def __rich_measure__(\n self, console: \"Console\", options: \"ConsoleOptions\"\n ) -> \"Measurement\":\n _title = self._title\n _, right, _, left = Padding.unpack(self.padding)\n padding = left + right\n renderables = [self.renderable, _title] if _title else [self.renderable]\n\n if self.width is None:\n width = (\n measure_renderables(\n console,\n options.update_width(options.max_width - padding - 2),\n renderables,\n ).maximum\n + padding\n + 2\n )\n else:\n width = self.width\n return Measurement(width, width)\n\n\nif __name__ == \"__main__\": # pragma: no cover\n from .console import Console\n\n c = Console()\n\n from .box import DOUBLE, ROUNDED\n from .padding import Padding\n\n p = Panel(\n \"Hello, World!\",\n title=\"rich.Panel\",\n style=\"white on blue\",\n box=DOUBLE,\n padding=1,\n )\n\n c.print()\n c.print(p)\n", "path": "rich/panel.py"}]}
| 3,969 | 174 |
gh_patches_debug_34723
|
rasdani/github-patches
|
git_diff
|
getsentry__sentry-58281
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Fix typing for `sentry.rules.processor`
Many files related to post-processing are skipping type checks.
* Remove the file from [sentry/pyproject.toml](https://github.com/getsentry/sentry/blob/b7810a9dca4b57afd2858903a6a9ec7ab50cdead/pyproject.toml)
* Run `mypy sentry.rules.processor` and fix the typing errors
From https://github.com/getsentry/sentry/issues/55193
</issue>
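As a rough illustration of the kind of change such typing fixes usually involve, the toy example below shows how narrowing a registry lookup with `isinstance()` lets mypy reason about the instantiated object. The `REGISTRY`, `EventCondition`, and `condition_matches` names here are simplified stand-ins rather than Sentry's real code, and the actual errors reported by `mypy sentry.rules.processor` may call for different fixes.

```python
from typing import Any, Mapping, Optional


class EventCondition:
    """Simplified stand-in for a rule condition with a ``passes`` check."""

    def passes(self, event: Any, state: Any) -> bool:
        return True


# Registry mapping condition ids to classes; mypy only knows the values are ``type``.
REGISTRY: Mapping[str, type] = {"event_frequency": EventCondition}


def condition_matches(condition: Mapping[str, Any]) -> Optional[bool]:
    condition_cls = REGISTRY.get(condition["id"])
    if condition_cls is None:
        return None
    condition_inst = condition_cls()
    # Narrow the instance so mypy knows it actually supports ``passes``.
    if not isinstance(condition_inst, EventCondition):
        return None
    return condition_inst.passes(event=None, state=None)


if __name__ == "__main__":
    print(condition_matches({"id": "event_frequency"}))  # -> True
```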
<code>
[start of src/sentry/rules/processor.py]
1 from __future__ import annotations
2
3 import logging
4 import uuid
5 from datetime import timedelta
6 from random import randrange
7 from typing import (
8 Any,
9 Callable,
10 Collection,
11 List,
12 Mapping,
13 MutableMapping,
14 Optional,
15 Sequence,
16 Set,
17 Tuple,
18 )
19
20 from django.core.cache import cache
21 from django.utils import timezone
22
23 from sentry import analytics
24 from sentry.eventstore.models import GroupEvent
25 from sentry.models.environment import Environment
26 from sentry.models.grouprulestatus import GroupRuleStatus
27 from sentry.models.rule import Rule
28 from sentry.models.rulesnooze import RuleSnooze
29 from sentry.rules import EventState, history, rules
30 from sentry.rules.conditions.base import EventCondition
31 from sentry.types.rules import RuleFuture
32 from sentry.utils.hashlib import hash_values
33 from sentry.utils.safe import safe_execute
34
35 SLOW_CONDITION_MATCHES = ["event_frequency"]
36
37
38 def get_match_function(match_name: str) -> Callable[..., bool] | None:
39 if match_name == "all":
40 return all
41 elif match_name == "any":
42 return any
43 elif match_name == "none":
44 return lambda bool_iter: not any(bool_iter)
45 return None
46
47
48 def is_condition_slow(condition: Mapping[str, str]) -> bool:
49 for slow_conditions in SLOW_CONDITION_MATCHES:
50 if slow_conditions in condition["id"]:
51 return True
52 return False
53
54
55 class RuleProcessor:
56 logger = logging.getLogger("sentry.rules")
57
58 def __init__(
59 self,
60 event: GroupEvent,
61 is_new: bool,
62 is_regression: bool,
63 is_new_group_environment: bool,
64 has_reappeared: bool,
65 ) -> None:
66 self.event = event
67 self.group = event.group
68 self.project = event.project
69
70 self.is_new = is_new
71 self.is_regression = is_regression
72 self.is_new_group_environment = is_new_group_environment
73 self.has_reappeared = has_reappeared
74
75 self.grouped_futures: MutableMapping[
76 str, Tuple[Callable[[GroupEvent, Sequence[RuleFuture]], None], List[RuleFuture]]
77 ] = {}
78
79 def get_rules(self) -> Sequence[Rule]:
80 """Get all of the rules for this project from the DB (or cache)."""
81 rules_: Sequence[Rule] = Rule.get_for_project(self.project.id)
82 return rules_
83
84 def _build_rule_status_cache_key(self, rule_id: int) -> str:
85 return "grouprulestatus:1:%s" % hash_values([self.group.id, rule_id])
86
87 def bulk_get_rule_status(self, rules: Sequence[Rule]) -> Mapping[int, GroupRuleStatus]:
88 keys = [self._build_rule_status_cache_key(rule.id) for rule in rules]
89 cache_results: Mapping[str, GroupRuleStatus] = cache.get_many(keys)
90 missing_rule_ids: Set[int] = set()
91 rule_statuses: MutableMapping[int, GroupRuleStatus] = {}
92 for key, rule in zip(keys, rules):
93 rule_status = cache_results.get(key)
94 if not rule_status:
95 missing_rule_ids.add(rule.id)
96 else:
97 rule_statuses[rule.id] = rule_status
98
99 if missing_rule_ids:
100 # If not cached, attempt to fetch status from the database
101 statuses = GroupRuleStatus.objects.filter(
102 group=self.group, rule_id__in=missing_rule_ids
103 )
104 to_cache: List[GroupRuleStatus] = list()
105 for status in statuses:
106 rule_statuses[status.rule_id] = status
107 missing_rule_ids.remove(status.rule_id)
108 to_cache.append(status)
109
110 # We might need to create some statuses if they don't already exist
111 if missing_rule_ids:
112 # We use `ignore_conflicts=True` here to avoid race conditions where the statuses
113 # might be created between when we queried above and attempt to create the rows now.
114 GroupRuleStatus.objects.bulk_create(
115 [
116 GroupRuleStatus(rule_id=rule_id, group=self.group, project=self.project)
117 for rule_id in missing_rule_ids
118 ],
119 ignore_conflicts=True,
120 )
121 # Using `ignore_conflicts=True` prevents the pk from being set on the model
122 # instances. Re-query the database to fetch the rows, they should all exist at this
123 # point.
124 statuses = GroupRuleStatus.objects.filter(
125 group=self.group, rule_id__in=missing_rule_ids
126 )
127 for status in statuses:
128 rule_statuses[status.rule_id] = status
129 missing_rule_ids.remove(status.rule_id)
130 to_cache.append(status)
131
132 if missing_rule_ids:
133 # Shouldn't happen, but log just in case
134 self.logger.error(
135 "Failed to fetch some GroupRuleStatuses in RuleProcessor",
136 extra={"missing_rule_ids": missing_rule_ids, "group_id": self.group.id},
137 )
138 if to_cache:
139 cache.set_many(
140 {self._build_rule_status_cache_key(item.rule_id): item for item in to_cache}
141 )
142
143 return rule_statuses
144
145 def condition_matches(
146 self, condition: Mapping[str, Any], state: EventState, rule: Rule
147 ) -> bool | None:
148 condition_cls = rules.get(condition["id"])
149 if condition_cls is None:
150 self.logger.warning("Unregistered condition %r", condition["id"])
151 return None
152
153 condition_inst: EventCondition = condition_cls(self.project, data=condition, rule=rule)
154 passes: bool = safe_execute(
155 condition_inst.passes, self.event, state, _with_transaction=False
156 )
157 return passes
158
159 def get_rule_type(self, condition: Mapping[str, Any]) -> str | None:
160 rule_cls = rules.get(condition["id"])
161 if rule_cls is None:
162 self.logger.warning("Unregistered condition or filter %r", condition["id"])
163 return None
164
165 rule_type: str = rule_cls.rule_type
166 return rule_type
167
168 def get_state(self) -> EventState:
169 return EventState(
170 is_new=self.is_new,
171 is_regression=self.is_regression,
172 is_new_group_environment=self.is_new_group_environment,
173 has_reappeared=self.has_reappeared,
174 )
175
176 def apply_rule(self, rule: Rule, status: GroupRuleStatus) -> None:
177 """
178 If all conditions and filters pass, execute every action.
179
180 :param rule: `Rule` object
181 :return: void
182 """
183 logging_details = {
184 "rule_id": rule.id,
185 "group_id": self.group.id,
186 "event_id": self.event.event_id,
187 "project_id": self.project.id,
188 "is_new": self.is_new,
189 "is_regression": self.is_regression,
190 "has_reappeared": self.has_reappeared,
191 "new_group_environment": self.is_new_group_environment,
192 }
193
194 condition_match = rule.data.get("action_match") or Rule.DEFAULT_CONDITION_MATCH
195 filter_match = rule.data.get("filter_match") or Rule.DEFAULT_FILTER_MATCH
196 rule_condition_list = rule.data.get("conditions", ())
197 frequency = rule.data.get("frequency") or Rule.DEFAULT_FREQUENCY
198 try:
199 environment = self.event.get_environment()
200 except Environment.DoesNotExist:
201 return
202
203 if rule.environment_id is not None and environment.id != rule.environment_id:
204 return
205
206 now = timezone.now()
207 freq_offset = now - timedelta(minutes=frequency)
208 if status.last_active and status.last_active > freq_offset:
209 return
210
211 state = self.get_state()
212
213 condition_list = []
214 filter_list = []
215 for rule_cond in rule_condition_list:
216 if self.get_rule_type(rule_cond) == "condition/event":
217 condition_list.append(rule_cond)
218 else:
219 filter_list.append(rule_cond)
220
221 # Sort `condition_list` so that most expensive conditions run last.
222 condition_list.sort(key=lambda condition: is_condition_slow(condition))
223
224 for predicate_list, match, name in (
225 (filter_list, filter_match, "filter"),
226 (condition_list, condition_match, "condition"),
227 ):
228 if not predicate_list:
229 continue
230 predicate_iter = (self.condition_matches(f, state, rule) for f in predicate_list)
231 predicate_func = get_match_function(match)
232 if predicate_func:
233 if not predicate_func(predicate_iter):
234 return
235 else:
236 self.logger.error(
237 f"Unsupported {name}_match {match!r} for rule {rule.id}",
238 filter_match,
239 rule.id,
240 extra={**logging_details},
241 )
242 return
243
244 updated = (
245 GroupRuleStatus.objects.filter(id=status.id)
246 .exclude(last_active__gt=freq_offset)
247 .update(last_active=now)
248 )
249
250 if not updated:
251 return
252
253 if randrange(10) == 0:
254 analytics.record(
255 "issue_alert.fired",
256 issue_id=self.group.id,
257 project_id=rule.project.id,
258 organization_id=rule.project.organization.id,
259 rule_id=rule.id,
260 )
261
262 notification_uuid = str(uuid.uuid4())
263 history.record(rule, self.group, self.event.event_id, notification_uuid)
264 self.activate_downstream_actions(rule, notification_uuid)
265
266 def activate_downstream_actions(
267 self, rule: Rule, notification_uuid: Optional[str] = None
268 ) -> None:
269 state = self.get_state()
270 for action in rule.data.get("actions", ()):
271 action_cls = rules.get(action["id"])
272 if action_cls is None:
273 self.logger.warning("Unregistered action %r", action["id"])
274 continue
275
276 action_inst = action_cls(self.project, data=action, rule=rule)
277
278 results = safe_execute(
279 action_inst.after,
280 event=self.event,
281 state=state,
282 _with_transaction=False,
283 notification_uuid=notification_uuid,
284 )
285 if results is None:
286 self.logger.warning("Action %s did not return any futures", action["id"])
287 continue
288
289 for future in results:
290 key = future.key if future.key is not None else future.callback
291 rule_future = RuleFuture(rule=rule, kwargs=future.kwargs)
292
293 if key not in self.grouped_futures:
294 self.grouped_futures[key] = (future.callback, [rule_future])
295 else:
296 self.grouped_futures[key][1].append(rule_future)
297
298 def apply(
299 self,
300 ) -> Collection[Tuple[Callable[[GroupEvent, Sequence[RuleFuture]], None], List[RuleFuture]]]:
301 # we should only apply rules on unresolved issues
302 if not self.event.group.is_unresolved():
303 return {}.values()
304
305 self.grouped_futures.clear()
306 rules = self.get_rules()
307 snoozed_rules = RuleSnooze.objects.filter(rule__in=rules, user_id=None).values_list(
308 "rule", flat=True
309 )
310 rule_statuses = self.bulk_get_rule_status(rules)
311 for rule in rules:
312 if rule.id not in snoozed_rules:
313 self.apply_rule(rule, rule_statuses[rule.id])
314
315 return self.grouped_futures.values()
316
[end of src/sentry/rules/processor.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/sentry/rules/processor.py b/src/sentry/rules/processor.py
--- a/src/sentry/rules/processor.py
+++ b/src/sentry/rules/processor.py
@@ -27,7 +27,9 @@
from sentry.models.rule import Rule
from sentry.models.rulesnooze import RuleSnooze
from sentry.rules import EventState, history, rules
+from sentry.rules.actions.base import EventAction
from sentry.rules.conditions.base import EventCondition
+from sentry.rules.filters.base import EventFilter
from sentry.types.rules import RuleFuture
from sentry.utils.hashlib import hash_values
from sentry.utils.safe import safe_execute
@@ -143,16 +145,22 @@
return rule_statuses
def condition_matches(
- self, condition: Mapping[str, Any], state: EventState, rule: Rule
+ self, condition: dict[str, Any], state: EventState, rule: Rule
) -> bool | None:
condition_cls = rules.get(condition["id"])
if condition_cls is None:
self.logger.warning("Unregistered condition %r", condition["id"])
return None
- condition_inst: EventCondition = condition_cls(self.project, data=condition, rule=rule)
+ condition_inst = condition_cls(self.project, data=condition, rule=rule)
+ if not isinstance(condition_inst, (EventCondition, EventFilter)):
+ self.logger.warning("Unregistered condition %r", condition["id"])
+ return None
passes: bool = safe_execute(
- condition_inst.passes, self.event, state, _with_transaction=False
+ condition_inst.passes,
+ self.event,
+ state,
+ _with_transaction=False,
)
return passes
@@ -274,6 +282,9 @@
continue
action_inst = action_cls(self.project, data=action, rule=rule)
+ if not isinstance(action_inst, EventAction):
+ self.logger.warning("Unregistered action %r", action["id"])
+ continue
results = safe_execute(
action_inst.after,
|
{"golden_diff": "diff --git a/src/sentry/rules/processor.py b/src/sentry/rules/processor.py\n--- a/src/sentry/rules/processor.py\n+++ b/src/sentry/rules/processor.py\n@@ -27,7 +27,9 @@\n from sentry.models.rule import Rule\n from sentry.models.rulesnooze import RuleSnooze\n from sentry.rules import EventState, history, rules\n+from sentry.rules.actions.base import EventAction\n from sentry.rules.conditions.base import EventCondition\n+from sentry.rules.filters.base import EventFilter\n from sentry.types.rules import RuleFuture\n from sentry.utils.hashlib import hash_values\n from sentry.utils.safe import safe_execute\n@@ -143,16 +145,22 @@\n return rule_statuses\n \n def condition_matches(\n- self, condition: Mapping[str, Any], state: EventState, rule: Rule\n+ self, condition: dict[str, Any], state: EventState, rule: Rule\n ) -> bool | None:\n condition_cls = rules.get(condition[\"id\"])\n if condition_cls is None:\n self.logger.warning(\"Unregistered condition %r\", condition[\"id\"])\n return None\n \n- condition_inst: EventCondition = condition_cls(self.project, data=condition, rule=rule)\n+ condition_inst = condition_cls(self.project, data=condition, rule=rule)\n+ if not isinstance(condition_inst, (EventCondition, EventFilter)):\n+ self.logger.warning(\"Unregistered condition %r\", condition[\"id\"])\n+ return None\n passes: bool = safe_execute(\n- condition_inst.passes, self.event, state, _with_transaction=False\n+ condition_inst.passes,\n+ self.event,\n+ state,\n+ _with_transaction=False,\n )\n return passes\n \n@@ -274,6 +282,9 @@\n continue\n \n action_inst = action_cls(self.project, data=action, rule=rule)\n+ if not isinstance(action_inst, EventAction):\n+ self.logger.warning(\"Unregistered action %r\", action[\"id\"])\n+ continue\n \n results = safe_execute(\n action_inst.after,\n", "issue": "Fix typing for `sentry.rules.processor`\nMany files related to post-process are skipping type checks.\n\n* Remove the file from [sentry/pyproject.toml](https://github.com/getsentry/sentry/blob/b7810a9dca4b57afd2858903a6a9ec7ab50cdead/pyproject.toml)\n* Run `mypy sentry.rules.processor` and fix the typing errors\n\nFrom https://github.com/getsentry/sentry/issues/55193\n", "before_files": [{"content": "from __future__ import annotations\n\nimport logging\nimport uuid\nfrom datetime import timedelta\nfrom random import randrange\nfrom typing import (\n Any,\n Callable,\n Collection,\n List,\n Mapping,\n MutableMapping,\n Optional,\n Sequence,\n Set,\n Tuple,\n)\n\nfrom django.core.cache import cache\nfrom django.utils import timezone\n\nfrom sentry import analytics\nfrom sentry.eventstore.models import GroupEvent\nfrom sentry.models.environment import Environment\nfrom sentry.models.grouprulestatus import GroupRuleStatus\nfrom sentry.models.rule import Rule\nfrom sentry.models.rulesnooze import RuleSnooze\nfrom sentry.rules import EventState, history, rules\nfrom sentry.rules.conditions.base import EventCondition\nfrom sentry.types.rules import RuleFuture\nfrom sentry.utils.hashlib import hash_values\nfrom sentry.utils.safe import safe_execute\n\nSLOW_CONDITION_MATCHES = [\"event_frequency\"]\n\n\ndef get_match_function(match_name: str) -> Callable[..., bool] | None:\n if match_name == \"all\":\n return all\n elif match_name == \"any\":\n return any\n elif match_name == \"none\":\n return lambda bool_iter: not any(bool_iter)\n return None\n\n\ndef is_condition_slow(condition: Mapping[str, str]) -> bool:\n for slow_conditions in SLOW_CONDITION_MATCHES:\n if slow_conditions in 
condition[\"id\"]:\n return True\n return False\n\n\nclass RuleProcessor:\n logger = logging.getLogger(\"sentry.rules\")\n\n def __init__(\n self,\n event: GroupEvent,\n is_new: bool,\n is_regression: bool,\n is_new_group_environment: bool,\n has_reappeared: bool,\n ) -> None:\n self.event = event\n self.group = event.group\n self.project = event.project\n\n self.is_new = is_new\n self.is_regression = is_regression\n self.is_new_group_environment = is_new_group_environment\n self.has_reappeared = has_reappeared\n\n self.grouped_futures: MutableMapping[\n str, Tuple[Callable[[GroupEvent, Sequence[RuleFuture]], None], List[RuleFuture]]\n ] = {}\n\n def get_rules(self) -> Sequence[Rule]:\n \"\"\"Get all of the rules for this project from the DB (or cache).\"\"\"\n rules_: Sequence[Rule] = Rule.get_for_project(self.project.id)\n return rules_\n\n def _build_rule_status_cache_key(self, rule_id: int) -> str:\n return \"grouprulestatus:1:%s\" % hash_values([self.group.id, rule_id])\n\n def bulk_get_rule_status(self, rules: Sequence[Rule]) -> Mapping[int, GroupRuleStatus]:\n keys = [self._build_rule_status_cache_key(rule.id) for rule in rules]\n cache_results: Mapping[str, GroupRuleStatus] = cache.get_many(keys)\n missing_rule_ids: Set[int] = set()\n rule_statuses: MutableMapping[int, GroupRuleStatus] = {}\n for key, rule in zip(keys, rules):\n rule_status = cache_results.get(key)\n if not rule_status:\n missing_rule_ids.add(rule.id)\n else:\n rule_statuses[rule.id] = rule_status\n\n if missing_rule_ids:\n # If not cached, attempt to fetch status from the database\n statuses = GroupRuleStatus.objects.filter(\n group=self.group, rule_id__in=missing_rule_ids\n )\n to_cache: List[GroupRuleStatus] = list()\n for status in statuses:\n rule_statuses[status.rule_id] = status\n missing_rule_ids.remove(status.rule_id)\n to_cache.append(status)\n\n # We might need to create some statuses if they don't already exist\n if missing_rule_ids:\n # We use `ignore_conflicts=True` here to avoid race conditions where the statuses\n # might be created between when we queried above and attempt to create the rows now.\n GroupRuleStatus.objects.bulk_create(\n [\n GroupRuleStatus(rule_id=rule_id, group=self.group, project=self.project)\n for rule_id in missing_rule_ids\n ],\n ignore_conflicts=True,\n )\n # Using `ignore_conflicts=True` prevents the pk from being set on the model\n # instances. 
Re-query the database to fetch the rows, they should all exist at this\n # point.\n statuses = GroupRuleStatus.objects.filter(\n group=self.group, rule_id__in=missing_rule_ids\n )\n for status in statuses:\n rule_statuses[status.rule_id] = status\n missing_rule_ids.remove(status.rule_id)\n to_cache.append(status)\n\n if missing_rule_ids:\n # Shouldn't happen, but log just in case\n self.logger.error(\n \"Failed to fetch some GroupRuleStatuses in RuleProcessor\",\n extra={\"missing_rule_ids\": missing_rule_ids, \"group_id\": self.group.id},\n )\n if to_cache:\n cache.set_many(\n {self._build_rule_status_cache_key(item.rule_id): item for item in to_cache}\n )\n\n return rule_statuses\n\n def condition_matches(\n self, condition: Mapping[str, Any], state: EventState, rule: Rule\n ) -> bool | None:\n condition_cls = rules.get(condition[\"id\"])\n if condition_cls is None:\n self.logger.warning(\"Unregistered condition %r\", condition[\"id\"])\n return None\n\n condition_inst: EventCondition = condition_cls(self.project, data=condition, rule=rule)\n passes: bool = safe_execute(\n condition_inst.passes, self.event, state, _with_transaction=False\n )\n return passes\n\n def get_rule_type(self, condition: Mapping[str, Any]) -> str | None:\n rule_cls = rules.get(condition[\"id\"])\n if rule_cls is None:\n self.logger.warning(\"Unregistered condition or filter %r\", condition[\"id\"])\n return None\n\n rule_type: str = rule_cls.rule_type\n return rule_type\n\n def get_state(self) -> EventState:\n return EventState(\n is_new=self.is_new,\n is_regression=self.is_regression,\n is_new_group_environment=self.is_new_group_environment,\n has_reappeared=self.has_reappeared,\n )\n\n def apply_rule(self, rule: Rule, status: GroupRuleStatus) -> None:\n \"\"\"\n If all conditions and filters pass, execute every action.\n\n :param rule: `Rule` object\n :return: void\n \"\"\"\n logging_details = {\n \"rule_id\": rule.id,\n \"group_id\": self.group.id,\n \"event_id\": self.event.event_id,\n \"project_id\": self.project.id,\n \"is_new\": self.is_new,\n \"is_regression\": self.is_regression,\n \"has_reappeared\": self.has_reappeared,\n \"new_group_environment\": self.is_new_group_environment,\n }\n\n condition_match = rule.data.get(\"action_match\") or Rule.DEFAULT_CONDITION_MATCH\n filter_match = rule.data.get(\"filter_match\") or Rule.DEFAULT_FILTER_MATCH\n rule_condition_list = rule.data.get(\"conditions\", ())\n frequency = rule.data.get(\"frequency\") or Rule.DEFAULT_FREQUENCY\n try:\n environment = self.event.get_environment()\n except Environment.DoesNotExist:\n return\n\n if rule.environment_id is not None and environment.id != rule.environment_id:\n return\n\n now = timezone.now()\n freq_offset = now - timedelta(minutes=frequency)\n if status.last_active and status.last_active > freq_offset:\n return\n\n state = self.get_state()\n\n condition_list = []\n filter_list = []\n for rule_cond in rule_condition_list:\n if self.get_rule_type(rule_cond) == \"condition/event\":\n condition_list.append(rule_cond)\n else:\n filter_list.append(rule_cond)\n\n # Sort `condition_list` so that most expensive conditions run last.\n condition_list.sort(key=lambda condition: is_condition_slow(condition))\n\n for predicate_list, match, name in (\n (filter_list, filter_match, \"filter\"),\n (condition_list, condition_match, \"condition\"),\n ):\n if not predicate_list:\n continue\n predicate_iter = (self.condition_matches(f, state, rule) for f in predicate_list)\n predicate_func = get_match_function(match)\n if predicate_func:\n 
if not predicate_func(predicate_iter):\n return\n else:\n self.logger.error(\n f\"Unsupported {name}_match {match!r} for rule {rule.id}\",\n filter_match,\n rule.id,\n extra={**logging_details},\n )\n return\n\n updated = (\n GroupRuleStatus.objects.filter(id=status.id)\n .exclude(last_active__gt=freq_offset)\n .update(last_active=now)\n )\n\n if not updated:\n return\n\n if randrange(10) == 0:\n analytics.record(\n \"issue_alert.fired\",\n issue_id=self.group.id,\n project_id=rule.project.id,\n organization_id=rule.project.organization.id,\n rule_id=rule.id,\n )\n\n notification_uuid = str(uuid.uuid4())\n history.record(rule, self.group, self.event.event_id, notification_uuid)\n self.activate_downstream_actions(rule, notification_uuid)\n\n def activate_downstream_actions(\n self, rule: Rule, notification_uuid: Optional[str] = None\n ) -> None:\n state = self.get_state()\n for action in rule.data.get(\"actions\", ()):\n action_cls = rules.get(action[\"id\"])\n if action_cls is None:\n self.logger.warning(\"Unregistered action %r\", action[\"id\"])\n continue\n\n action_inst = action_cls(self.project, data=action, rule=rule)\n\n results = safe_execute(\n action_inst.after,\n event=self.event,\n state=state,\n _with_transaction=False,\n notification_uuid=notification_uuid,\n )\n if results is None:\n self.logger.warning(\"Action %s did not return any futures\", action[\"id\"])\n continue\n\n for future in results:\n key = future.key if future.key is not None else future.callback\n rule_future = RuleFuture(rule=rule, kwargs=future.kwargs)\n\n if key not in self.grouped_futures:\n self.grouped_futures[key] = (future.callback, [rule_future])\n else:\n self.grouped_futures[key][1].append(rule_future)\n\n def apply(\n self,\n ) -> Collection[Tuple[Callable[[GroupEvent, Sequence[RuleFuture]], None], List[RuleFuture]]]:\n # we should only apply rules on unresolved issues\n if not self.event.group.is_unresolved():\n return {}.values()\n\n self.grouped_futures.clear()\n rules = self.get_rules()\n snoozed_rules = RuleSnooze.objects.filter(rule__in=rules, user_id=None).values_list(\n \"rule\", flat=True\n )\n rule_statuses = self.bulk_get_rule_status(rules)\n for rule in rules:\n if rule.id not in snoozed_rules:\n self.apply_rule(rule, rule_statuses[rule.id])\n\n return self.grouped_futures.values()\n", "path": "src/sentry/rules/processor.py"}]}
| 3,866 | 459 |
gh_patches_debug_18891
|
rasdani/github-patches
|
git_diff
|
gratipay__gratipay.com-3198
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Fix `safely_reserve_a_username`
This function keeps using a cursor after an `IntegrityError` exception is raised, that's invalid and raises another exception. See https://github.com/gratipay/gratipay.com/pull/2752#issuecomment-65266388.
Fix `safely_reserve_a_username`
This function keeps using a cursor after an `IntegrityError` exception is raised, that's invalid and raises another exception. See https://github.com/gratipay/gratipay.com/pull/2752#issuecomment-65266388.
</issue>
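For context on why the loop breaks: once psycopg2 raises `IntegrityError`, the enclosing PostgreSQL transaction is aborted and every further statement on that cursor fails until a rollback happens. The standard way to retry inside the same transaction is a savepoint, which is also what the patch further down does. A minimal sketch of the pattern, using an illustrative table name rather than gratipay's actual schema:

```python
from psycopg2 import IntegrityError


def reserve_first_free(cursor, candidates):
    # One savepoint for the whole loop: a failed INSERT only aborts the
    # subtransaction, leaving the outer transaction (and the cursor) usable.
    cursor.execute("SAVEPOINT reserve_attempt")
    for candidate in candidates:
        try:
            cursor.execute(
                "INSERT INTO usernames (name) VALUES (%s) RETURNING name",
                (candidate,),
            )
        except IntegrityError:
            # Collision: roll back to the savepoint and try the next value.
            # The savepoint itself survives the rollback and can be reused.
            cursor.execute("ROLLBACK TO SAVEPOINT reserve_attempt")
            continue
        reserved = cursor.fetchone()[0]
        cursor.execute("RELEASE SAVEPOINT reserve_attempt")
        return reserved
    raise RuntimeError("ran out of candidate usernames")
```

Releasing the savepoint once a value has been reserved keeps the outer transaction clean if the helper is called again before commit.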
<code>
[start of gratipay/utils/username.py]
1 from psycopg2 import IntegrityError
2 import random
3
4
5 class FailedToReserveUsername(Exception): pass
6 class RanOutOfUsernameAttempts(Exception): pass
7
8
9 def gen_random_usernames():
10 """Yield random 12-hex-digit unicodes.
11 """
12 while 1:
13 yield hex(int(random.random() * 16**12))[2:].zfill(12).decode('ASCII')
14
15
16 def insert_into_participants(cursor, username):
17 return cursor.one( "INSERT INTO participants (username, username_lower) "
18 "VALUES (%s, %s) RETURNING username"
19 , (username, username.lower())
20 )
21
22
23 def safely_reserve_a_username(cursor, gen_usernames=gen_random_usernames,
24 reserve=insert_into_participants):
25 """Safely reserve a username.
26
27 :param cursor: a :py:class:`psycopg2.cursor` managed as a :py:mod:`postgres`
28 transaction
29 :param gen_usernames: a generator of usernames to try
30 :param reserve: a function that takes the cursor and does the SQL
31 stuff
32 :database: one ``INSERT`` on average
33 :returns: a 12-hex-digit unicode
34 :raises: :py:class:`FailedToReserveUsername` if no acceptable username is found
35 within 100 attempts, or :py:class:`RanOutOfUsernameAttempts` if the username
36 generator runs out first
37
38 The returned value is guaranteed to have been reserved in the database.
39
40 """
41 seatbelt = 0
42 for username in gen_usernames():
43 seatbelt += 1
44 if seatbelt > 100:
45 raise FailedToReserveUsername
46
47 try:
48 check = reserve(cursor, username)
49 except IntegrityError: # Collision, try again with another value.
50 continue
51 else:
52 assert check == username
53 break
54 else:
55 raise RanOutOfUsernameAttempts
56 return username
57
[end of gratipay/utils/username.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/gratipay/utils/username.py b/gratipay/utils/username.py
--- a/gratipay/utils/username.py
+++ b/gratipay/utils/username.py
@@ -38,6 +38,8 @@
The returned value is guaranteed to have been reserved in the database.
"""
+ cursor.execute("SAVEPOINT safely_reserve_a_username")
+
seatbelt = 0
for username in gen_usernames():
seatbelt += 1
@@ -47,10 +49,13 @@
try:
check = reserve(cursor, username)
except IntegrityError: # Collision, try again with another value.
+ cursor.execute("ROLLBACK TO safely_reserve_a_username")
continue
else:
assert check == username
break
else:
raise RanOutOfUsernameAttempts
+
+ cursor.execute("RELEASE safely_reserve_a_username")
return username
|
{"golden_diff": "diff --git a/gratipay/utils/username.py b/gratipay/utils/username.py\n--- a/gratipay/utils/username.py\n+++ b/gratipay/utils/username.py\n@@ -38,6 +38,8 @@\n The returned value is guaranteed to have been reserved in the database.\n \n \"\"\"\n+ cursor.execute(\"SAVEPOINT safely_reserve_a_username\")\n+\n seatbelt = 0\n for username in gen_usernames():\n seatbelt += 1\n@@ -47,10 +49,13 @@\n try:\n check = reserve(cursor, username)\n except IntegrityError: # Collision, try again with another value.\n+ cursor.execute(\"ROLLBACK TO safely_reserve_a_username\")\n continue\n else:\n assert check == username\n break\n else:\n raise RanOutOfUsernameAttempts\n+\n+ cursor.execute(\"RELEASE safely_reserve_a_username\")\n return username\n", "issue": "Fix `safely_reserve_a_username`\nThis function keeps using a cursor after an `IntegrityError` exception is raised, that's invalid and raises another exception. See https://github.com/gratipay/gratipay.com/pull/2752#issuecomment-65266388.\n\nFix `safely_reserve_a_username`\nThis function keeps using a cursor after an `IntegrityError` exception is raised, that's invalid and raises another exception. See https://github.com/gratipay/gratipay.com/pull/2752#issuecomment-65266388.\n\n", "before_files": [{"content": "from psycopg2 import IntegrityError\nimport random\n\n\nclass FailedToReserveUsername(Exception): pass\nclass RanOutOfUsernameAttempts(Exception): pass\n\n\ndef gen_random_usernames():\n \"\"\"Yield random 12-hex-digit unicodes.\n \"\"\"\n while 1:\n yield hex(int(random.random() * 16**12))[2:].zfill(12).decode('ASCII')\n\n\ndef insert_into_participants(cursor, username):\n return cursor.one( \"INSERT INTO participants (username, username_lower) \"\n \"VALUES (%s, %s) RETURNING username\"\n , (username, username.lower())\n )\n\n\ndef safely_reserve_a_username(cursor, gen_usernames=gen_random_usernames,\n reserve=insert_into_participants):\n \"\"\"Safely reserve a username.\n\n :param cursor: a :py:class:`psycopg2.cursor` managed as a :py:mod:`postgres`\n transaction\n :param gen_usernames: a generator of usernames to try\n :param reserve: a function that takes the cursor and does the SQL\n stuff\n :database: one ``INSERT`` on average\n :returns: a 12-hex-digit unicode\n :raises: :py:class:`FailedToReserveUsername` if no acceptable username is found\n within 100 attempts, or :py:class:`RanOutOfUsernameAttempts` if the username\n generator runs out first\n\n The returned value is guaranteed to have been reserved in the database.\n\n \"\"\"\n seatbelt = 0\n for username in gen_usernames():\n seatbelt += 1\n if seatbelt > 100:\n raise FailedToReserveUsername\n\n try:\n check = reserve(cursor, username)\n except IntegrityError: # Collision, try again with another value.\n continue\n else:\n assert check == username\n break\n else:\n raise RanOutOfUsernameAttempts\n return username\n", "path": "gratipay/utils/username.py"}]}
| 1,196 | 198 |
gh_patches_debug_26872
|
rasdani/github-patches
|
git_diff
|
frappe__frappe-22956
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Custom Page Redirection in Notifications
<!--
Welcome to the Frappe Framework issue tracker! Before creating an issue, please heed the following:
1. This tracker should only be used to report bugs and request features / enhancements to Frappe
- For questions and general support, refer to https://stackoverflow.com/questions/tagged/frappe
- For documentation issues, use https://frappeframework.com/docs/user/en or the developer cheetsheet https://frappeframework.com/docs/user/en/bench/resources/bench-commands-cheatsheet
2. Use the search function before creating a new issue. Duplicates will be closed and directed to
the original discussion.
3. When making a feature request, make sure to be as verbose as possible. The better you convey your message, the greater the drive to make it happen.
-->
Is your feature request related to a problem? Please describe.
Allow redirection to custom URL on notification click.
Describe the solution you'd like
Currently, in the Frappe framework, notifications only support a default redirection mechanism to a particular doctype.
We should have the ability to specify:

- a portal URL (a custom app like desk has some notifications)
- a Framework page
- a Kanban link or something like that
Suggested Changes:
1. We will add “Custom Link” in type
2. We will have another field called “Link Uri” where we will store the custom link.
3. We will check if it is “Custom Link”. If it is Custom Link then we will Open the specified link instead of Opening Reference Document
4. We will specify the
Example use case:
We are having a custom WhatsApp-like which is built inside the framework, we want the user to be directed to a custom page in Frappe when they click on a notification (which is linked to whatsapp message), rather than the standard WhatsApp message doctype.
</issue>
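At its core the request boils down to one branch when the notification email context is built: a stored custom link, if present, should win over the generated reference-document URL. A rough sketch of that branching is below; the field names mirror the patch shown further down, and `get_url_to_form` is passed in as a parameter rather than imported so the snippet stands alone.

```python
def build_email_args(doc, get_url_to_form):
    args = {
        "body_content": doc.subject,
        "description": doc.email_content,
    }
    if getattr(doc, "link", None):
        # A stored custom link wins: the notification opens that URL directly.
        args["doc_link"] = doc.link
    else:
        # Fall back to the usual reference-document form URL.
        args["document_type"] = doc.document_type
        args["document_name"] = doc.document_name
        args["doc_link"] = get_url_to_form(doc.document_type, doc.document_name)
    return args
```

Keeping the fallback identical to the current behaviour means notifications without a custom link are unaffected by the new field.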
<code>
[start of frappe/desk/doctype/notification_log/notification_log.py]
1 # Copyright (c) 2019, Frappe Technologies and contributors
2 # License: MIT. See LICENSE
3
4 import frappe
5 from frappe import _
6 from frappe.desk.doctype.notification_settings.notification_settings import (
7 is_email_notifications_enabled_for_type,
8 is_notifications_enabled,
9 )
10 from frappe.model.document import Document
11
12
13 class NotificationLog(Document):
14 # begin: auto-generated types
15 # This code is auto-generated. Do not modify anything in this block.
16
17 from typing import TYPE_CHECKING
18
19 if TYPE_CHECKING:
20 from frappe.types import DF
21
22 attached_file: DF.Code | None
23 document_name: DF.Data | None
24 document_type: DF.Link | None
25 email_content: DF.TextEditor | None
26 for_user: DF.Link | None
27 from_user: DF.Link | None
28 read: DF.Check
29 subject: DF.Text | None
30 type: DF.Literal["Mention", "Energy Point", "Assignment", "Share", "Alert"]
31 # end: auto-generated types
32 def after_insert(self):
33 frappe.publish_realtime("notification", after_commit=True, user=self.for_user)
34 set_notifications_as_unseen(self.for_user)
35 if is_email_notifications_enabled_for_type(self.for_user, self.type):
36 try:
37 send_notification_email(self)
38 except frappe.OutgoingEmailError:
39 self.log_error(_("Failed to send notification email"))
40
41 @staticmethod
42 def clear_old_logs(days=180):
43 from frappe.query_builder import Interval
44 from frappe.query_builder.functions import Now
45
46 table = frappe.qb.DocType("Notification Log")
47 frappe.db.delete(table, filters=(table.modified < (Now() - Interval(days=days))))
48
49
50 def get_permission_query_conditions(for_user):
51 if not for_user:
52 for_user = frappe.session.user
53
54 if for_user == "Administrator":
55 return
56
57 return f"""(`tabNotification Log`.for_user = {frappe.db.escape(for_user)})"""
58
59
60 def get_title(doctype, docname, title_field=None):
61 if not title_field:
62 title_field = frappe.get_meta(doctype).get_title_field()
63 return docname if title_field == "name" else frappe.db.get_value(doctype, docname, title_field)
64
65
66 def get_title_html(title):
67 return f'<b class="subject-title">{title}</b>'
68
69
70 def enqueue_create_notification(users: list[str] | str, doc: dict):
71 """Send notification to users.
72
73 users: list of user emails or string of users with comma separated emails
74 doc: contents of `Notification` doc
75 """
76
77 # During installation of new site, enqueue_create_notification tries to connect to Redis.
78 # This breaks new site creation if Redis server is not running.
79 # We do not need any notifications in fresh installation
80 if frappe.flags.in_install:
81 return
82
83 doc = frappe._dict(doc)
84
85 if isinstance(users, str):
86 users = [user.strip() for user in users.split(",") if user.strip()]
87 users = list(set(users))
88
89 frappe.enqueue(
90 "frappe.desk.doctype.notification_log.notification_log.make_notification_logs",
91 doc=doc,
92 users=users,
93 now=frappe.flags.in_test,
94 )
95
96
97 def make_notification_logs(doc, users):
98 for user in _get_user_ids(users):
99 notification = frappe.new_doc("Notification Log")
100 notification.update(doc)
101 notification.for_user = user
102 if (
103 notification.for_user != notification.from_user
104 or doc.type == "Energy Point"
105 or doc.type == "Alert"
106 ):
107 notification.insert(ignore_permissions=True)
108
109
110 def _get_user_ids(user_emails):
111 user_names = frappe.db.get_values(
112 "User", {"enabled": 1, "email": ("in", user_emails)}, "name", pluck=True
113 )
114 return [user for user in user_names if is_notifications_enabled(user)]
115
116
117 def send_notification_email(doc):
118
119 if doc.type == "Energy Point" and doc.email_content is None:
120 return
121
122 from frappe.utils import get_url_to_form, strip_html
123
124 email = frappe.db.get_value("User", doc.for_user, "email")
125 if not email:
126 return
127
128 doc_link = get_url_to_form(doc.document_type, doc.document_name)
129 header = get_email_header(doc)
130 email_subject = strip_html(doc.subject)
131
132 frappe.sendmail(
133 recipients=email,
134 subject=email_subject,
135 template="new_notification",
136 args = {
137 "body_content": doc.subject,
138 "description": doc.email_content,
139 "document_type": doc.document_type,
140 "document_name": doc.document_name,
141 "doc_link": doc_link,
142 },
143 header=[header, "orange"],
144 now=frappe.flags.in_test,
145 )
146
147
148 def get_email_header(doc):
149 docname = doc.document_name
150 header_map = {
151 "Default": _("New Notification"),
152 "Mention": _("New Mention on {0}").format(docname),
153 "Assignment": _("Assignment Update on {0}").format(docname),
154 "Share": _("New Document Shared {0}").format(docname),
155 "Energy Point": _("Energy Point Update on {0}").format(docname),
156 }
157
158 return header_map[doc.type or "Default"]
159
160
161 @frappe.whitelist()
162 def get_notification_logs(limit=20):
163 notification_logs = frappe.db.get_list(
164 "Notification Log", fields=["*"], limit=limit, order_by="modified desc"
165 )
166
167 users = [log.from_user for log in notification_logs]
168 users = [*set(users)] # remove duplicates
169 user_info = frappe._dict()
170
171 for user in users:
172 frappe.utils.add_user_info(user, user_info)
173
174 return {"notification_logs": notification_logs, "user_info": user_info}
175
176
177 @frappe.whitelist()
178 def mark_all_as_read():
179 unread_docs_list = frappe.get_all(
180 "Notification Log", filters={"read": 0, "for_user": frappe.session.user}
181 )
182 unread_docnames = [doc.name for doc in unread_docs_list]
183 if unread_docnames:
184 filters = {"name": ["in", unread_docnames]}
185 frappe.db.set_value("Notification Log", filters, "read", 1, update_modified=False)
186
187
188 @frappe.whitelist()
189 def mark_as_read(docname: str):
190 if frappe.flags.read_only:
191 return
192
193 if docname:
194 frappe.db.set_value("Notification Log", str(docname), "read", 1, update_modified=False)
195
196
197 @frappe.whitelist()
198 def trigger_indicator_hide():
199 frappe.publish_realtime("indicator_hide", user=frappe.session.user)
200
201
202 def set_notifications_as_unseen(user):
203 try:
204 frappe.db.set_value("Notification Settings", user, "seen", 0, update_modified=False)
205 except frappe.DoesNotExistError:
206 return
207
[end of frappe/desk/doctype/notification_log/notification_log.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/frappe/desk/doctype/notification_log/notification_log.py b/frappe/desk/doctype/notification_log/notification_log.py
--- a/frappe/desk/doctype/notification_log/notification_log.py
+++ b/frappe/desk/doctype/notification_log/notification_log.py
@@ -25,6 +25,7 @@
email_content: DF.TextEditor | None
for_user: DF.Link | None
from_user: DF.Link | None
+ link: DF.Data | None
read: DF.Check
subject: DF.Text | None
type: DF.Literal["Mention", "Energy Point", "Assignment", "Share", "Alert"]
@@ -125,21 +126,24 @@
if not email:
return
- doc_link = get_url_to_form(doc.document_type, doc.document_name)
header = get_email_header(doc)
email_subject = strip_html(doc.subject)
+ args = {
+ "body_content": doc.subject,
+ "description": doc.email_content,
+ }
+ if doc.link:
+ args["doc_link"] = doc.link
+ else:
+ args["document_type"] = doc.document_type
+ args["document_name"] = doc.document_name
+ args["doc_link"] = get_url_to_form(doc.document_type, doc.document_name)
frappe.sendmail(
recipients=email,
subject=email_subject,
template="new_notification",
- args = {
- "body_content": doc.subject,
- "description": doc.email_content,
- "document_type": doc.document_type,
- "document_name": doc.document_name,
- "doc_link": doc_link,
- },
+ args=args,
header=[header, "orange"],
now=frappe.flags.in_test,
)
|
{"golden_diff": "diff --git a/frappe/desk/doctype/notification_log/notification_log.py b/frappe/desk/doctype/notification_log/notification_log.py\n--- a/frappe/desk/doctype/notification_log/notification_log.py\n+++ b/frappe/desk/doctype/notification_log/notification_log.py\n@@ -25,6 +25,7 @@\n \t\temail_content: DF.TextEditor | None\n \t\tfor_user: DF.Link | None\n \t\tfrom_user: DF.Link | None\n+\t\tlink: DF.Data | None\n \t\tread: DF.Check\n \t\tsubject: DF.Text | None\n \t\ttype: DF.Literal[\"Mention\", \"Energy Point\", \"Assignment\", \"Share\", \"Alert\"]\n@@ -125,21 +126,24 @@\n \tif not email:\n \t\treturn\n \n-\tdoc_link = get_url_to_form(doc.document_type, doc.document_name)\n \theader = get_email_header(doc)\n \temail_subject = strip_html(doc.subject)\n+\targs = {\n+\t\t\"body_content\": doc.subject,\n+\t\t\"description\": doc.email_content,\n+\t}\n+\tif doc.link:\n+\t\targs[\"doc_link\"] = doc.link\n+\telse:\n+\t\targs[\"document_type\"] = doc.document_type\n+\t\targs[\"document_name\"] = doc.document_name\n+\t\targs[\"doc_link\"] = get_url_to_form(doc.document_type, doc.document_name)\n \n \tfrappe.sendmail(\n \t\trecipients=email,\n \t\tsubject=email_subject,\n \t\ttemplate=\"new_notification\",\n-\t\targs = {\n-\t\t\t\"body_content\": doc.subject,\n-\t\t\t\"description\": doc.email_content,\n-\t\t\t\"document_type\": doc.document_type,\n-\t\t\t\"document_name\": doc.document_name,\n-\t\t\t\"doc_link\": doc_link,\n-\t\t},\n+\t\targs=args,\n \t\theader=[header, \"orange\"],\n \t\tnow=frappe.flags.in_test,\n \t)\n", "issue": "Custom Page Redirection in Notifications\n<!--\r\nWelcome to the Frappe Framework issue tracker! Before creating an issue, please heed the following:\r\n\r\n1. This tracker should only be used to report bugs and request features / enhancements to Frappe\r\n - For questions and general support, refer to https://stackoverflow.com/questions/tagged/frappe\r\n - For documentation issues, use https://frappeframework.com/docs/user/en or the developer cheetsheet https://frappeframework.com/docs/user/en/bench/resources/bench-commands-cheatsheet\r\n2. Use the search function before creating a new issue. Duplicates will be closed and directed to\r\n the original discussion.\r\n3. When making a feature request, make sure to be as verbose as possible. The better you convey your message, the greater the drive to make it happen.\r\n-->\r\n\r\nIs your feature request related to a problem? Please describe.\r\nAllow redirection to custom URL on notification click.\r\n\r\nDescribe the solution you'd like\r\nCurrently, in the Frappe framework, notifications only support a default redirection mechanism to a particular doctype.\r\n\r\nWe should have ability to specify:\r\n\r\nportal URL (custom app like desk has some notification)\r\nFramework page \r\nKanban link or something like that\r\n\r\nSuggested Changes:\r\n\r\n1. We will add \u201cCustom Link\u201d in type\r\n2. We will have another field called \u201cLink Uri\u201d where we will store the custom link.\r\n3. We will check if it is \u201cCustom Link\u201d. If it is Custom Link then we will Open the specified link instead of Opening Reference Document\r\n4. 
We will specify the\r\n\r\nExample use case:\r\n\r\nWe are having a custom WhatsApp-like which is built inside the framework, we want the user to be directed to a custom page in Frappe when they click on a notification (which is linked to whatsapp message), rather than the standard WhatsApp message doctype.\r\n\n", "before_files": [{"content": "# Copyright (c) 2019, Frappe Technologies and contributors\n# License: MIT. See LICENSE\n\nimport frappe\nfrom frappe import _\nfrom frappe.desk.doctype.notification_settings.notification_settings import (\n\tis_email_notifications_enabled_for_type,\n\tis_notifications_enabled,\n)\nfrom frappe.model.document import Document\n\n\nclass NotificationLog(Document):\n\t# begin: auto-generated types\n\t# This code is auto-generated. Do not modify anything in this block.\n\n\tfrom typing import TYPE_CHECKING\n\n\tif TYPE_CHECKING:\n\t\tfrom frappe.types import DF\n\n\t\tattached_file: DF.Code | None\n\t\tdocument_name: DF.Data | None\n\t\tdocument_type: DF.Link | None\n\t\temail_content: DF.TextEditor | None\n\t\tfor_user: DF.Link | None\n\t\tfrom_user: DF.Link | None\n\t\tread: DF.Check\n\t\tsubject: DF.Text | None\n\t\ttype: DF.Literal[\"Mention\", \"Energy Point\", \"Assignment\", \"Share\", \"Alert\"]\n\t# end: auto-generated types\n\tdef after_insert(self):\n\t\tfrappe.publish_realtime(\"notification\", after_commit=True, user=self.for_user)\n\t\tset_notifications_as_unseen(self.for_user)\n\t\tif is_email_notifications_enabled_for_type(self.for_user, self.type):\n\t\t\ttry:\n\t\t\t\tsend_notification_email(self)\n\t\t\texcept frappe.OutgoingEmailError:\n\t\t\t\tself.log_error(_(\"Failed to send notification email\"))\n\n\t@staticmethod\n\tdef clear_old_logs(days=180):\n\t\tfrom frappe.query_builder import Interval\n\t\tfrom frappe.query_builder.functions import Now\n\n\t\ttable = frappe.qb.DocType(\"Notification Log\")\n\t\tfrappe.db.delete(table, filters=(table.modified < (Now() - Interval(days=days))))\n\n\ndef get_permission_query_conditions(for_user):\n\tif not for_user:\n\t\tfor_user = frappe.session.user\n\n\tif for_user == \"Administrator\":\n\t\treturn\n\n\treturn f\"\"\"(`tabNotification Log`.for_user = {frappe.db.escape(for_user)})\"\"\"\n\n\ndef get_title(doctype, docname, title_field=None):\n\tif not title_field:\n\t\ttitle_field = frappe.get_meta(doctype).get_title_field()\n\treturn docname if title_field == \"name\" else frappe.db.get_value(doctype, docname, title_field)\n\n\ndef get_title_html(title):\n\treturn f'<b class=\"subject-title\">{title}</b>'\n\n\ndef enqueue_create_notification(users: list[str] | str, doc: dict):\n\t\"\"\"Send notification to users.\n\n\tusers: list of user emails or string of users with comma separated emails\n\tdoc: contents of `Notification` doc\n\t\"\"\"\n\n\t# During installation of new site, enqueue_create_notification tries to connect to Redis.\n\t# This breaks new site creation if Redis server is not running.\n\t# We do not need any notifications in fresh installation\n\tif frappe.flags.in_install:\n\t\treturn\n\n\tdoc = frappe._dict(doc)\n\n\tif isinstance(users, str):\n\t\tusers = [user.strip() for user in users.split(\",\") if user.strip()]\n\tusers = list(set(users))\n\n\tfrappe.enqueue(\n\t\t\"frappe.desk.doctype.notification_log.notification_log.make_notification_logs\",\n\t\tdoc=doc,\n\t\tusers=users,\n\t\tnow=frappe.flags.in_test,\n\t)\n\n\ndef make_notification_logs(doc, users):\n\tfor user in _get_user_ids(users):\n\t\tnotification = frappe.new_doc(\"Notification 
Log\")\n\t\tnotification.update(doc)\n\t\tnotification.for_user = user\n\t\tif (\n\t\t\tnotification.for_user != notification.from_user\n\t\t\tor doc.type == \"Energy Point\"\n\t\t\tor doc.type == \"Alert\"\n\t\t):\n\t\t\tnotification.insert(ignore_permissions=True)\n\n\ndef _get_user_ids(user_emails):\n\tuser_names = frappe.db.get_values(\n\t\t\"User\", {\"enabled\": 1, \"email\": (\"in\", user_emails)}, \"name\", pluck=True\n\t)\n\treturn [user for user in user_names if is_notifications_enabled(user)]\n\n\ndef send_notification_email(doc):\n\n\tif doc.type == \"Energy Point\" and doc.email_content is None:\n\t\treturn\n\n\tfrom frappe.utils import get_url_to_form, strip_html\n\n\temail = frappe.db.get_value(\"User\", doc.for_user, \"email\")\n\tif not email:\n\t\treturn\n\n\tdoc_link = get_url_to_form(doc.document_type, doc.document_name)\n\theader = get_email_header(doc)\n\temail_subject = strip_html(doc.subject)\n\n\tfrappe.sendmail(\n\t\trecipients=email,\n\t\tsubject=email_subject,\n\t\ttemplate=\"new_notification\",\n\t\targs = {\n\t\t\t\"body_content\": doc.subject,\n\t\t\t\"description\": doc.email_content,\n\t\t\t\"document_type\": doc.document_type,\n\t\t\t\"document_name\": doc.document_name,\n\t\t\t\"doc_link\": doc_link,\n\t\t},\n\t\theader=[header, \"orange\"],\n\t\tnow=frappe.flags.in_test,\n\t)\n\n\ndef get_email_header(doc):\n\tdocname = doc.document_name\n\theader_map = {\n\t\t\"Default\": _(\"New Notification\"),\n\t\t\"Mention\": _(\"New Mention on {0}\").format(docname),\n\t\t\"Assignment\": _(\"Assignment Update on {0}\").format(docname),\n\t\t\"Share\": _(\"New Document Shared {0}\").format(docname),\n\t\t\"Energy Point\": _(\"Energy Point Update on {0}\").format(docname),\n\t}\n\n\treturn header_map[doc.type or \"Default\"]\n\n\[email protected]()\ndef get_notification_logs(limit=20):\n\tnotification_logs = frappe.db.get_list(\n\t\t\"Notification Log\", fields=[\"*\"], limit=limit, order_by=\"modified desc\"\n\t)\n\n\tusers = [log.from_user for log in notification_logs]\n\tusers = [*set(users)] # remove duplicates\n\tuser_info = frappe._dict()\n\n\tfor user in users:\n\t\tfrappe.utils.add_user_info(user, user_info)\n\n\treturn {\"notification_logs\": notification_logs, \"user_info\": user_info}\n\n\[email protected]()\ndef mark_all_as_read():\n\tunread_docs_list = frappe.get_all(\n\t\t\"Notification Log\", filters={\"read\": 0, \"for_user\": frappe.session.user}\n\t)\n\tunread_docnames = [doc.name for doc in unread_docs_list]\n\tif unread_docnames:\n\t\tfilters = {\"name\": [\"in\", unread_docnames]}\n\t\tfrappe.db.set_value(\"Notification Log\", filters, \"read\", 1, update_modified=False)\n\n\[email protected]()\ndef mark_as_read(docname: str):\n\tif frappe.flags.read_only:\n\t\treturn\n\n\tif docname:\n\t\tfrappe.db.set_value(\"Notification Log\", str(docname), \"read\", 1, update_modified=False)\n\n\[email protected]()\ndef trigger_indicator_hide():\n\tfrappe.publish_realtime(\"indicator_hide\", user=frappe.session.user)\n\n\ndef set_notifications_as_unseen(user):\n\ttry:\n\t\tfrappe.db.set_value(\"Notification Settings\", user, \"seen\", 0, update_modified=False)\n\texcept frappe.DoesNotExistError:\n\t\treturn\n", "path": "frappe/desk/doctype/notification_log/notification_log.py"}]}
| 3,007 | 399 |
gh_patches_debug_32331
|
rasdani/github-patches
|
git_diff
|
lk-geimfari__mimesis-435
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Arguments shadow builtin names
After we have updated `flake8-builtins` we got several new errors.
```
=================================== FAILURES ===================================
_________________________________ FLAKE8-check _________________________________
/home/travis/build/lk-geimfari/mimesis/mimesis/providers/business.py:47:5: A002 "copyright" is used as an argument and thus shadows a python builtin, consider renaming the argument
_________________________________ FLAKE8-check _________________________________
/home/travis/build/lk-geimfari/mimesis/mimesis/providers/cryptographic.py:37:5: A002 "hash" is used as an argument and thus shadows a python builtin, consider renaming the argument
/home/travis/build/lk-geimfari/mimesis/mimesis/providers/cryptographic.py:50:5: A002 "bytes" is used as an argument and thus shadows a python builtin, consider renaming the argument
```
Arguments shadow builtin names
After we have updated `flake8-builtins` we got several new errors.
```
=================================== FAILURES ===================================
_________________________________ FLAKE8-check _________________________________
/home/travis/build/lk-geimfari/mimesis/mimesis/providers/business.py:47:5: A002 "copyright" is used as an argument and thus shadows a python builtin, consider renaming the argument
_________________________________ FLAKE8-check _________________________________
/home/travis/build/lk-geimfari/mimesis/mimesis/providers/cryptographic.py:37:5: A002 "hash" is used as an argument and thus shadows a python builtin, consider renaming the argument
/home/travis/build/lk-geimfari/mimesis/mimesis/providers/cryptographic.py:50:5: A002 "bytes" is used as an argument and thus shadows a python builtin, consider renaming the argument
```
</issue>
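There are two conventional ways to deal with A002: rename the offending name, or keep the public API and suppress the check on the definition line, which is what the patch below does with `# noqa: A002`. A small illustrative sketch of both options; the class here is a stand-in, not the mimesis source.

```python
import hashlib
import os


class CryptoSketch:
    # Option 1: keep the builtin-shadowing name, silence the check in place.
    def hash(self, payload: str = "") -> str:  # noqa: A002
        return hashlib.sha256(payload.encode()).hexdigest()

    def bytes(self, entropy: int = 32) -> bytes:  # noqa: A002
        return os.urandom(entropy)

    # Option 2: rename, so there is nothing to suppress.
    def random_bytes(self, entropy: int = 32) -> bytes:
        return os.urandom(entropy)
```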
<code>
[start of mimesis/providers/business.py]
1 """Business data provider."""
2
3 from mimesis.data import (CRYPTOCURRENCY_ISO_CODES, CRYPTOCURRENCY_SYMBOLS,
4 CURRENCY_ISO_CODES, CURRENCY_SYMBOLS)
5 from mimesis.providers.base import BaseDataProvider
6 from mimesis.utils import pull
7
8 __all__ = ['Business']
9
10
11 class Business(BaseDataProvider):
12 """Class for generating data for business."""
13
14 def __init__(self, *args, **kwargs):
15 """Initialize attributes.
16
17 :param locale: Current locale.
18 """
19 super().__init__(*args, **kwargs)
20 self._data = pull('business.json', self.locale)
21
22 def company(self) -> str:
23 """Get a random company name.
24
25 :return: Company name.
26
27 :Example:
28 Gamma Systems.
29 """
30 return self.random.choice(
31 self._data['company']['name'])
32
33 def company_type(self, abbr: bool = False) -> str:
34 """Get a random type of business entity.
35
36 :param abbr: Abbreviated company type.
37 :return: Types of business entity.
38
39 :Example:
40 Incorporated.
41 """
42 return self.random.choice(
43 self._data['company']['type'].get(
44 'abbr' if abbr else 'title'),
45 )
46
47 def copyright(self) -> str:
48 """Generate a random copyright.
49
50 :return: Copyright of company.
51
52 :Example:
53 © Komercia, Inc.
54 """
55 return '© {}, {}'.format(
56 self.company(),
57 self.company_type(abbr=True),
58 )
59
60 def currency_iso_code(self) -> str:
61 """Get code of the currency.
62
63 :return: Currency code.
64
65 :Example:
66 RUR.
67 """
68 return self.random.choice(CURRENCY_ISO_CODES)
69
70 def cryptocurrency_iso_code(self) -> str:
71 """Get symbol of random cryptocurrency.
72
73 :return: Symbol of cryptocurrency.
74 """
75 return self.random.choice(CRYPTOCURRENCY_ISO_CODES)
76
77 def currency_symbol(self):
78 """Get a currency symbol for current locale.
79
80 :return: Currency symbol.
81 """
82 return CURRENCY_SYMBOLS[self.locale]
83
84 def cryptocurrency_symbol(self) -> str:
85 """Get a cryptocurrency symbol.
86
87 :return: Symbol of cryptocurrency.
88
89 :Example:
90 Ƀ
91 """
92 return self.random.choice(CRYPTOCURRENCY_SYMBOLS)
93
94 def price(self, minimum: float = 10.00,
95 maximum: float = 1000.00) -> str:
96 """Generate a random price.
97
98 :param minimum: Max value of price.
99 :param maximum: Min value of price.
100 :return: Price.
101
102 :Example:
103 599.99 $.
104 """
105 price = self.random.uniform(minimum, maximum, precision=2)
106 return '{0} {1}'.format(price, self.currency_symbol())
107
108 def price_in_btc(self, minimum: float = 0, maximum: float = 2) -> str:
109 """Generate random price in BTC.
110
111 :param minimum: Minimum value of price
112 :param maximum: Maximum value of price.
113 :return: Price in BTC.
114
115 :Example:
116 0.5885238 BTC
117 """
118 return '{} BTC'.format(
119 self.random.uniform(
120 minimum,
121 maximum,
122 precision=7,
123 ),
124 )
125
[end of mimesis/providers/business.py]
[start of mimesis/providers/cryptographic.py]
1 """Cryptographic data provider."""
2
3 import hashlib
4 import string
5 import uuid
6 from typing import Optional
7
8 from mimesis.enums import Algorithm
9 from mimesis.providers.base import BaseDataProvider
10 from mimesis.providers.text import Text
11 from mimesis.typing import Bytes
12
13 __all__ = ['Cryptographic']
14
15
16 class Cryptographic(BaseDataProvider):
17 """Class that provides cryptographic data."""
18
19 def __init__(self, *args, **kwargs) -> None:
20 """Initialize attributes.
21
22 :param seed: Seed.
23 """
24 super().__init__(*args, **kwargs)
25 self.__words = Text('en')._data['words']
26 self.__chars = string.ascii_letters + string.digits + string.punctuation
27
28 def uuid(self, version: Optional[int] = None) -> str:
29 """Generate random UUID.
30
31 :param version: UUID version.
32 :return: UUID
33 """
34 bits = self.random.getrandbits(128)
35 return str(uuid.UUID(int=bits, version=version))
36
37 def hash(self, algorithm: Optional[Algorithm] = None) -> str:
38 """Generate random hash.
39
40 :param algorithm: Enum object ``Algorithm``.
41 :return: Hash.
42 :raises NonEnumerableError: if algorithm is not supported.
43 """
44 key = self._validate_enum(algorithm, Algorithm)
45
46 if hasattr(hashlib, key):
47 fn = getattr(hashlib, key)
48 return fn(self.uuid().encode()).hexdigest()
49
50 def bytes(self, entropy: int = 32) -> Bytes:
51 """Generate byte string containing *entropy* bytes.
52
53 The string has *entropy* random bytes, each byte
54 converted to two hex digits.
55
56 :param entropy: Number of bytes.
57 :return: Bytes.
58 :rtype: bytes
59 """
60 return bytes(self.random.getrandbits(8)
61 for _ in range(entropy))
62
63 def token(self, entropy: int = 32) -> str:
64 """Generate hexadecimal string.
65
66 :param entropy: Number of bytes.
67 :return: Token.
68 """
69 return self.bytes(entropy).hex()
70
71 def salt(self, size: int = 16) -> str:
72 """Generate salt chars (not cryptographically safe).
73
74 :param size: Salt size.
75 :return: Salt.
76 """
77 char_sequence = [
78 self.random.choice(self.__chars)
79 for _ in range(size)
80 ]
81 return ''.join(char_sequence)
82
83 def mnemonic_phrase(self, length: int = 12) -> str:
84 """Generate pseudo mnemonic phrase.
85
86 :param length: Number of words.
87 :return: Mnemonic code.
88 """
89 words = self.__words['normal']
90 return ' '.join(self.random.choice(words) for _ in range(length))
91
[end of mimesis/providers/cryptographic.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/mimesis/providers/business.py b/mimesis/providers/business.py
--- a/mimesis/providers/business.py
+++ b/mimesis/providers/business.py
@@ -27,8 +27,7 @@
:Example:
Gamma Systems.
"""
- return self.random.choice(
- self._data['company']['name'])
+ return self.random.choice(self._data['company']['name'])
def company_type(self, abbr: bool = False) -> str:
"""Get a random type of business entity.
@@ -40,11 +39,10 @@
Incorporated.
"""
return self.random.choice(
- self._data['company']['type'].get(
- 'abbr' if abbr else 'title'),
+ self._data['company']['type'].get('abbr' if abbr else 'title'),
)
- def copyright(self) -> str:
+ def copyright(self) -> str: # noqa: A002
"""Generate a random copyright.
:return: Copyright of company.
diff --git a/mimesis/providers/cryptographic.py b/mimesis/providers/cryptographic.py
--- a/mimesis/providers/cryptographic.py
+++ b/mimesis/providers/cryptographic.py
@@ -34,7 +34,7 @@
bits = self.random.getrandbits(128)
return str(uuid.UUID(int=bits, version=version))
- def hash(self, algorithm: Optional[Algorithm] = None) -> str:
+ def hash(self, algorithm: Optional[Algorithm] = None) -> str: # noqa: A002
"""Generate random hash.
:param algorithm: Enum object ``Algorithm``.
@@ -47,7 +47,7 @@
fn = getattr(hashlib, key)
return fn(self.uuid().encode()).hexdigest()
- def bytes(self, entropy: int = 32) -> Bytes:
+ def bytes(self, entropy: int = 32) -> Bytes: # noqa: A002
"""Generate byte string containing *entropy* bytes.
The string has *entropy* random bytes, each byte
|
{"golden_diff": "diff --git a/mimesis/providers/business.py b/mimesis/providers/business.py\n--- a/mimesis/providers/business.py\n+++ b/mimesis/providers/business.py\n@@ -27,8 +27,7 @@\n :Example:\n Gamma Systems.\n \"\"\"\n- return self.random.choice(\n- self._data['company']['name'])\n+ return self.random.choice(self._data['company']['name'])\n \n def company_type(self, abbr: bool = False) -> str:\n \"\"\"Get a random type of business entity.\n@@ -40,11 +39,10 @@\n Incorporated.\n \"\"\"\n return self.random.choice(\n- self._data['company']['type'].get(\n- 'abbr' if abbr else 'title'),\n+ self._data['company']['type'].get('abbr' if abbr else 'title'),\n )\n \n- def copyright(self) -> str:\n+ def copyright(self) -> str: # noqa: A002\n \"\"\"Generate a random copyright.\n \n :return: Copyright of company.\ndiff --git a/mimesis/providers/cryptographic.py b/mimesis/providers/cryptographic.py\n--- a/mimesis/providers/cryptographic.py\n+++ b/mimesis/providers/cryptographic.py\n@@ -34,7 +34,7 @@\n bits = self.random.getrandbits(128)\n return str(uuid.UUID(int=bits, version=version))\n \n- def hash(self, algorithm: Optional[Algorithm] = None) -> str:\n+ def hash(self, algorithm: Optional[Algorithm] = None) -> str: # noqa: A002\n \"\"\"Generate random hash.\n \n :param algorithm: Enum object ``Algorithm``.\n@@ -47,7 +47,7 @@\n fn = getattr(hashlib, key)\n return fn(self.uuid().encode()).hexdigest()\n \n- def bytes(self, entropy: int = 32) -> Bytes:\n+ def bytes(self, entropy: int = 32) -> Bytes: # noqa: A002\n \"\"\"Generate byte string containing *entropy* bytes.\n \n The string has *entropy* random bytes, each byte\n", "issue": "Arguments shadow builtin names\nAfter we have updated `flake8-builtins` we got new several new errors.\r\n\r\n```\r\n=================================== FAILURES ===================================\r\n_________________________________ FLAKE8-check _________________________________\r\n/home/travis/build/lk-geimfari/mimesis/mimesis/providers/business.py:47:5: A002 \"copyright\" is used as an argument and thus shadows a python builtin, consider renaming the argument\r\n_________________________________ FLAKE8-check _________________________________\r\n/home/travis/build/lk-geimfari/mimesis/mimesis/providers/cryptographic.py:37:5: A002 \"hash\" is used as an argument and thus shadows a python builtin, consider renaming the argument\r\n/home/travis/build/lk-geimfari/mimesis/mimesis/providers/cryptographic.py:50:5: A002 \"bytes\" is used as an argument and thus shadows a python builtin, consider renaming the argument\r\n```\nArguments shadow builtin names\nAfter we have updated `flake8-builtins` we got new several new errors.\r\n\r\n```\r\n=================================== FAILURES ===================================\r\n_________________________________ FLAKE8-check _________________________________\r\n/home/travis/build/lk-geimfari/mimesis/mimesis/providers/business.py:47:5: A002 \"copyright\" is used as an argument and thus shadows a python builtin, consider renaming the argument\r\n_________________________________ FLAKE8-check _________________________________\r\n/home/travis/build/lk-geimfari/mimesis/mimesis/providers/cryptographic.py:37:5: A002 \"hash\" is used as an argument and thus shadows a python builtin, consider renaming the argument\r\n/home/travis/build/lk-geimfari/mimesis/mimesis/providers/cryptographic.py:50:5: A002 \"bytes\" is used as an argument and thus shadows a python builtin, consider renaming the argument\r\n```\n", "before_files": [{"content": 
"\"\"\"Business data provider.\"\"\"\n\nfrom mimesis.data import (CRYPTOCURRENCY_ISO_CODES, CRYPTOCURRENCY_SYMBOLS,\n CURRENCY_ISO_CODES, CURRENCY_SYMBOLS)\nfrom mimesis.providers.base import BaseDataProvider\nfrom mimesis.utils import pull\n\n__all__ = ['Business']\n\n\nclass Business(BaseDataProvider):\n \"\"\"Class for generating data for business.\"\"\"\n\n def __init__(self, *args, **kwargs):\n \"\"\"Initialize attributes.\n\n :param locale: Current locale.\n \"\"\"\n super().__init__(*args, **kwargs)\n self._data = pull('business.json', self.locale)\n\n def company(self) -> str:\n \"\"\"Get a random company name.\n\n :return: Company name.\n\n :Example:\n Gamma Systems.\n \"\"\"\n return self.random.choice(\n self._data['company']['name'])\n\n def company_type(self, abbr: bool = False) -> str:\n \"\"\"Get a random type of business entity.\n\n :param abbr: Abbreviated company type.\n :return: Types of business entity.\n\n :Example:\n Incorporated.\n \"\"\"\n return self.random.choice(\n self._data['company']['type'].get(\n 'abbr' if abbr else 'title'),\n )\n\n def copyright(self) -> str:\n \"\"\"Generate a random copyright.\n\n :return: Copyright of company.\n\n :Example:\n \u00a9 Komercia, Inc.\n \"\"\"\n return '\u00a9 {}, {}'.format(\n self.company(),\n self.company_type(abbr=True),\n )\n\n def currency_iso_code(self) -> str:\n \"\"\"Get code of the currency.\n\n :return: Currency code.\n\n :Example:\n RUR.\n \"\"\"\n return self.random.choice(CURRENCY_ISO_CODES)\n\n def cryptocurrency_iso_code(self) -> str:\n \"\"\"Get symbol of random cryptocurrency.\n\n :return: Symbol of cryptocurrency.\n \"\"\"\n return self.random.choice(CRYPTOCURRENCY_ISO_CODES)\n\n def currency_symbol(self):\n \"\"\"Get a currency symbol for current locale.\n\n :return: Currency symbol.\n \"\"\"\n return CURRENCY_SYMBOLS[self.locale]\n\n def cryptocurrency_symbol(self) -> str:\n \"\"\"Get a cryptocurrency symbol.\n\n :return: Symbol of cryptocurrency.\n\n :Example:\n \u0243\n \"\"\"\n return self.random.choice(CRYPTOCURRENCY_SYMBOLS)\n\n def price(self, minimum: float = 10.00,\n maximum: float = 1000.00) -> str:\n \"\"\"Generate a random price.\n\n :param minimum: Max value of price.\n :param maximum: Min value of price.\n :return: Price.\n\n :Example:\n 599.99 $.\n \"\"\"\n price = self.random.uniform(minimum, maximum, precision=2)\n return '{0} {1}'.format(price, self.currency_symbol())\n\n def price_in_btc(self, minimum: float = 0, maximum: float = 2) -> str:\n \"\"\"Generate random price in BTC.\n\n :param minimum: Minimum value of price\n :param maximum: Maximum value of price.\n :return: Price in BTC.\n\n :Example:\n 0.5885238 BTC\n \"\"\"\n return '{} BTC'.format(\n self.random.uniform(\n minimum,\n maximum,\n precision=7,\n ),\n )\n", "path": "mimesis/providers/business.py"}, {"content": "\"\"\"Cryptographic data provider.\"\"\"\n\nimport hashlib\nimport string\nimport uuid\nfrom typing import Optional\n\nfrom mimesis.enums import Algorithm\nfrom mimesis.providers.base import BaseDataProvider\nfrom mimesis.providers.text import Text\nfrom mimesis.typing import Bytes\n\n__all__ = ['Cryptographic']\n\n\nclass Cryptographic(BaseDataProvider):\n \"\"\"Class that provides cryptographic data.\"\"\"\n\n def __init__(self, *args, **kwargs) -> None:\n \"\"\"Initialize attributes.\n\n :param seed: Seed.\n \"\"\"\n super().__init__(*args, **kwargs)\n self.__words = Text('en')._data['words']\n self.__chars = string.ascii_letters + string.digits + string.punctuation\n\n def uuid(self, version: Optional[int] = 
None) -> str:\n \"\"\"Generate random UUID.\n\n :param version: UUID version.\n :return: UUID\n \"\"\"\n bits = self.random.getrandbits(128)\n return str(uuid.UUID(int=bits, version=version))\n\n def hash(self, algorithm: Optional[Algorithm] = None) -> str:\n \"\"\"Generate random hash.\n\n :param algorithm: Enum object ``Algorithm``.\n :return: Hash.\n :raises NonEnumerableError: if algorithm is not supported.\n \"\"\"\n key = self._validate_enum(algorithm, Algorithm)\n\n if hasattr(hashlib, key):\n fn = getattr(hashlib, key)\n return fn(self.uuid().encode()).hexdigest()\n\n def bytes(self, entropy: int = 32) -> Bytes:\n \"\"\"Generate byte string containing *entropy* bytes.\n\n The string has *entropy* random bytes, each byte\n converted to two hex digits.\n\n :param entropy: Number of bytes.\n :return: Bytes.\n :rtype: bytes\n \"\"\"\n return bytes(self.random.getrandbits(8)\n for _ in range(entropy))\n\n def token(self, entropy: int = 32) -> str:\n \"\"\"Generate hexadecimal string.\n\n :param entropy: Number of bytes.\n :return: Token.\n \"\"\"\n return self.bytes(entropy).hex()\n\n def salt(self, size: int = 16) -> str:\n \"\"\"Generate salt chars (not cryptographically safe).\n\n :param size: Salt size.\n :return: Salt.\n \"\"\"\n char_sequence = [\n self.random.choice(self.__chars)\n for _ in range(size)\n ]\n return ''.join(char_sequence)\n\n def mnemonic_phrase(self, length: int = 12) -> str:\n \"\"\"Generate pseudo mnemonic phrase.\n\n :param length: Number of words.\n :return: Mnemonic code.\n \"\"\"\n words = self.__words['normal']\n return ' '.join(self.random.choice(words) for _ in range(length))\n", "path": "mimesis/providers/cryptographic.py"}]}
| 2,753 | 478 |