problem_id (stringlengths 18-22) | source (stringclasses 1 value) | task_type (stringclasses 1 value) | in_source_id (stringlengths 13-58) | prompt (stringlengths 1.1k-10.2k) | golden_diff (stringlengths 151-4.94k) | verification_info (stringlengths 582-21k) | num_tokens (int64 271-2.05k) | num_tokens_diff (int64 47-1.02k)
---|---|---|---|---|---|---|---|---
gh_patches_debug_15811 | rasdani/github-patches | git_diff | sunpy__sunpy-1260 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Possible issue with checking of Quantity input
In my branch ji_diff_rot3 I've started using the quantity_input decorator. It now decorates rot_hpc, a differential rotation calculator. The decorator crashes when applied to this function
``` python
In [1]: from sunpy.physics.transforms.differential_rotation import rot_hpc
In [2]: import astropy.units as u
In [3]: rot_hpc( -570 * u.arcsec, 120 * u.arcsec, '2010-09-10 12:34:56', '2010-09-10 13:34:56')
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-3-0803dabeac04> in <module>()
----> 1 rot_hpc( -570 * u.arcsec, 120 * u.arcsec, '2010-09-10 12:34:56', '2010-09-10 13:34:56')
/Users/ireland/sunpy/sunpy/util/unit_decorators.pyc in wrapper(*func_args, **func_kwargs)
81
82 # Get the value of this parameter (argument to new function)
---> 83 arg = bound_args.arguments[param.name]
84
85 # Get target unit, either from decotrator kwargs or annotations
KeyError: 'kwargs'
```
The error is reproducible with much simpler functions:
``` python
In [2]: import astropy.units as u
In [3]: from sunpy.util import quantity_input
In [4]: @quantity_input(a=u.deg)
...: def funct(a, **kwargs):
...: pass
...:
In [5]: funct(1 * u.deg)
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-5-5fb4e89adcc1> in <module>()
----> 1 funct(1 * u.deg)
/Users/ireland/sunpy/sunpy/util/unit_decorators.pyc in wrapper(*func_args, **func_kwargs)
81
82 # Get the value of this parameter (argument to new function)
---> 83 arg = bound_args.arguments[param.name]
84
85 # Get target unit, either from decotrator kwargs or annotations
KeyError: 'kwargs'
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sunpy/util/unit_decorators.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 # Lovingly borrowed from Astropy
3 # Licensed under a 3-clause BSD style license - see licences/ASTROPY.rst
4
5 __all__ = ['quantity_input']
6
7 from ..util.compat import funcsigs
8
9 from astropy.units import UnitsError, add_enabled_equivalencies
10
11 class QuantityInput(object):
12
13 @classmethod
14 def as_decorator(cls, func=None, **kwargs):
15 """
16 A decorator for validating the units of arguments to functions.
17
18 Unit specifications can be provided as keyword arguments to the decorator,
19 or by using Python 3's function annotation syntax. Arguments to the decorator
20 take precidence over any function annotations present.
21
22 A `~astropy.units.UnitsError` will be raised if the unit attribute of
23 the argument is not equivalent to the unit specified to the decorator
24 or in the annotation.
25 If the argument has no unit attribute, i.e. it is not a Quantity object, a
26 `~exceptions.ValueError` will be raised.
27
28 Where an equivalency is specified in the decorator, the function will be
29 executed with that equivalency in force.
30
31 Examples
32 --------
33
34 Python 2 and 3::
35
36 import astropy.units as u
37 @u.quantity_input(myangle=u.arcsec)
38 def myfunction(myangle):
39 return myangle**2
40
41 Python 3 only::
42
43 import astropy.units as u
44 @u.quantity_input
45 def myfunction(myangle: u.arcsec):
46 return myangle**2
47
48 Using equivalencies::
49
50 import astropy.units as u
51 @u.quantity_input(myenergy=u.eV, equivalencies=u.mass_energy())
52 def myfunction(myenergy):
53 return myenergy**2
54
55 """
56 self = cls(**kwargs)
57 if func is not None and not kwargs:
58 return self(func)
59 else:
60 return self
61
62 def __init__(self, func=None, **kwargs):
63 self.equivalencies = kwargs.pop('equivalencies', [])
64 self.decorator_kwargs = kwargs
65
66 def __call__(self, wrapped_function):
67
68 # Extract the function signature for the function we are wrapping.
69 wrapped_signature = funcsigs.signature(wrapped_function)
70
71 # Define a new function to return in place of the wrapped one
72 def wrapper(*func_args, **func_kwargs):
73 # Bind the arguments to our new function to the signature of the original.
74 bound_args = wrapped_signature.bind(*func_args, **func_kwargs)
75
76 # Iterate through the parameters of the original signature
77 for param in wrapped_signature.parameters.values():
78 # Catch the (never triggered) case where bind relied on a default value.
79 if param.name not in bound_args.arguments and param.default is not param.empty:
80 bound_args.arguments[param.name] = param.default
81
82 # Get the value of this parameter (argument to new function)
83 arg = bound_args.arguments[param.name]
84
85 # Get target unit, either from decotrator kwargs or annotations
86 if param.name in self.decorator_kwargs:
87 target_unit = self.decorator_kwargs[param.name]
88 else:
89 target_unit = param.annotation
90
91 # If the target unit is empty, then no unit was specified so we
92 # move past it
93 if target_unit is not funcsigs.Parameter.empty:
94 try:
95 equivalent = arg.unit.is_equivalent(target_unit,
96 equivalencies=self.equivalencies)
97
98 if not equivalent:
99 raise UnitsError("Argument '{0}' to function '{1}'"
100 " must be in units convertable to"
101 " '{2}'.".format(param.name,
102 wrapped_function.__name__,
103 target_unit.to_string()))
104
105 # Either there is no .unit or no .is_equivalent
106 except AttributeError:
107 if hasattr(arg, "unit"):
108 error_msg = "a 'unit' attribute without an 'is_equivalent' method"
109 else:
110 error_msg = "no 'unit' attribute"
111 raise TypeError("Argument '{0}' to function has '{1}' {2}. "
112 "You may want to pass in an astropy Quantity instead."
113 .format(param.name, wrapped_function.__name__, error_msg))
114
115 # Call the original function with any equivalencies in force.
116 with add_enabled_equivalencies(self.equivalencies):
117 return wrapped_function(*func_args, **func_kwargs)
118
119 return wrapper
120
121 quantity_input = QuantityInput.as_decorator
122
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/sunpy/util/unit_decorators.py b/sunpy/util/unit_decorators.py
--- a/sunpy/util/unit_decorators.py
+++ b/sunpy/util/unit_decorators.py
@@ -75,6 +75,11 @@
# Iterate through the parameters of the original signature
for param in wrapped_signature.parameters.values():
+ # We do not support variable arguments (*args,
+ # **kwargs)
+ if param.kind in (funcsigs.Parameter.VAR_KEYWORD,
+ funcsigs.Parameter.VAR_POSITIONAL):
+ continue
# Catch the (never triggered) case where bind relied on a default value.
if param.name not in bound_args.arguments and param.default is not param.empty:
bound_args.arguments[param.name] = param.default
| {"golden_diff": "diff --git a/sunpy/util/unit_decorators.py b/sunpy/util/unit_decorators.py\n--- a/sunpy/util/unit_decorators.py\n+++ b/sunpy/util/unit_decorators.py\n@@ -75,6 +75,11 @@\n \n # Iterate through the parameters of the original signature\n for param in wrapped_signature.parameters.values():\n+ # We do not support variable arguments (*args,\n+ # **kwargs)\n+ if param.kind in (funcsigs.Parameter.VAR_KEYWORD,\n+ funcsigs.Parameter.VAR_POSITIONAL):\n+ continue\n # Catch the (never triggered) case where bind relied on a default value.\n if param.name not in bound_args.arguments and param.default is not param.empty:\n bound_args.arguments[param.name] = param.default\n", "issue": "Possible issue with checking of Quantity input\nIn my branch ji_diff_rot3 I've started using the quantity_input decorator. It now decorates rot_hpc, a differential rotation calculator. The decorator crashes when applied to this function\n\n``` python\nIn [1]: from sunpy.physics.transforms.differential_rotation import rot_hpc\nIn [2]: import astropy.units as u\nIn [3]: rot_hpc( -570 * u.arcsec, 120 * u.arcsec, '2010-09-10 12:34:56', '2010-09-10 13:34:56')\n---------------------------------------------------------------------------\nKeyError Traceback (most recent call last)\n<ipython-input-3-0803dabeac04> in <module>()\n----> 1 rot_hpc( -570 * u.arcsec, 120 * u.arcsec, '2010-09-10 12:34:56', '2010-09-10 13:34:56')\n\n/Users/ireland/sunpy/sunpy/util/unit_decorators.pyc in wrapper(*func_args, **func_kwargs)\n 81 \n 82 # Get the value of this parameter (argument to new function)\n---> 83 arg = bound_args.arguments[param.name]\n 84 \n 85 # Get target unit, either from decotrator kwargs or annotations\n\nKeyError: 'kwargs'\n```\n\nThe error is reproducible with much simpler functions:\n\n``` python\nIn [2]: import astropy.units as u\nIn [3]: from sunpy.util import quantity_input\n\nIn [4]: @quantity_input(a=u.deg)\n ...: def funct(a, **kwargs):\n ...: pass\n ...: \n\nIn [5]: funct(1 * u.deg)\n---------------------------------------------------------------------------\nKeyError Traceback (most recent call last)\n<ipython-input-5-5fb4e89adcc1> in <module>()\n----> 1 funct(1 * u.deg)\n\n/Users/ireland/sunpy/sunpy/util/unit_decorators.pyc in wrapper(*func_args, **func_kwargs)\n 81 \n 82 # Get the value of this parameter (argument to new function)\n---> 83 arg = bound_args.arguments[param.name]\n 84 \n 85 # Get target unit, either from decotrator kwargs or annotations\n\nKeyError: 'kwargs'\n```\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# Lovingly borrowed from Astropy\n# Licensed under a 3-clause BSD style license - see licences/ASTROPY.rst\n\n__all__ = ['quantity_input']\n\nfrom ..util.compat import funcsigs\n\nfrom astropy.units import UnitsError, add_enabled_equivalencies\n\nclass QuantityInput(object):\n\n @classmethod\n def as_decorator(cls, func=None, **kwargs):\n \"\"\"\n A decorator for validating the units of arguments to functions.\n\n Unit specifications can be provided as keyword arguments to the decorator,\n or by using Python 3's function annotation syntax. Arguments to the decorator\n take precidence over any function annotations present.\n\n A `~astropy.units.UnitsError` will be raised if the unit attribute of\n the argument is not equivalent to the unit specified to the decorator\n or in the annotation.\n If the argument has no unit attribute, i.e. 
it is not a Quantity object, a\n `~exceptions.ValueError` will be raised.\n\n Where an equivalency is specified in the decorator, the function will be\n executed with that equivalency in force.\n\n Examples\n --------\n\n Python 2 and 3::\n\n import astropy.units as u\n @u.quantity_input(myangle=u.arcsec)\n def myfunction(myangle):\n return myangle**2\n\n Python 3 only::\n\n import astropy.units as u\n @u.quantity_input\n def myfunction(myangle: u.arcsec):\n return myangle**2\n\n Using equivalencies::\n\n import astropy.units as u\n @u.quantity_input(myenergy=u.eV, equivalencies=u.mass_energy())\n def myfunction(myenergy):\n return myenergy**2\n\n \"\"\"\n self = cls(**kwargs)\n if func is not None and not kwargs:\n return self(func)\n else:\n return self\n\n def __init__(self, func=None, **kwargs):\n self.equivalencies = kwargs.pop('equivalencies', [])\n self.decorator_kwargs = kwargs\n\n def __call__(self, wrapped_function):\n\n # Extract the function signature for the function we are wrapping.\n wrapped_signature = funcsigs.signature(wrapped_function)\n\n # Define a new function to return in place of the wrapped one\n def wrapper(*func_args, **func_kwargs):\n # Bind the arguments to our new function to the signature of the original.\n bound_args = wrapped_signature.bind(*func_args, **func_kwargs)\n\n # Iterate through the parameters of the original signature\n for param in wrapped_signature.parameters.values():\n # Catch the (never triggered) case where bind relied on a default value.\n if param.name not in bound_args.arguments and param.default is not param.empty:\n bound_args.arguments[param.name] = param.default\n\n # Get the value of this parameter (argument to new function)\n arg = bound_args.arguments[param.name]\n\n # Get target unit, either from decotrator kwargs or annotations\n if param.name in self.decorator_kwargs:\n target_unit = self.decorator_kwargs[param.name]\n else:\n target_unit = param.annotation\n\n # If the target unit is empty, then no unit was specified so we\n # move past it\n if target_unit is not funcsigs.Parameter.empty:\n try:\n equivalent = arg.unit.is_equivalent(target_unit,\n equivalencies=self.equivalencies)\n\n if not equivalent:\n raise UnitsError(\"Argument '{0}' to function '{1}'\"\n \" must be in units convertable to\"\n \" '{2}'.\".format(param.name,\n wrapped_function.__name__,\n target_unit.to_string()))\n\n # Either there is no .unit or no .is_equivalent\n except AttributeError:\n if hasattr(arg, \"unit\"):\n error_msg = \"a 'unit' attribute without an 'is_equivalent' method\"\n else:\n error_msg = \"no 'unit' attribute\"\n raise TypeError(\"Argument '{0}' to function has '{1}' {2}. 
\"\n \"You may want to pass in an astropy Quantity instead.\"\n .format(param.name, wrapped_function.__name__, error_msg))\n\n # Call the original function with any equivalencies in force.\n with add_enabled_equivalencies(self.equivalencies):\n return wrapped_function(*func_args, **func_kwargs)\n\n return wrapper\n\nquantity_input = QuantityInput.as_decorator\n", "path": "sunpy/util/unit_decorators.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n# Lovingly borrowed from Astropy\n# Licensed under a 3-clause BSD style license - see licences/ASTROPY.rst\n\n__all__ = ['quantity_input']\n\nfrom ..util.compat import funcsigs\n\nfrom astropy.units import UnitsError, add_enabled_equivalencies\n\nclass QuantityInput(object):\n\n @classmethod\n def as_decorator(cls, func=None, **kwargs):\n \"\"\"\n A decorator for validating the units of arguments to functions.\n\n Unit specifications can be provided as keyword arguments to the decorator,\n or by using Python 3's function annotation syntax. Arguments to the decorator\n take precidence over any function annotations present.\n\n A `~astropy.units.UnitsError` will be raised if the unit attribute of\n the argument is not equivalent to the unit specified to the decorator\n or in the annotation.\n If the argument has no unit attribute, i.e. it is not a Quantity object, a\n `~exceptions.ValueError` will be raised.\n\n Where an equivalency is specified in the decorator, the function will be\n executed with that equivalency in force.\n\n Examples\n --------\n\n Python 2 and 3::\n\n import astropy.units as u\n @u.quantity_input(myangle=u.arcsec)\n def myfunction(myangle):\n return myangle**2\n\n Python 3 only::\n\n import astropy.units as u\n @u.quantity_input\n def myfunction(myangle: u.arcsec):\n return myangle**2\n\n Using equivalencies::\n\n import astropy.units as u\n @u.quantity_input(myenergy=u.eV, equivalencies=u.mass_energy())\n def myfunction(myenergy):\n return myenergy**2\n\n \"\"\"\n self = cls(**kwargs)\n if func is not None and not kwargs:\n return self(func)\n else:\n return self\n\n def __init__(self, func=None, **kwargs):\n self.equivalencies = kwargs.pop('equivalencies', [])\n self.decorator_kwargs = kwargs\n\n def __call__(self, wrapped_function):\n\n # Extract the function signature for the function we are wrapping.\n wrapped_signature = funcsigs.signature(wrapped_function)\n\n # Define a new function to return in place of the wrapped one\n def wrapper(*func_args, **func_kwargs):\n # Bind the arguments to our new function to the signature of the original.\n bound_args = wrapped_signature.bind(*func_args, **func_kwargs)\n\n # Iterate through the parameters of the original signature\n for param in wrapped_signature.parameters.values():\n # We do not support variable arguments (*args,\n # **kwargs)\n if param.kind in (funcsigs.Parameter.VAR_KEYWORD,\n funcsigs.Parameter.VAR_POSITIONAL):\n continue\n # Catch the (never triggered) case where bind relied on a default value.\n if param.name not in bound_args.arguments and param.default is not param.empty:\n bound_args.arguments[param.name] = param.default\n\n # Get the value of this parameter (argument to new function)\n arg = bound_args.arguments[param.name]\n\n # Get target unit, either from decotrator kwargs or annotations\n if param.name in self.decorator_kwargs:\n target_unit = self.decorator_kwargs[param.name]\n else:\n target_unit = param.annotation\n\n # If the target unit is empty, then no unit was specified so we\n # move past it\n if target_unit is not 
funcsigs.Parameter.empty:\n try:\n equivalent = arg.unit.is_equivalent(target_unit,\n equivalencies=self.equivalencies)\n\n if not equivalent:\n raise UnitsError(\"Argument '{0}' to function '{1}'\"\n \" must be in units convertable to\"\n \" '{2}'.\".format(param.name,\n wrapped_function.__name__,\n target_unit.to_string()))\n\n # Either there is no .unit or no .is_equivalent\n except AttributeError:\n if hasattr(arg, \"unit\"):\n error_msg = \"a 'unit' attribute without an 'is_equivalent' method\"\n else:\n error_msg = \"no 'unit' attribute\"\n raise TypeError(\"Argument '{0}' to function has '{1}' {2}. \"\n \"You may want to pass in an astropy Quantity instead.\"\n .format(param.name, wrapped_function.__name__, error_msg))\n\n # Call the original function with any equivalencies in force.\n with add_enabled_equivalencies(self.equivalencies):\n return wrapped_function(*func_args, **func_kwargs)\n\n return wrapper\n\nquantity_input = QuantityInput.as_decorator\n", "path": "sunpy/util/unit_decorators.py"}]} | 2,031 | 169 |
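The guard added in the record above skips variadic parameters (`*args`, `**kwargs`) before looking them up in the bound arguments, which is what removes the `KeyError: 'kwargs'`. A rough standalone sketch of the same idea is below, written against the standard-library `inspect` module rather than the vendored `funcsigs` used by sunpy; that substitution, and the function names, are assumptions made for illustration only.

```python
import functools
import inspect

def validate_named_args(func):
    """Illustrative decorator skeleton that walks a signature safely.

    Variadic parameters have no single bound value to inspect, so they are
    skipped, mirroring the guard in the golden diff above.
    """
    sig = inspect.signature(func)

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        for param in sig.parameters.values():
            if param.kind in (inspect.Parameter.VAR_POSITIONAL,
                              inspect.Parameter.VAR_KEYWORD):
                continue  # nothing to look up for *args / **kwargs
            if param.name not in bound.arguments and param.default is not param.empty:
                bound.arguments[param.name] = param.default
            value = bound.arguments[param.name]  # safe now: param is a named argument
            # ... per-argument checks (e.g. unit validation) would use `value` here ...
        return func(*args, **kwargs)

    return wrapper
```

With that guard in place, a function such as `def funct(a, **kwargs)` from the issue binds cleanly instead of raising on the `kwargs` parameter.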
gh_patches_debug_18953 | rasdani/github-patches | git_diff | goauthentik__authentik-4920 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Session duration not working correctly
**Describe the bug**
I changed the session duration of the default-authentication-login to 18 hours. Still, after a login the session is valid for 14 days.
For me, it looks like the session duration value is ignored.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to 'Admin interface'
2. Click on 'Flows & Stages'
3. Edit 'default-authentication-login'
4. Change 'Session duration' to 'hours=18;minutes=0;seconds=0'
5. Logout & Login
6. Click on the settings-icon ("wheel")
7. Check the session duration.
**Expected behavior**
I want to achieve, that every user has to authenticate once a day. Therefore, the session duration of 18 hours.
**Screenshots**


**Logs**
There are no logs.
**Version and Deployment (please complete the following information):**
- authentik 2023.1.2
- Deployment: docker-compose
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `authentik/stages/user_login/stage.py`
Content:
```
1 """Login stage logic"""
2 from django.contrib import messages
3 from django.contrib.auth import login
4 from django.http import HttpRequest, HttpResponse
5 from django.utils.translation import gettext as _
6
7 from authentik.core.models import AuthenticatedSession, User
8 from authentik.flows.planner import PLAN_CONTEXT_PENDING_USER, PLAN_CONTEXT_SOURCE
9 from authentik.flows.stage import StageView
10 from authentik.lib.utils.time import timedelta_from_string
11 from authentik.stages.password import BACKEND_INBUILT
12 from authentik.stages.password.stage import PLAN_CONTEXT_AUTHENTICATION_BACKEND
13
14
15 class UserLoginStageView(StageView):
16 """Finalise Authentication flow by logging the user in"""
17
18 def post(self, request: HttpRequest) -> HttpResponse:
19 """Wrapper for post requests"""
20 return self.get(request)
21
22 def get(self, request: HttpRequest) -> HttpResponse:
23 """Attach the currently pending user to the current session"""
24 if PLAN_CONTEXT_PENDING_USER not in self.executor.plan.context:
25 message = _("No Pending user to login.")
26 messages.error(request, message)
27 self.logger.debug(message)
28 return self.executor.stage_invalid()
29 backend = self.executor.plan.context.get(
30 PLAN_CONTEXT_AUTHENTICATION_BACKEND, BACKEND_INBUILT
31 )
32 user: User = self.executor.plan.context[PLAN_CONTEXT_PENDING_USER]
33 if not user.is_active:
34 self.logger.warning("User is not active, login will not work.")
35 login(
36 self.request,
37 user,
38 backend=backend,
39 )
40 delta = timedelta_from_string(self.executor.current_stage.session_duration)
41 if delta.total_seconds() == 0:
42 self.request.session.set_expiry(0)
43 else:
44 self.request.session.set_expiry(delta)
45 self.logger.debug(
46 "Logged in",
47 backend=backend,
48 user=user.username,
49 flow_slug=self.executor.flow.slug,
50 session_duration=self.executor.current_stage.session_duration,
51 )
52 # Only show success message if we don't have a source in the flow
53 # as sources show their own success messages
54 if not self.executor.plan.context.get(PLAN_CONTEXT_SOURCE, None):
55 messages.success(self.request, _("Successfully logged in!"))
56 if self.executor.current_stage.terminate_other_sessions:
57 AuthenticatedSession.objects.filter(
58 user=user,
59 ).exclude(session_key=self.request.session.session_key).delete()
60 return self.executor.stage_ok()
61
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/authentik/stages/user_login/stage.py b/authentik/stages/user_login/stage.py
--- a/authentik/stages/user_login/stage.py
+++ b/authentik/stages/user_login/stage.py
@@ -32,16 +32,16 @@
user: User = self.executor.plan.context[PLAN_CONTEXT_PENDING_USER]
if not user.is_active:
self.logger.warning("User is not active, login will not work.")
- login(
- self.request,
- user,
- backend=backend,
- )
delta = timedelta_from_string(self.executor.current_stage.session_duration)
if delta.total_seconds() == 0:
self.request.session.set_expiry(0)
else:
self.request.session.set_expiry(delta)
+ login(
+ self.request,
+ user,
+ backend=backend,
+ )
self.logger.debug(
"Logged in",
backend=backend,
| {"golden_diff": "diff --git a/authentik/stages/user_login/stage.py b/authentik/stages/user_login/stage.py\n--- a/authentik/stages/user_login/stage.py\n+++ b/authentik/stages/user_login/stage.py\n@@ -32,16 +32,16 @@\n user: User = self.executor.plan.context[PLAN_CONTEXT_PENDING_USER]\n if not user.is_active:\n self.logger.warning(\"User is not active, login will not work.\")\n- login(\n- self.request,\n- user,\n- backend=backend,\n- )\n delta = timedelta_from_string(self.executor.current_stage.session_duration)\n if delta.total_seconds() == 0:\n self.request.session.set_expiry(0)\n else:\n self.request.session.set_expiry(delta)\n+ login(\n+ self.request,\n+ user,\n+ backend=backend,\n+ )\n self.logger.debug(\n \"Logged in\",\n backend=backend,\n", "issue": "Session duration not working correctly\n**Describe the bug**\r\nI changed the session duration of the default-authentication-login to 18 hours. Still, after a login the session is valid for 14 days.\r\nFor me, it looks like the session duration value is ignored.\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. Go to 'Admin interface'\r\n2. Click on 'Flows & Stages'\r\n3. Edit 'default-authentication-login'\r\n4. Change 'Session duration' to 'hours=18;minutes=0;seconds=0'\r\n5. Logout & Login\r\n6. Click on the settings-icon (\u201cwheel\u201d)\r\n7. Check the session duration.\r\n\r\n**Expected behavior**\r\nI want to achieve, that every user has to authenticate once a day. Therefore, the session duration of 18 hours. \r\n\r\n**Screenshots**\r\n\r\n\r\n\r\n\r\n**Logs**\r\nThere are no logs.\r\n\r\n**Version and Deployment (please complete the following information):**\r\n - authentik 2023.1.2\r\n - Deployment: docker-compose\r\n\n", "before_files": [{"content": "\"\"\"Login stage logic\"\"\"\nfrom django.contrib import messages\nfrom django.contrib.auth import login\nfrom django.http import HttpRequest, HttpResponse\nfrom django.utils.translation import gettext as _\n\nfrom authentik.core.models import AuthenticatedSession, User\nfrom authentik.flows.planner import PLAN_CONTEXT_PENDING_USER, PLAN_CONTEXT_SOURCE\nfrom authentik.flows.stage import StageView\nfrom authentik.lib.utils.time import timedelta_from_string\nfrom authentik.stages.password import BACKEND_INBUILT\nfrom authentik.stages.password.stage import PLAN_CONTEXT_AUTHENTICATION_BACKEND\n\n\nclass UserLoginStageView(StageView):\n \"\"\"Finalise Authentication flow by logging the user in\"\"\"\n\n def post(self, request: HttpRequest) -> HttpResponse:\n \"\"\"Wrapper for post requests\"\"\"\n return self.get(request)\n\n def get(self, request: HttpRequest) -> HttpResponse:\n \"\"\"Attach the currently pending user to the current session\"\"\"\n if PLAN_CONTEXT_PENDING_USER not in self.executor.plan.context:\n message = _(\"No Pending user to login.\")\n messages.error(request, message)\n self.logger.debug(message)\n return self.executor.stage_invalid()\n backend = self.executor.plan.context.get(\n PLAN_CONTEXT_AUTHENTICATION_BACKEND, BACKEND_INBUILT\n )\n user: User = self.executor.plan.context[PLAN_CONTEXT_PENDING_USER]\n if not user.is_active:\n self.logger.warning(\"User is not active, login will not work.\")\n login(\n self.request,\n user,\n backend=backend,\n )\n delta = timedelta_from_string(self.executor.current_stage.session_duration)\n if delta.total_seconds() == 0:\n self.request.session.set_expiry(0)\n else:\n self.request.session.set_expiry(delta)\n self.logger.debug(\n \"Logged in\",\n backend=backend,\n user=user.username,\n 
flow_slug=self.executor.flow.slug,\n session_duration=self.executor.current_stage.session_duration,\n )\n # Only show success message if we don't have a source in the flow\n # as sources show their own success messages\n if not self.executor.plan.context.get(PLAN_CONTEXT_SOURCE, None):\n messages.success(self.request, _(\"Successfully logged in!\"))\n if self.executor.current_stage.terminate_other_sessions:\n AuthenticatedSession.objects.filter(\n user=user,\n ).exclude(session_key=self.request.session.session_key).delete()\n return self.executor.stage_ok()\n", "path": "authentik/stages/user_login/stage.py"}], "after_files": [{"content": "\"\"\"Login stage logic\"\"\"\nfrom django.contrib import messages\nfrom django.contrib.auth import login\nfrom django.http import HttpRequest, HttpResponse\nfrom django.utils.translation import gettext as _\n\nfrom authentik.core.models import AuthenticatedSession, User\nfrom authentik.flows.planner import PLAN_CONTEXT_PENDING_USER, PLAN_CONTEXT_SOURCE\nfrom authentik.flows.stage import StageView\nfrom authentik.lib.utils.time import timedelta_from_string\nfrom authentik.stages.password import BACKEND_INBUILT\nfrom authentik.stages.password.stage import PLAN_CONTEXT_AUTHENTICATION_BACKEND\n\n\nclass UserLoginStageView(StageView):\n \"\"\"Finalise Authentication flow by logging the user in\"\"\"\n\n def post(self, request: HttpRequest) -> HttpResponse:\n \"\"\"Wrapper for post requests\"\"\"\n return self.get(request)\n\n def get(self, request: HttpRequest) -> HttpResponse:\n \"\"\"Attach the currently pending user to the current session\"\"\"\n if PLAN_CONTEXT_PENDING_USER not in self.executor.plan.context:\n message = _(\"No Pending user to login.\")\n messages.error(request, message)\n self.logger.debug(message)\n return self.executor.stage_invalid()\n backend = self.executor.plan.context.get(\n PLAN_CONTEXT_AUTHENTICATION_BACKEND, BACKEND_INBUILT\n )\n user: User = self.executor.plan.context[PLAN_CONTEXT_PENDING_USER]\n if not user.is_active:\n self.logger.warning(\"User is not active, login will not work.\")\n delta = timedelta_from_string(self.executor.current_stage.session_duration)\n if delta.total_seconds() == 0:\n self.request.session.set_expiry(0)\n else:\n self.request.session.set_expiry(delta)\n login(\n self.request,\n user,\n backend=backend,\n )\n self.logger.debug(\n \"Logged in\",\n backend=backend,\n user=user.username,\n flow_slug=self.executor.flow.slug,\n session_duration=self.executor.current_stage.session_duration,\n )\n # Only show success message if we don't have a source in the flow\n # as sources show their own success messages\n if not self.executor.plan.context.get(PLAN_CONTEXT_SOURCE, None):\n messages.success(self.request, _(\"Successfully logged in!\"))\n if self.executor.current_stage.terminate_other_sessions:\n AuthenticatedSession.objects.filter(\n user=user,\n ).exclude(session_key=self.request.session.session_key).delete()\n return self.executor.stage_ok()\n", "path": "authentik/stages/user_login/stage.py"}]} | 1,207 | 205 |
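The fix in the record above is an ordering change: the expiry derived from the stage's `session_duration` is applied before `django.contrib.auth.login()` is called instead of after it. A minimal sketch of that ordering in a plain Django helper follows; the helper name and arguments are invented for illustration and are not authentik's API.

```python
from datetime import timedelta

from django.contrib.auth import login


def finalize_login(request, user, session_duration: timedelta):
    """Apply the configured session duration, then log the user in."""
    if session_duration.total_seconds() == 0:
        # In Django's session API, 0 means "expire when the browser closes".
        request.session.set_expiry(0)
    else:
        request.session.set_expiry(session_duration)
    # login() runs only after the expiry is in place, matching the diff above.
    login(request, user)
```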
gh_patches_debug_35310 | rasdani/github-patches | git_diff | googleapis__google-auth-library-python-57 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Cloud SDK credentials should use the 'active' config not the 'default' config
Context: https://github.com/GoogleCloudPlatform/google-cloud-python/issues/2588
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `google/auth/_cloud_sdk.py`
Content:
```
1 # Copyright 2015 Google Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Helpers for reading the Google Cloud SDK's configuration."""
16
17 import os
18
19 import six
20 from six.moves import configparser
21
22 from google.auth import environment_vars
23 import google.oauth2.credentials
24
25 # The Google OAuth 2.0 token endpoint. Used for authorized user credentials.
26 _GOOGLE_OAUTH2_TOKEN_ENDPOINT = 'https://accounts.google.com/o/oauth2/token'
27
28 # The ~/.config subdirectory containing gcloud credentials.
29 _CONFIG_DIRECTORY = 'gcloud'
30 # Windows systems store config at %APPDATA%\gcloud
31 _WINDOWS_CONFIG_ROOT_ENV_VAR = 'APPDATA'
32 # The name of the file in the Cloud SDK config that contains default
33 # credentials.
34 _CREDENTIALS_FILENAME = 'application_default_credentials.json'
35 # The name of the file in the Cloud SDK config that contains the
36 # active configuration.
37 _ACTIVE_CONFIG_FILENAME = os.path.join(
38 'configurations', 'config_default')
39 # The config section and key for the project ID in the cloud SDK config.
40 _PROJECT_CONFIG_SECTION = 'core'
41 _PROJECT_CONFIG_KEY = 'project'
42
43
44 def get_config_path():
45 """Returns the absolute path the the Cloud SDK's configuration directory.
46
47 Returns:
48 str: The Cloud SDK config path.
49 """
50 # If the path is explicitly set, return that.
51 try:
52 return os.environ[environment_vars.CLOUD_SDK_CONFIG_DIR]
53 except KeyError:
54 pass
55
56 # Non-windows systems store this at ~/.config/gcloud
57 if os.name != 'nt':
58 return os.path.join(
59 os.path.expanduser('~'), '.config', _CONFIG_DIRECTORY)
60 # Windows systems store config at %APPDATA%\gcloud
61 else:
62 try:
63 return os.path.join(
64 os.environ[_WINDOWS_CONFIG_ROOT_ENV_VAR],
65 _CONFIG_DIRECTORY)
66 except KeyError:
67 # This should never happen unless someone is really
68 # messing with things, but we'll cover the case anyway.
69 drive = os.environ.get('SystemDrive', 'C:')
70 return os.path.join(
71 drive, '\\', _CONFIG_DIRECTORY)
72
73
74 def get_application_default_credentials_path():
75 """Gets the path to the application default credentials file.
76
77 The path may or may not exist.
78
79 Returns:
80 str: The full path to application default credentials.
81 """
82 config_path = get_config_path()
83 return os.path.join(config_path, _CREDENTIALS_FILENAME)
84
85
86 def get_project_id():
87 """Gets the project ID from the Cloud SDK's configuration.
88
89 Returns:
90 Optional[str]: The project ID.
91 """
92 config_path = get_config_path()
93 config_file = os.path.join(config_path, _ACTIVE_CONFIG_FILENAME)
94
95 if not os.path.isfile(config_file):
96 return None
97
98 config = configparser.RawConfigParser()
99
100 try:
101 config.read(config_file)
102 except configparser.Error:
103 return None
104
105 if config.has_section(_PROJECT_CONFIG_SECTION):
106 return config.get(
107 _PROJECT_CONFIG_SECTION, _PROJECT_CONFIG_KEY)
108
109
110 def load_authorized_user_credentials(info):
111 """Loads an authorized user credential.
112
113 Args:
114 info (Mapping[str, str]): The loaded file's data.
115
116 Returns:
117 google.oauth2.credentials.Credentials: The constructed credentials.
118
119 Raises:
120 ValueError: if the info is in the wrong format or missing data.
121 """
122 keys_needed = set(('refresh_token', 'client_id', 'client_secret'))
123 missing = keys_needed.difference(six.iterkeys(info))
124
125 if missing:
126 raise ValueError(
127 'Authorized user info was not in the expected format, missing '
128 'fields {}.'.format(', '.join(missing)))
129
130 return google.oauth2.credentials.Credentials(
131 None, # No access token, must be refreshed.
132 refresh_token=info['refresh_token'],
133 token_uri=_GOOGLE_OAUTH2_TOKEN_ENDPOINT,
134 client_id=info['client_id'],
135 client_secret=info['client_secret'])
136
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/google/auth/_cloud_sdk.py b/google/auth/_cloud_sdk.py
--- a/google/auth/_cloud_sdk.py
+++ b/google/auth/_cloud_sdk.py
@@ -14,6 +14,7 @@
"""Helpers for reading the Google Cloud SDK's configuration."""
+import io
import os
import six
@@ -32,10 +33,6 @@
# The name of the file in the Cloud SDK config that contains default
# credentials.
_CREDENTIALS_FILENAME = 'application_default_credentials.json'
-# The name of the file in the Cloud SDK config that contains the
-# active configuration.
-_ACTIVE_CONFIG_FILENAME = os.path.join(
- 'configurations', 'config_default')
# The config section and key for the project ID in the cloud SDK config.
_PROJECT_CONFIG_SECTION = 'core'
_PROJECT_CONFIG_KEY = 'project'
@@ -83,6 +80,40 @@
return os.path.join(config_path, _CREDENTIALS_FILENAME)
+def _get_active_config(config_path):
+ """Gets the active config for the Cloud SDK.
+
+ Args:
+ config_path (str): The Cloud SDK's config path.
+
+ Returns:
+ str: The active configuration name.
+ """
+ active_config_filename = os.path.join(config_path, 'active_config')
+
+ if not os.path.isfile(active_config_filename):
+ return 'default'
+
+ with io.open(active_config_filename, 'r', encoding='utf-8') as file_obj:
+ active_config_name = file_obj.read().strip()
+
+ return active_config_name
+
+
+def _get_config_file(config_path, config_name):
+ """Returns the full path to a configuration's config file.
+
+ Args:
+ config_path (str): The Cloud SDK's config path.
+ config_name (str): The configuration name.
+
+ Returns:
+ str: The config file path.
+ """
+ return os.path.join(
+ config_path, 'configurations', 'config_{}'.format(config_name))
+
+
def get_project_id():
"""Gets the project ID from the Cloud SDK's configuration.
@@ -90,7 +121,8 @@
Optional[str]: The project ID.
"""
config_path = get_config_path()
- config_file = os.path.join(config_path, _ACTIVE_CONFIG_FILENAME)
+ active_config = _get_active_config(config_path)
+ config_file = _get_config_file(config_path, active_config)
if not os.path.isfile(config_file):
return None
| {"golden_diff": "diff --git a/google/auth/_cloud_sdk.py b/google/auth/_cloud_sdk.py\n--- a/google/auth/_cloud_sdk.py\n+++ b/google/auth/_cloud_sdk.py\n@@ -14,6 +14,7 @@\n \n \"\"\"Helpers for reading the Google Cloud SDK's configuration.\"\"\"\n \n+import io\n import os\n \n import six\n@@ -32,10 +33,6 @@\n # The name of the file in the Cloud SDK config that contains default\n # credentials.\n _CREDENTIALS_FILENAME = 'application_default_credentials.json'\n-# The name of the file in the Cloud SDK config that contains the\n-# active configuration.\n-_ACTIVE_CONFIG_FILENAME = os.path.join(\n- 'configurations', 'config_default')\n # The config section and key for the project ID in the cloud SDK config.\n _PROJECT_CONFIG_SECTION = 'core'\n _PROJECT_CONFIG_KEY = 'project'\n@@ -83,6 +80,40 @@\n return os.path.join(config_path, _CREDENTIALS_FILENAME)\n \n \n+def _get_active_config(config_path):\n+ \"\"\"Gets the active config for the Cloud SDK.\n+\n+ Args:\n+ config_path (str): The Cloud SDK's config path.\n+\n+ Returns:\n+ str: The active configuration name.\n+ \"\"\"\n+ active_config_filename = os.path.join(config_path, 'active_config')\n+\n+ if not os.path.isfile(active_config_filename):\n+ return 'default'\n+\n+ with io.open(active_config_filename, 'r', encoding='utf-8') as file_obj:\n+ active_config_name = file_obj.read().strip()\n+\n+ return active_config_name\n+\n+\n+def _get_config_file(config_path, config_name):\n+ \"\"\"Returns the full path to a configuration's config file.\n+\n+ Args:\n+ config_path (str): The Cloud SDK's config path.\n+ config_name (str): The configuration name.\n+\n+ Returns:\n+ str: The config file path.\n+ \"\"\"\n+ return os.path.join(\n+ config_path, 'configurations', 'config_{}'.format(config_name))\n+\n+\n def get_project_id():\n \"\"\"Gets the project ID from the Cloud SDK's configuration.\n \n@@ -90,7 +121,8 @@\n Optional[str]: The project ID.\n \"\"\"\n config_path = get_config_path()\n- config_file = os.path.join(config_path, _ACTIVE_CONFIG_FILENAME)\n+ active_config = _get_active_config(config_path)\n+ config_file = _get_config_file(config_path, active_config)\n \n if not os.path.isfile(config_file):\n return None\n", "issue": "Cloud SDK credentials should use the 'active' config not the 'default' config\nContext: https://github.com/GoogleCloudPlatform/google-cloud-python/issues/2588\n\n", "before_files": [{"content": "# Copyright 2015 Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Helpers for reading the Google Cloud SDK's configuration.\"\"\"\n\nimport os\n\nimport six\nfrom six.moves import configparser\n\nfrom google.auth import environment_vars\nimport google.oauth2.credentials\n\n# The Google OAuth 2.0 token endpoint. 
Used for authorized user credentials.\n_GOOGLE_OAUTH2_TOKEN_ENDPOINT = 'https://accounts.google.com/o/oauth2/token'\n\n# The ~/.config subdirectory containing gcloud credentials.\n_CONFIG_DIRECTORY = 'gcloud'\n# Windows systems store config at %APPDATA%\\gcloud\n_WINDOWS_CONFIG_ROOT_ENV_VAR = 'APPDATA'\n# The name of the file in the Cloud SDK config that contains default\n# credentials.\n_CREDENTIALS_FILENAME = 'application_default_credentials.json'\n# The name of the file in the Cloud SDK config that contains the\n# active configuration.\n_ACTIVE_CONFIG_FILENAME = os.path.join(\n 'configurations', 'config_default')\n# The config section and key for the project ID in the cloud SDK config.\n_PROJECT_CONFIG_SECTION = 'core'\n_PROJECT_CONFIG_KEY = 'project'\n\n\ndef get_config_path():\n \"\"\"Returns the absolute path the the Cloud SDK's configuration directory.\n\n Returns:\n str: The Cloud SDK config path.\n \"\"\"\n # If the path is explicitly set, return that.\n try:\n return os.environ[environment_vars.CLOUD_SDK_CONFIG_DIR]\n except KeyError:\n pass\n\n # Non-windows systems store this at ~/.config/gcloud\n if os.name != 'nt':\n return os.path.join(\n os.path.expanduser('~'), '.config', _CONFIG_DIRECTORY)\n # Windows systems store config at %APPDATA%\\gcloud\n else:\n try:\n return os.path.join(\n os.environ[_WINDOWS_CONFIG_ROOT_ENV_VAR],\n _CONFIG_DIRECTORY)\n except KeyError:\n # This should never happen unless someone is really\n # messing with things, but we'll cover the case anyway.\n drive = os.environ.get('SystemDrive', 'C:')\n return os.path.join(\n drive, '\\\\', _CONFIG_DIRECTORY)\n\n\ndef get_application_default_credentials_path():\n \"\"\"Gets the path to the application default credentials file.\n\n The path may or may not exist.\n\n Returns:\n str: The full path to application default credentials.\n \"\"\"\n config_path = get_config_path()\n return os.path.join(config_path, _CREDENTIALS_FILENAME)\n\n\ndef get_project_id():\n \"\"\"Gets the project ID from the Cloud SDK's configuration.\n\n Returns:\n Optional[str]: The project ID.\n \"\"\"\n config_path = get_config_path()\n config_file = os.path.join(config_path, _ACTIVE_CONFIG_FILENAME)\n\n if not os.path.isfile(config_file):\n return None\n\n config = configparser.RawConfigParser()\n\n try:\n config.read(config_file)\n except configparser.Error:\n return None\n\n if config.has_section(_PROJECT_CONFIG_SECTION):\n return config.get(\n _PROJECT_CONFIG_SECTION, _PROJECT_CONFIG_KEY)\n\n\ndef load_authorized_user_credentials(info):\n \"\"\"Loads an authorized user credential.\n\n Args:\n info (Mapping[str, str]): The loaded file's data.\n\n Returns:\n google.oauth2.credentials.Credentials: The constructed credentials.\n\n Raises:\n ValueError: if the info is in the wrong format or missing data.\n \"\"\"\n keys_needed = set(('refresh_token', 'client_id', 'client_secret'))\n missing = keys_needed.difference(six.iterkeys(info))\n\n if missing:\n raise ValueError(\n 'Authorized user info was not in the expected format, missing '\n 'fields {}.'.format(', '.join(missing)))\n\n return google.oauth2.credentials.Credentials(\n None, # No access token, must be refreshed.\n refresh_token=info['refresh_token'],\n token_uri=_GOOGLE_OAUTH2_TOKEN_ENDPOINT,\n client_id=info['client_id'],\n client_secret=info['client_secret'])\n", "path": "google/auth/_cloud_sdk.py"}], "after_files": [{"content": "# Copyright 2015 Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in 
compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Helpers for reading the Google Cloud SDK's configuration.\"\"\"\n\nimport io\nimport os\n\nimport six\nfrom six.moves import configparser\n\nfrom google.auth import environment_vars\nimport google.oauth2.credentials\n\n# The Google OAuth 2.0 token endpoint. Used for authorized user credentials.\n_GOOGLE_OAUTH2_TOKEN_ENDPOINT = 'https://accounts.google.com/o/oauth2/token'\n\n# The ~/.config subdirectory containing gcloud credentials.\n_CONFIG_DIRECTORY = 'gcloud'\n# Windows systems store config at %APPDATA%\\gcloud\n_WINDOWS_CONFIG_ROOT_ENV_VAR = 'APPDATA'\n# The name of the file in the Cloud SDK config that contains default\n# credentials.\n_CREDENTIALS_FILENAME = 'application_default_credentials.json'\n# The config section and key for the project ID in the cloud SDK config.\n_PROJECT_CONFIG_SECTION = 'core'\n_PROJECT_CONFIG_KEY = 'project'\n\n\ndef get_config_path():\n \"\"\"Returns the absolute path the the Cloud SDK's configuration directory.\n\n Returns:\n str: The Cloud SDK config path.\n \"\"\"\n # If the path is explicitly set, return that.\n try:\n return os.environ[environment_vars.CLOUD_SDK_CONFIG_DIR]\n except KeyError:\n pass\n\n # Non-windows systems store this at ~/.config/gcloud\n if os.name != 'nt':\n return os.path.join(\n os.path.expanduser('~'), '.config', _CONFIG_DIRECTORY)\n # Windows systems store config at %APPDATA%\\gcloud\n else:\n try:\n return os.path.join(\n os.environ[_WINDOWS_CONFIG_ROOT_ENV_VAR],\n _CONFIG_DIRECTORY)\n except KeyError:\n # This should never happen unless someone is really\n # messing with things, but we'll cover the case anyway.\n drive = os.environ.get('SystemDrive', 'C:')\n return os.path.join(\n drive, '\\\\', _CONFIG_DIRECTORY)\n\n\ndef get_application_default_credentials_path():\n \"\"\"Gets the path to the application default credentials file.\n\n The path may or may not exist.\n\n Returns:\n str: The full path to application default credentials.\n \"\"\"\n config_path = get_config_path()\n return os.path.join(config_path, _CREDENTIALS_FILENAME)\n\n\ndef _get_active_config(config_path):\n \"\"\"Gets the active config for the Cloud SDK.\n\n Args:\n config_path (str): The Cloud SDK's config path.\n\n Returns:\n str: The active configuration name.\n \"\"\"\n active_config_filename = os.path.join(config_path, 'active_config')\n\n if not os.path.isfile(active_config_filename):\n return 'default'\n\n with io.open(active_config_filename, 'r', encoding='utf-8') as file_obj:\n active_config_name = file_obj.read().strip()\n\n return active_config_name\n\n\ndef _get_config_file(config_path, config_name):\n \"\"\"Returns the full path to a configuration's config file.\n\n Args:\n config_path (str): The Cloud SDK's config path.\n config_name (str): The configuration name.\n\n Returns:\n str: The config file path.\n \"\"\"\n return os.path.join(\n config_path, 'configurations', 'config_{}'.format(config_name))\n\n\ndef get_project_id():\n \"\"\"Gets the project ID from the Cloud SDK's configuration.\n\n Returns:\n Optional[str]: The project ID.\n \"\"\"\n config_path = get_config_path()\n active_config 
= _get_active_config(config_path)\n config_file = _get_config_file(config_path, active_config)\n\n if not os.path.isfile(config_file):\n return None\n\n config = configparser.RawConfigParser()\n\n try:\n config.read(config_file)\n except configparser.Error:\n return None\n\n if config.has_section(_PROJECT_CONFIG_SECTION):\n return config.get(\n _PROJECT_CONFIG_SECTION, _PROJECT_CONFIG_KEY)\n\n\ndef load_authorized_user_credentials(info):\n \"\"\"Loads an authorized user credential.\n\n Args:\n info (Mapping[str, str]): The loaded file's data.\n\n Returns:\n google.oauth2.credentials.Credentials: The constructed credentials.\n\n Raises:\n ValueError: if the info is in the wrong format or missing data.\n \"\"\"\n keys_needed = set(('refresh_token', 'client_id', 'client_secret'))\n missing = keys_needed.difference(six.iterkeys(info))\n\n if missing:\n raise ValueError(\n 'Authorized user info was not in the expected format, missing '\n 'fields {}.'.format(', '.join(missing)))\n\n return google.oauth2.credentials.Credentials(\n None, # No access token, must be refreshed.\n refresh_token=info['refresh_token'],\n token_uri=_GOOGLE_OAUTH2_TOKEN_ENDPOINT,\n client_id=info['client_id'],\n client_secret=info['client_secret'])\n", "path": "google/auth/_cloud_sdk.py"}]} | 1,555 | 556 |
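The two helpers introduced in the record above reduce to a pair of small file-system lookups: read the `active_config` marker (falling back to `default`) and build the path of the matching `configurations/config_<name>` file. A condensed standalone sketch is shown below, with function names chosen here for illustration rather than taken from google-auth.

```python
import io
import os


def active_gcloud_config_name(config_dir):
    """Read gcloud's 'active_config' marker file, falling back to 'default'."""
    marker = os.path.join(config_dir, 'active_config')
    if not os.path.isfile(marker):
        return 'default'
    with io.open(marker, 'r', encoding='utf-8') as file_obj:
        return file_obj.read().strip()


def gcloud_config_file(config_dir, config_name):
    """Path of the named configuration, e.g. .../configurations/config_default."""
    return os.path.join(config_dir, 'configurations', 'config_{}'.format(config_name))
```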
gh_patches_debug_28137 | rasdani/github-patches | git_diff | bokeh__bokeh-9234 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[DOCS] page for selection tools does not tell users how to get the values/indices of the selection
I've been talking with another bokeh user at Scipy 2019 and found we shared very similar frustrations when starting to work with bokeh selections. The problem is the [documentation page for selection tools](https://bokeh.pydata.org/en/latest/docs/user_guide/tools.html#boxselecttool
) at bokeh.pydata.org does not include information on how you can get the indices/values after you have selected something.
(It's not that there's no documentation on this, the problem is that it's split up & scattered around the place, plus is difficult to surface on google.)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bokeh/models/selections.py`
Content:
```
1 #-----------------------------------------------------------------------------
2 # Copyright (c) 2012 - 2019, Anaconda, Inc., and Bokeh Contributors.
3 # All rights reserved.
4 #
5 # The full license is in the file LICENSE.txt, distributed with this software.
6 #-----------------------------------------------------------------------------
7
8 #-----------------------------------------------------------------------------
9 # Boilerplate
10 #-----------------------------------------------------------------------------
11 from __future__ import absolute_import, division, print_function, unicode_literals
12
13 import logging
14 log = logging.getLogger(__name__)
15
16 #-----------------------------------------------------------------------------
17 # Imports
18 #-----------------------------------------------------------------------------
19
20 # Standard library imports
21
22 # External imports
23
24 # Bokeh imports
25 from ..core.has_props import abstract
26 from ..core.properties import Dict, Int, Seq, String
27 from ..model import Model
28
29 #-----------------------------------------------------------------------------
30 # Globals and constants
31 #-----------------------------------------------------------------------------
32
33 __all__ = (
34 'IntersectRenderers',
35 'Selection',
36 'SelectionPolicy',
37 'UnionRenderers',
38 )
39
40 #-----------------------------------------------------------------------------
41 # General API
42 #-----------------------------------------------------------------------------
43
44 class Selection(Model):
45 '''
46 A Selection represents a portion of the data in a ``DataSource``, which
47 can be visually manipulated in a plot.
48
49 Selections are typically created by selecting points in a plot with
50 a ``SelectTool``, but can also be programmatically specified.
51
52 '''
53
54 indices = Seq(Int, default=[], help="""
55 The indices included in a selection.
56 """)
57
58 line_indices = Seq(Int, default=[], help="""
59 """)
60
61 multiline_indices = Dict(String, Seq(Int), default={}, help="""
62 """)
63
64 # TODO (bev) image_indicies
65
66 @abstract
67 class SelectionPolicy(Model):
68 '''
69
70 '''
71
72 pass
73
74 class IntersectRenderers(SelectionPolicy):
75 '''
76 When a data source is shared between multiple renderers, a row in the data
77 source will only be selected if that point for each renderer is selected. The
78 selection is made from the intersection of hit test results from all renderers.
79
80 '''
81
82 pass
83
84 class UnionRenderers(SelectionPolicy):
85 '''
86 When a data source is shared between multiple renderers, selecting a point on
87 from any renderer will cause that row in the data source to be selected. The
88 selection is made from the union of hit test results from all renderers.
89
90 '''
91
92 pass
93
94 #-----------------------------------------------------------------------------
95 # Dev API
96 #-----------------------------------------------------------------------------
97
98 #-----------------------------------------------------------------------------
99 # Private API
100 #-----------------------------------------------------------------------------
101
102 #-----------------------------------------------------------------------------
103 # Code
104 #-----------------------------------------------------------------------------
105
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/bokeh/models/selections.py b/bokeh/models/selections.py
--- a/bokeh/models/selections.py
+++ b/bokeh/models/selections.py
@@ -49,16 +49,37 @@
Selections are typically created by selecting points in a plot with
a ``SelectTool``, but can also be programmatically specified.
+ For most glyphs, the ``indices`` property is the relevant value to use.
+
'''
indices = Seq(Int, default=[], help="""
- The indices included in a selection.
+ The "scatter" level indices included in a selection. For example, for a
+ selection on a ``Circle`` glyph, this list records the indices of whicn
+ individual circles are selected.
+
+ For "multi" glyphs such as ``Patches``, ``MultiLine``, ``MultiPolygons``,
+ etc, this list records the indices of which entire sub-items are selected.
+ For example, which indidual polygons of a ``MultiPolygon`` are selected.
""")
line_indices = Seq(Int, default=[], help="""
+ The point indices included in a selection on a ``Line`` glyph.
+
+ This value records the indices of the individual points on a ``Line`` that
+ were selected by a selection tool.
""")
multiline_indices = Dict(String, Seq(Int), default={}, help="""
+ The detailed point indices included in a selection on a ``MultiLine``.
+
+ This value records which points, on which lines, are part of a seletion on
+ a ``MulitLine``. The keys are the top level indices (i.e., which line)
+ which map to lists of indices (i.e. which points on that line).
+
+ If you only need to know which lines are selected, without knowing what
+ individual points on those lines are selected, then you can look at the
+ keys of this dictionary (converted to ints).
""")
# TODO (bev) image_indicies
| {"golden_diff": "diff --git a/bokeh/models/selections.py b/bokeh/models/selections.py\n--- a/bokeh/models/selections.py\n+++ b/bokeh/models/selections.py\n@@ -49,16 +49,37 @@\n Selections are typically created by selecting points in a plot with\n a ``SelectTool``, but can also be programmatically specified.\n \n+ For most glyphs, the ``indices`` property is the relevant value to use.\n+\n '''\n \n indices = Seq(Int, default=[], help=\"\"\"\n- The indices included in a selection.\n+ The \"scatter\" level indices included in a selection. For example, for a\n+ selection on a ``Circle`` glyph, this list records the indices of whicn\n+ individual circles are selected.\n+\n+ For \"multi\" glyphs such as ``Patches``, ``MultiLine``, ``MultiPolygons``,\n+ etc, this list records the indices of which entire sub-items are selected.\n+ For example, which indidual polygons of a ``MultiPolygon`` are selected.\n \"\"\")\n \n line_indices = Seq(Int, default=[], help=\"\"\"\n+ The point indices included in a selection on a ``Line`` glyph.\n+\n+ This value records the indices of the individual points on a ``Line`` that\n+ were selected by a selection tool.\n \"\"\")\n \n multiline_indices = Dict(String, Seq(Int), default={}, help=\"\"\"\n+ The detailed point indices included in a selection on a ``MultiLine``.\n+\n+ This value records which points, on which lines, are part of a seletion on\n+ a ``MulitLine``. The keys are the top level indices (i.e., which line)\n+ which map to lists of indices (i.e. which points on that line).\n+\n+ If you only need to know which lines are selected, without knowing what\n+ individual points on those lines are selected, then you can look at the\n+ keys of this dictionary (converted to ints).\n \"\"\")\n \n # TODO (bev) image_indicies\n", "issue": "[DOCS] page for selection tools does not tell users how to get the values/indices of the selection\nI've been talking with another bokeh user at Scipy 2019 and found we shared very similar frustrations when starting to work with bokeh selections. 
The problem is the [documentation page for selection tools](https://bokeh.pydata.org/en/latest/docs/user_guide/tools.html#boxselecttool\r\n) at bokeh.pydata.org does not include information on how you can get the indices/values after you have selected something.\r\n\r\n(It's not that there's no documentation on this, the problem is that it's split up & scattered around the place, plus is difficult to surface on google.)\n", "before_files": [{"content": "#-----------------------------------------------------------------------------\n# Copyright (c) 2012 - 2019, Anaconda, Inc., and Bokeh Contributors.\n# All rights reserved.\n#\n# The full license is in the file LICENSE.txt, distributed with this software.\n#-----------------------------------------------------------------------------\n\n#-----------------------------------------------------------------------------\n# Boilerplate\n#-----------------------------------------------------------------------------\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport logging\nlog = logging.getLogger(__name__)\n\n#-----------------------------------------------------------------------------\n# Imports\n#-----------------------------------------------------------------------------\n\n# Standard library imports\n\n# External imports\n\n# Bokeh imports\nfrom ..core.has_props import abstract\nfrom ..core.properties import Dict, Int, Seq, String\nfrom ..model import Model\n\n#-----------------------------------------------------------------------------\n# Globals and constants\n#-----------------------------------------------------------------------------\n\n__all__ = (\n 'IntersectRenderers',\n 'Selection',\n 'SelectionPolicy',\n 'UnionRenderers',\n)\n\n#-----------------------------------------------------------------------------\n# General API\n#-----------------------------------------------------------------------------\n\nclass Selection(Model):\n '''\n A Selection represents a portion of the data in a ``DataSource``, which\n can be visually manipulated in a plot.\n\n Selections are typically created by selecting points in a plot with\n a ``SelectTool``, but can also be programmatically specified.\n\n '''\n\n indices = Seq(Int, default=[], help=\"\"\"\n The indices included in a selection.\n \"\"\")\n\n line_indices = Seq(Int, default=[], help=\"\"\"\n \"\"\")\n\n multiline_indices = Dict(String, Seq(Int), default={}, help=\"\"\"\n \"\"\")\n\n # TODO (bev) image_indicies\n\n@abstract\nclass SelectionPolicy(Model):\n '''\n\n '''\n\n pass\n\nclass IntersectRenderers(SelectionPolicy):\n '''\n When a data source is shared between multiple renderers, a row in the data\n source will only be selected if that point for each renderer is selected. The\n selection is made from the intersection of hit test results from all renderers.\n\n '''\n\n pass\n\nclass UnionRenderers(SelectionPolicy):\n '''\n When a data source is shared between multiple renderers, selecting a point on\n from any renderer will cause that row in the data source to be selected. 
The\n selection is made from the union of hit test results from all renderers.\n\n '''\n\n pass\n\n#-----------------------------------------------------------------------------\n# Dev API\n#-----------------------------------------------------------------------------\n\n#-----------------------------------------------------------------------------\n# Private API\n#-----------------------------------------------------------------------------\n\n#-----------------------------------------------------------------------------\n# Code\n#-----------------------------------------------------------------------------\n", "path": "bokeh/models/selections.py"}], "after_files": [{"content": "#-----------------------------------------------------------------------------\n# Copyright (c) 2012 - 2019, Anaconda, Inc., and Bokeh Contributors.\n# All rights reserved.\n#\n# The full license is in the file LICENSE.txt, distributed with this software.\n#-----------------------------------------------------------------------------\n\n#-----------------------------------------------------------------------------\n# Boilerplate\n#-----------------------------------------------------------------------------\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport logging\nlog = logging.getLogger(__name__)\n\n#-----------------------------------------------------------------------------\n# Imports\n#-----------------------------------------------------------------------------\n\n# Standard library imports\n\n# External imports\n\n# Bokeh imports\nfrom ..core.has_props import abstract\nfrom ..core.properties import Dict, Int, Seq, String\nfrom ..model import Model\n\n#-----------------------------------------------------------------------------\n# Globals and constants\n#-----------------------------------------------------------------------------\n\n__all__ = (\n 'IntersectRenderers',\n 'Selection',\n 'SelectionPolicy',\n 'UnionRenderers',\n)\n\n#-----------------------------------------------------------------------------\n# General API\n#-----------------------------------------------------------------------------\n\nclass Selection(Model):\n '''\n A Selection represents a portion of the data in a ``DataSource``, which\n can be visually manipulated in a plot.\n\n Selections are typically created by selecting points in a plot with\n a ``SelectTool``, but can also be programmatically specified.\n\n For most glyphs, the ``indices`` property is the relevant value to use.\n\n '''\n\n indices = Seq(Int, default=[], help=\"\"\"\n The \"scatter\" level indices included in a selection. For example, for a\n selection on a ``Circle`` glyph, this list records the indices of whicn\n individual circles are selected.\n\n For \"multi\" glyphs such as ``Patches``, ``MultiLine``, ``MultiPolygons``,\n etc, this list records the indices of which entire sub-items are selected.\n For example, which indidual polygons of a ``MultiPolygon`` are selected.\n \"\"\")\n\n line_indices = Seq(Int, default=[], help=\"\"\"\n The point indices included in a selection on a ``Line`` glyph.\n\n This value records the indices of the individual points on a ``Line`` that\n were selected by a selection tool.\n \"\"\")\n\n multiline_indices = Dict(String, Seq(Int), default={}, help=\"\"\"\n The detailed point indices included in a selection on a ``MultiLine``.\n\n This value records which points, on which lines, are part of a seletion on\n a ``MulitLine``. 
The keys are the top level indices (i.e., which line)\n which map to lists of indices (i.e. which points on that line).\n\n If you only need to know which lines are selected, without knowing what\n individual points on those lines are selected, then you can look at the\n keys of this dictionary (converted to ints).\n \"\"\")\n\n # TODO (bev) image_indicies\n\n@abstract\nclass SelectionPolicy(Model):\n '''\n\n '''\n\n pass\n\nclass IntersectRenderers(SelectionPolicy):\n '''\n When a data source is shared between multiple renderers, a row in the data\n source will only be selected if that point for each renderer is selected. The\n selection is made from the intersection of hit test results from all renderers.\n\n '''\n\n pass\n\nclass UnionRenderers(SelectionPolicy):\n '''\n When a data source is shared between multiple renderers, selecting a point on\n from any renderer will cause that row in the data source to be selected. The\n selection is made from the union of hit test results from all renderers.\n\n '''\n\n pass\n\n#-----------------------------------------------------------------------------\n# Dev API\n#-----------------------------------------------------------------------------\n\n#-----------------------------------------------------------------------------\n# Private API\n#-----------------------------------------------------------------------------\n\n#-----------------------------------------------------------------------------\n# Code\n#-----------------------------------------------------------------------------\n", "path": "bokeh/models/selections.py"}]} | 1,134 | 453 |
gh_patches_debug_14149 | rasdani/github-patches | git_diff | pyg-team__pytorch_geometric-3930 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Computing the size of the in-degree histogram tensor dynamically in the PNA example
### 🛠 Proposed Refactor
The `deg` tensor in the PNA example is initialized with the size `5`.
https://github.com/pyg-team/pytorch_geometric/blob/50b7bfc4a59b5b6f7ec547ff862985f3b2e22798/examples/pna.py#L23
This value will obviously be different for different datasets. One can iterate over the training data and compute the maximum degree any node has. Then, the histogram tensor can be initialized with that value. Something like this:
```python
# compute the maximum in-degree in the training data
max_degree = 0
for data in train_dataset:
d = degree(data.edge_index[1], num_nodes=data.num_nodes, dtype=torch.long)
if d.max().item() > max_degree:
max_degree = d.max().item()
# create the in-degree histogram tensor
deg = torch.zeros(max_degree + 1, dtype=torch.long)
for data in train_dataset:
d = degree(data.edge_index[1], num_nodes=data.num_nodes, dtype=torch.long)
deg += torch.bincount(d, minlength=deg.numel())
```
### Suggest a potential alternative/fix
The results of the `degree` function can also be cached to avoid iterating two times. Further, for custom datasets where the split is made with indices or boolean masks e.g `Data(x=[10000, 100], edge_index=[2, 200000], edge_attr=[200000, 20], y=[10000], train_mask=[10000], val_mask=[10000], test_mask=[10000])`, the `subgraph` utility can be used.
```python
tr_subgraph = data.subgraph(data.train_mask)
# compute the in-degree of all the training nodes
d = degree(index=tr_subgraph.edge_index[1], num_nodes=tr_subgraph.num_nodes, dtype=torch.long)
# get the maximum in-degree, this will be the size of the histogram tensor
max_degree = d.max().item()
# create the in-degree histogram tensor
deg = torch.zeros(max_degree + 1, dtype=torch.long)
deg += torch.bincount(d, minlength=deg.numel())
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `examples/pna.py`
Content:
```
1 import os.path as osp
2
3 import torch
4 import torch.nn.functional as F
5 from torch.nn import Embedding, Linear, ModuleList, ReLU, Sequential
6 from torch.optim.lr_scheduler import ReduceLROnPlateau
7
8 from torch_geometric.datasets import ZINC
9 from torch_geometric.loader import DataLoader
10 from torch_geometric.nn import BatchNorm, PNAConv, global_add_pool
11 from torch_geometric.utils import degree
12
13 path = osp.join(osp.dirname(osp.realpath(__file__)), '..', 'data', 'ZINC')
14 train_dataset = ZINC(path, subset=True, split='train')
15 val_dataset = ZINC(path, subset=True, split='val')
16 test_dataset = ZINC(path, subset=True, split='test')
17
18 train_loader = DataLoader(train_dataset, batch_size=128, shuffle=True)
19 val_loader = DataLoader(val_dataset, batch_size=128)
20 test_loader = DataLoader(test_dataset, batch_size=128)
21
22 # Compute in-degree histogram over training data.
23 deg = torch.zeros(5, dtype=torch.long)
24 for data in train_dataset:
25 d = degree(data.edge_index[1], num_nodes=data.num_nodes, dtype=torch.long)
26 deg += torch.bincount(d, minlength=deg.numel())
27
28
29 class Net(torch.nn.Module):
30 def __init__(self):
31 super().__init__()
32
33 self.node_emb = Embedding(21, 75)
34 self.edge_emb = Embedding(4, 50)
35
36 aggregators = ['mean', 'min', 'max', 'std']
37 scalers = ['identity', 'amplification', 'attenuation']
38
39 self.convs = ModuleList()
40 self.batch_norms = ModuleList()
41 for _ in range(4):
42 conv = PNAConv(in_channels=75, out_channels=75,
43 aggregators=aggregators, scalers=scalers, deg=deg,
44 edge_dim=50, towers=5, pre_layers=1, post_layers=1,
45 divide_input=False)
46 self.convs.append(conv)
47 self.batch_norms.append(BatchNorm(75))
48
49 self.mlp = Sequential(Linear(75, 50), ReLU(), Linear(50, 25), ReLU(),
50 Linear(25, 1))
51
52 def forward(self, x, edge_index, edge_attr, batch):
53 x = self.node_emb(x.squeeze())
54 edge_attr = self.edge_emb(edge_attr)
55
56 for conv, batch_norm in zip(self.convs, self.batch_norms):
57 x = F.relu(batch_norm(conv(x, edge_index, edge_attr)))
58
59 x = global_add_pool(x, batch)
60 return self.mlp(x)
61
62
63 device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
64 model = Net().to(device)
65 optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
66 scheduler = ReduceLROnPlateau(optimizer, mode='min', factor=0.5, patience=20,
67 min_lr=0.00001)
68
69
70 def train(epoch):
71 model.train()
72
73 total_loss = 0
74 for data in train_loader:
75 data = data.to(device)
76 optimizer.zero_grad()
77 out = model(data.x, data.edge_index, data.edge_attr, data.batch)
78 loss = (out.squeeze() - data.y).abs().mean()
79 loss.backward()
80 total_loss += loss.item() * data.num_graphs
81 optimizer.step()
82 return total_loss / len(train_loader.dataset)
83
84
85 @torch.no_grad()
86 def test(loader):
87 model.eval()
88
89 total_error = 0
90 for data in loader:
91 data = data.to(device)
92 out = model(data.x, data.edge_index, data.edge_attr, data.batch)
93 total_error += (out.squeeze() - data.y).abs().sum().item()
94 return total_error / len(loader.dataset)
95
96
97 for epoch in range(1, 301):
98 loss = train(epoch)
99 val_mae = test(val_loader)
100 test_mae = test(test_loader)
101 scheduler.step(val_mae)
102 print(f'Epoch: {epoch:02d}, Loss: {loss:.4f}, Val: {val_mae:.4f}, '
103 f'Test: {test_mae:.4f}')
104
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/examples/pna.py b/examples/pna.py
--- a/examples/pna.py
+++ b/examples/pna.py
@@ -19,8 +19,14 @@
val_loader = DataLoader(val_dataset, batch_size=128)
test_loader = DataLoader(test_dataset, batch_size=128)
-# Compute in-degree histogram over training data.
-deg = torch.zeros(5, dtype=torch.long)
+# Compute the maximum in-degree in the training data.
+max_degree = -1
+for data in train_dataset:
+ d = degree(data.edge_index[1], num_nodes=data.num_nodes, dtype=torch.long)
+ max_degree = max(max_degree, int(d.max()))
+
+# Compute the in-degree histogram tensor
+deg = torch.zeros(max_degree + 1, dtype=torch.long)
for data in train_dataset:
d = degree(data.edge_index[1], num_nodes=data.num_nodes, dtype=torch.long)
deg += torch.bincount(d, minlength=deg.numel())
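The issue also notes that the two passes over `train_dataset` can be collapsed into one by caching the per-graph degree tensors. A sketch of that variant, under the same assumptions as the patch above (a PyG-style dataset whose graphs expose `edge_index` and `num_nodes`):

```python
import torch
from torch_geometric.utils import degree

# Single pass: cache each graph's in-degree tensor, then build the histogram.
all_degrees = [
    degree(data.edge_index[1], num_nodes=data.num_nodes, dtype=torch.long)
    for data in train_dataset
]
max_degree = int(max(d.max() for d in all_degrees))
deg = torch.zeros(max_degree + 1, dtype=torch.long)
for d in all_degrees:
    deg += torch.bincount(d, minlength=deg.numel())
```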
| {"golden_diff": "diff --git a/examples/pna.py b/examples/pna.py\n--- a/examples/pna.py\n+++ b/examples/pna.py\n@@ -19,8 +19,14 @@\n val_loader = DataLoader(val_dataset, batch_size=128)\n test_loader = DataLoader(test_dataset, batch_size=128)\n \n-# Compute in-degree histogram over training data.\n-deg = torch.zeros(5, dtype=torch.long)\n+# Compute the maximum in-degree in the training data.\n+max_degree = -1\n+for data in train_dataset:\n+ d = degree(data.edge_index[1], num_nodes=data.num_nodes, dtype=torch.long)\n+ max_degree = max(max_degree, int(d.max()))\n+\n+# Compute the in-degree histogram tensor\n+deg = torch.zeros(max_degree + 1, dtype=torch.long)\n for data in train_dataset:\n d = degree(data.edge_index[1], num_nodes=data.num_nodes, dtype=torch.long)\n deg += torch.bincount(d, minlength=deg.numel())\n", "issue": "Computing the size of the in-degree histogram tensor dynamically in the PNA example\n### \ud83d\udee0 Proposed Refactor\n\nThe `deg` tensor in the PNA example is initialized with the size `5`. \r\nhttps://github.com/pyg-team/pytorch_geometric/blob/50b7bfc4a59b5b6f7ec547ff862985f3b2e22798/examples/pna.py#L23\r\n\r\nThis value will obviously be different for different datasets. One can iterate over the training data and compute the maximum degree any node has. Then, the histogram tensor can be initialized with that value. Something like this:\r\n```python\r\n# compute the maximum in-degree in the training data\r\nmax_degree = 0\r\nfor data in train_dataset:\r\n d = degree(data.edge_index[1], num_nodes=data.num_nodes, dtype=torch.long)\r\n if d.max().item() > max_degree:\r\n max_degree = d.max().item()\r\n\r\n# create the in-degree histogram tensor\r\ndeg = torch.zeros(max_degree + 1, dtype=torch.long)\r\nfor data in train_dataset:\r\n d = degree(data.edge_index[1], num_nodes=data.num_nodes, dtype=torch.long)\r\n deg += torch.bincount(d, minlength=deg.numel())\r\n``` \n\n### Suggest a potential alternative/fix\n\nThe results of the `degree` function can also be cached to avoid iterating two times. 
Further, for custom datasets where the split is made with indices or boolean masks e.g `Data(x=[10000, 100], edge_index=[2, 200000], edge_attr=[200000, 20], y=[10000], train_mask=[10000], val_mask=[10000], test_mask=[10000])`, the `subgraph` utility can be used.\r\n```python\r\ntr_subgraph = data.subgraph(data.train_mask)\r\n# compute the in-degree of all the training nodes\r\nd = degree(index=tr_subgraph.edge_index[1], num_nodes=tr_subgraph.num_nodes, dtype=torch.long)\r\n\r\n# get the maximum in-degree, this will be the size of the histogram tensor\r\nmax_degree = d.max().item()\r\n\r\n# create the in-degree histogram tensor\r\ndeg = torch.zeros(max_degree + 1, dtype=torch.long)\r\ndeg += torch.bincount(d, minlength=deg.numel())\r\n```\n", "before_files": [{"content": "import os.path as osp\n\nimport torch\nimport torch.nn.functional as F\nfrom torch.nn import Embedding, Linear, ModuleList, ReLU, Sequential\nfrom torch.optim.lr_scheduler import ReduceLROnPlateau\n\nfrom torch_geometric.datasets import ZINC\nfrom torch_geometric.loader import DataLoader\nfrom torch_geometric.nn import BatchNorm, PNAConv, global_add_pool\nfrom torch_geometric.utils import degree\n\npath = osp.join(osp.dirname(osp.realpath(__file__)), '..', 'data', 'ZINC')\ntrain_dataset = ZINC(path, subset=True, split='train')\nval_dataset = ZINC(path, subset=True, split='val')\ntest_dataset = ZINC(path, subset=True, split='test')\n\ntrain_loader = DataLoader(train_dataset, batch_size=128, shuffle=True)\nval_loader = DataLoader(val_dataset, batch_size=128)\ntest_loader = DataLoader(test_dataset, batch_size=128)\n\n# Compute in-degree histogram over training data.\ndeg = torch.zeros(5, dtype=torch.long)\nfor data in train_dataset:\n d = degree(data.edge_index[1], num_nodes=data.num_nodes, dtype=torch.long)\n deg += torch.bincount(d, minlength=deg.numel())\n\n\nclass Net(torch.nn.Module):\n def __init__(self):\n super().__init__()\n\n self.node_emb = Embedding(21, 75)\n self.edge_emb = Embedding(4, 50)\n\n aggregators = ['mean', 'min', 'max', 'std']\n scalers = ['identity', 'amplification', 'attenuation']\n\n self.convs = ModuleList()\n self.batch_norms = ModuleList()\n for _ in range(4):\n conv = PNAConv(in_channels=75, out_channels=75,\n aggregators=aggregators, scalers=scalers, deg=deg,\n edge_dim=50, towers=5, pre_layers=1, post_layers=1,\n divide_input=False)\n self.convs.append(conv)\n self.batch_norms.append(BatchNorm(75))\n\n self.mlp = Sequential(Linear(75, 50), ReLU(), Linear(50, 25), ReLU(),\n Linear(25, 1))\n\n def forward(self, x, edge_index, edge_attr, batch):\n x = self.node_emb(x.squeeze())\n edge_attr = self.edge_emb(edge_attr)\n\n for conv, batch_norm in zip(self.convs, self.batch_norms):\n x = F.relu(batch_norm(conv(x, edge_index, edge_attr)))\n\n x = global_add_pool(x, batch)\n return self.mlp(x)\n\n\ndevice = torch.device('cuda' if torch.cuda.is_available() else 'cpu')\nmodel = Net().to(device)\noptimizer = torch.optim.Adam(model.parameters(), lr=0.001)\nscheduler = ReduceLROnPlateau(optimizer, mode='min', factor=0.5, patience=20,\n min_lr=0.00001)\n\n\ndef train(epoch):\n model.train()\n\n total_loss = 0\n for data in train_loader:\n data = data.to(device)\n optimizer.zero_grad()\n out = model(data.x, data.edge_index, data.edge_attr, data.batch)\n loss = (out.squeeze() - data.y).abs().mean()\n loss.backward()\n total_loss += loss.item() * data.num_graphs\n optimizer.step()\n return total_loss / len(train_loader.dataset)\n\n\[email protected]_grad()\ndef test(loader):\n model.eval()\n\n total_error = 0\n 
for data in loader:\n data = data.to(device)\n out = model(data.x, data.edge_index, data.edge_attr, data.batch)\n total_error += (out.squeeze() - data.y).abs().sum().item()\n return total_error / len(loader.dataset)\n\n\nfor epoch in range(1, 301):\n loss = train(epoch)\n val_mae = test(val_loader)\n test_mae = test(test_loader)\n scheduler.step(val_mae)\n print(f'Epoch: {epoch:02d}, Loss: {loss:.4f}, Val: {val_mae:.4f}, '\n f'Test: {test_mae:.4f}')\n", "path": "examples/pna.py"}], "after_files": [{"content": "import os.path as osp\n\nimport torch\nimport torch.nn.functional as F\nfrom torch.nn import Embedding, Linear, ModuleList, ReLU, Sequential\nfrom torch.optim.lr_scheduler import ReduceLROnPlateau\n\nfrom torch_geometric.datasets import ZINC\nfrom torch_geometric.loader import DataLoader\nfrom torch_geometric.nn import BatchNorm, PNAConv, global_add_pool\nfrom torch_geometric.utils import degree\n\npath = osp.join(osp.dirname(osp.realpath(__file__)), '..', 'data', 'ZINC')\ntrain_dataset = ZINC(path, subset=True, split='train')\nval_dataset = ZINC(path, subset=True, split='val')\ntest_dataset = ZINC(path, subset=True, split='test')\n\ntrain_loader = DataLoader(train_dataset, batch_size=128, shuffle=True)\nval_loader = DataLoader(val_dataset, batch_size=128)\ntest_loader = DataLoader(test_dataset, batch_size=128)\n\n# Compute the maximum in-degree in the training data.\nmax_degree = -1\nfor data in train_dataset:\n d = degree(data.edge_index[1], num_nodes=data.num_nodes, dtype=torch.long)\n max_degree = max(max_degree, int(d.max()))\n\n# Compute the in-degree histogram tensor\ndeg = torch.zeros(max_degree + 1, dtype=torch.long)\nfor data in train_dataset:\n d = degree(data.edge_index[1], num_nodes=data.num_nodes, dtype=torch.long)\n deg += torch.bincount(d, minlength=deg.numel())\n\n\nclass Net(torch.nn.Module):\n def __init__(self):\n super().__init__()\n\n self.node_emb = Embedding(21, 75)\n self.edge_emb = Embedding(4, 50)\n\n aggregators = ['mean', 'min', 'max', 'std']\n scalers = ['identity', 'amplification', 'attenuation']\n\n self.convs = ModuleList()\n self.batch_norms = ModuleList()\n for _ in range(4):\n conv = PNAConv(in_channels=75, out_channels=75,\n aggregators=aggregators, scalers=scalers, deg=deg,\n edge_dim=50, towers=5, pre_layers=1, post_layers=1,\n divide_input=False)\n self.convs.append(conv)\n self.batch_norms.append(BatchNorm(75))\n\n self.mlp = Sequential(Linear(75, 50), ReLU(), Linear(50, 25), ReLU(),\n Linear(25, 1))\n\n def forward(self, x, edge_index, edge_attr, batch):\n x = self.node_emb(x.squeeze())\n edge_attr = self.edge_emb(edge_attr)\n\n for conv, batch_norm in zip(self.convs, self.batch_norms):\n x = F.relu(batch_norm(conv(x, edge_index, edge_attr)))\n\n x = global_add_pool(x, batch)\n return self.mlp(x)\n\n\ndevice = torch.device('cuda' if torch.cuda.is_available() else 'cpu')\nmodel = Net().to(device)\noptimizer = torch.optim.Adam(model.parameters(), lr=0.001)\nscheduler = ReduceLROnPlateau(optimizer, mode='min', factor=0.5, patience=20,\n min_lr=0.00001)\n\n\ndef train(epoch):\n model.train()\n\n total_loss = 0\n for data in train_loader:\n data = data.to(device)\n optimizer.zero_grad()\n out = model(data.x, data.edge_index, data.edge_attr, data.batch)\n loss = (out.squeeze() - data.y).abs().mean()\n loss.backward()\n total_loss += loss.item() * data.num_graphs\n optimizer.step()\n return total_loss / len(train_loader.dataset)\n\n\[email protected]_grad()\ndef test(loader):\n model.eval()\n\n total_error = 0\n for data in loader:\n data = 
data.to(device)\n out = model(data.x, data.edge_index, data.edge_attr, data.batch)\n total_error += (out.squeeze() - data.y).abs().sum().item()\n return total_error / len(loader.dataset)\n\n\nfor epoch in range(1, 301):\n loss = train(epoch)\n val_mae = test(val_loader)\n test_mae = test(test_loader)\n scheduler.step(val_mae)\n print(f'Epoch: {epoch:02d}, Loss: {loss:.4f}, Val: {val_mae:.4f}, '\n f'Test: {test_mae:.4f}')\n", "path": "examples/pna.py"}]} | 1,914 | 215 |
gh_patches_debug_36723 | rasdani/github-patches | git_diff | mindsdb__lightwood-689 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Incorrect `Sktime` forecasting horizon starting point
* Lightwood version: 1.5.0
From a few internal tests, it seems the `sktime` time series mixer is not emitting forecasts from the end of the validation dataset, but from the training dataset instead, leading to predictions that will be incorrectly displaced.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lightwood/mixer/sktime.py`
Content:
```
1 import numpy as np
2 import pandas as pd
3 from typing import Dict, Union
4 from sktime.forecasting.arima import AutoARIMA
5
6 from lightwood.api import dtype
7 from lightwood.helpers.log import log
8 from lightwood.mixer.base import BaseMixer
9 from lightwood.api.types import PredictionArguments
10 from lightwood.encoder.time_series.helpers.common import get_group_matches
11 from lightwood.data.encoded_ds import EncodedDs, ConcatedEncodedDs
12
13
14 class SkTime(BaseMixer):
15 forecaster: str
16 n_ts_predictions: int
17 target: str
18 supports_proba: bool
19
20 def __init__(
21 self, stop_after: int, target: str, dtype_dict: Dict[str, str],
22 n_ts_predictions: int, ts_analysis: Dict):
23 super().__init__(stop_after)
24 self.target = target
25 dtype_dict[target] = dtype.float
26 self.model_class = AutoARIMA
27 self.models = {}
28 self.n_ts_predictions = n_ts_predictions
29 self.ts_analysis = ts_analysis
30 self.forecasting_horizon = np.arange(1, self.n_ts_predictions)
31 self.cutoff_index = {} # marks index at which training data stops and forecasting window starts
32 self.grouped_by = ['__default'] if not ts_analysis['tss'].group_by else ts_analysis['tss'].group_by
33 self.supports_proba = False
34 self.stable = True
35
36 def fit(self, train_data: EncodedDs, dev_data: EncodedDs) -> None:
37 log.info('Started fitting sktime forecaster for array prediction')
38
39 all_subsets = ConcatedEncodedDs([train_data, dev_data])
40 df = all_subsets.data_frame.sort_values(by=f'__mdb_original_{self.ts_analysis["tss"].order_by[0]}')
41 data = {'data': df[self.target],
42 'group_info': {gcol: df[gcol].tolist()
43 for gcol in self.grouped_by} if self.ts_analysis['tss'].group_by else {}}
44
45 for group in self.ts_analysis['group_combinations']:
46 # many warnings might be thrown inside of statsmodels during stepwise procedure
47 self.models[group] = self.model_class(suppress_warnings=True)
48
49 if self.grouped_by == ['__default']:
50 series_idxs = data['data'].index
51 series_data = data['data'].values
52 else:
53 series_idxs, series_data = get_group_matches(data, group)
54
55 if series_data.size > 0:
56 series = pd.Series(series_data.squeeze(), index=series_idxs)
57 series = series.sort_index(ascending=True)
58 series = series.reset_index(drop=True)
59 try:
60 self.models[group].fit(series)
61 except ValueError:
62 self.models[group] = self.model_class(deseasonalize=False)
63 self.models[group].fit(series)
64
65 self.cutoff_index[group] = len(series)
66
67 if self.grouped_by == ['__default']:
68 break
69
70 def __call__(self, ds: Union[EncodedDs, ConcatedEncodedDs],
71 args: PredictionArguments = PredictionArguments()) -> pd.DataFrame:
72 if args.predict_proba:
73 log.warning('This model does not output probability estimates')
74
75 length = sum(ds.encoded_ds_lenghts) if isinstance(ds, ConcatedEncodedDs) else len(ds)
76 ydf = pd.DataFrame(0, # zero-filled
77 index=np.arange(length),
78 columns=['prediction'],
79 dtype=object)
80
81 data = {'data': ds.data_frame[self.target].reset_index(drop=True),
82 'group_info': {gcol: ds.data_frame[gcol].tolist()
83 for gcol in self.grouped_by} if self.ts_analysis['tss'].group_by else {}}
84
85 # all_idxs = list(range(length)) # @TODO: substract, and assign empty predictions to remainder
86
87 for group in self.ts_analysis['group_combinations']:
88
89 if self.grouped_by == ['__default']:
90 series_idxs = data['data'].index
91 series_data = data['data'].values
92 else:
93 series_idxs, series_data = get_group_matches(data, group)
94
95 if series_data.size > 0:
96 forecaster = self.models[group] if self.models[group].is_fitted else self.models['__default']
97
98 series = pd.Series(series_data.squeeze(), index=series_idxs)
99 series = series.sort_index(ascending=True)
100 series = series.reset_index(drop=True)
101
102 for idx, _ in enumerate(series.iteritems()):
103 ydf['prediction'].iloc[series_idxs[idx]] = forecaster.predict(
104 np.arange(idx, # +cutoff
105 idx + self.n_ts_predictions)).tolist() # +cutoff
106
107 if self.grouped_by == ['__default']:
108 break
109
110 return ydf[['prediction']]
111
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/lightwood/mixer/sktime.py b/lightwood/mixer/sktime.py
--- a/lightwood/mixer/sktime.py
+++ b/lightwood/mixer/sktime.py
@@ -32,6 +32,7 @@
self.grouped_by = ['__default'] if not ts_analysis['tss'].group_by else ts_analysis['tss'].group_by
self.supports_proba = False
self.stable = True
+ self.prepared = False
def fit(self, train_data: EncodedDs, dev_data: EncodedDs) -> None:
log.info('Started fitting sktime forecaster for array prediction')
@@ -67,10 +68,26 @@
if self.grouped_by == ['__default']:
break
+ def partial_fit(self, train_data: EncodedDs, dev_data: EncodedDs) -> None:
+ """
+ Note: sktime asks for "specification of the time points for which forecasts are requested",
+ and this mixer complies by assuming forecasts will start immediately after the last observed
+ value.
+
+ Because of this, `partial_fit` ensures that both `dev` and `test` splits are used to fit the AutoARIMA model.
+
+ Due to how lightwood implements the `update` procedure, expected inputs are (for a train-dev-test split):
+
+ :param dev_data: original `test` split (used to validate and select model if ensemble is `BestOf`)
+ :param train_data: includes original `train` and `dev` split
+ """ # noqa
+ self.fit(dev_data, train_data)
+ self.prepared = True
+
def __call__(self, ds: Union[EncodedDs, ConcatedEncodedDs],
args: PredictionArguments = PredictionArguments()) -> pd.DataFrame:
if args.predict_proba:
- log.warning('This model does not output probability estimates')
+ log.warning('This mixer does not output probability estimates')
length = sum(ds.encoded_ds_lenghts) if isinstance(ds, ConcatedEncodedDs) else len(ds)
ydf = pd.DataFrame(0, # zero-filled
@@ -101,8 +118,7 @@
for idx, _ in enumerate(series.iteritems()):
ydf['prediction'].iloc[series_idxs[idx]] = forecaster.predict(
- np.arange(idx, # +cutoff
- idx + self.n_ts_predictions)).tolist() # +cutoff
+ np.arange(idx, idx + self.n_ts_predictions)).tolist()
if self.grouped_by == ['__default']:
break
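The reason this patch fixes the displacement: sktime forecasters emit predictions relative to the cutoff of whatever series they were last fitted on, so the added `partial_fit` refits on the later splits and moves that cutoff to the end of the observed data. A toy sktime sketch of the cutoff behaviour (synthetic series and a horizon of 5, purely illustrative):

```python
import numpy as np
import pandas as pd
from sktime.forecasting.arima import AutoARIMA

y = pd.Series(np.sin(np.arange(100) / 5.0))  # synthetic series

forecaster = AutoARIMA(suppress_warnings=True)
forecaster.fit(y.iloc[:80])  # cutoff is now the last fitted index (79)

# Horizons are counted from the cutoff, so these forecasts cover indices 80-84,
# regardless of how much newer data exists outside the fitted split.
print(forecaster.cutoff)
print(forecaster.predict(fh=np.arange(1, 6)))
```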
| {"golden_diff": "diff --git a/lightwood/mixer/sktime.py b/lightwood/mixer/sktime.py\n--- a/lightwood/mixer/sktime.py\n+++ b/lightwood/mixer/sktime.py\n@@ -32,6 +32,7 @@\n self.grouped_by = ['__default'] if not ts_analysis['tss'].group_by else ts_analysis['tss'].group_by\n self.supports_proba = False\n self.stable = True\n+ self.prepared = False\n \n def fit(self, train_data: EncodedDs, dev_data: EncodedDs) -> None:\n log.info('Started fitting sktime forecaster for array prediction')\n@@ -67,10 +68,26 @@\n if self.grouped_by == ['__default']:\n break\n \n+ def partial_fit(self, train_data: EncodedDs, dev_data: EncodedDs) -> None:\n+ \"\"\"\n+ Note: sktime asks for \"specification of the time points for which forecasts are requested\",\n+ and this mixer complies by assuming forecasts will start immediately after the last observed\n+ value.\n+\n+ Because of this, `partial_fit` ensures that both `dev` and `test` splits are used to fit the AutoARIMA model.\n+\n+ Due to how lightwood implements the `update` procedure, expected inputs are (for a train-dev-test split):\n+\n+ :param dev_data: original `test` split (used to validate and select model if ensemble is `BestOf`)\n+ :param train_data: includes original `train` and `dev` split\n+ \"\"\" # noqa\n+ self.fit(dev_data, train_data)\n+ self.prepared = True\n+\n def __call__(self, ds: Union[EncodedDs, ConcatedEncodedDs],\n args: PredictionArguments = PredictionArguments()) -> pd.DataFrame:\n if args.predict_proba:\n- log.warning('This model does not output probability estimates')\n+ log.warning('This mixer does not output probability estimates')\n \n length = sum(ds.encoded_ds_lenghts) if isinstance(ds, ConcatedEncodedDs) else len(ds)\n ydf = pd.DataFrame(0, # zero-filled\n@@ -101,8 +118,7 @@\n \n for idx, _ in enumerate(series.iteritems()):\n ydf['prediction'].iloc[series_idxs[idx]] = forecaster.predict(\n- np.arange(idx, # +cutoff\n- idx + self.n_ts_predictions)).tolist() # +cutoff\n+ np.arange(idx, idx + self.n_ts_predictions)).tolist()\n \n if self.grouped_by == ['__default']:\n break\n", "issue": "Incorrect `Sktime` forecasting horizon starting point\n* Lightwood version: 1.5.0\r\n\r\nFrom a few internal tests, it seems the `sktime` time series mixer is not emitting forecasts from the end of the validation dataset, but from the training dataset instead, leading to predictions that will be incorrectly displaced.\n", "before_files": [{"content": "import numpy as np\nimport pandas as pd\nfrom typing import Dict, Union\nfrom sktime.forecasting.arima import AutoARIMA\n\nfrom lightwood.api import dtype\nfrom lightwood.helpers.log import log\nfrom lightwood.mixer.base import BaseMixer\nfrom lightwood.api.types import PredictionArguments\nfrom lightwood.encoder.time_series.helpers.common import get_group_matches\nfrom lightwood.data.encoded_ds import EncodedDs, ConcatedEncodedDs\n\n\nclass SkTime(BaseMixer):\n forecaster: str\n n_ts_predictions: int\n target: str\n supports_proba: bool\n\n def __init__(\n self, stop_after: int, target: str, dtype_dict: Dict[str, str],\n n_ts_predictions: int, ts_analysis: Dict):\n super().__init__(stop_after)\n self.target = target\n dtype_dict[target] = dtype.float\n self.model_class = AutoARIMA\n self.models = {}\n self.n_ts_predictions = n_ts_predictions\n self.ts_analysis = ts_analysis\n self.forecasting_horizon = np.arange(1, self.n_ts_predictions)\n self.cutoff_index = {} # marks index at which training data stops and forecasting window starts\n self.grouped_by = ['__default'] if not 
ts_analysis['tss'].group_by else ts_analysis['tss'].group_by\n self.supports_proba = False\n self.stable = True\n\n def fit(self, train_data: EncodedDs, dev_data: EncodedDs) -> None:\n log.info('Started fitting sktime forecaster for array prediction')\n\n all_subsets = ConcatedEncodedDs([train_data, dev_data])\n df = all_subsets.data_frame.sort_values(by=f'__mdb_original_{self.ts_analysis[\"tss\"].order_by[0]}')\n data = {'data': df[self.target],\n 'group_info': {gcol: df[gcol].tolist()\n for gcol in self.grouped_by} if self.ts_analysis['tss'].group_by else {}}\n\n for group in self.ts_analysis['group_combinations']:\n # many warnings might be thrown inside of statsmodels during stepwise procedure\n self.models[group] = self.model_class(suppress_warnings=True)\n\n if self.grouped_by == ['__default']:\n series_idxs = data['data'].index\n series_data = data['data'].values\n else:\n series_idxs, series_data = get_group_matches(data, group)\n\n if series_data.size > 0:\n series = pd.Series(series_data.squeeze(), index=series_idxs)\n series = series.sort_index(ascending=True)\n series = series.reset_index(drop=True)\n try:\n self.models[group].fit(series)\n except ValueError:\n self.models[group] = self.model_class(deseasonalize=False)\n self.models[group].fit(series)\n\n self.cutoff_index[group] = len(series)\n\n if self.grouped_by == ['__default']:\n break\n\n def __call__(self, ds: Union[EncodedDs, ConcatedEncodedDs],\n args: PredictionArguments = PredictionArguments()) -> pd.DataFrame:\n if args.predict_proba:\n log.warning('This model does not output probability estimates')\n\n length = sum(ds.encoded_ds_lenghts) if isinstance(ds, ConcatedEncodedDs) else len(ds)\n ydf = pd.DataFrame(0, # zero-filled\n index=np.arange(length),\n columns=['prediction'],\n dtype=object)\n\n data = {'data': ds.data_frame[self.target].reset_index(drop=True),\n 'group_info': {gcol: ds.data_frame[gcol].tolist()\n for gcol in self.grouped_by} if self.ts_analysis['tss'].group_by else {}}\n\n # all_idxs = list(range(length)) # @TODO: substract, and assign empty predictions to remainder\n\n for group in self.ts_analysis['group_combinations']:\n\n if self.grouped_by == ['__default']:\n series_idxs = data['data'].index\n series_data = data['data'].values\n else:\n series_idxs, series_data = get_group_matches(data, group)\n\n if series_data.size > 0:\n forecaster = self.models[group] if self.models[group].is_fitted else self.models['__default']\n\n series = pd.Series(series_data.squeeze(), index=series_idxs)\n series = series.sort_index(ascending=True)\n series = series.reset_index(drop=True)\n\n for idx, _ in enumerate(series.iteritems()):\n ydf['prediction'].iloc[series_idxs[idx]] = forecaster.predict(\n np.arange(idx, # +cutoff\n idx + self.n_ts_predictions)).tolist() # +cutoff\n\n if self.grouped_by == ['__default']:\n break\n\n return ydf[['prediction']]\n", "path": "lightwood/mixer/sktime.py"}], "after_files": [{"content": "import numpy as np\nimport pandas as pd\nfrom typing import Dict, Union\nfrom sktime.forecasting.arima import AutoARIMA\n\nfrom lightwood.api import dtype\nfrom lightwood.helpers.log import log\nfrom lightwood.mixer.base import BaseMixer\nfrom lightwood.api.types import PredictionArguments\nfrom lightwood.encoder.time_series.helpers.common import get_group_matches\nfrom lightwood.data.encoded_ds import EncodedDs, ConcatedEncodedDs\n\n\nclass SkTime(BaseMixer):\n forecaster: str\n n_ts_predictions: int\n target: str\n supports_proba: bool\n\n def __init__(\n self, stop_after: int, target: str, 
dtype_dict: Dict[str, str],\n n_ts_predictions: int, ts_analysis: Dict):\n super().__init__(stop_after)\n self.target = target\n dtype_dict[target] = dtype.float\n self.model_class = AutoARIMA\n self.models = {}\n self.n_ts_predictions = n_ts_predictions\n self.ts_analysis = ts_analysis\n self.forecasting_horizon = np.arange(1, self.n_ts_predictions)\n self.cutoff_index = {} # marks index at which training data stops and forecasting window starts\n self.grouped_by = ['__default'] if not ts_analysis['tss'].group_by else ts_analysis['tss'].group_by\n self.supports_proba = False\n self.stable = True\n self.prepared = False\n\n def fit(self, train_data: EncodedDs, dev_data: EncodedDs) -> None:\n log.info('Started fitting sktime forecaster for array prediction')\n\n all_subsets = ConcatedEncodedDs([train_data, dev_data])\n df = all_subsets.data_frame.sort_values(by=f'__mdb_original_{self.ts_analysis[\"tss\"].order_by[0]}')\n data = {'data': df[self.target],\n 'group_info': {gcol: df[gcol].tolist()\n for gcol in self.grouped_by} if self.ts_analysis['tss'].group_by else {}}\n\n for group in self.ts_analysis['group_combinations']:\n # many warnings might be thrown inside of statsmodels during stepwise procedure\n self.models[group] = self.model_class(suppress_warnings=True)\n\n if self.grouped_by == ['__default']:\n series_idxs = data['data'].index\n series_data = data['data'].values\n else:\n series_idxs, series_data = get_group_matches(data, group)\n\n if series_data.size > 0:\n series = pd.Series(series_data.squeeze(), index=series_idxs)\n series = series.sort_index(ascending=True)\n series = series.reset_index(drop=True)\n try:\n self.models[group].fit(series)\n except ValueError:\n self.models[group] = self.model_class(deseasonalize=False)\n self.models[group].fit(series)\n\n self.cutoff_index[group] = len(series)\n\n if self.grouped_by == ['__default']:\n break\n\n def partial_fit(self, train_data: EncodedDs, dev_data: EncodedDs) -> None:\n \"\"\"\n Note: sktime asks for \"specification of the time points for which forecasts are requested\",\n and this mixer complies by assuming forecasts will start immediately after the last observed\n value.\n\n Because of this, `partial_fit` ensures that both `dev` and `test` splits are used to fit the AutoARIMA model.\n\n Due to how lightwood implements the `update` procedure, expected inputs are (for a train-dev-test split):\n\n :param dev_data: original `test` split (used to validate and select model if ensemble is `BestOf`)\n :param train_data: includes original `train` and `dev` split\n \"\"\" # noqa\n self.fit(dev_data, train_data)\n self.prepared = True\n\n def __call__(self, ds: Union[EncodedDs, ConcatedEncodedDs],\n args: PredictionArguments = PredictionArguments()) -> pd.DataFrame:\n if args.predict_proba:\n log.warning('This mixer does not output probability estimates')\n\n length = sum(ds.encoded_ds_lenghts) if isinstance(ds, ConcatedEncodedDs) else len(ds)\n ydf = pd.DataFrame(0, # zero-filled\n index=np.arange(length),\n columns=['prediction'],\n dtype=object)\n\n data = {'data': ds.data_frame[self.target].reset_index(drop=True),\n 'group_info': {gcol: ds.data_frame[gcol].tolist()\n for gcol in self.grouped_by} if self.ts_analysis['tss'].group_by else {}}\n\n # all_idxs = list(range(length)) # @TODO: substract, and assign empty predictions to remainder\n\n for group in self.ts_analysis['group_combinations']:\n\n if self.grouped_by == ['__default']:\n series_idxs = data['data'].index\n series_data = data['data'].values\n else:\n series_idxs, 
series_data = get_group_matches(data, group)\n\n if series_data.size > 0:\n forecaster = self.models[group] if self.models[group].is_fitted else self.models['__default']\n\n series = pd.Series(series_data.squeeze(), index=series_idxs)\n series = series.sort_index(ascending=True)\n series = series.reset_index(drop=True)\n\n for idx, _ in enumerate(series.iteritems()):\n ydf['prediction'].iloc[series_idxs[idx]] = forecaster.predict(\n np.arange(idx, idx + self.n_ts_predictions)).tolist()\n\n if self.grouped_by == ['__default']:\n break\n\n return ydf[['prediction']]\n", "path": "lightwood/mixer/sktime.py"}]} | 1,586 | 577 |
gh_patches_debug_14510 | rasdani/github-patches | git_diff | huggingface__optimum-808 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
BetterTransformer support for Marian
`The model type marian is not yet supported supported to be used with BetterTransformer.`
Wondering if there is a way to get this working with BetterTransformer. I am trying to get this model translating faster. https://huggingface.co/Helsinki-NLP/opus-mt-zh-en?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `optimum/bettertransformer/models/__init__.py`
Content:
```
1 # Copyright 2022 The HuggingFace Team. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 import warnings
15
16 from .encoder_models import (
17 AlbertLayerBetterTransformer,
18 BartEncoderLayerBetterTransformer,
19 BertLayerBetterTransformer,
20 CLIPLayerBetterTransformer,
21 DistilBertLayerBetterTransformer,
22 FSMTEncoderLayerBetterTransformer,
23 MBartEncoderLayerBetterTransformer,
24 ViltLayerBetterTransformer,
25 ViTLayerBetterTransformer,
26 Wav2Vec2EncoderLayerBetterTransformer,
27 WhisperEncoderLayerBetterTransformer,
28 )
29
30
31 class BetterTransformerManager:
32 MODEL_MAPPING = {
33 "albert": ("AlbertLayer", AlbertLayerBetterTransformer),
34 "bart": ("BartEncoderLayer", BartEncoderLayerBetterTransformer),
35 "bert": ("BertLayer", BertLayerBetterTransformer),
36 "bert-generation": ("BertGenerationLayer", BertLayerBetterTransformer),
37 "camembert": ("CamembertLayer", BertLayerBetterTransformer),
38 "clip": ("CLIPEncoderLayer", CLIPLayerBetterTransformer),
39 "data2vec-text": ("Data2VecTextLayer", BertLayerBetterTransformer),
40 "deit": ("DeiTLayer", ViTLayerBetterTransformer),
41 "distilbert": ("TransformerBlock", DistilBertLayerBetterTransformer),
42 "electra": ("ElectraLayer", BertLayerBetterTransformer),
43 "ernie": ("ErnieLayer", BertLayerBetterTransformer),
44 "fsmt": ("EncoderLayer", FSMTEncoderLayerBetterTransformer),
45 "hubert": ("HubertEncoderLayer", Wav2Vec2EncoderLayerBetterTransformer),
46 "layoutlm": ("LayoutLMLayer", BertLayerBetterTransformer),
47 "m2m_100": ("M2M100EncoderLayer", MBartEncoderLayerBetterTransformer),
48 "markuplm": ("MarkupLMLayer", BertLayerBetterTransformer),
49 "mbart": ("MBartEncoderLayer", MBartEncoderLayerBetterTransformer),
50 "rembert": ("RemBertLayer", BertLayerBetterTransformer),
51 "roberta": ("RobertaLayer", BertLayerBetterTransformer),
52 "roc_bert": ("RoCBertLayer", BertLayerBetterTransformer),
53 "roformer": ("RoFormerLayer", BertLayerBetterTransformer),
54 "splinter": ("SplinterLayer", BertLayerBetterTransformer),
55 "tapas": ("TapasLayer", BertLayerBetterTransformer),
56 "vilt": ("ViltLayer", ViltLayerBetterTransformer),
57 "vit": ("ViTLayer", ViTLayerBetterTransformer),
58 "vit_mae": ("ViTMAELayer", ViTLayerBetterTransformer),
59 "vit_msn": ("ViTMSNLayer", ViTLayerBetterTransformer),
60 "wav2vec2": (
61 ["Wav2Vec2EncoderLayer", "Wav2Vec2EncoderLayerStableLayerNorm"],
62 Wav2Vec2EncoderLayerBetterTransformer,
63 ),
64 "whisper": ("WhisperEncoderLayer", WhisperEncoderLayerBetterTransformer),
65 "xlm-roberta": ("XLMRobertaLayer", BertLayerBetterTransformer),
66 "yolos": ("YolosLayer", ViTLayerBetterTransformer),
67 }
68
69 EXCLUDE_FROM_TRANSFORM = {
70 # clip's text model uses causal attention, that is most likely not supported in BetterTransformer
71 "clip": ["text_model"],
72 }
73
74 CAN_NOT_BE_SUPPORTED = {
75 "deberta-v2": "DeBERTa v2 does not use a regular attention mechanism, which is not suppored in PyTorch's BetterTransformer.",
76 "glpn": "GLPN has a convolutional layer present in the FFN network, which is not suppored in PyTorch's BetterTransformer.",
77 "t5": "T5 uses attention bias, which is not suppored in PyTorch's BetterTransformer.",
78 }
79
80 @staticmethod
81 def cannot_support(model_type: str) -> bool:
82 """
83 Returns True if a given model type can not be supported by PyTorch's Better Transformer.
84
85 Args:
86 model_type (`str`):
87 The model type to check.
88 """
89 return model_type in BetterTransformerManager.CAN_NOT_BE_SUPPORTED
90
91 @staticmethod
92 def supports(model_type: str) -> bool:
93 """
94 Returns True if a given model type is supported by PyTorch's Better Transformer, and integrated in Optimum.
95
96 Args:
97 model_type (`str`):
98 The model type to check.
99 """
100 return model_type in BetterTransformerManager.MODEL_MAPPING
101
102
103 class warn_uncompatible_save(object):
104 def __init__(self, callback):
105 self.callback = callback
106
107 def __enter__(self):
108 return self
109
110 def __exit__(self, ex_typ, ex_val, traceback):
111 return True
112
113 def __call__(self, *args, **kwargs):
114 warnings.warn(
115 "You are calling `save_pretrained` to a `BetterTransformer` converted model you may likely encounter unexepected behaviors. ",
116 UserWarning,
117 )
118 return self.callback(*args, **kwargs)
119
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/optimum/bettertransformer/models/__init__.py b/optimum/bettertransformer/models/__init__.py
--- a/optimum/bettertransformer/models/__init__.py
+++ b/optimum/bettertransformer/models/__init__.py
@@ -45,6 +45,7 @@
"hubert": ("HubertEncoderLayer", Wav2Vec2EncoderLayerBetterTransformer),
"layoutlm": ("LayoutLMLayer", BertLayerBetterTransformer),
"m2m_100": ("M2M100EncoderLayer", MBartEncoderLayerBetterTransformer),
+ "marian": ("MarianEncoderLayer", BartEncoderLayerBetterTransformer),
"markuplm": ("MarkupLMLayer", BertLayerBetterTransformer),
"mbart": ("MBartEncoderLayer", MBartEncoderLayerBetterTransformer),
"rembert": ("RemBertLayer", BertLayerBetterTransformer),
| {"golden_diff": "diff --git a/optimum/bettertransformer/models/__init__.py b/optimum/bettertransformer/models/__init__.py\n--- a/optimum/bettertransformer/models/__init__.py\n+++ b/optimum/bettertransformer/models/__init__.py\n@@ -45,6 +45,7 @@\n \"hubert\": (\"HubertEncoderLayer\", Wav2Vec2EncoderLayerBetterTransformer),\n \"layoutlm\": (\"LayoutLMLayer\", BertLayerBetterTransformer),\n \"m2m_100\": (\"M2M100EncoderLayer\", MBartEncoderLayerBetterTransformer),\n+ \"marian\": (\"MarianEncoderLayer\", BartEncoderLayerBetterTransformer),\n \"markuplm\": (\"MarkupLMLayer\", BertLayerBetterTransformer),\n \"mbart\": (\"MBartEncoderLayer\", MBartEncoderLayerBetterTransformer),\n \"rembert\": (\"RemBertLayer\", BertLayerBetterTransformer),\n", "issue": "BetterTransformer support for Marian\n`The model type marian is not yet supported supported to be used with BetterTransformer.`\r\n\r\nWondering if there is a way to get this working with BetterTransformer. I am trying to get this model translating faster. https://huggingface.co/Helsinki-NLP/opus-mt-zh-en?\n", "before_files": [{"content": "# Copyright 2022 The HuggingFace Team. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nimport warnings\n\nfrom .encoder_models import (\n AlbertLayerBetterTransformer,\n BartEncoderLayerBetterTransformer,\n BertLayerBetterTransformer,\n CLIPLayerBetterTransformer,\n DistilBertLayerBetterTransformer,\n FSMTEncoderLayerBetterTransformer,\n MBartEncoderLayerBetterTransformer,\n ViltLayerBetterTransformer,\n ViTLayerBetterTransformer,\n Wav2Vec2EncoderLayerBetterTransformer,\n WhisperEncoderLayerBetterTransformer,\n)\n\n\nclass BetterTransformerManager:\n MODEL_MAPPING = {\n \"albert\": (\"AlbertLayer\", AlbertLayerBetterTransformer),\n \"bart\": (\"BartEncoderLayer\", BartEncoderLayerBetterTransformer),\n \"bert\": (\"BertLayer\", BertLayerBetterTransformer),\n \"bert-generation\": (\"BertGenerationLayer\", BertLayerBetterTransformer),\n \"camembert\": (\"CamembertLayer\", BertLayerBetterTransformer),\n \"clip\": (\"CLIPEncoderLayer\", CLIPLayerBetterTransformer),\n \"data2vec-text\": (\"Data2VecTextLayer\", BertLayerBetterTransformer),\n \"deit\": (\"DeiTLayer\", ViTLayerBetterTransformer),\n \"distilbert\": (\"TransformerBlock\", DistilBertLayerBetterTransformer),\n \"electra\": (\"ElectraLayer\", BertLayerBetterTransformer),\n \"ernie\": (\"ErnieLayer\", BertLayerBetterTransformer),\n \"fsmt\": (\"EncoderLayer\", FSMTEncoderLayerBetterTransformer),\n \"hubert\": (\"HubertEncoderLayer\", Wav2Vec2EncoderLayerBetterTransformer),\n \"layoutlm\": (\"LayoutLMLayer\", BertLayerBetterTransformer),\n \"m2m_100\": (\"M2M100EncoderLayer\", MBartEncoderLayerBetterTransformer),\n \"markuplm\": (\"MarkupLMLayer\", BertLayerBetterTransformer),\n \"mbart\": (\"MBartEncoderLayer\", MBartEncoderLayerBetterTransformer),\n \"rembert\": (\"RemBertLayer\", BertLayerBetterTransformer),\n \"roberta\": (\"RobertaLayer\", BertLayerBetterTransformer),\n \"roc_bert\": (\"RoCBertLayer\", 
BertLayerBetterTransformer),\n \"roformer\": (\"RoFormerLayer\", BertLayerBetterTransformer),\n \"splinter\": (\"SplinterLayer\", BertLayerBetterTransformer),\n \"tapas\": (\"TapasLayer\", BertLayerBetterTransformer),\n \"vilt\": (\"ViltLayer\", ViltLayerBetterTransformer),\n \"vit\": (\"ViTLayer\", ViTLayerBetterTransformer),\n \"vit_mae\": (\"ViTMAELayer\", ViTLayerBetterTransformer),\n \"vit_msn\": (\"ViTMSNLayer\", ViTLayerBetterTransformer),\n \"wav2vec2\": (\n [\"Wav2Vec2EncoderLayer\", \"Wav2Vec2EncoderLayerStableLayerNorm\"],\n Wav2Vec2EncoderLayerBetterTransformer,\n ),\n \"whisper\": (\"WhisperEncoderLayer\", WhisperEncoderLayerBetterTransformer),\n \"xlm-roberta\": (\"XLMRobertaLayer\", BertLayerBetterTransformer),\n \"yolos\": (\"YolosLayer\", ViTLayerBetterTransformer),\n }\n\n EXCLUDE_FROM_TRANSFORM = {\n # clip's text model uses causal attention, that is most likely not supported in BetterTransformer\n \"clip\": [\"text_model\"],\n }\n\n CAN_NOT_BE_SUPPORTED = {\n \"deberta-v2\": \"DeBERTa v2 does not use a regular attention mechanism, which is not suppored in PyTorch's BetterTransformer.\",\n \"glpn\": \"GLPN has a convolutional layer present in the FFN network, which is not suppored in PyTorch's BetterTransformer.\",\n \"t5\": \"T5 uses attention bias, which is not suppored in PyTorch's BetterTransformer.\",\n }\n\n @staticmethod\n def cannot_support(model_type: str) -> bool:\n \"\"\"\n Returns True if a given model type can not be supported by PyTorch's Better Transformer.\n\n Args:\n model_type (`str`):\n The model type to check.\n \"\"\"\n return model_type in BetterTransformerManager.CAN_NOT_BE_SUPPORTED\n\n @staticmethod\n def supports(model_type: str) -> bool:\n \"\"\"\n Returns True if a given model type is supported by PyTorch's Better Transformer, and integrated in Optimum.\n\n Args:\n model_type (`str`):\n The model type to check.\n \"\"\"\n return model_type in BetterTransformerManager.MODEL_MAPPING\n\n\nclass warn_uncompatible_save(object):\n def __init__(self, callback):\n self.callback = callback\n\n def __enter__(self):\n return self\n\n def __exit__(self, ex_typ, ex_val, traceback):\n return True\n\n def __call__(self, *args, **kwargs):\n warnings.warn(\n \"You are calling `save_pretrained` to a `BetterTransformer` converted model you may likely encounter unexepected behaviors. \",\n UserWarning,\n )\n return self.callback(*args, **kwargs)\n", "path": "optimum/bettertransformer/models/__init__.py"}], "after_files": [{"content": "# Copyright 2022 The HuggingFace Team. 
All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nimport warnings\n\nfrom .encoder_models import (\n AlbertLayerBetterTransformer,\n BartEncoderLayerBetterTransformer,\n BertLayerBetterTransformer,\n CLIPLayerBetterTransformer,\n DistilBertLayerBetterTransformer,\n FSMTEncoderLayerBetterTransformer,\n MBartEncoderLayerBetterTransformer,\n ViltLayerBetterTransformer,\n ViTLayerBetterTransformer,\n Wav2Vec2EncoderLayerBetterTransformer,\n WhisperEncoderLayerBetterTransformer,\n)\n\n\nclass BetterTransformerManager:\n MODEL_MAPPING = {\n \"albert\": (\"AlbertLayer\", AlbertLayerBetterTransformer),\n \"bart\": (\"BartEncoderLayer\", BartEncoderLayerBetterTransformer),\n \"bert\": (\"BertLayer\", BertLayerBetterTransformer),\n \"bert-generation\": (\"BertGenerationLayer\", BertLayerBetterTransformer),\n \"camembert\": (\"CamembertLayer\", BertLayerBetterTransformer),\n \"clip\": (\"CLIPEncoderLayer\", CLIPLayerBetterTransformer),\n \"data2vec-text\": (\"Data2VecTextLayer\", BertLayerBetterTransformer),\n \"deit\": (\"DeiTLayer\", ViTLayerBetterTransformer),\n \"distilbert\": (\"TransformerBlock\", DistilBertLayerBetterTransformer),\n \"electra\": (\"ElectraLayer\", BertLayerBetterTransformer),\n \"ernie\": (\"ErnieLayer\", BertLayerBetterTransformer),\n \"fsmt\": (\"EncoderLayer\", FSMTEncoderLayerBetterTransformer),\n \"hubert\": (\"HubertEncoderLayer\", Wav2Vec2EncoderLayerBetterTransformer),\n \"layoutlm\": (\"LayoutLMLayer\", BertLayerBetterTransformer),\n \"m2m_100\": (\"M2M100EncoderLayer\", MBartEncoderLayerBetterTransformer),\n \"marian\": (\"MarianEncoderLayer\", BartEncoderLayerBetterTransformer),\n \"markuplm\": (\"MarkupLMLayer\", BertLayerBetterTransformer),\n \"mbart\": (\"MBartEncoderLayer\", MBartEncoderLayerBetterTransformer),\n \"rembert\": (\"RemBertLayer\", BertLayerBetterTransformer),\n \"roberta\": (\"RobertaLayer\", BertLayerBetterTransformer),\n \"roc_bert\": (\"RoCBertLayer\", BertLayerBetterTransformer),\n \"roformer\": (\"RoFormerLayer\", BertLayerBetterTransformer),\n \"splinter\": (\"SplinterLayer\", BertLayerBetterTransformer),\n \"tapas\": (\"TapasLayer\", BertLayerBetterTransformer),\n \"vilt\": (\"ViltLayer\", ViltLayerBetterTransformer),\n \"vit\": (\"ViTLayer\", ViTLayerBetterTransformer),\n \"vit_mae\": (\"ViTMAELayer\", ViTLayerBetterTransformer),\n \"vit_msn\": (\"ViTMSNLayer\", ViTLayerBetterTransformer),\n \"wav2vec2\": (\"Wav2Vec2EncoderLayer\", Wav2Vec2EncoderLayerBetterTransformer),\n \"whisper\": (\"WhisperEncoderLayer\", WhisperEncoderLayerBetterTransformer),\n \"xlm-roberta\": (\"XLMRobertaLayer\", BertLayerBetterTransformer),\n \"yolos\": (\"YolosLayer\", ViTLayerBetterTransformer),\n }\n\n EXCLUDE_FROM_TRANSFORM = {\n # clip's text model uses causal attention, that is most likely not supported in BetterTransformer\n \"clip\": [\"text_model\"],\n }\n\n CAN_NOT_BE_SUPPORTED = {\n \"deberta-v2\": \"DeBERTa v2 does not use a regular attention mechanism, which is not suppored in PyTorch's BetterTransformer.\",\n \"glpn\": \"GLPN has 
a convolutional layer present in the FFN network, which is not suppored in PyTorch's BetterTransformer.\",\n \"t5\": \"T5 uses attention bias, which is not suppored in PyTorch's BetterTransformer.\",\n }\n\n @staticmethod\n def cannot_support(model_type: str) -> bool:\n \"\"\"\n Returns True if a given model type can not be supported by PyTorch's Better Transformer.\n\n Args:\n model_type (`str`):\n The model type to check.\n \"\"\"\n return model_type in BetterTransformerManager.CAN_NOT_BE_SUPPORTED\n\n @staticmethod\n def supports(model_type: str) -> bool:\n \"\"\"\n Returns True if a given model type is supported by PyTorch's Better Transformer, and integrated in Optimum.\n\n Args:\n model_type (`str`):\n The model type to check.\n \"\"\"\n return model_type in BetterTransformerManager.MODEL_MAPPING\n\n\nclass warn_uncompatible_save(object):\n def __init__(self, callback):\n self.callback = callback\n\n def __enter__(self):\n return self\n\n def __exit__(self, ex_typ, ex_val, traceback):\n return True\n\n def __call__(self, *args, **kwargs):\n warnings.warn(\n \"You are calling `save_pretrained` to a `BetterTransformer` converted model you may likely encounter unexepected behaviors. \",\n UserWarning,\n )\n return self.callback(*args, **kwargs)\n", "path": "optimum/bettertransformer/models/__init__.py"}]} | 1,783 | 200 |
gh_patches_debug_662 | rasdani/github-patches | git_diff | pex-tool__pex-1976 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Release 2.1.113
On the docket:
+ [x] Restore AtomicDirectory non-locked good behavior. #1974
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pex/version.py`
Content:
```
1 # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 __version__ = "2.1.112"
5
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pex/version.py b/pex/version.py
--- a/pex/version.py
+++ b/pex/version.py
@@ -1,4 +1,4 @@
# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
# Licensed under the Apache License, Version 2.0 (see LICENSE).
-__version__ = "2.1.112"
+__version__ = "2.1.113"
| {"golden_diff": "diff --git a/pex/version.py b/pex/version.py\n--- a/pex/version.py\n+++ b/pex/version.py\n@@ -1,4 +1,4 @@\n # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n # Licensed under the Apache License, Version 2.0 (see LICENSE).\n \n-__version__ = \"2.1.112\"\n+__version__ = \"2.1.113\"\n", "issue": "Release 2.1.113\nOn the docket:\r\n+ [x] Restore AtomicDirectory non-locked good behavior. #1974\n", "before_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = \"2.1.112\"\n", "path": "pex/version.py"}], "after_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = \"2.1.113\"\n", "path": "pex/version.py"}]} | 342 | 98 |
gh_patches_debug_536 | rasdani/github-patches | git_diff | translate__pootle-5863 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Templates language is taken into account in the main view
I'm testing Pootle 2.8.0RC3 and I've found an issue related to #4568.
When I filter the translations for a single project, the progress bar now shows 100% (the templates aren't taken into account now, great):

However, when I go back to the global view, that project shows a progress bar including the templates result:

Thank you!
--- END ISSUE ---
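For context, the two screenshots boil down to one inconsistency: the per-project view already leaves the `templates` pseudo-language out of its statistics, while the global (project-set) view still aggregates it. A minimal sketch of the intended aggregation, using illustrative Django ORM names rather than Pootle's actual models (only the `tp__language__code` lookup is taken from the project; the rest is assumed):

```python
from django.db.models import Sum

def project_set_word_counts(stats_qs):
    """Aggregate stats for the global view, ignoring template "translations".

    ``stats_qs`` is assumed to be a queryset whose rows are linked to a
    translation project (``tp``) with a language code; the count field
    names are illustrative, not taken from Pootle.
    """
    real_rows = stats_qs.exclude(tp__language__code="templates")
    return real_rows.aggregate(
        total=Sum("total_words"),
        translated=Sum("translated_words"),
    )
```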
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pootle/apps/pootle_data/project_data.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 #
3 # Copyright (C) Pootle contributors.
4 #
5 # This file is a part of the Pootle project. It is distributed under the GPL3
6 # or later license. See the LICENSE file for a copy of the license and the
7 # AUTHORS file for copyright and authorship information.
8
9 from pootle.core.delegate import revision
10
11 from .utils import RelatedStoresDataTool, RelatedTPsDataTool
12
13
14 class ProjectDataTool(RelatedTPsDataTool):
15 """Retrieves aggregate stats for a Project"""
16
17 cache_key_name = "project"
18
19 def filter_data(self, qs):
20 return qs.filter(tp__project=self.context)
21
22 @property
23 def rev_cache_key(self):
24 return revision.get(
25 self.context.__class__)(self.context.directory).get(key="stats")
26
27
28 class ProjectResourceDataTool(RelatedStoresDataTool):
29 group_by = ("store__translation_project__language__code", )
30 cache_key_name = "project_resource"
31
32 @property
33 def project_path(self):
34 return (
35 "/%s%s"
36 % (self.project_code, self.tp_path))
37
38 @property
39 def tp_path(self):
40 return (
41 "/%s%s"
42 % (self.dir_path,
43 self.filename))
44
45 def filter_data(self, qs):
46 return (
47 qs.filter(store__translation_project__project__code=self.project_code)
48 .filter(store__tp_path__startswith=self.tp_path))
49
50 @property
51 def context_name(self):
52 return "/projects%s" % self.project_path
53
54
55 class ProjectSetDataTool(RelatedTPsDataTool):
56 group_by = ("tp__project__code", )
57 cache_key_name = "projects"
58
59 def get_root_child_path(self, child):
60 return child[self.group_by[0]]
61
62 @property
63 def context_name(self):
64 return "ALL"
65
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pootle/apps/pootle_data/project_data.py b/pootle/apps/pootle_data/project_data.py
--- a/pootle/apps/pootle_data/project_data.py
+++ b/pootle/apps/pootle_data/project_data.py
@@ -62,3 +62,7 @@
@property
def context_name(self):
return "ALL"
+
+ def filter_data(self, qs):
+ qs = super(ProjectSetDataTool, self).filter_data(qs)
+ return qs.exclude(tp__language__code="templates")
| {"golden_diff": "diff --git a/pootle/apps/pootle_data/project_data.py b/pootle/apps/pootle_data/project_data.py\n--- a/pootle/apps/pootle_data/project_data.py\n+++ b/pootle/apps/pootle_data/project_data.py\n@@ -62,3 +62,7 @@\n @property\n def context_name(self):\n return \"ALL\"\n+\n+ def filter_data(self, qs):\n+ qs = super(ProjectSetDataTool, self).filter_data(qs)\n+ return qs.exclude(tp__language__code=\"templates\")\n", "issue": "Templates language is taken into account in the main view\nI'm testing Pootle 2.8.0RC3 and I've found an issue related to #4568.\r\n\r\nWhen I filter the translations for a single project, the progress bar now shows 100% (the templates aren't taken into account now, great):\r\n\r\n\r\nHowever, when I go back to the global view, that project shows a progress bar including the templates result:\r\n\r\n\r\nThank you!\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\nfrom pootle.core.delegate import revision\n\nfrom .utils import RelatedStoresDataTool, RelatedTPsDataTool\n\n\nclass ProjectDataTool(RelatedTPsDataTool):\n \"\"\"Retrieves aggregate stats for a Project\"\"\"\n\n cache_key_name = \"project\"\n\n def filter_data(self, qs):\n return qs.filter(tp__project=self.context)\n\n @property\n def rev_cache_key(self):\n return revision.get(\n self.context.__class__)(self.context.directory).get(key=\"stats\")\n\n\nclass ProjectResourceDataTool(RelatedStoresDataTool):\n group_by = (\"store__translation_project__language__code\", )\n cache_key_name = \"project_resource\"\n\n @property\n def project_path(self):\n return (\n \"/%s%s\"\n % (self.project_code, self.tp_path))\n\n @property\n def tp_path(self):\n return (\n \"/%s%s\"\n % (self.dir_path,\n self.filename))\n\n def filter_data(self, qs):\n return (\n qs.filter(store__translation_project__project__code=self.project_code)\n .filter(store__tp_path__startswith=self.tp_path))\n\n @property\n def context_name(self):\n return \"/projects%s\" % self.project_path\n\n\nclass ProjectSetDataTool(RelatedTPsDataTool):\n group_by = (\"tp__project__code\", )\n cache_key_name = \"projects\"\n\n def get_root_child_path(self, child):\n return child[self.group_by[0]]\n\n @property\n def context_name(self):\n return \"ALL\"\n", "path": "pootle/apps/pootle_data/project_data.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. 
See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\nfrom pootle.core.delegate import revision\n\nfrom .utils import RelatedStoresDataTool, RelatedTPsDataTool\n\n\nclass ProjectDataTool(RelatedTPsDataTool):\n \"\"\"Retrieves aggregate stats for a Project\"\"\"\n\n cache_key_name = \"project\"\n\n def filter_data(self, qs):\n return qs.filter(tp__project=self.context)\n\n @property\n def rev_cache_key(self):\n return revision.get(\n self.context.__class__)(self.context.directory).get(key=\"stats\")\n\n\nclass ProjectResourceDataTool(RelatedStoresDataTool):\n group_by = (\"store__translation_project__language__code\", )\n cache_key_name = \"project_resource\"\n\n @property\n def project_path(self):\n return (\n \"/%s%s\"\n % (self.project_code, self.tp_path))\n\n @property\n def tp_path(self):\n return (\n \"/%s%s\"\n % (self.dir_path,\n self.filename))\n\n def filter_data(self, qs):\n return (\n qs.filter(store__translation_project__project__code=self.project_code)\n .filter(store__tp_path__startswith=self.tp_path))\n\n @property\n def context_name(self):\n return \"/projects%s\" % self.project_path\n\n\nclass ProjectSetDataTool(RelatedTPsDataTool):\n group_by = (\"tp__project__code\", )\n cache_key_name = \"projects\"\n\n def get_root_child_path(self, child):\n return child[self.group_by[0]]\n\n @property\n def context_name(self):\n return \"ALL\"\n\n def filter_data(self, qs):\n qs = super(ProjectSetDataTool, self).filter_data(qs)\n return qs.exclude(tp__language__code=\"templates\")\n", "path": "pootle/apps/pootle_data/project_data.py"}]} | 1,020 | 123 |
gh_patches_debug_2034 | rasdani/github-patches | git_diff | hylang__hy-320 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
hy raises ImportError out of the box
This is on Python 2.6.
May be related to #37
I think `hy` should probably install the `importlib` dependency at installation time, or the docs should state clearly that `importlib` needs to be installed ahead of time. Or, worst case, state that Python 2.6 is not supported.
```
(env)09:52:13 Python (master) > hy
Traceback (most recent call last):
File "/Users/jacobsen/env/bin/hy", line 9, in <module>
load_entry_point('hy==0.9.10', 'console_scripts', 'hy')()
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/pkg_resources.py", line 343, in load_entry_point
return get_distribution(dist).load_entry_point(group, name)
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/pkg_resources.py", line 2354, in load_entry_point
return ep.load()
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/pkg_resources.py", line 2060, in load
entry = __import__(self.module_name, globals(),globals(), ['__name__'])
File "/Users/jacobsen/Programming/Python/hy/hy/__init__.py", line 37, in <module>
import hy.importer # NOQA
File "/Users/jacobsen/Programming/Python/hy/hy/importer.py", line 22, in <module>
from hy.compiler import hy_compile
File "/Users/jacobsen/Programming/Python/hy/hy/compiler.py", line 44, in <module>
import importlib
ImportError: No module named importlib
(env)09:52:13 Python (master) > pip install importlib
Downloading/unpacking importlib
Downloading importlib-1.0.2.tar.bz2
Running setup.py egg_info for package importlib
Installing collected packages: importlib
Running setup.py install for importlib
Successfully installed importlib
Cleaning up...
(env)09:52:21 Python (master) > hy
hy 0.9.10
=>
```
--- END ISSUE ---
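The traceback shows that `hy.compiler` imports `importlib`, which only entered the standard library in Python 2.7, while the setup script below only special-cases `argparse` for older interpreters. A sketch of the usual conditional-dependency pattern, mirroring how `argparse` is already handled; the `importlib` version pin is an assumption based on the PyPI backport shown in the issue:

```python
import sys

install_requires = ["rply>=0.6.2"]
if sys.version_info[:2] < (2, 7):
    # argparse and importlib are part of the standard library from 2.7 on;
    # on 2.6 both have to come from their PyPI backports.
    install_requires.append("argparse>=1.2.1")
    install_requires.append("importlib>=1.0.2")
```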
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2 # Copyright (c) 2012, 2013 Paul Tagliamonte <[email protected]>
3 #
4 # Permission is hereby granted, free of charge, to any person obtaining a
5 # copy of this software and associated documentation files (the "Software"),
6 # to deal in the Software without restriction, including without limitation
7 # the rights to use, copy, modify, merge, publish, distribute, sublicense,
8 # and/or sell copies of the Software, and to permit persons to whom the
9 # Software is furnished to do so, subject to the following conditions:
10 #
11 # The above copyright notice and this permission notice shall be included in
12 # all copies or substantial portions of the Software.
13 #
14 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
15 # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
16 # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
17 # THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
18 # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
19 # FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
20 # DEALINGS IN THE SOFTWARE.
21
22 import os
23 import re
24 import sys
25
26 from setuptools import find_packages, setup
27
28 PKG = "hy"
29 VERSIONFILE = os.path.join(PKG, "version.py")
30 verstr = "unknown"
31 try:
32 verstrline = open(VERSIONFILE, "rt").read()
33 except EnvironmentError:
34 pass # Okay, there is no version file.
35 else:
36 VSRE = r"^__version__ = ['\"]([^'\"]*)['\"]"
37 mo = re.search(VSRE, verstrline, re.M)
38 if mo:
39 __version__ = mo.group(1)
40 else:
41 msg = "if %s.py exists, it is required to be well-formed" % VERSIONFILE
42 raise RuntimeError(msg)
43
44 long_description = """Hy is a Python <--> Lisp layer. It helps
45 make things work nicer, and lets Python and the Hy lisp variant play
46 nice together. """
47
48 install_requires = ['rply>=0.6.2']
49 if sys.version_info[:2] < (2, 7):
50 install_requires.append('argparse>=1.2.1')
51 if os.name == 'nt':
52 install_requires.append('pyreadline==2.0')
53
54 setup(
55 name=PKG,
56 version=__version__,
57 install_requires=install_requires,
58 dependency_links=['https://github.com/hylang/rply/zipball/master#egg=rply-0.6.2'],
59 entry_points={
60 'console_scripts': [
61 'hy = hy.cmdline:hy_main',
62 'hyc = hy.cmdline:hyc_main'
63 ]
64 },
65 packages=find_packages(exclude=['tests*']),
66 package_data={
67 'hy.contrib': ['*.hy'],
68 'hy.core': ['*.hy'],
69 },
70 author="Paul Tagliamonte",
71 author_email="[email protected]",
72 long_description=long_description,
73 description='Lisp and Python love each other.',
74 license="Expat",
75 url="http://hylang.org/",
76 platforms=['any'],
77 classifiers=[
78 "Development Status :: 4 - Beta",
79 "Intended Audience :: Developers",
80 "License :: DFSG approved",
81 "License :: OSI Approved :: MIT License", # Really "Expat". Ugh.
82 "Operating System :: OS Independent",
83 "Programming Language :: Lisp",
84 "Programming Language :: Python",
85 "Programming Language :: Python :: 2",
86 "Programming Language :: Python :: 2.6",
87 "Programming Language :: Python :: 2.7",
88 "Programming Language :: Python :: 3",
89 "Programming Language :: Python :: 3.2",
90 "Programming Language :: Python :: 3.3",
91 "Topic :: Software Development :: Code Generators",
92 "Topic :: Software Development :: Compilers",
93 "Topic :: Software Development :: Libraries",
94 ]
95 )
96
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -48,6 +48,7 @@
install_requires = ['rply>=0.6.2']
if sys.version_info[:2] < (2, 7):
install_requires.append('argparse>=1.2.1')
+ install_requires.append('importlib>=1.0.2')
if os.name == 'nt':
install_requires.append('pyreadline==2.0')
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -48,6 +48,7 @@\n install_requires = ['rply>=0.6.2']\n if sys.version_info[:2] < (2, 7):\n install_requires.append('argparse>=1.2.1')\n+ install_requires.append('importlib>=1.0.2')\n if os.name == 'nt':\n install_requires.append('pyreadline==2.0')\n", "issue": "hy raises ImportError out of the box\nThis is on Python 2.6.\n\nMay be related to #37\n\nI think `hy` should probably install the `importlib` dependency at installation time, or the docs should state clearly that `importlib` needs to be installed ahead of time. Or, (worst case) state that Python 2.6 is not supported.\n\n```\n(env)09:52:13 Python (master) > hy\nTraceback (most recent call last):\n File \"/Users/jacobsen/env/bin/hy\", line 9, in <module>\n load_entry_point('hy==0.9.10', 'console_scripts', 'hy')()\n File \"/opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/pkg_resources.py\", line 343, in load_entry_point\n return get_distribution(dist).load_entry_point(group, name)\n File \"/opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/pkg_resources.py\", line 2354, in load_entry_point\n return ep.load()\n File \"/opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/pkg_resources.py\", line 2060, in load\n entry = __import__(self.module_name, globals(),globals(), ['__name__'])\n File \"/Users/jacobsen/Programming/Python/hy/hy/__init__.py\", line 37, in <module>\n import hy.importer # NOQA\n File \"/Users/jacobsen/Programming/Python/hy/hy/importer.py\", line 22, in <module>\n from hy.compiler import hy_compile\n File \"/Users/jacobsen/Programming/Python/hy/hy/compiler.py\", line 44, in <module>\n import importlib\nImportError: No module named importlib\n(env)09:52:13 Python (master) > pip install importlib\nDownloading/unpacking importlib\n Downloading importlib-1.0.2.tar.bz2\n Running setup.py egg_info for package importlib\nInstalling collected packages: importlib\n Running setup.py install for importlib\nSuccessfully installed importlib\nCleaning up...\n(env)09:52:21 Python (master) > hy\nhy 0.9.10\n=> \n```\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# Copyright (c) 2012, 2013 Paul Tagliamonte <[email protected]>\n#\n# Permission is hereby granted, free of charge, to any person obtaining a\n# copy of this software and associated documentation files (the \"Software\"),\n# to deal in the Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish, distribute, sublicense,\n# and/or sell copies of the Software, and to permit persons to whom the\n# Software is furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL\n# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n# DEALINGS IN THE SOFTWARE.\n\nimport os\nimport re\nimport sys\n\nfrom setuptools import find_packages, setup\n\nPKG = \"hy\"\nVERSIONFILE = os.path.join(PKG, \"version.py\")\nverstr = \"unknown\"\ntry:\n verstrline = open(VERSIONFILE, \"rt\").read()\nexcept EnvironmentError:\n pass # Okay, there is no version file.\nelse:\n VSRE = r\"^__version__ = ['\\\"]([^'\\\"]*)['\\\"]\"\n mo = re.search(VSRE, verstrline, re.M)\n if mo:\n __version__ = mo.group(1)\n else:\n msg = \"if %s.py exists, it is required to be well-formed\" % VERSIONFILE\n raise RuntimeError(msg)\n\nlong_description = \"\"\"Hy is a Python <--> Lisp layer. It helps\nmake things work nicer, and lets Python and the Hy lisp variant play\nnice together. \"\"\"\n\ninstall_requires = ['rply>=0.6.2']\nif sys.version_info[:2] < (2, 7):\n install_requires.append('argparse>=1.2.1')\nif os.name == 'nt':\n install_requires.append('pyreadline==2.0')\n\nsetup(\n name=PKG,\n version=__version__,\n install_requires=install_requires,\n dependency_links=['https://github.com/hylang/rply/zipball/master#egg=rply-0.6.2'],\n entry_points={\n 'console_scripts': [\n 'hy = hy.cmdline:hy_main',\n 'hyc = hy.cmdline:hyc_main'\n ]\n },\n packages=find_packages(exclude=['tests*']),\n package_data={\n 'hy.contrib': ['*.hy'],\n 'hy.core': ['*.hy'],\n },\n author=\"Paul Tagliamonte\",\n author_email=\"[email protected]\",\n long_description=long_description,\n description='Lisp and Python love each other.',\n license=\"Expat\",\n url=\"http://hylang.org/\",\n platforms=['any'],\n classifiers=[\n \"Development Status :: 4 - Beta\",\n \"Intended Audience :: Developers\",\n \"License :: DFSG approved\",\n \"License :: OSI Approved :: MIT License\", # Really \"Expat\". Ugh.\n \"Operating System :: OS Independent\",\n \"Programming Language :: Lisp\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.6\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.2\",\n \"Programming Language :: Python :: 3.3\",\n \"Topic :: Software Development :: Code Generators\",\n \"Topic :: Software Development :: Compilers\",\n \"Topic :: Software Development :: Libraries\",\n ]\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# Copyright (c) 2012, 2013 Paul Tagliamonte <[email protected]>\n#\n# Permission is hereby granted, free of charge, to any person obtaining a\n# copy of this software and associated documentation files (the \"Software\"),\n# to deal in the Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish, distribute, sublicense,\n# and/or sell copies of the Software, and to permit persons to whom the\n# Software is furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL\n# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n# DEALINGS IN THE SOFTWARE.\n\nimport os\nimport re\nimport sys\n\nfrom setuptools import find_packages, setup\n\nPKG = \"hy\"\nVERSIONFILE = os.path.join(PKG, \"version.py\")\nverstr = \"unknown\"\ntry:\n verstrline = open(VERSIONFILE, \"rt\").read()\nexcept EnvironmentError:\n pass # Okay, there is no version file.\nelse:\n VSRE = r\"^__version__ = ['\\\"]([^'\\\"]*)['\\\"]\"\n mo = re.search(VSRE, verstrline, re.M)\n if mo:\n __version__ = mo.group(1)\n else:\n msg = \"if %s.py exists, it is required to be well-formed\" % VERSIONFILE\n raise RuntimeError(msg)\n\nlong_description = \"\"\"Hy is a Python <--> Lisp layer. It helps\nmake things work nicer, and lets Python and the Hy lisp variant play\nnice together. \"\"\"\n\ninstall_requires = ['rply>=0.6.2']\nif sys.version_info[:2] < (2, 7):\n install_requires.append('argparse>=1.2.1')\n install_requires.append('importlib>=1.0.2')\nif os.name == 'nt':\n install_requires.append('pyreadline==2.0')\n\nsetup(\n name=PKG,\n version=__version__,\n install_requires=install_requires,\n dependency_links=['https://github.com/hylang/rply/zipball/master#egg=rply-0.6.2'],\n entry_points={\n 'console_scripts': [\n 'hy = hy.cmdline:hy_main',\n 'hyc = hy.cmdline:hyc_main'\n ]\n },\n packages=find_packages(exclude=['tests*']),\n package_data={\n 'hy.contrib': ['*.hy'],\n 'hy.core': ['*.hy'],\n },\n author=\"Paul Tagliamonte\",\n author_email=\"[email protected]\",\n long_description=long_description,\n description='Lisp and Python love each other.',\n license=\"Expat\",\n url=\"http://hylang.org/\",\n platforms=['any'],\n classifiers=[\n \"Development Status :: 4 - Beta\",\n \"Intended Audience :: Developers\",\n \"License :: DFSG approved\",\n \"License :: OSI Approved :: MIT License\", # Really \"Expat\". Ugh.\n \"Operating System :: OS Independent\",\n \"Programming Language :: Lisp\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.6\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.2\",\n \"Programming Language :: Python :: 3.3\",\n \"Topic :: Software Development :: Code Generators\",\n \"Topic :: Software Development :: Compilers\",\n \"Topic :: Software Development :: Libraries\",\n ]\n)\n", "path": "setup.py"}]} | 1,835 | 108 |
gh_patches_debug_18446 | rasdani/github-patches | git_diff | akvo__akvo-rsr-2515 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
UnicodeDecodeError at /rest/v1/recipient_country/
'ascii' codec can't decode byte 0xc3 in position 7: ordinal not in range(128)
--- END ISSUE ---
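The report is only the error string, so some context on the failure mode: under Python 2, `'ascii' codec` errors appear when byte strings and unicode strings get mixed; 0xc3 is the lead byte of a UTF-8 encoded accented character (for example "é" or "ñ"), and pushing such a value through the default ASCII codec fails. A minimal Python 2 reproduction with assumed values (the country name is illustrative, not taken from the RSR data):

```python
# -*- coding: utf-8 -*-
# Python 2 only: minimal reproduction of the generic str/unicode mix-up.
utf8_bytes = "Côte d'Ivoire"   # a str holding UTF-8 bytes (0xc3 0xb4 encodes "ô")
uni = u"Côte d'Ivoire"         # the same name as a unicode object

try:
    unicode(utf8_bytes)        # implicit ASCII *decode* of UTF-8 bytes
except UnicodeDecodeError as exc:
    print(exc)                 # 'ascii' codec can't decode byte 0xc3 ...

try:
    str(uni)                   # implicit ASCII *encode* of a unicode value
except UnicodeEncodeError as exc:
    print(exc)                 # 'ascii' codec can't encode character ...
```

Keeping the value as `unicode` end to end, rather than forcing it through `str()`, avoids the implicit ASCII conversion.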
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `akvo/rsr/models/country.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 # Akvo RSR is covered by the GNU Affero General Public License.
4 # See more details in the license.txt file located at the root folder of the Akvo RSR module.
5 # For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.
6
7
8 from django.db import models
9 from django.core.validators import MaxValueValidator, MinValueValidator
10 from django.utils.translation import ugettext_lazy as _
11
12 from ..fields import ValidXMLCharField
13 from ..iso3166 import ISO_3166_COUNTRIES, CONTINENTS, COUNTRY_CONTINENTS
14
15 from akvo.codelists import models as codelist_models
16 from akvo.codelists.store.codelists_v202 import COUNTRY
17 from akvo.utils import codelist_choices, codelist_value
18
19
20 class Country(models.Model):
21 name = ValidXMLCharField(_(u'country name'), max_length=50, unique=True, db_index=True)
22 iso_code = ValidXMLCharField(
23 _(u'ISO 3166 code'), max_length=2, unique=True, db_index=True, choices=ISO_3166_COUNTRIES
24 )
25 continent = ValidXMLCharField(_(u'continent name'), max_length=20, db_index=True)
26 continent_code = ValidXMLCharField(
27 _(u'continent code'), max_length=2, db_index=True, choices=CONTINENTS
28 )
29
30 def __unicode__(self):
31 return self.name
32
33 @classmethod
34 def fields_from_iso_code(cls, iso_code):
35 continent_code = COUNTRY_CONTINENTS[iso_code]
36 name = dict(ISO_3166_COUNTRIES)[iso_code]
37 continent = dict(CONTINENTS)[continent_code]
38 return dict(
39 iso_code=iso_code, name=name, continent=continent, continent_code=continent_code
40 )
41
42 class Meta:
43 app_label = 'rsr'
44 verbose_name = _(u'country')
45 verbose_name_plural = _(u'countries')
46 ordering = ['name']
47
48
49 class RecipientCountry(models.Model):
50 project = models.ForeignKey(
51 'Project', verbose_name=_(u'project'), related_name='recipient_countries'
52 )
53 country = ValidXMLCharField(
54 _(u'recipient country'), blank=True, max_length=2,choices=codelist_choices(COUNTRY, show_code=False),
55 help_text=_(u'The country that benefits from the project.')
56 )
57 percentage = models.DecimalField(
58 _(u'recipient country percentage'), blank=True, null=True, max_digits=4, decimal_places=1,
59 validators=[MaxValueValidator(100), MinValueValidator(0)],
60 help_text=_(u'The percentage of total commitments or total activity budget allocated to '
61 u'this country. Content must be a positive decimal number between 0 and 100, '
62 u'with no percentage sign. Percentages for all reported countries and regions '
63 u'MUST add up to 100%. Use a period to denote decimals.')
64 )
65 text = ValidXMLCharField(
66 _(u'recipient country description'), blank=True, max_length=50,
67 help_text=_(u'Enter additional information about the recipient country, if necessary.')
68 )
69
70 def __unicode__(self):
71 if self.country:
72 try:
73 country_unicode = self.iati_country().name
74 except (AttributeError, codelist_models.Country.DoesNotExist):
75 country_unicode = self.country
76 else:
77 country_unicode = u'%s' % _(u'No country specified')
78
79 if self.percentage:
80 country_unicode += u' (%s%%)' % str(self.percentage)
81
82 return country_unicode
83
84 def iati_country(self):
85 return codelist_value(codelist_models.Country, self, 'country')
86
87 def iati_country_unicode(self):
88 return str(self.iati_country())
89
90 class Meta:
91 app_label = 'rsr'
92 verbose_name = _(u'recipient country')
93 verbose_name_plural = _(u'recipient countries')
94 ordering = ('-percentage', 'country')
95
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/akvo/rsr/models/country.py b/akvo/rsr/models/country.py
--- a/akvo/rsr/models/country.py
+++ b/akvo/rsr/models/country.py
@@ -51,7 +51,7 @@
'Project', verbose_name=_(u'project'), related_name='recipient_countries'
)
country = ValidXMLCharField(
- _(u'recipient country'), blank=True, max_length=2,choices=codelist_choices(COUNTRY, show_code=False),
+ _(u'recipient country'), blank=True, max_length=2, choices=codelist_choices(COUNTRY, show_code=False),
help_text=_(u'The country that benefits from the project.')
)
percentage = models.DecimalField(
@@ -85,7 +85,7 @@
return codelist_value(codelist_models.Country, self, 'country')
def iati_country_unicode(self):
- return str(self.iati_country())
+ return unicode(self.iati_country())
class Meta:
app_label = 'rsr'
| {"golden_diff": "diff --git a/akvo/rsr/models/country.py b/akvo/rsr/models/country.py\n--- a/akvo/rsr/models/country.py\n+++ b/akvo/rsr/models/country.py\n@@ -51,7 +51,7 @@\n 'Project', verbose_name=_(u'project'), related_name='recipient_countries'\n )\n country = ValidXMLCharField(\n- _(u'recipient country'), blank=True, max_length=2,choices=codelist_choices(COUNTRY, show_code=False),\n+ _(u'recipient country'), blank=True, max_length=2, choices=codelist_choices(COUNTRY, show_code=False),\n help_text=_(u'The country that benefits from the project.')\n )\n percentage = models.DecimalField(\n@@ -85,7 +85,7 @@\n return codelist_value(codelist_models.Country, self, 'country')\n \n def iati_country_unicode(self):\n- return str(self.iati_country())\n+ return unicode(self.iati_country())\n \n class Meta:\n app_label = 'rsr'\n", "issue": "UnicodeDecodeError at /rest/v1/recipient_country/\n'ascii' codec can't decode byte 0xc3 in position 7: ordinal not in range(128)\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Akvo RSR is covered by the GNU Affero General Public License.\n# See more details in the license.txt file located at the root folder of the Akvo RSR module.\n# For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.\n\n\nfrom django.db import models\nfrom django.core.validators import MaxValueValidator, MinValueValidator\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom ..fields import ValidXMLCharField\nfrom ..iso3166 import ISO_3166_COUNTRIES, CONTINENTS, COUNTRY_CONTINENTS\n\nfrom akvo.codelists import models as codelist_models\nfrom akvo.codelists.store.codelists_v202 import COUNTRY\nfrom akvo.utils import codelist_choices, codelist_value\n\n\nclass Country(models.Model):\n name = ValidXMLCharField(_(u'country name'), max_length=50, unique=True, db_index=True)\n iso_code = ValidXMLCharField(\n _(u'ISO 3166 code'), max_length=2, unique=True, db_index=True, choices=ISO_3166_COUNTRIES\n )\n continent = ValidXMLCharField(_(u'continent name'), max_length=20, db_index=True)\n continent_code = ValidXMLCharField(\n _(u'continent code'), max_length=2, db_index=True, choices=CONTINENTS\n )\n\n def __unicode__(self):\n return self.name\n\n @classmethod\n def fields_from_iso_code(cls, iso_code):\n continent_code = COUNTRY_CONTINENTS[iso_code]\n name = dict(ISO_3166_COUNTRIES)[iso_code]\n continent = dict(CONTINENTS)[continent_code]\n return dict(\n iso_code=iso_code, name=name, continent=continent, continent_code=continent_code\n )\n\n class Meta:\n app_label = 'rsr'\n verbose_name = _(u'country')\n verbose_name_plural = _(u'countries')\n ordering = ['name']\n\n\nclass RecipientCountry(models.Model):\n project = models.ForeignKey(\n 'Project', verbose_name=_(u'project'), related_name='recipient_countries'\n )\n country = ValidXMLCharField(\n _(u'recipient country'), blank=True, max_length=2,choices=codelist_choices(COUNTRY, show_code=False),\n help_text=_(u'The country that benefits from the project.')\n )\n percentage = models.DecimalField(\n _(u'recipient country percentage'), blank=True, null=True, max_digits=4, decimal_places=1,\n validators=[MaxValueValidator(100), MinValueValidator(0)],\n help_text=_(u'The percentage of total commitments or total activity budget allocated to '\n u'this country. Content must be a positive decimal number between 0 and 100, '\n u'with no percentage sign. Percentages for all reported countries and regions '\n u'MUST add up to 100%. 
Use a period to denote decimals.')\n )\n text = ValidXMLCharField(\n _(u'recipient country description'), blank=True, max_length=50,\n help_text=_(u'Enter additional information about the recipient country, if necessary.')\n )\n\n def __unicode__(self):\n if self.country:\n try:\n country_unicode = self.iati_country().name\n except (AttributeError, codelist_models.Country.DoesNotExist):\n country_unicode = self.country\n else:\n country_unicode = u'%s' % _(u'No country specified')\n\n if self.percentage:\n country_unicode += u' (%s%%)' % str(self.percentage)\n\n return country_unicode\n\n def iati_country(self):\n return codelist_value(codelist_models.Country, self, 'country')\n\n def iati_country_unicode(self):\n return str(self.iati_country())\n\n class Meta:\n app_label = 'rsr'\n verbose_name = _(u'recipient country')\n verbose_name_plural = _(u'recipient countries')\n ordering = ('-percentage', 'country')\n", "path": "akvo/rsr/models/country.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Akvo RSR is covered by the GNU Affero General Public License.\n# See more details in the license.txt file located at the root folder of the Akvo RSR module.\n# For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.\n\n\nfrom django.db import models\nfrom django.core.validators import MaxValueValidator, MinValueValidator\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom ..fields import ValidXMLCharField\nfrom ..iso3166 import ISO_3166_COUNTRIES, CONTINENTS, COUNTRY_CONTINENTS\n\nfrom akvo.codelists import models as codelist_models\nfrom akvo.codelists.store.codelists_v202 import COUNTRY\nfrom akvo.utils import codelist_choices, codelist_value\n\n\nclass Country(models.Model):\n name = ValidXMLCharField(_(u'country name'), max_length=50, unique=True, db_index=True)\n iso_code = ValidXMLCharField(\n _(u'ISO 3166 code'), max_length=2, unique=True, db_index=True, choices=ISO_3166_COUNTRIES\n )\n continent = ValidXMLCharField(_(u'continent name'), max_length=20, db_index=True)\n continent_code = ValidXMLCharField(\n _(u'continent code'), max_length=2, db_index=True, choices=CONTINENTS\n )\n\n def __unicode__(self):\n return self.name\n\n @classmethod\n def fields_from_iso_code(cls, iso_code):\n continent_code = COUNTRY_CONTINENTS[iso_code]\n name = dict(ISO_3166_COUNTRIES)[iso_code]\n continent = dict(CONTINENTS)[continent_code]\n return dict(\n iso_code=iso_code, name=name, continent=continent, continent_code=continent_code\n )\n\n class Meta:\n app_label = 'rsr'\n verbose_name = _(u'country')\n verbose_name_plural = _(u'countries')\n ordering = ['name']\n\n\nclass RecipientCountry(models.Model):\n project = models.ForeignKey(\n 'Project', verbose_name=_(u'project'), related_name='recipient_countries'\n )\n country = ValidXMLCharField(\n _(u'recipient country'), blank=True, max_length=2, choices=codelist_choices(COUNTRY, show_code=False),\n help_text=_(u'The country that benefits from the project.')\n )\n percentage = models.DecimalField(\n _(u'recipient country percentage'), blank=True, null=True, max_digits=4, decimal_places=1,\n validators=[MaxValueValidator(100), MinValueValidator(0)],\n help_text=_(u'The percentage of total commitments or total activity budget allocated to '\n u'this country. Content must be a positive decimal number between 0 and 100, '\n u'with no percentage sign. Percentages for all reported countries and regions '\n u'MUST add up to 100%. 
Use a period to denote decimals.')\n )\n text = ValidXMLCharField(\n _(u'recipient country description'), blank=True, max_length=50,\n help_text=_(u'Enter additional information about the recipient country, if necessary.')\n )\n\n def __unicode__(self):\n if self.country:\n try:\n country_unicode = self.iati_country().name\n except (AttributeError, codelist_models.Country.DoesNotExist):\n country_unicode = self.country\n else:\n country_unicode = u'%s' % _(u'No country specified')\n\n if self.percentage:\n country_unicode += u' (%s%%)' % str(self.percentage)\n\n return country_unicode\n\n def iati_country(self):\n return codelist_value(codelist_models.Country, self, 'country')\n\n def iati_country_unicode(self):\n return unicode(self.iati_country())\n\n class Meta:\n app_label = 'rsr'\n verbose_name = _(u'recipient country')\n verbose_name_plural = _(u'recipient countries')\n ordering = ('-percentage', 'country')\n", "path": "akvo/rsr/models/country.py"}]} | 1,379 | 234 |
gh_patches_debug_1895 | rasdani/github-patches | git_diff | cisagov__manage.get.gov-1452 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
DISCOVERY: Notification and change log for domain managers
### Issue description
As a domain manager,
I want an in-app log of all changes made to my domain
So that I can ensure that it is correct, and track any changes that have been made, avoiding and correcting errors.
### Acceptance criteria
TBD
### Additional context
Notifications about changes to domain info:
All users wanted to be notified of changes to their domain information, in particular updates to name servers. Most users said they'd like email notifications because they rarely visit the registrar. However, an in-app audit trail would be helpful as well, for future reference or in case an email was missed. Need to do some discovery and design exploration around this.

Source: [User feedback](https://docs.google.com/document/d/1M5foXX34qPc7R_J1uhBACHWUhg8WHwX3bB6nurvNNWE/edit#bookmark=id.pa0k2x54vkx1)
### Links to other issues
_No response_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/registrar/models/__init__.py`
Content:
```
1 from auditlog.registry import auditlog # type: ignore
2 from .contact import Contact
3 from .domain_application import DomainApplication
4 from .domain_information import DomainInformation
5 from .domain import Domain
6 from .draft_domain import DraftDomain
7 from .host_ip import HostIP
8 from .host import Host
9 from .domain_invitation import DomainInvitation
10 from .nameserver import Nameserver
11 from .user_domain_role import UserDomainRole
12 from .public_contact import PublicContact
13 from .user import User
14 from .user_group import UserGroup
15 from .website import Website
16 from .transition_domain import TransitionDomain
17
18 __all__ = [
19 "Contact",
20 "DomainApplication",
21 "DomainInformation",
22 "Domain",
23 "DraftDomain",
24 "DomainInvitation",
25 "HostIP",
26 "Host",
27 "Nameserver",
28 "UserDomainRole",
29 "PublicContact",
30 "User",
31 "UserGroup",
32 "Website",
33 "TransitionDomain",
34 ]
35
36 auditlog.register(Contact)
37 auditlog.register(DomainApplication)
38 auditlog.register(Domain)
39 auditlog.register(DraftDomain)
40 auditlog.register(DomainInvitation)
41 auditlog.register(HostIP)
42 auditlog.register(Host)
43 auditlog.register(Nameserver)
44 auditlog.register(UserDomainRole)
45 auditlog.register(PublicContact)
46 auditlog.register(User, m2m_fields=["user_permissions", "groups"])
47 auditlog.register(UserGroup, m2m_fields=["permissions"])
48 auditlog.register(Website)
49 auditlog.register(TransitionDomain)
50
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/registrar/models/__init__.py b/src/registrar/models/__init__.py
--- a/src/registrar/models/__init__.py
+++ b/src/registrar/models/__init__.py
@@ -38,6 +38,7 @@
auditlog.register(Domain)
auditlog.register(DraftDomain)
auditlog.register(DomainInvitation)
+auditlog.register(DomainInformation)
auditlog.register(HostIP)
auditlog.register(Host)
auditlog.register(Nameserver)
| {"golden_diff": "diff --git a/src/registrar/models/__init__.py b/src/registrar/models/__init__.py\n--- a/src/registrar/models/__init__.py\n+++ b/src/registrar/models/__init__.py\n@@ -38,6 +38,7 @@\n auditlog.register(Domain)\n auditlog.register(DraftDomain)\n auditlog.register(DomainInvitation)\n+auditlog.register(DomainInformation)\n auditlog.register(HostIP)\n auditlog.register(Host)\n auditlog.register(Nameserver)\n", "issue": "DISCOVERY: Notification and change log for domain managers\n### Issue description\n\nAs a domain manager,\nI want an in-app log of all changes made to my domain\nSo that I can ensure that it is correct, and track any changes that have been made, avoiding and correcting errors.\n\n### Acceptance criteria\n\nTBD\n\n### Additional context\n\nNotifications about changes to domain info:\n\nAll users wanted to be notified of changes to their domain information\u2013in particular, updates to name servers. Most users said they\u2019d like an email notifications because they rarely visit the registrar. However, an in-app audit trail would be helpful, as well, for future reference or in case an email was missed. Need to do some discovery and design exploration around this.\n\nSouirce: [User feedback](https://docs.google.com/document/d/1M5foXX34qPc7R_J1uhBACHWUhg8WHwX3bB6nurvNNWE/edit#bookmark=id.pa0k2x54vkx1)\n\n### Links to other issues\n\n_No response_\n", "before_files": [{"content": "from auditlog.registry import auditlog # type: ignore\nfrom .contact import Contact\nfrom .domain_application import DomainApplication\nfrom .domain_information import DomainInformation\nfrom .domain import Domain\nfrom .draft_domain import DraftDomain\nfrom .host_ip import HostIP\nfrom .host import Host\nfrom .domain_invitation import DomainInvitation\nfrom .nameserver import Nameserver\nfrom .user_domain_role import UserDomainRole\nfrom .public_contact import PublicContact\nfrom .user import User\nfrom .user_group import UserGroup\nfrom .website import Website\nfrom .transition_domain import TransitionDomain\n\n__all__ = [\n \"Contact\",\n \"DomainApplication\",\n \"DomainInformation\",\n \"Domain\",\n \"DraftDomain\",\n \"DomainInvitation\",\n \"HostIP\",\n \"Host\",\n \"Nameserver\",\n \"UserDomainRole\",\n \"PublicContact\",\n \"User\",\n \"UserGroup\",\n \"Website\",\n \"TransitionDomain\",\n]\n\nauditlog.register(Contact)\nauditlog.register(DomainApplication)\nauditlog.register(Domain)\nauditlog.register(DraftDomain)\nauditlog.register(DomainInvitation)\nauditlog.register(HostIP)\nauditlog.register(Host)\nauditlog.register(Nameserver)\nauditlog.register(UserDomainRole)\nauditlog.register(PublicContact)\nauditlog.register(User, m2m_fields=[\"user_permissions\", \"groups\"])\nauditlog.register(UserGroup, m2m_fields=[\"permissions\"])\nauditlog.register(Website)\nauditlog.register(TransitionDomain)\n", "path": "src/registrar/models/__init__.py"}], "after_files": [{"content": "from auditlog.registry import auditlog # type: ignore\nfrom .contact import Contact\nfrom .domain_application import DomainApplication\nfrom .domain_information import DomainInformation\nfrom .domain import Domain\nfrom .draft_domain import DraftDomain\nfrom .host_ip import HostIP\nfrom .host import Host\nfrom .domain_invitation import DomainInvitation\nfrom .nameserver import Nameserver\nfrom .user_domain_role import UserDomainRole\nfrom .public_contact import PublicContact\nfrom .user import User\nfrom .user_group import UserGroup\nfrom .website import Website\nfrom .transition_domain import 
TransitionDomain\n\n__all__ = [\n \"Contact\",\n \"DomainApplication\",\n \"DomainInformation\",\n \"Domain\",\n \"DraftDomain\",\n \"DomainInvitation\",\n \"HostIP\",\n \"Host\",\n \"Nameserver\",\n \"UserDomainRole\",\n \"PublicContact\",\n \"User\",\n \"UserGroup\",\n \"Website\",\n \"TransitionDomain\",\n]\n\nauditlog.register(Contact)\nauditlog.register(DomainApplication)\nauditlog.register(Domain)\nauditlog.register(DraftDomain)\nauditlog.register(DomainInvitation)\nauditlog.register(DomainInformation)\nauditlog.register(HostIP)\nauditlog.register(Host)\nauditlog.register(Nameserver)\nauditlog.register(UserDomainRole)\nauditlog.register(PublicContact)\nauditlog.register(User, m2m_fields=[\"user_permissions\", \"groups\"])\nauditlog.register(UserGroup, m2m_fields=[\"permissions\"])\nauditlog.register(Website)\nauditlog.register(TransitionDomain)\n", "path": "src/registrar/models/__init__.py"}]} | 891 | 105 |
gh_patches_debug_32255 | rasdani/github-patches | git_diff | ckan__ckan-5639 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
No way to disable reloader when starting the development server
**CKAN version**
2.9.0
**Describe the bug**
When trying to start the development server without a reloader, I encountered problems with the `--reloader` argument.
The reloader option requires a TEXT argument, so I expected that `--reloader False` would disable the reloader.
**Steps to reproduce**
Start ckan with following command:
`ckan -c [PATH_TO_CONFIG] run --host 0.0.0.0 --reloader False`
**Expected behavior**
Server starts without reloader
**Additional details**
Currently the `reloader` option is passed as string and if it's not provided it defaults to the boolean value `True`
So we have two cases when the `run_simple` method is called:
1. `--reloader` argument is not provided --> reloader=True
2. `--reloader` argument is provided --> some string is passed as reloader argument to the `run_simple` method, which evaluates to true in the if statement distinguishing whether the reloader should be used or not.
So the `--reloader` argument does not affect anything.
_My suggestion:_ rename the argument to `disable-reloader` and turn it into a boolean flag. This enables the user to disable the reloader and the default behaviour (i.e. the dev server starts with a reloader) stays the same.
--- END ISSUE ---
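The analysis above hinges on how click types the option: with `default=True` and no explicit `type=`, the value is handled as text (hence the TEXT the reporter saw in the help), so anything passed on the command line arrives as a non-empty, truthy string. A small self-contained sketch of the current shape next to the suggested boolean flag (illustrative command names, not taken from the CKAN code base):

```python
import click

@click.command()
@click.option(u"--reloader", default=True, help=u"Use reloader")
def run_current(reloader):
    # Absent: reloader is the bool True.  Present: reloader is a string,
    # so "--reloader False" still evaluates as truthy.
    click.echo(bool(reloader))

@click.command()
@click.option(u"--disable-reloader", is_flag=True, help=u"Disable reloader")
def run_suggested(disable_reloader):
    # Boolean flag: absent means False (keep the reloader), present means True.
    use_reloader = not disable_reloader
    click.echo(use_reloader)
```

`run_current` echoes True even for `--reloader False`; `run_suggested` only turns the reloader off when the flag is actually given, so the default behaviour (development server with reloader) stays the same.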
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ckan/cli/server.py`
Content:
```
1 # encoding: utf-8
2
3 import logging
4
5 import click
6 from werkzeug.serving import run_simple
7
8 from ckan.common import config
9 import ckan.plugins.toolkit as tk
10
11 log = logging.getLogger(__name__)
12
13
14 @click.command(u"run", short_help=u"Start development server")
15 @click.option(u"-H", u"--host", default=u"localhost", help=u"Set host")
16 @click.option(u"-p", u"--port", default=5000, help=u"Set port")
17 @click.option(u"-r", u"--reloader", default=True, help=u"Use reloader")
18 @click.option(
19 u"-t", u"--threaded", is_flag=True,
20 help=u"Handle each request in a separate thread"
21 )
22 @click.option(u"-e", u"--extra-files", multiple=True)
23 @click.option(
24 u"--processes", type=int, default=0,
25 help=u"Maximum number of concurrent processes"
26 )
27 @click.pass_context
28 def run(ctx, host, port, reloader, threaded, extra_files, processes):
29 u"""Runs the Werkzeug development server"""
30 threaded = threaded or tk.asbool(config.get(u"ckan.devserver.threaded"))
31 processes = processes or tk.asint(
32 config.get(u"ckan.devserver.multiprocess", 1)
33 )
34 if threaded and processes > 1:
35 tk.error_shout(u"Cannot have a multithreaded and multi process server")
36 raise click.Abort()
37
38 log.info(u"Running server {0} on port {1}".format(host, port))
39
40 config_extra_files = tk.aslist(
41 config.get(u"ckan.devserver.watch_patterns")
42 )
43 extra_files = list(extra_files) + [
44 config[u"__file__"]
45 ] + config_extra_files
46
47 run_simple(
48 host,
49 port,
50 ctx.obj.app,
51 use_reloader=reloader,
52 use_evalex=True,
53 threaded=threaded,
54 processes=processes,
55 extra_files=extra_files,
56 )
57
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ckan/cli/server.py b/ckan/cli/server.py
--- a/ckan/cli/server.py
+++ b/ckan/cli/server.py
@@ -5,8 +5,8 @@
import click
from werkzeug.serving import run_simple
-from ckan.common import config
import ckan.plugins.toolkit as tk
+from ckan.common import config
log = logging.getLogger(__name__)
@@ -14,7 +14,8 @@
@click.command(u"run", short_help=u"Start development server")
@click.option(u"-H", u"--host", default=u"localhost", help=u"Set host")
@click.option(u"-p", u"--port", default=5000, help=u"Set port")
[email protected](u"-r", u"--reloader", default=True, help=u"Use reloader")
[email protected](u"-r", u"--disable-reloader", is_flag=True,
+ help=u"Disable reloader")
@click.option(
u"-t", u"--threaded", is_flag=True,
help=u"Handle each request in a separate thread"
@@ -25,8 +26,9 @@
help=u"Maximum number of concurrent processes"
)
@click.pass_context
-def run(ctx, host, port, reloader, threaded, extra_files, processes):
+def run(ctx, host, port, disable_reloader, threaded, extra_files, processes):
u"""Runs the Werkzeug development server"""
+ use_reloader = not disable_reloader
threaded = threaded or tk.asbool(config.get(u"ckan.devserver.threaded"))
processes = processes or tk.asint(
config.get(u"ckan.devserver.multiprocess", 1)
@@ -48,7 +50,7 @@
host,
port,
ctx.obj.app,
- use_reloader=reloader,
+ use_reloader=use_reloader,
use_evalex=True,
threaded=threaded,
processes=processes,
| {"golden_diff": "diff --git a/ckan/cli/server.py b/ckan/cli/server.py\n--- a/ckan/cli/server.py\n+++ b/ckan/cli/server.py\n@@ -5,8 +5,8 @@\n import click\n from werkzeug.serving import run_simple\n \n-from ckan.common import config\n import ckan.plugins.toolkit as tk\n+from ckan.common import config\n \n log = logging.getLogger(__name__)\n \n@@ -14,7 +14,8 @@\n @click.command(u\"run\", short_help=u\"Start development server\")\n @click.option(u\"-H\", u\"--host\", default=u\"localhost\", help=u\"Set host\")\n @click.option(u\"-p\", u\"--port\", default=5000, help=u\"Set port\")\[email protected](u\"-r\", u\"--reloader\", default=True, help=u\"Use reloader\")\[email protected](u\"-r\", u\"--disable-reloader\", is_flag=True,\n+ help=u\"Disable reloader\")\n @click.option(\n u\"-t\", u\"--threaded\", is_flag=True,\n help=u\"Handle each request in a separate thread\"\n@@ -25,8 +26,9 @@\n help=u\"Maximum number of concurrent processes\"\n )\n @click.pass_context\n-def run(ctx, host, port, reloader, threaded, extra_files, processes):\n+def run(ctx, host, port, disable_reloader, threaded, extra_files, processes):\n u\"\"\"Runs the Werkzeug development server\"\"\"\n+ use_reloader = not disable_reloader\n threaded = threaded or tk.asbool(config.get(u\"ckan.devserver.threaded\"))\n processes = processes or tk.asint(\n config.get(u\"ckan.devserver.multiprocess\", 1)\n@@ -48,7 +50,7 @@\n host,\n port,\n ctx.obj.app,\n- use_reloader=reloader,\n+ use_reloader=use_reloader,\n use_evalex=True,\n threaded=threaded,\n processes=processes,\n", "issue": "No way to disable reloader when starting the development server\n**CKAN version**\r\n2.9.0\r\n\r\n**Describe the bug**\r\nWhen trying to start the development server without a reloader i encountered problems with the `--reloader` argument.\r\nThe reloader option requires a TEXT argument, therefore i expected that --reloader False disables the reloader.\r\n\r\n\r\n**Steps to reproduce**\r\nStart ckan with following command:\r\n\r\n`ckan -c [PATH_TO_CONFIG] run --host 0.0.0.0 --reloader False`\r\n\r\n**Expected behavior**\r\nServer starts without reloader\r\n\r\n**Additional details**\r\n\r\nCurrently the `reloader` option is passed as string and if it's not provided it defaults to the boolean value `True`\r\n\r\nSo we have two cases when the `run_simple` method is called:\r\n1. `--reloader` argument is not provided --> reloader=True\r\n2. `--reloader` argument is provided --> some string is passed as reloader argument to the `run_simple` method, which evaluates to true in the if statement distinguishing whether the reloader should be used or not.\r\n\r\nSo the `--reloader` argument does not affect anything.\r\n\r\n_My suggestion:_ rename the argument to `disable-reloader` and turn it into a boolean flag. This enables the user to disable the reloader and the default behaviour (i.e. 
the dev server starts with a reloader) stays the same.\r\n\n", "before_files": [{"content": "# encoding: utf-8\n\nimport logging\n\nimport click\nfrom werkzeug.serving import run_simple\n\nfrom ckan.common import config\nimport ckan.plugins.toolkit as tk\n\nlog = logging.getLogger(__name__)\n\n\[email protected](u\"run\", short_help=u\"Start development server\")\[email protected](u\"-H\", u\"--host\", default=u\"localhost\", help=u\"Set host\")\[email protected](u\"-p\", u\"--port\", default=5000, help=u\"Set port\")\[email protected](u\"-r\", u\"--reloader\", default=True, help=u\"Use reloader\")\[email protected](\n u\"-t\", u\"--threaded\", is_flag=True,\n help=u\"Handle each request in a separate thread\"\n)\[email protected](u\"-e\", u\"--extra-files\", multiple=True)\[email protected](\n u\"--processes\", type=int, default=0,\n help=u\"Maximum number of concurrent processes\"\n)\[email protected]_context\ndef run(ctx, host, port, reloader, threaded, extra_files, processes):\n u\"\"\"Runs the Werkzeug development server\"\"\"\n threaded = threaded or tk.asbool(config.get(u\"ckan.devserver.threaded\"))\n processes = processes or tk.asint(\n config.get(u\"ckan.devserver.multiprocess\", 1)\n )\n if threaded and processes > 1:\n tk.error_shout(u\"Cannot have a multithreaded and multi process server\")\n raise click.Abort()\n\n log.info(u\"Running server {0} on port {1}\".format(host, port))\n\n config_extra_files = tk.aslist(\n config.get(u\"ckan.devserver.watch_patterns\")\n )\n extra_files = list(extra_files) + [\n config[u\"__file__\"]\n ] + config_extra_files\n\n run_simple(\n host,\n port,\n ctx.obj.app,\n use_reloader=reloader,\n use_evalex=True,\n threaded=threaded,\n processes=processes,\n extra_files=extra_files,\n )\n", "path": "ckan/cli/server.py"}], "after_files": [{"content": "# encoding: utf-8\n\nimport logging\n\nimport click\nfrom werkzeug.serving import run_simple\n\nimport ckan.plugins.toolkit as tk\nfrom ckan.common import config\n\nlog = logging.getLogger(__name__)\n\n\[email protected](u\"run\", short_help=u\"Start development server\")\[email protected](u\"-H\", u\"--host\", default=u\"localhost\", help=u\"Set host\")\[email protected](u\"-p\", u\"--port\", default=5000, help=u\"Set port\")\[email protected](u\"-r\", u\"--disable-reloader\", is_flag=True,\n help=u\"Disable reloader\")\[email protected](\n u\"-t\", u\"--threaded\", is_flag=True,\n help=u\"Handle each request in a separate thread\"\n)\[email protected](u\"-e\", u\"--extra-files\", multiple=True)\[email protected](\n u\"--processes\", type=int, default=0,\n help=u\"Maximum number of concurrent processes\"\n)\[email protected]_context\ndef run(ctx, host, port, disable_reloader, threaded, extra_files, processes):\n u\"\"\"Runs the Werkzeug development server\"\"\"\n use_reloader = not disable_reloader\n threaded = threaded or tk.asbool(config.get(u\"ckan.devserver.threaded\"))\n processes = processes or tk.asint(\n config.get(u\"ckan.devserver.multiprocess\", 1)\n )\n if threaded and processes > 1:\n tk.error_shout(u\"Cannot have a multithreaded and multi process server\")\n raise click.Abort()\n\n log.info(u\"Running server {0} on port {1}\".format(host, port))\n\n config_extra_files = tk.aslist(\n config.get(u\"ckan.devserver.watch_patterns\")\n )\n extra_files = list(extra_files) + [\n config[u\"__file__\"]\n ] + config_extra_files\n\n run_simple(\n host,\n port,\n ctx.obj.app,\n use_reloader=use_reloader,\n use_evalex=True,\n threaded=threaded,\n processes=processes,\n extra_files=extra_files,\n 
)\n", "path": "ckan/cli/server.py"}]} | 1,103 | 433 |
gh_patches_debug_16062 | rasdani/github-patches | git_diff | web2py__web2py-1665 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Remaining py2.6 stuff that can be deleted
Here:
https://github.com/web2py/web2py/blob/0d646fa5e7c731cb5c392adf6a885351e77e4903/gluon/import_all.py#L86
py26_deprecated is used before being declared...
Since we dropped py2.6: https://groups.google.com/d/msg/web2py-developers/xz5o_CF4GOo/ZJm0HImTAAAJ
Shall we delete those lines?
I'll send a PR if you take it...
--- END ISSUE ---
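For context on the failure mode described in the issue, here is a minimal sketch (a reduced, hypothetical module, not the actual web2py source) of why an undefined `py26_deprecated` is fatal at import time:

```python
# Reduced sketch: the deprecated-module list is consumed but never defined,
# so executing this module raises NameError on the first use of the name.
base_modules = ['json', 'sets', 'multiprocessing']
base_modules = list(set(base_modules).difference(set(py26_deprecated)))  # NameError: name 'py26_deprecated' is not defined
```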
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `gluon/import_all.py`
Content:
```
1 #!/usr/bin/env python
2
3 """
4 This file is part of the web2py Web Framework
5 Copyrighted by Massimo Di Pierro <[email protected]>
6 License: LGPLv3 (http://www.gnu.org/licenses/lgpl.html)
7
8 This file is not strictly required by web2py. It is used for three purposes:
9
10 1) check that all required modules are installed properly
11 2) provide py2exe and py2app a list of modules to be packaged in the binary
12 3) (optional) preload modules in memory to speed up http responses
13
14 """
15
16 import os
17 import sys
18
19 base_modules = ['aifc', 'anydbm', 'array', 'asynchat', 'asyncore', 'atexit',
20 'audioop', 'base64', 'BaseHTTPServer', 'Bastion', 'binascii',
21 'binhex', 'bisect', 'bz2', 'calendar', 'cgi', 'CGIHTTPServer',
22 'cgitb', 'chunk', 'cmath', 'cmd', 'code', 'codecs', 'codeop',
23 'collections', 'colorsys', 'compileall', 'compiler',
24 'compiler.ast', 'compiler.visitor', 'ConfigParser',
25 'contextlib', 'Cookie', 'cookielib', 'copy', 'copy_reg',
26 'collections',
27 'cPickle', 'cProfile', 'cStringIO', 'csv', 'ctypes',
28 'datetime', 'decimal', 'difflib', 'dircache', 'dis',
29 'doctest', 'DocXMLRPCServer', 'dumbdbm', 'dummy_thread',
30 'dummy_threading', 'email', 'email.charset', 'email.encoders',
31 'email.errors', 'email.generator', 'email.header',
32 'email.iterators', 'email.message', 'email.mime',
33 'email.mime.audio', 'email.mime.base', 'email.mime.image',
34 'email.mime.message', 'email.mime.multipart',
35 'email.mime.nonmultipart', 'email.mime.text', 'email.parser',
36 'email.utils', 'encodings.idna', 'errno', 'exceptions',
37 'filecmp', 'fileinput', 'fnmatch', 'formatter', 'fpformat',
38 'ftplib', 'functools', 'gc', 'getopt', 'getpass', 'gettext',
39 'glob', 'gzip', 'hashlib', 'heapq', 'hmac', 'hotshot',
40 'hotshot.stats', 'htmlentitydefs', 'htmllib', 'HTMLParser',
41 'httplib', 'imaplib', 'imghdr', 'imp', 'inspect',
42 'itertools', 'keyword', 'linecache', 'locale', 'logging',
43 'macpath', 'mailbox', 'mailcap', 'marshal', 'math',
44 'mimetools', 'mimetypes', 'mmap', 'modulefinder', 'mutex',
45 'netrc', 'new', 'nntplib', 'operator', 'optparse', 'os',
46 'parser', 'pdb', 'pickle', 'pickletools', 'pkgutil',
47 'platform', 'poplib', 'pprint', 'py_compile', 'pyclbr',
48 'pydoc', 'Queue', 'quopri', 'random', 're', 'repr',
49 'rexec', 'rfc822', 'rlcompleter', 'robotparser', 'runpy',
50 'sched', 'select', 'sgmllib', 'shelve',
51 'shlex', 'shutil', 'signal', 'SimpleHTTPServer',
52 'SimpleXMLRPCServer', 'site', 'smtpd', 'smtplib',
53 'sndhdr', 'socket', 'SocketServer', 'sqlite3',
54 'stat', 'statvfs', 'string', 'StringIO',
55 'stringprep', 'struct', 'subprocess', 'sunau', 'symbol',
56 'tabnanny', 'tarfile', 'telnetlib', 'tempfile', 'textwrap', 'thread', 'threading',
57 'time', 'timeit', 'Tix', 'Tkinter', 'token',
58 'tokenize', 'trace', 'traceback', 'types',
59 'unicodedata', 'unittest', 'urllib', 'urllib2',
60 'urlparse', 'user', 'UserDict', 'UserList', 'UserString',
61 'uu', 'uuid', 'warnings', 'wave', 'weakref', 'webbrowser',
62 'whichdb', 'wsgiref', 'wsgiref.handlers', 'wsgiref.headers',
63 'wsgiref.simple_server', 'wsgiref.util', 'wsgiref.validate',
64 'xdrlib', 'xml.dom', 'xml.dom.minidom', 'xml.dom.pulldom',
65 'xml.etree.ElementTree', 'xml.parsers.expat', 'xml.sax',
66 'xml.sax.handler', 'xml.sax.saxutils', 'xml.sax.xmlreader',
67 'xmlrpclib', 'zipfile', 'zipimport', 'zlib', 'mhlib',
68 'MimeWriter', 'mimify', 'multifile', 'sets']
69
70 contributed_modules = []
71
72 # Python base version
73 python_version = sys.version[:3]
74
75 # Modules which we want to raise an Exception if they are missing
76 alert_dependency = ['hashlib', 'uuid']
77
78 # Now we remove the blacklisted modules if we are using the stated
79 # python version.
80 #
81 # List of modules deprecated in Python 2.6 or 2.7 that are in the above set
82 py27_deprecated = ['mhlib', 'multifile', 'mimify', 'sets', 'MimeWriter'] # And ['optparse'] but we need it for now
83
84 if python_version >= '2.6':
85 base_modules += ['json', 'multiprocessing']
86 base_modules = list(set(base_modules).difference(set(py26_deprecated)))
87
88 if python_version >= '2.7':
89 base_modules += ['argparse', 'json', 'multiprocessing']
90 base_modules = list(set(base_modules).difference(set(py27_deprecated)))
91
92 # Now iterate in the base_modules, trying to do the import
93 for module in base_modules + contributed_modules:
94 try:
95 __import__(module, globals(), locals(), [])
96 except:
97 # Raise an exception if the current module is a dependency
98 if module in alert_dependency:
99 msg = "Missing dependency: %(module)s\n" % locals()
100 msg += "Try the following command: "
101 msg += "easy_install-%(python_version)s -U %(module)s" % locals()
102 raise ImportError(msg)
103
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/gluon/import_all.py b/gluon/import_all.py
--- a/gluon/import_all.py
+++ b/gluon/import_all.py
@@ -78,13 +78,9 @@
# Now we remove the blacklisted modules if we are using the stated
# python version.
#
-# List of modules deprecated in Python 2.6 or 2.7 that are in the above set
+# List of modules deprecated in Python 2.7 that are in the above list
py27_deprecated = ['mhlib', 'multifile', 'mimify', 'sets', 'MimeWriter'] # And ['optparse'] but we need it for now
-if python_version >= '2.6':
- base_modules += ['json', 'multiprocessing']
- base_modules = list(set(base_modules).difference(set(py26_deprecated)))
-
if python_version >= '2.7':
base_modules += ['argparse', 'json', 'multiprocessing']
base_modules = list(set(base_modules).difference(set(py27_deprecated)))
| {"golden_diff": "diff --git a/gluon/import_all.py b/gluon/import_all.py\n--- a/gluon/import_all.py\n+++ b/gluon/import_all.py\n@@ -78,13 +78,9 @@\n # Now we remove the blacklisted modules if we are using the stated\n # python version.\n #\n-# List of modules deprecated in Python 2.6 or 2.7 that are in the above set\n+# List of modules deprecated in Python 2.7 that are in the above list\n py27_deprecated = ['mhlib', 'multifile', 'mimify', 'sets', 'MimeWriter'] # And ['optparse'] but we need it for now\n \n-if python_version >= '2.6':\n- base_modules += ['json', 'multiprocessing']\n- base_modules = list(set(base_modules).difference(set(py26_deprecated)))\n-\n if python_version >= '2.7':\n base_modules += ['argparse', 'json', 'multiprocessing']\n base_modules = list(set(base_modules).difference(set(py27_deprecated)))\n", "issue": "remain py2.6 stuff that can be delete\nHere :\r\nhttps://github.com/web2py/web2py/blob/0d646fa5e7c731cb5c392adf6a885351e77e4903/gluon/import_all.py#L86\r\n\r\npy26_deprecated is used before been declared...\r\n\r\nSince we drop py2.6 : https://groups.google.com/d/msg/web2py-developers/xz5o_CF4GOo/ZJm0HImTAAAJ\r\n\r\nShall we delete those line??\r\n\r\nI send a PR if you take it...\n", "before_files": [{"content": "#!/usr/bin/env python\n\n\"\"\"\nThis file is part of the web2py Web Framework\nCopyrighted by Massimo Di Pierro <[email protected]>\nLicense: LGPLv3 (http://www.gnu.org/licenses/lgpl.html)\n\nThis file is not strictly required by web2py. It is used for three purposes:\n\n1) check that all required modules are installed properly\n2) provide py2exe and py2app a list of modules to be packaged in the binary\n3) (optional) preload modules in memory to speed up http responses\n\n\"\"\"\n\nimport os\nimport sys\n\nbase_modules = ['aifc', 'anydbm', 'array', 'asynchat', 'asyncore', 'atexit',\n 'audioop', 'base64', 'BaseHTTPServer', 'Bastion', 'binascii',\n 'binhex', 'bisect', 'bz2', 'calendar', 'cgi', 'CGIHTTPServer',\n 'cgitb', 'chunk', 'cmath', 'cmd', 'code', 'codecs', 'codeop',\n 'collections', 'colorsys', 'compileall', 'compiler',\n 'compiler.ast', 'compiler.visitor', 'ConfigParser',\n 'contextlib', 'Cookie', 'cookielib', 'copy', 'copy_reg',\n 'collections',\n 'cPickle', 'cProfile', 'cStringIO', 'csv', 'ctypes',\n 'datetime', 'decimal', 'difflib', 'dircache', 'dis',\n 'doctest', 'DocXMLRPCServer', 'dumbdbm', 'dummy_thread',\n 'dummy_threading', 'email', 'email.charset', 'email.encoders',\n 'email.errors', 'email.generator', 'email.header',\n 'email.iterators', 'email.message', 'email.mime',\n 'email.mime.audio', 'email.mime.base', 'email.mime.image',\n 'email.mime.message', 'email.mime.multipart',\n 'email.mime.nonmultipart', 'email.mime.text', 'email.parser',\n 'email.utils', 'encodings.idna', 'errno', 'exceptions',\n 'filecmp', 'fileinput', 'fnmatch', 'formatter', 'fpformat',\n 'ftplib', 'functools', 'gc', 'getopt', 'getpass', 'gettext',\n 'glob', 'gzip', 'hashlib', 'heapq', 'hmac', 'hotshot',\n 'hotshot.stats', 'htmlentitydefs', 'htmllib', 'HTMLParser',\n 'httplib', 'imaplib', 'imghdr', 'imp', 'inspect',\n 'itertools', 'keyword', 'linecache', 'locale', 'logging',\n 'macpath', 'mailbox', 'mailcap', 'marshal', 'math',\n 'mimetools', 'mimetypes', 'mmap', 'modulefinder', 'mutex',\n 'netrc', 'new', 'nntplib', 'operator', 'optparse', 'os',\n 'parser', 'pdb', 'pickle', 'pickletools', 'pkgutil',\n 'platform', 'poplib', 'pprint', 'py_compile', 'pyclbr',\n 'pydoc', 'Queue', 'quopri', 'random', 're', 'repr',\n 'rexec', 'rfc822', 'rlcompleter', 'robotparser', 
'runpy',\n 'sched', 'select', 'sgmllib', 'shelve',\n 'shlex', 'shutil', 'signal', 'SimpleHTTPServer',\n 'SimpleXMLRPCServer', 'site', 'smtpd', 'smtplib',\n 'sndhdr', 'socket', 'SocketServer', 'sqlite3',\n 'stat', 'statvfs', 'string', 'StringIO',\n 'stringprep', 'struct', 'subprocess', 'sunau', 'symbol',\n 'tabnanny', 'tarfile', 'telnetlib', 'tempfile', 'textwrap', 'thread', 'threading',\n 'time', 'timeit', 'Tix', 'Tkinter', 'token',\n 'tokenize', 'trace', 'traceback', 'types',\n 'unicodedata', 'unittest', 'urllib', 'urllib2',\n 'urlparse', 'user', 'UserDict', 'UserList', 'UserString',\n 'uu', 'uuid', 'warnings', 'wave', 'weakref', 'webbrowser',\n 'whichdb', 'wsgiref', 'wsgiref.handlers', 'wsgiref.headers',\n 'wsgiref.simple_server', 'wsgiref.util', 'wsgiref.validate',\n 'xdrlib', 'xml.dom', 'xml.dom.minidom', 'xml.dom.pulldom',\n 'xml.etree.ElementTree', 'xml.parsers.expat', 'xml.sax',\n 'xml.sax.handler', 'xml.sax.saxutils', 'xml.sax.xmlreader',\n 'xmlrpclib', 'zipfile', 'zipimport', 'zlib', 'mhlib',\n 'MimeWriter', 'mimify', 'multifile', 'sets']\n\ncontributed_modules = []\n\n# Python base version\npython_version = sys.version[:3]\n\n# Modules which we want to raise an Exception if they are missing\nalert_dependency = ['hashlib', 'uuid']\n\n# Now we remove the blacklisted modules if we are using the stated\n# python version.\n#\n# List of modules deprecated in Python 2.6 or 2.7 that are in the above set\npy27_deprecated = ['mhlib', 'multifile', 'mimify', 'sets', 'MimeWriter'] # And ['optparse'] but we need it for now\n\nif python_version >= '2.6':\n base_modules += ['json', 'multiprocessing']\n base_modules = list(set(base_modules).difference(set(py26_deprecated)))\n\nif python_version >= '2.7':\n base_modules += ['argparse', 'json', 'multiprocessing']\n base_modules = list(set(base_modules).difference(set(py27_deprecated)))\n\n# Now iterate in the base_modules, trying to do the import\nfor module in base_modules + contributed_modules:\n try:\n __import__(module, globals(), locals(), [])\n except:\n # Raise an exception if the current module is a dependency\n if module in alert_dependency:\n msg = \"Missing dependency: %(module)s\\n\" % locals()\n msg += \"Try the following command: \"\n msg += \"easy_install-%(python_version)s -U %(module)s\" % locals()\n raise ImportError(msg)\n", "path": "gluon/import_all.py"}], "after_files": [{"content": "#!/usr/bin/env python\n\n\"\"\"\nThis file is part of the web2py Web Framework\nCopyrighted by Massimo Di Pierro <[email protected]>\nLicense: LGPLv3 (http://www.gnu.org/licenses/lgpl.html)\n\nThis file is not strictly required by web2py. 
It is used for three purposes:\n\n1) check that all required modules are installed properly\n2) provide py2exe and py2app a list of modules to be packaged in the binary\n3) (optional) preload modules in memory to speed up http responses\n\n\"\"\"\n\nimport os\nimport sys\n\nbase_modules = ['aifc', 'anydbm', 'array', 'asynchat', 'asyncore', 'atexit',\n 'audioop', 'base64', 'BaseHTTPServer', 'Bastion', 'binascii',\n 'binhex', 'bisect', 'bz2', 'calendar', 'cgi', 'CGIHTTPServer',\n 'cgitb', 'chunk', 'cmath', 'cmd', 'code', 'codecs', 'codeop',\n 'collections', 'colorsys', 'compileall', 'compiler',\n 'compiler.ast', 'compiler.visitor', 'ConfigParser',\n 'contextlib', 'Cookie', 'cookielib', 'copy', 'copy_reg',\n 'collections',\n 'cPickle', 'cProfile', 'cStringIO', 'csv', 'ctypes',\n 'datetime', 'decimal', 'difflib', 'dircache', 'dis',\n 'doctest', 'DocXMLRPCServer', 'dumbdbm', 'dummy_thread',\n 'dummy_threading', 'email', 'email.charset', 'email.encoders',\n 'email.errors', 'email.generator', 'email.header',\n 'email.iterators', 'email.message', 'email.mime',\n 'email.mime.audio', 'email.mime.base', 'email.mime.image',\n 'email.mime.message', 'email.mime.multipart',\n 'email.mime.nonmultipart', 'email.mime.text', 'email.parser',\n 'email.utils', 'encodings.idna', 'errno', 'exceptions',\n 'filecmp', 'fileinput', 'fnmatch', 'formatter', 'fpformat',\n 'ftplib', 'functools', 'gc', 'getopt', 'getpass', 'gettext',\n 'glob', 'gzip', 'hashlib', 'heapq', 'hmac', 'hotshot',\n 'hotshot.stats', 'htmlentitydefs', 'htmllib', 'HTMLParser',\n 'httplib', 'imaplib', 'imghdr', 'imp', 'inspect',\n 'itertools', 'keyword', 'linecache', 'locale', 'logging',\n 'macpath', 'mailbox', 'mailcap', 'marshal', 'math',\n 'mimetools', 'mimetypes', 'mmap', 'modulefinder', 'mutex',\n 'netrc', 'new', 'nntplib', 'operator', 'optparse', 'os',\n 'parser', 'pdb', 'pickle', 'pickletools', 'pkgutil',\n 'platform', 'poplib', 'pprint', 'py_compile', 'pyclbr',\n 'pydoc', 'Queue', 'quopri', 'random', 're', 'repr',\n 'rexec', 'rfc822', 'rlcompleter', 'robotparser', 'runpy',\n 'sched', 'select', 'sgmllib', 'shelve',\n 'shlex', 'shutil', 'signal', 'SimpleHTTPServer',\n 'SimpleXMLRPCServer', 'site', 'smtpd', 'smtplib',\n 'sndhdr', 'socket', 'SocketServer', 'sqlite3',\n 'stat', 'statvfs', 'string', 'StringIO',\n 'stringprep', 'struct', 'subprocess', 'sunau', 'symbol',\n 'tabnanny', 'tarfile', 'telnetlib', 'tempfile', 'textwrap', 'thread', 'threading',\n 'time', 'timeit', 'Tix', 'Tkinter', 'token',\n 'tokenize', 'trace', 'traceback', 'types',\n 'unicodedata', 'unittest', 'urllib', 'urllib2',\n 'urlparse', 'user', 'UserDict', 'UserList', 'UserString',\n 'uu', 'uuid', 'warnings', 'wave', 'weakref', 'webbrowser',\n 'whichdb', 'wsgiref', 'wsgiref.handlers', 'wsgiref.headers',\n 'wsgiref.simple_server', 'wsgiref.util', 'wsgiref.validate',\n 'xdrlib', 'xml.dom', 'xml.dom.minidom', 'xml.dom.pulldom',\n 'xml.etree.ElementTree', 'xml.parsers.expat', 'xml.sax',\n 'xml.sax.handler', 'xml.sax.saxutils', 'xml.sax.xmlreader',\n 'xmlrpclib', 'zipfile', 'zipimport', 'zlib', 'mhlib',\n 'MimeWriter', 'mimify', 'multifile', 'sets']\n\ncontributed_modules = []\n\n# Python base version\npython_version = sys.version[:3]\n\n# Modules which we want to raise an Exception if they are missing\nalert_dependency = ['hashlib', 'uuid']\n\n# Now we remove the blacklisted modules if we are using the stated\n# python version.\n#\n# List of modules deprecated in Python 2.7 that are in the above list\npy27_deprecated = ['mhlib', 'multifile', 'mimify', 'sets', 'MimeWriter'] # And 
['optparse'] but we need it for now\n\nif python_version >= '2.7':\n base_modules += ['argparse', 'json', 'multiprocessing']\n base_modules = list(set(base_modules).difference(set(py27_deprecated)))\n\n# Now iterate in the base_modules, trying to do the import\nfor module in base_modules + contributed_modules:\n try:\n __import__(module, globals(), locals(), [])\n except:\n # Raise an exception if the current module is a dependency\n if module in alert_dependency:\n msg = \"Missing dependency: %(module)s\\n\" % locals()\n msg += \"Try the following command: \"\n msg += \"easy_install-%(python_version)s -U %(module)s\" % locals()\n raise ImportError(msg)\n", "path": "gluon/import_all.py"}]} | 2,031 | 234 |
gh_patches_debug_553 | rasdani/github-patches | git_diff | pex-tool__pex-884 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Release 2.1.2
On the docket:
+ [x] Isolating a pex chroot doesn't work from a zipped pex #882
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pex/version.py`
Content:
```
1 # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 __version__ = '2.1.1'
5
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pex/version.py b/pex/version.py
--- a/pex/version.py
+++ b/pex/version.py
@@ -1,4 +1,4 @@
# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
# Licensed under the Apache License, Version 2.0 (see LICENSE).
-__version__ = '2.1.1'
+__version__ = '2.1.2'
| {"golden_diff": "diff --git a/pex/version.py b/pex/version.py\n--- a/pex/version.py\n+++ b/pex/version.py\n@@ -1,4 +1,4 @@\n # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n # Licensed under the Apache License, Version 2.0 (see LICENSE).\n \n-__version__ = '2.1.1'\n+__version__ = '2.1.2'\n", "issue": "Release 2.1.2\nOn the docket:\r\n+ [x] Isolating a pex chroot doesn't work from a zipped pex #882 \n", "before_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = '2.1.1'\n", "path": "pex/version.py"}], "after_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = '2.1.2'\n", "path": "pex/version.py"}]} | 345 | 94 |
gh_patches_debug_14667 | rasdani/github-patches | git_diff | Parsl__parsl-3151 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Remove BlueWaters from userguide
**Describe the bug**
Bluewaters is a supercomputer at the NCSA (UIUC) that is now retired. Our userguide still contains an example configuration for this defunct machine that should be removed.
Here's a link to the userguide section as rendered: https://parsl.readthedocs.io/en/stable/userguide/configuring.html#blue-waters-ncsa
**Expected behavior**
Bluewaters should be removed from our userguide section.
Here's a quick sketch of the work involved:
1. Remove the section on Bluewaters from `docs/userguide/configuring.rst`
2. Remove the example configuration file here `parsl/configs/bluewaters.py`
3. Rebuild the documentation: `cd docs; make clean html; `
4. Check the newly rebuilt docs with `cd docs/_build/html; python3 -m http.server 8080` and load `http://localhost:8080` in your browser to view the HTML pages that were rebuilt in step 3.
--- END ISSUE ---
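The steps above are mechanical; a rough shell sketch (assuming a git checkout, with `git rm` as one way to perform step 2):

```bash
# step 1 is a manual edit to docs/userguide/configuring.rst; then roughly:
git rm parsl/configs/bluewaters.py                # step 2
cd docs && make clean html                        # step 3
cd _build/html && python3 -m http.server 8080     # step 4: open http://localhost:8080
```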
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `parsl/configs/bluewaters.py`
Content:
```
1 from parsl.config import Config
2 from parsl.executors import HighThroughputExecutor
3 from parsl.launchers import AprunLauncher
4 from parsl.providers import TorqueProvider
5
6
7 config = Config(
8 executors=[
9 HighThroughputExecutor(
10 label="bw_htex",
11 cores_per_worker=1,
12 worker_debug=False,
13 provider=TorqueProvider(
14 queue='normal',
15 launcher=AprunLauncher(overrides="-b -- bwpy-environ --"),
16 scheduler_options='', # string to prepend to #SBATCH blocks in the submit script to the scheduler
17 worker_init='', # command to run before starting a worker, such as 'source activate env'
18 init_blocks=1,
19 max_blocks=1,
20 min_blocks=1,
21 nodes_per_block=2,
22 walltime='00:10:00'
23 ),
24 )
25
26 ],
27
28 )
29
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/parsl/configs/bluewaters.py b/parsl/configs/bluewaters.py
deleted file mode 100644
--- a/parsl/configs/bluewaters.py
+++ /dev/null
@@ -1,28 +0,0 @@
-from parsl.config import Config
-from parsl.executors import HighThroughputExecutor
-from parsl.launchers import AprunLauncher
-from parsl.providers import TorqueProvider
-
-
-config = Config(
- executors=[
- HighThroughputExecutor(
- label="bw_htex",
- cores_per_worker=1,
- worker_debug=False,
- provider=TorqueProvider(
- queue='normal',
- launcher=AprunLauncher(overrides="-b -- bwpy-environ --"),
- scheduler_options='', # string to prepend to #SBATCH blocks in the submit script to the scheduler
- worker_init='', # command to run before starting a worker, such as 'source activate env'
- init_blocks=1,
- max_blocks=1,
- min_blocks=1,
- nodes_per_block=2,
- walltime='00:10:00'
- ),
- )
-
- ],
-
-)
| {"golden_diff": "diff --git a/parsl/configs/bluewaters.py b/parsl/configs/bluewaters.py\ndeleted file mode 100644\n--- a/parsl/configs/bluewaters.py\n+++ /dev/null\n@@ -1,28 +0,0 @@\n-from parsl.config import Config\n-from parsl.executors import HighThroughputExecutor\n-from parsl.launchers import AprunLauncher\n-from parsl.providers import TorqueProvider\n-\n-\n-config = Config(\n- executors=[\n- HighThroughputExecutor(\n- label=\"bw_htex\",\n- cores_per_worker=1,\n- worker_debug=False,\n- provider=TorqueProvider(\n- queue='normal',\n- launcher=AprunLauncher(overrides=\"-b -- bwpy-environ --\"),\n- scheduler_options='', # string to prepend to #SBATCH blocks in the submit script to the scheduler\n- worker_init='', # command to run before starting a worker, such as 'source activate env'\n- init_blocks=1,\n- max_blocks=1,\n- min_blocks=1,\n- nodes_per_block=2,\n- walltime='00:10:00'\n- ),\n- )\n-\n- ],\n-\n-)\n", "issue": "Remove BlueWaters from userguide\n**Describe the bug**\r\n\r\nBluewaters is a supercomputer at the NCSA (UIUC) that is now retired. Our userguide still contains an example configuration for this defunct machine that should be removed.\r\n\r\nHere's a link to the userguide section as rendered: https://parsl.readthedocs.io/en/stable/userguide/configuring.html#blue-waters-ncsa\r\n\r\n\r\n**Expected behavior**\r\n\r\nBluewaters should be removed from our userguide section.\r\n\r\nHere's a quick sketch of the work involved:\r\n1. Remove the section on Bluewaters from `docs/userguide/configuring.rst`\r\n2. Remove the example configuration file here `parsl/configs/bluewaters.py`\r\n3. Rebuild the documentation: `cd docs; make clean html; `\r\n4. Check the newly rebuild docs with `cd docs/_build/html; python3 -m http.server 8080` and load `http://localhost:8080` in your browser to load the html pages that was newly rebuilt in step.3.\r\n\r\n\n", "before_files": [{"content": "from parsl.config import Config\nfrom parsl.executors import HighThroughputExecutor\nfrom parsl.launchers import AprunLauncher\nfrom parsl.providers import TorqueProvider\n\n\nconfig = Config(\n executors=[\n HighThroughputExecutor(\n label=\"bw_htex\",\n cores_per_worker=1,\n worker_debug=False,\n provider=TorqueProvider(\n queue='normal',\n launcher=AprunLauncher(overrides=\"-b -- bwpy-environ --\"),\n scheduler_options='', # string to prepend to #SBATCH blocks in the submit script to the scheduler\n worker_init='', # command to run before starting a worker, such as 'source activate env'\n init_blocks=1,\n max_blocks=1,\n min_blocks=1,\n nodes_per_block=2,\n walltime='00:10:00'\n ),\n )\n\n ],\n\n)\n", "path": "parsl/configs/bluewaters.py"}], "after_files": [{"content": null, "path": "parsl/configs/bluewaters.py"}]} | 724 | 269 |
gh_patches_debug_32517 | rasdani/github-patches | git_diff | scrapy__scrapy-4623 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`scrapy genspider` should not overwrite existing file
## Summary
If the file mentioned in `scrapy genspider` already exists, then genspider should refuse to generate the file.
## Motivation
As it stands, existing code can be blown away if this command runs twice.
## Describe alternatives you've considered
Prompting the user before overwriting an existing spider.
--- END ISSUE ---
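One way to express the requested guard is an existence check before the template is copied. A minimal sketch (illustrative names, not the actual patch):

```python
from os.path import abspath, exists, join

def refuse_to_overwrite(spiders_dir, module, force=False):
    # Skip generation when <module>.py already exists, unless --force was passed.
    spider_file = join(spiders_dir, "%s.py" % module)
    if exists(spider_file) and not force:
        print("%s already exists" % abspath(spider_file))
        return True
    return False
```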
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scrapy/commands/genspider.py`
Content:
```
1 import os
2 import shutil
3 import string
4
5 from importlib import import_module
6 from os.path import join, dirname, abspath, exists, splitext
7
8 import scrapy
9 from scrapy.commands import ScrapyCommand
10 from scrapy.utils.template import render_templatefile, string_camelcase
11 from scrapy.exceptions import UsageError
12
13
14 def sanitize_module_name(module_name):
15 """Sanitize the given module name, by replacing dashes and points
16 with underscores and prefixing it with a letter if it doesn't start
17 with one
18 """
19 module_name = module_name.replace('-', '_').replace('.', '_')
20 if module_name[0] not in string.ascii_letters:
21 module_name = "a" + module_name
22 return module_name
23
24
25 class Command(ScrapyCommand):
26
27 requires_project = False
28 default_settings = {'LOG_ENABLED': False}
29
30 def syntax(self):
31 return "[options] <name> <domain>"
32
33 def short_desc(self):
34 return "Generate new spider using pre-defined templates"
35
36 def add_options(self, parser):
37 ScrapyCommand.add_options(self, parser)
38 parser.add_option("-l", "--list", dest="list", action="store_true",
39 help="List available templates")
40 parser.add_option("-e", "--edit", dest="edit", action="store_true",
41 help="Edit spider after creating it")
42 parser.add_option("-d", "--dump", dest="dump", metavar="TEMPLATE",
43 help="Dump template to standard output")
44 parser.add_option("-t", "--template", dest="template", default="basic",
45 help="Uses a custom template.")
46 parser.add_option("--force", dest="force", action="store_true",
47 help="If the spider already exists, overwrite it with the template")
48
49 def run(self, args, opts):
50 if opts.list:
51 self._list_templates()
52 return
53 if opts.dump:
54 template_file = self._find_template(opts.dump)
55 if template_file:
56 with open(template_file, "r") as f:
57 print(f.read())
58 return
59 if len(args) != 2:
60 raise UsageError()
61
62 name, domain = args[0:2]
63 module = sanitize_module_name(name)
64
65 if self.settings.get('BOT_NAME') == module:
66 print("Cannot create a spider with the same name as your project")
67 return
68
69 try:
70 spidercls = self.crawler_process.spider_loader.load(name)
71 except KeyError:
72 pass
73 else:
74 # if spider already exists and not --force then halt
75 if not opts.force:
76 print("Spider %r already exists in module:" % name)
77 print(" %s" % spidercls.__module__)
78 return
79 template_file = self._find_template(opts.template)
80 if template_file:
81 self._genspider(module, name, domain, opts.template, template_file)
82 if opts.edit:
83 self.exitcode = os.system('scrapy edit "%s"' % name)
84
85 def _genspider(self, module, name, domain, template_name, template_file):
86 """Generate the spider module, based on the given template"""
87 tvars = {
88 'project_name': self.settings.get('BOT_NAME'),
89 'ProjectName': string_camelcase(self.settings.get('BOT_NAME')),
90 'module': module,
91 'name': name,
92 'domain': domain,
93 'classname': '%sSpider' % ''.join(s.capitalize() for s in module.split('_'))
94 }
95 if self.settings.get('NEWSPIDER_MODULE'):
96 spiders_module = import_module(self.settings['NEWSPIDER_MODULE'])
97 spiders_dir = abspath(dirname(spiders_module.__file__))
98 else:
99 spiders_module = None
100 spiders_dir = "."
101 spider_file = "%s.py" % join(spiders_dir, module)
102 shutil.copyfile(template_file, spider_file)
103 render_templatefile(spider_file, **tvars)
104 print("Created spider %r using template %r "
105 % (name, template_name), end=('' if spiders_module else '\n'))
106 if spiders_module:
107 print("in module:\n %s.%s" % (spiders_module.__name__, module))
108
109 def _find_template(self, template):
110 template_file = join(self.templates_dir, '%s.tmpl' % template)
111 if exists(template_file):
112 return template_file
113 print("Unable to find template: %s\n" % template)
114 print('Use "scrapy genspider --list" to see all available templates.')
115
116 def _list_templates(self):
117 print("Available templates:")
118 for filename in sorted(os.listdir(self.templates_dir)):
119 if filename.endswith('.tmpl'):
120 print(" %s" % splitext(filename)[0])
121
122 @property
123 def templates_dir(self):
124 return join(
125 self.settings['TEMPLATES_DIR'] or join(scrapy.__path__[0], 'templates'),
126 'spiders'
127 )
128
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/scrapy/commands/genspider.py b/scrapy/commands/genspider.py
--- a/scrapy/commands/genspider.py
+++ b/scrapy/commands/genspider.py
@@ -66,16 +66,9 @@
print("Cannot create a spider with the same name as your project")
return
- try:
- spidercls = self.crawler_process.spider_loader.load(name)
- except KeyError:
- pass
- else:
- # if spider already exists and not --force then halt
- if not opts.force:
- print("Spider %r already exists in module:" % name)
- print(" %s" % spidercls.__module__)
- return
+ if not opts.force and self._spider_exists(name):
+ return
+
template_file = self._find_template(opts.template)
if template_file:
self._genspider(module, name, domain, opts.template, template_file)
@@ -119,6 +112,34 @@
if filename.endswith('.tmpl'):
print(" %s" % splitext(filename)[0])
+ def _spider_exists(self, name):
+ if not self.settings.get('NEWSPIDER_MODULE'):
+ # if run as a standalone command and file with same filename already exists
+ if exists(name + ".py"):
+ print("%s already exists" % (abspath(name + ".py")))
+ return True
+ return False
+
+ try:
+ spidercls = self.crawler_process.spider_loader.load(name)
+ except KeyError:
+ pass
+ else:
+ # if spider with same name exists
+ print("Spider %r already exists in module:" % name)
+ print(" %s" % spidercls.__module__)
+ return True
+
+ # a file with the same name exists in the target directory
+ spiders_module = import_module(self.settings['NEWSPIDER_MODULE'])
+ spiders_dir = dirname(spiders_module.__file__)
+ spiders_dir_abs = abspath(spiders_dir)
+ if exists(join(spiders_dir_abs, name + ".py")):
+ print("%s already exists" % (join(spiders_dir_abs, (name + ".py"))))
+ return True
+
+ return False
+
@property
def templates_dir(self):
return join(
| {"golden_diff": "diff --git a/scrapy/commands/genspider.py b/scrapy/commands/genspider.py\n--- a/scrapy/commands/genspider.py\n+++ b/scrapy/commands/genspider.py\n@@ -66,16 +66,9 @@\n print(\"Cannot create a spider with the same name as your project\")\n return\n \n- try:\n- spidercls = self.crawler_process.spider_loader.load(name)\n- except KeyError:\n- pass\n- else:\n- # if spider already exists and not --force then halt\n- if not opts.force:\n- print(\"Spider %r already exists in module:\" % name)\n- print(\" %s\" % spidercls.__module__)\n- return\n+ if not opts.force and self._spider_exists(name):\n+ return\n+\n template_file = self._find_template(opts.template)\n if template_file:\n self._genspider(module, name, domain, opts.template, template_file)\n@@ -119,6 +112,34 @@\n if filename.endswith('.tmpl'):\n print(\" %s\" % splitext(filename)[0])\n \n+ def _spider_exists(self, name):\n+ if not self.settings.get('NEWSPIDER_MODULE'):\n+ # if run as a standalone command and file with same filename already exists\n+ if exists(name + \".py\"):\n+ print(\"%s already exists\" % (abspath(name + \".py\")))\n+ return True\n+ return False\n+\n+ try:\n+ spidercls = self.crawler_process.spider_loader.load(name)\n+ except KeyError:\n+ pass\n+ else:\n+ # if spider with same name exists\n+ print(\"Spider %r already exists in module:\" % name)\n+ print(\" %s\" % spidercls.__module__)\n+ return True\n+\n+ # a file with the same name exists in the target directory\n+ spiders_module = import_module(self.settings['NEWSPIDER_MODULE'])\n+ spiders_dir = dirname(spiders_module.__file__)\n+ spiders_dir_abs = abspath(spiders_dir)\n+ if exists(join(spiders_dir_abs, name + \".py\")):\n+ print(\"%s already exists\" % (join(spiders_dir_abs, (name + \".py\"))))\n+ return True\n+\n+ return False\n+\n @property\n def templates_dir(self):\n return join(\n", "issue": "`scrapy genspider` should not overwrite existing file\n\r\n## Summary\r\n\r\nIf the file mentioned in `scrapy genspider` already exists, then genspider should refuse to generate the the file.\r\n\r\n## Motivation\r\n\r\nAs it stands, existing code can be blown away if this command runs twice.\r\n\r\n## Describe alternatives you've considered\r\n\r\nPrompting the user for overwriting existing spider.\r\n\r\n\n", "before_files": [{"content": "import os\nimport shutil\nimport string\n\nfrom importlib import import_module\nfrom os.path import join, dirname, abspath, exists, splitext\n\nimport scrapy\nfrom scrapy.commands import ScrapyCommand\nfrom scrapy.utils.template import render_templatefile, string_camelcase\nfrom scrapy.exceptions import UsageError\n\n\ndef sanitize_module_name(module_name):\n \"\"\"Sanitize the given module name, by replacing dashes and points\n with underscores and prefixing it with a letter if it doesn't start\n with one\n \"\"\"\n module_name = module_name.replace('-', '_').replace('.', '_')\n if module_name[0] not in string.ascii_letters:\n module_name = \"a\" + module_name\n return module_name\n\n\nclass Command(ScrapyCommand):\n\n requires_project = False\n default_settings = {'LOG_ENABLED': False}\n\n def syntax(self):\n return \"[options] <name> <domain>\"\n\n def short_desc(self):\n return \"Generate new spider using pre-defined templates\"\n\n def add_options(self, parser):\n ScrapyCommand.add_options(self, parser)\n parser.add_option(\"-l\", \"--list\", dest=\"list\", action=\"store_true\",\n help=\"List available templates\")\n parser.add_option(\"-e\", \"--edit\", dest=\"edit\", action=\"store_true\",\n help=\"Edit spider 
after creating it\")\n parser.add_option(\"-d\", \"--dump\", dest=\"dump\", metavar=\"TEMPLATE\",\n help=\"Dump template to standard output\")\n parser.add_option(\"-t\", \"--template\", dest=\"template\", default=\"basic\",\n help=\"Uses a custom template.\")\n parser.add_option(\"--force\", dest=\"force\", action=\"store_true\",\n help=\"If the spider already exists, overwrite it with the template\")\n\n def run(self, args, opts):\n if opts.list:\n self._list_templates()\n return\n if opts.dump:\n template_file = self._find_template(opts.dump)\n if template_file:\n with open(template_file, \"r\") as f:\n print(f.read())\n return\n if len(args) != 2:\n raise UsageError()\n\n name, domain = args[0:2]\n module = sanitize_module_name(name)\n\n if self.settings.get('BOT_NAME') == module:\n print(\"Cannot create a spider with the same name as your project\")\n return\n\n try:\n spidercls = self.crawler_process.spider_loader.load(name)\n except KeyError:\n pass\n else:\n # if spider already exists and not --force then halt\n if not opts.force:\n print(\"Spider %r already exists in module:\" % name)\n print(\" %s\" % spidercls.__module__)\n return\n template_file = self._find_template(opts.template)\n if template_file:\n self._genspider(module, name, domain, opts.template, template_file)\n if opts.edit:\n self.exitcode = os.system('scrapy edit \"%s\"' % name)\n\n def _genspider(self, module, name, domain, template_name, template_file):\n \"\"\"Generate the spider module, based on the given template\"\"\"\n tvars = {\n 'project_name': self.settings.get('BOT_NAME'),\n 'ProjectName': string_camelcase(self.settings.get('BOT_NAME')),\n 'module': module,\n 'name': name,\n 'domain': domain,\n 'classname': '%sSpider' % ''.join(s.capitalize() for s in module.split('_'))\n }\n if self.settings.get('NEWSPIDER_MODULE'):\n spiders_module = import_module(self.settings['NEWSPIDER_MODULE'])\n spiders_dir = abspath(dirname(spiders_module.__file__))\n else:\n spiders_module = None\n spiders_dir = \".\"\n spider_file = \"%s.py\" % join(spiders_dir, module)\n shutil.copyfile(template_file, spider_file)\n render_templatefile(spider_file, **tvars)\n print(\"Created spider %r using template %r \"\n % (name, template_name), end=('' if spiders_module else '\\n'))\n if spiders_module:\n print(\"in module:\\n %s.%s\" % (spiders_module.__name__, module))\n\n def _find_template(self, template):\n template_file = join(self.templates_dir, '%s.tmpl' % template)\n if exists(template_file):\n return template_file\n print(\"Unable to find template: %s\\n\" % template)\n print('Use \"scrapy genspider --list\" to see all available templates.')\n\n def _list_templates(self):\n print(\"Available templates:\")\n for filename in sorted(os.listdir(self.templates_dir)):\n if filename.endswith('.tmpl'):\n print(\" %s\" % splitext(filename)[0])\n\n @property\n def templates_dir(self):\n return join(\n self.settings['TEMPLATES_DIR'] or join(scrapy.__path__[0], 'templates'),\n 'spiders'\n )\n", "path": "scrapy/commands/genspider.py"}], "after_files": [{"content": "import os\nimport shutil\nimport string\n\nfrom importlib import import_module\nfrom os.path import join, dirname, abspath, exists, splitext\n\nimport scrapy\nfrom scrapy.commands import ScrapyCommand\nfrom scrapy.utils.template import render_templatefile, string_camelcase\nfrom scrapy.exceptions import UsageError\n\n\ndef sanitize_module_name(module_name):\n \"\"\"Sanitize the given module name, by replacing dashes and points\n with underscores and prefixing it with a letter if it 
doesn't start\n with one\n \"\"\"\n module_name = module_name.replace('-', '_').replace('.', '_')\n if module_name[0] not in string.ascii_letters:\n module_name = \"a\" + module_name\n return module_name\n\n\nclass Command(ScrapyCommand):\n\n requires_project = False\n default_settings = {'LOG_ENABLED': False}\n\n def syntax(self):\n return \"[options] <name> <domain>\"\n\n def short_desc(self):\n return \"Generate new spider using pre-defined templates\"\n\n def add_options(self, parser):\n ScrapyCommand.add_options(self, parser)\n parser.add_option(\"-l\", \"--list\", dest=\"list\", action=\"store_true\",\n help=\"List available templates\")\n parser.add_option(\"-e\", \"--edit\", dest=\"edit\", action=\"store_true\",\n help=\"Edit spider after creating it\")\n parser.add_option(\"-d\", \"--dump\", dest=\"dump\", metavar=\"TEMPLATE\",\n help=\"Dump template to standard output\")\n parser.add_option(\"-t\", \"--template\", dest=\"template\", default=\"basic\",\n help=\"Uses a custom template.\")\n parser.add_option(\"--force\", dest=\"force\", action=\"store_true\",\n help=\"If the spider already exists, overwrite it with the template\")\n\n def run(self, args, opts):\n if opts.list:\n self._list_templates()\n return\n if opts.dump:\n template_file = self._find_template(opts.dump)\n if template_file:\n with open(template_file, \"r\") as f:\n print(f.read())\n return\n if len(args) != 2:\n raise UsageError()\n\n name, domain = args[0:2]\n module = sanitize_module_name(name)\n\n if self.settings.get('BOT_NAME') == module:\n print(\"Cannot create a spider with the same name as your project\")\n return\n\n if not opts.force and self._spider_exists(name):\n return\n\n template_file = self._find_template(opts.template)\n if template_file:\n self._genspider(module, name, domain, opts.template, template_file)\n if opts.edit:\n self.exitcode = os.system('scrapy edit \"%s\"' % name)\n\n def _genspider(self, module, name, domain, template_name, template_file):\n \"\"\"Generate the spider module, based on the given template\"\"\"\n tvars = {\n 'project_name': self.settings.get('BOT_NAME'),\n 'ProjectName': string_camelcase(self.settings.get('BOT_NAME')),\n 'module': module,\n 'name': name,\n 'domain': domain,\n 'classname': '%sSpider' % ''.join(s.capitalize() for s in module.split('_'))\n }\n if self.settings.get('NEWSPIDER_MODULE'):\n spiders_module = import_module(self.settings['NEWSPIDER_MODULE'])\n spiders_dir = abspath(dirname(spiders_module.__file__))\n else:\n spiders_module = None\n spiders_dir = \".\"\n spider_file = \"%s.py\" % join(spiders_dir, module)\n shutil.copyfile(template_file, spider_file)\n render_templatefile(spider_file, **tvars)\n print(\"Created spider %r using template %r \"\n % (name, template_name), end=('' if spiders_module else '\\n'))\n if spiders_module:\n print(\"in module:\\n %s.%s\" % (spiders_module.__name__, module))\n\n def _find_template(self, template):\n template_file = join(self.templates_dir, '%s.tmpl' % template)\n if exists(template_file):\n return template_file\n print(\"Unable to find template: %s\\n\" % template)\n print('Use \"scrapy genspider --list\" to see all available templates.')\n\n def _list_templates(self):\n print(\"Available templates:\")\n for filename in sorted(os.listdir(self.templates_dir)):\n if filename.endswith('.tmpl'):\n print(\" %s\" % splitext(filename)[0])\n\n def _spider_exists(self, name):\n if not self.settings.get('NEWSPIDER_MODULE'):\n # if run as a standalone command and file with same filename already exists\n if 
exists(name + \".py\"):\n print(\"%s already exists\" % (abspath(name + \".py\")))\n return True\n return False\n\n try:\n spidercls = self.crawler_process.spider_loader.load(name)\n except KeyError:\n pass\n else:\n # if spider with same name exists\n print(\"Spider %r already exists in module:\" % name)\n print(\" %s\" % spidercls.__module__)\n return True\n\n # a file with the same name exists in the target directory\n spiders_module = import_module(self.settings['NEWSPIDER_MODULE'])\n spiders_dir = dirname(spiders_module.__file__)\n spiders_dir_abs = abspath(spiders_dir)\n if exists(join(spiders_dir_abs, name + \".py\")):\n print(\"%s already exists\" % (join(spiders_dir_abs, (name + \".py\"))))\n return True\n\n return False\n\n @property\n def templates_dir(self):\n return join(\n self.settings['TEMPLATES_DIR'] or join(scrapy.__path__[0], 'templates'),\n 'spiders'\n )\n", "path": "scrapy/commands/genspider.py"}]} | 1,677 | 532 |
gh_patches_debug_10030 | rasdani/github-patches | git_diff | mozilla__bugbug-3696 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Drop the `bug` model
The model has been renamed to `defect` in https://github.com/mozilla/bugbug/pull/335/files#diff-74f32ab12cc9be314b1fead3a281590c768699c2e9883b8346066f6c4d6daa90
However, the definition of the `bug` model was not dropped.
We need to drop the following line: https://github.com/mozilla/bugbug/blob/c228c2a686b271f7ca24a0716cda4f9059229d9f/bugbug/models/__init__.py#L16
--- END ISSUE ---
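For context, the stale entry is not just dead weight: assuming `BugModel` was removed from `bugbug.model` in the rename, resolving it fails at lookup time. A minimal sketch:

```python
from bugbug.models import get_model_class

get_model_class("defect")  # resolves bugbug.models.defect.DefectModel as expected
get_model_class("bug")     # import_module("bugbug.model") succeeds, but
                           # getattr(module, "BugModel") raises AttributeError
```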
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bugbug/models/__init__.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 import importlib
3 import logging
4 from typing import Type
5
6 from bugbug.model import Model
7
8 LOGGER = logging.getLogger()
9
10
11 MODELS = {
12 "annotateignore": "bugbug.models.annotate_ignore.AnnotateIgnoreModel",
13 "assignee": "bugbug.models.assignee.AssigneeModel",
14 "backout": "bugbug.models.backout.BackoutModel",
15 "browsername": "bugbug.models.browsername.BrowserNameModel",
16 "bug": "bugbug.model.BugModel",
17 "bugtype": "bugbug.models.bugtype.BugTypeModel",
18 "component": "bugbug.models.component.ComponentModel",
19 "component_nn": "bugbug.models.component_nn.ComponentNNModel",
20 "defect": "bugbug.models.defect.DefectModel",
21 "defectenhancementtask": "bugbug.models.defect_enhancement_task.DefectEnhancementTaskModel",
22 "devdocneeded": "bugbug.models.devdocneeded.DevDocNeededModel",
23 "duplicate": "bugbug.models.duplicate.DuplicateModel",
24 "fixtime": "bugbug.models.fixtime.FixTimeModel",
25 "needsdiagnosis": "bugbug.models.needsdiagnosis.NeedsDiagnosisModel",
26 "qaneeded": "bugbug.models.qaneeded.QANeededModel",
27 "rcatype": "bugbug.models.rcatype.RCATypeModel",
28 "regression": "bugbug.models.regression.RegressionModel",
29 "regressionrange": "bugbug.models.regressionrange.RegressionRangeModel",
30 "regressor": "bugbug.models.regressor.RegressorModel",
31 "spambug": "bugbug.models.spambug.SpamBugModel",
32 "stepstoreproduce": "bugbug.models.stepstoreproduce.StepsToReproduceModel",
33 "testlabelselect": "bugbug.models.testselect.TestLabelSelectModel",
34 "testgroupselect": "bugbug.models.testselect.TestGroupSelectModel",
35 "testconfiggroupselect": "bugbug.models.testselect.TestConfigGroupSelectModel",
36 "testfailure": "bugbug.models.testfailure.TestFailureModel",
37 "tracking": "bugbug.models.tracking.TrackingModel",
38 "uplift": "bugbug.models.uplift.UpliftModel",
39 }
40
41
42 def get_model_class(model_name: str) -> Type[Model]:
43 if model_name not in MODELS:
44 err_msg = f"Invalid name {model_name}, not in {list(MODELS.keys())}"
45 raise ValueError(err_msg)
46
47 full_qualified_class_name = MODELS[model_name]
48 module_name, class_name = full_qualified_class_name.rsplit(".", 1)
49
50 module = importlib.import_module(module_name)
51
52 return getattr(module, class_name)
53
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/bugbug/models/__init__.py b/bugbug/models/__init__.py
--- a/bugbug/models/__init__.py
+++ b/bugbug/models/__init__.py
@@ -13,7 +13,6 @@
"assignee": "bugbug.models.assignee.AssigneeModel",
"backout": "bugbug.models.backout.BackoutModel",
"browsername": "bugbug.models.browsername.BrowserNameModel",
- "bug": "bugbug.model.BugModel",
"bugtype": "bugbug.models.bugtype.BugTypeModel",
"component": "bugbug.models.component.ComponentModel",
"component_nn": "bugbug.models.component_nn.ComponentNNModel",
| {"golden_diff": "diff --git a/bugbug/models/__init__.py b/bugbug/models/__init__.py\n--- a/bugbug/models/__init__.py\n+++ b/bugbug/models/__init__.py\n@@ -13,7 +13,6 @@\n \"assignee\": \"bugbug.models.assignee.AssigneeModel\",\n \"backout\": \"bugbug.models.backout.BackoutModel\",\n \"browsername\": \"bugbug.models.browsername.BrowserNameModel\",\n- \"bug\": \"bugbug.model.BugModel\",\n \"bugtype\": \"bugbug.models.bugtype.BugTypeModel\",\n \"component\": \"bugbug.models.component.ComponentModel\",\n \"component_nn\": \"bugbug.models.component_nn.ComponentNNModel\",\n", "issue": "Drop the `bug` model\nThe model has been renamed to `defect` in https://github.com/mozilla/bugbug/pull/335/files#diff-74f32ab12cc9be314b1fead3a281590c768699c2e9883b8346066f6c4d6daa90 \r\n\r\nHowever, the definition of `bug` model was not dropped.\r\n\r\n\r\nWe need to drop the following line: https://github.com/mozilla/bugbug/blob/c228c2a686b271f7ca24a0716cda4f9059229d9f/bugbug/models/__init__.py#L16\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nimport importlib\nimport logging\nfrom typing import Type\n\nfrom bugbug.model import Model\n\nLOGGER = logging.getLogger()\n\n\nMODELS = {\n \"annotateignore\": \"bugbug.models.annotate_ignore.AnnotateIgnoreModel\",\n \"assignee\": \"bugbug.models.assignee.AssigneeModel\",\n \"backout\": \"bugbug.models.backout.BackoutModel\",\n \"browsername\": \"bugbug.models.browsername.BrowserNameModel\",\n \"bug\": \"bugbug.model.BugModel\",\n \"bugtype\": \"bugbug.models.bugtype.BugTypeModel\",\n \"component\": \"bugbug.models.component.ComponentModel\",\n \"component_nn\": \"bugbug.models.component_nn.ComponentNNModel\",\n \"defect\": \"bugbug.models.defect.DefectModel\",\n \"defectenhancementtask\": \"bugbug.models.defect_enhancement_task.DefectEnhancementTaskModel\",\n \"devdocneeded\": \"bugbug.models.devdocneeded.DevDocNeededModel\",\n \"duplicate\": \"bugbug.models.duplicate.DuplicateModel\",\n \"fixtime\": \"bugbug.models.fixtime.FixTimeModel\",\n \"needsdiagnosis\": \"bugbug.models.needsdiagnosis.NeedsDiagnosisModel\",\n \"qaneeded\": \"bugbug.models.qaneeded.QANeededModel\",\n \"rcatype\": \"bugbug.models.rcatype.RCATypeModel\",\n \"regression\": \"bugbug.models.regression.RegressionModel\",\n \"regressionrange\": \"bugbug.models.regressionrange.RegressionRangeModel\",\n \"regressor\": \"bugbug.models.regressor.RegressorModel\",\n \"spambug\": \"bugbug.models.spambug.SpamBugModel\",\n \"stepstoreproduce\": \"bugbug.models.stepstoreproduce.StepsToReproduceModel\",\n \"testlabelselect\": \"bugbug.models.testselect.TestLabelSelectModel\",\n \"testgroupselect\": \"bugbug.models.testselect.TestGroupSelectModel\",\n \"testconfiggroupselect\": \"bugbug.models.testselect.TestConfigGroupSelectModel\",\n \"testfailure\": \"bugbug.models.testfailure.TestFailureModel\",\n \"tracking\": \"bugbug.models.tracking.TrackingModel\",\n \"uplift\": \"bugbug.models.uplift.UpliftModel\",\n}\n\n\ndef get_model_class(model_name: str) -> Type[Model]:\n if model_name not in MODELS:\n err_msg = f\"Invalid name {model_name}, not in {list(MODELS.keys())}\"\n raise ValueError(err_msg)\n\n full_qualified_class_name = MODELS[model_name]\n module_name, class_name = full_qualified_class_name.rsplit(\".\", 1)\n\n module = importlib.import_module(module_name)\n\n return getattr(module, class_name)\n", "path": "bugbug/models/__init__.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\nimport importlib\nimport logging\nfrom typing import Type\n\nfrom bugbug.model import 
Model\n\nLOGGER = logging.getLogger()\n\n\nMODELS = {\n \"annotateignore\": \"bugbug.models.annotate_ignore.AnnotateIgnoreModel\",\n \"assignee\": \"bugbug.models.assignee.AssigneeModel\",\n \"backout\": \"bugbug.models.backout.BackoutModel\",\n \"browsername\": \"bugbug.models.browsername.BrowserNameModel\",\n \"bugtype\": \"bugbug.models.bugtype.BugTypeModel\",\n \"component\": \"bugbug.models.component.ComponentModel\",\n \"component_nn\": \"bugbug.models.component_nn.ComponentNNModel\",\n \"defect\": \"bugbug.models.defect.DefectModel\",\n \"defectenhancementtask\": \"bugbug.models.defect_enhancement_task.DefectEnhancementTaskModel\",\n \"devdocneeded\": \"bugbug.models.devdocneeded.DevDocNeededModel\",\n \"duplicate\": \"bugbug.models.duplicate.DuplicateModel\",\n \"fixtime\": \"bugbug.models.fixtime.FixTimeModel\",\n \"needsdiagnosis\": \"bugbug.models.needsdiagnosis.NeedsDiagnosisModel\",\n \"qaneeded\": \"bugbug.models.qaneeded.QANeededModel\",\n \"rcatype\": \"bugbug.models.rcatype.RCATypeModel\",\n \"regression\": \"bugbug.models.regression.RegressionModel\",\n \"regressionrange\": \"bugbug.models.regressionrange.RegressionRangeModel\",\n \"regressor\": \"bugbug.models.regressor.RegressorModel\",\n \"spambug\": \"bugbug.models.spambug.SpamBugModel\",\n \"stepstoreproduce\": \"bugbug.models.stepstoreproduce.StepsToReproduceModel\",\n \"testlabelselect\": \"bugbug.models.testselect.TestLabelSelectModel\",\n \"testgroupselect\": \"bugbug.models.testselect.TestGroupSelectModel\",\n \"testconfiggroupselect\": \"bugbug.models.testselect.TestConfigGroupSelectModel\",\n \"testfailure\": \"bugbug.models.testfailure.TestFailureModel\",\n \"tracking\": \"bugbug.models.tracking.TrackingModel\",\n \"uplift\": \"bugbug.models.uplift.UpliftModel\",\n}\n\n\ndef get_model_class(model_name: str) -> Type[Model]:\n if model_name not in MODELS:\n err_msg = f\"Invalid name {model_name}, not in {list(MODELS.keys())}\"\n raise ValueError(err_msg)\n\n full_qualified_class_name = MODELS[model_name]\n module_name, class_name = full_qualified_class_name.rsplit(\".\", 1)\n\n module = importlib.import_module(module_name)\n\n return getattr(module, class_name)\n", "path": "bugbug/models/__init__.py"}]} | 1,129 | 158 |
gh_patches_debug_4882 | rasdani/github-patches | git_diff | chainer__chainer-1501 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
ProgressBar iters/sec is too low in CPU mode
The 'iters/sec' speed displayed in the `ProgressBar` extension seems too low when running a model in CPU mode. I suspect this is due to the use of `time.clock()` instead of `time.time()`. `time.clock()` apparently measures the total time spent in all CPU cores, which can cause the measured time to (sometimes greatly) exceed the wall clock time.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `chainer/training/extensions/progress_bar.py`
Content:
```
1 from __future__ import division
2 import datetime
3 import sys
4 import time
5
6 from chainer.training import extension
7 from chainer.training import trigger
8
9
10 class ProgressBar(extension.Extension):
11
12 """Trainer extension to print a progress bar and recent training status.
13
14 This extension prints a progress bar at every call. It watches the current
15 iteration and epoch to print the bar.
16
17 Args:
18 training_length (tuple): Length of whole training. It consists of an
19 integer and either ``'epoch'`` or ``'iteration'``. If this value is
20 omitted and the stop trigger of the trainer is
21 :class:`IntervalTrigger`, this extension uses its attributes to
22 determine the length of the training.
23 update_interval (int): Number of iterations to skip printing the
24 progress bar.
25 bar_length (int): Length of the progress bar in characters.
26 out: Stream to print the bar. Standard output is used by default.
27
28 """
29 def __init__(self, training_length=None, update_interval=100,
30 bar_length=50, out=sys.stdout):
31 self._training_length = training_length
32 self._status_template = None
33 self._update_interval = update_interval
34 self._bar_length = bar_length
35 self._out = out
36 self._recent_timing = []
37
38 def __call__(self, trainer):
39 training_length = self._training_length
40
41 # initialize some attributes at the first call
42 if training_length is None:
43 t = trainer.stop_trigger
44 if not isinstance(t, trigger.IntervalTrigger):
45 raise TypeError(
46 'cannot retrieve the training length from %s' % type(t))
47 training_length = self._training_length = t.period, t.unit
48
49 stat_template = self._status_template
50 if stat_template is None:
51 stat_template = self._status_template = (
52 '{0.iteration:10} iter, {0.epoch} epoch / %s %ss\n' %
53 training_length)
54
55 length, unit = training_length
56 out = self._out
57
58 iteration = trainer.updater.iteration
59
60 # print the progress bar
61 if iteration % self._update_interval == 0:
62 epoch = trainer.updater.epoch_detail
63 recent_timing = self._recent_timing
64 now = time.clock()
65
66 if len(recent_timing) >= 1:
67 out.write('\033[J')
68
69 if unit == 'iteration':
70 rate = iteration / length
71 else:
72 rate = epoch / length
73
74 bar_length = self._bar_length
75 marks = '#' * int(rate * bar_length)
76 out.write(' total [{}{}] {:6.2%}\n'.format(
77 marks, '.' * (bar_length - len(marks)), rate))
78
79 epoch_rate = epoch - int(epoch)
80 marks = '#' * int(epoch_rate * bar_length)
81 out.write('this epoch [{}{}] {:6.2%}\n'.format(
82 marks, '.' * (bar_length - len(marks)), epoch_rate))
83
84 status = stat_template.format(trainer.updater)
85 out.write(status)
86
87 old_t, old_e, old_sec = recent_timing[0]
88 speed_t = (iteration - old_t) / (now - old_sec)
89 speed_e = (epoch - old_e) / (now - old_sec)
90 if unit == 'iteration':
91 estimated_time = (length - iteration) / speed_t
92 else:
93 estimated_time = (length - epoch) / speed_e
94 out.write('{:10.5g} iters/sec. Estimated time to finish: {}.\n'
95 .format(speed_t,
96 datetime.timedelta(seconds=estimated_time)))
97
98 # move the cursor to the head of the progress bar
99 out.write('\033[4A')
100 out.flush()
101
102 if len(recent_timing) > 100:
103 del recent_timing[0]
104
105 recent_timing.append((iteration, epoch, now))
106
107 def finalize(self):
108 # delete the progress bar
109 out = self._out
110 out.write('\033[J')
111 out.flush()
112
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/chainer/training/extensions/progress_bar.py b/chainer/training/extensions/progress_bar.py
--- a/chainer/training/extensions/progress_bar.py
+++ b/chainer/training/extensions/progress_bar.py
@@ -61,7 +61,7 @@
if iteration % self._update_interval == 0:
epoch = trainer.updater.epoch_detail
recent_timing = self._recent_timing
- now = time.clock()
+ now = time.time()
if len(recent_timing) >= 1:
out.write('\033[J')
| {"golden_diff": "diff --git a/chainer/training/extensions/progress_bar.py b/chainer/training/extensions/progress_bar.py\n--- a/chainer/training/extensions/progress_bar.py\n+++ b/chainer/training/extensions/progress_bar.py\n@@ -61,7 +61,7 @@\n if iteration % self._update_interval == 0:\n epoch = trainer.updater.epoch_detail\n recent_timing = self._recent_timing\n- now = time.clock()\n+ now = time.time()\n \n if len(recent_timing) >= 1:\n out.write('\\033[J')\n", "issue": "ProgressBar iters/sec is too low in CPU mode\nThe 'iters/sec' speed displayed in the `ProgressBar` extension seems too low when running a model in CPU mode. I suspect this is due to the use of `time.clock()` instead of `time.time()`. `time.clock()` apparently measures the total time spent in all CPU cores, which can cause the measured time to (sometimes greatly) exceed the wall clock time.\n\n", "before_files": [{"content": "from __future__ import division\nimport datetime\nimport sys\nimport time\n\nfrom chainer.training import extension\nfrom chainer.training import trigger\n\n\nclass ProgressBar(extension.Extension):\n\n \"\"\"Trainer extension to print a progress bar and recent training status.\n\n This extension prints a progress bar at every call. It watches the current\n iteration and epoch to print the bar.\n\n Args:\n training_length (tuple): Length of whole training. It consists of an\n integer and either ``'epoch'`` or ``'iteration'``. If this value is\n omitted and the stop trigger of the trainer is\n :class:`IntervalTrigger`, this extension uses its attributes to\n determine the length of the training.\n update_interval (int): Number of iterations to skip printing the\n progress bar.\n bar_length (int): Length of the progress bar in characters.\n out: Stream to print the bar. Standard output is used by default.\n\n \"\"\"\n def __init__(self, training_length=None, update_interval=100,\n bar_length=50, out=sys.stdout):\n self._training_length = training_length\n self._status_template = None\n self._update_interval = update_interval\n self._bar_length = bar_length\n self._out = out\n self._recent_timing = []\n\n def __call__(self, trainer):\n training_length = self._training_length\n\n # initialize some attributes at the first call\n if training_length is None:\n t = trainer.stop_trigger\n if not isinstance(t, trigger.IntervalTrigger):\n raise TypeError(\n 'cannot retrieve the training length from %s' % type(t))\n training_length = self._training_length = t.period, t.unit\n\n stat_template = self._status_template\n if stat_template is None:\n stat_template = self._status_template = (\n '{0.iteration:10} iter, {0.epoch} epoch / %s %ss\\n' %\n training_length)\n\n length, unit = training_length\n out = self._out\n\n iteration = trainer.updater.iteration\n\n # print the progress bar\n if iteration % self._update_interval == 0:\n epoch = trainer.updater.epoch_detail\n recent_timing = self._recent_timing\n now = time.clock()\n\n if len(recent_timing) >= 1:\n out.write('\\033[J')\n\n if unit == 'iteration':\n rate = iteration / length\n else:\n rate = epoch / length\n\n bar_length = self._bar_length\n marks = '#' * int(rate * bar_length)\n out.write(' total [{}{}] {:6.2%}\\n'.format(\n marks, '.' * (bar_length - len(marks)), rate))\n\n epoch_rate = epoch - int(epoch)\n marks = '#' * int(epoch_rate * bar_length)\n out.write('this epoch [{}{}] {:6.2%}\\n'.format(\n marks, '.' 
* (bar_length - len(marks)), epoch_rate))\n\n status = stat_template.format(trainer.updater)\n out.write(status)\n\n old_t, old_e, old_sec = recent_timing[0]\n speed_t = (iteration - old_t) / (now - old_sec)\n speed_e = (epoch - old_e) / (now - old_sec)\n if unit == 'iteration':\n estimated_time = (length - iteration) / speed_t\n else:\n estimated_time = (length - epoch) / speed_e\n out.write('{:10.5g} iters/sec. Estimated time to finish: {}.\\n'\n .format(speed_t,\n datetime.timedelta(seconds=estimated_time)))\n\n # move the cursor to the head of the progress bar\n out.write('\\033[4A')\n out.flush()\n\n if len(recent_timing) > 100:\n del recent_timing[0]\n\n recent_timing.append((iteration, epoch, now))\n\n def finalize(self):\n # delete the progress bar\n out = self._out\n out.write('\\033[J')\n out.flush()\n", "path": "chainer/training/extensions/progress_bar.py"}], "after_files": [{"content": "from __future__ import division\nimport datetime\nimport sys\nimport time\n\nfrom chainer.training import extension\nfrom chainer.training import trigger\n\n\nclass ProgressBar(extension.Extension):\n\n \"\"\"Trainer extension to print a progress bar and recent training status.\n\n This extension prints a progress bar at every call. It watches the current\n iteration and epoch to print the bar.\n\n Args:\n training_length (tuple): Length of whole training. It consists of an\n integer and either ``'epoch'`` or ``'iteration'``. If this value is\n omitted and the stop trigger of the trainer is\n :class:`IntervalTrigger`, this extension uses its attributes to\n determine the length of the training.\n update_interval (int): Number of iterations to skip printing the\n progress bar.\n bar_length (int): Length of the progress bar in characters.\n out: Stream to print the bar. Standard output is used by default.\n\n \"\"\"\n def __init__(self, training_length=None, update_interval=100,\n bar_length=50, out=sys.stdout):\n self._training_length = training_length\n self._status_template = None\n self._update_interval = update_interval\n self._bar_length = bar_length\n self._out = out\n self._recent_timing = []\n\n def __call__(self, trainer):\n training_length = self._training_length\n\n # initialize some attributes at the first call\n if training_length is None:\n t = trainer.stop_trigger\n if not isinstance(t, trigger.IntervalTrigger):\n raise TypeError(\n 'cannot retrieve the training length from %s' % type(t))\n training_length = self._training_length = t.period, t.unit\n\n stat_template = self._status_template\n if stat_template is None:\n stat_template = self._status_template = (\n '{0.iteration:10} iter, {0.epoch} epoch / %s %ss\\n' %\n training_length)\n\n length, unit = training_length\n out = self._out\n\n iteration = trainer.updater.iteration\n\n # print the progress bar\n if iteration % self._update_interval == 0:\n epoch = trainer.updater.epoch_detail\n recent_timing = self._recent_timing\n now = time.time()\n\n if len(recent_timing) >= 1:\n out.write('\\033[J')\n\n if unit == 'iteration':\n rate = iteration / length\n else:\n rate = epoch / length\n\n bar_length = self._bar_length\n marks = '#' * int(rate * bar_length)\n out.write(' total [{}{}] {:6.2%}\\n'.format(\n marks, '.' * (bar_length - len(marks)), rate))\n\n epoch_rate = epoch - int(epoch)\n marks = '#' * int(epoch_rate * bar_length)\n out.write('this epoch [{}{}] {:6.2%}\\n'.format(\n marks, '.' 
* (bar_length - len(marks)), epoch_rate))\n\n status = stat_template.format(trainer.updater)\n out.write(status)\n\n old_t, old_e, old_sec = recent_timing[0]\n speed_t = (iteration - old_t) / (now - old_sec)\n speed_e = (epoch - old_e) / (now - old_sec)\n if unit == 'iteration':\n estimated_time = (length - iteration) / speed_t\n else:\n estimated_time = (length - epoch) / speed_e\n out.write('{:10.5g} iters/sec. Estimated time to finish: {}.\\n'\n .format(speed_t,\n datetime.timedelta(seconds=estimated_time)))\n\n # move the cursor to the head of the progress bar\n out.write('\\033[4A')\n out.flush()\n\n if len(recent_timing) > 100:\n del recent_timing[0]\n\n recent_timing.append((iteration, epoch, now))\n\n def finalize(self):\n # delete the progress bar\n out = self._out\n out.write('\\033[J')\n out.flush()\n", "path": "chainer/training/extensions/progress_bar.py"}]} | 1,469 | 125 |
gh_patches_debug_18395 | rasdani/github-patches | git_diff | getnikola__nikola-1622 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Cannot deploy after removing cache
Hello,
the deploy command crashes like this:
IOError: [Errno 2] No such file or directory: 'cache/lastdeploy'
Why, yes, of course, I can remove "cache", can't I? Seems I have to "mkdir" it. Also I would like to disable its usage, as the point is unclear to me, using rsync does the trick to me.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `nikola/plugins/command/deploy.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 # Copyright Β© 2012-2015 Roberto Alsina and others.
4
5 # Permission is hereby granted, free of charge, to any
6 # person obtaining a copy of this software and associated
7 # documentation files (the "Software"), to deal in the
8 # Software without restriction, including without limitation
9 # the rights to use, copy, modify, merge, publish,
10 # distribute, sublicense, and/or sell copies of the
11 # Software, and to permit persons to whom the Software is
12 # furnished to do so, subject to the following conditions:
13 #
14 # The above copyright notice and this permission notice
15 # shall be included in all copies or substantial portions of
16 # the Software.
17 #
18 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY
19 # KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
20 # WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR
21 # PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS
22 # OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
23 # OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
24 # OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
25 # SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
26
27 from __future__ import print_function
28 import io
29 from datetime import datetime
30 from dateutil.tz import gettz
31 import os
32 import sys
33 import subprocess
34 import time
35
36 from blinker import signal
37
38 from nikola.plugin_categories import Command
39 from nikola.utils import get_logger, remove_file, unicode_str
40
41
42 class CommandDeploy(Command):
43 """Deploy site."""
44 name = "deploy"
45
46 doc_usage = "[[preset [preset...]]"
47 doc_purpose = "deploy the site"
48 doc_description = "Deploy the site by executing deploy commands from the presets listed on the command line. If no presets are specified, `default` is executed."
49 logger = None
50
51 def _execute(self, command, args):
52 self.logger = get_logger('deploy', self.site.loghandlers)
53 # Get last successful deploy date
54 timestamp_path = os.path.join(self.site.config['CACHE_FOLDER'], 'lastdeploy')
55 if self.site.config['COMMENT_SYSTEM_ID'] == 'nikolademo':
56 self.logger.warn("\nWARNING WARNING WARNING WARNING\n"
57 "You are deploying using the nikolademo Disqus account.\n"
58 "That means you will not be able to moderate the comments in your own site.\n"
59 "And is probably not what you want to do.\n"
60 "Think about it for 5 seconds, I'll wait :-)\n\n")
61 time.sleep(5)
62
63 deploy_drafts = self.site.config.get('DEPLOY_DRAFTS', True)
64 deploy_future = self.site.config.get('DEPLOY_FUTURE', False)
65 undeployed_posts = []
66 if not (deploy_drafts and deploy_future):
67 # Remove drafts and future posts
68 out_dir = self.site.config['OUTPUT_FOLDER']
69 self.site.scan_posts()
70 for post in self.site.timeline:
71 if (not deploy_drafts and post.is_draft) or \
72 (not deploy_future and post.publish_later):
73 remove_file(os.path.join(out_dir, post.destination_path()))
74 remove_file(os.path.join(out_dir, post.source_path))
75 undeployed_posts.append(post)
76
77 if args:
78 presets = args
79 else:
80 presets = ['default']
81
82 # test for preset existence
83 for preset in presets:
84 try:
85 self.site.config['DEPLOY_COMMANDS'][preset]
86 except:
87 self.logger.error('No such preset: {0}'.format(preset))
88 sys.exit(255)
89
90 for preset in presets:
91 self.logger.info("=> preset '{0}'".format(preset))
92 for command in self.site.config['DEPLOY_COMMANDS'][preset]:
93 self.logger.info("==> {0}".format(command))
94 try:
95 subprocess.check_call(command, shell=True)
96 except subprocess.CalledProcessError as e:
97 self.logger.error('Failed deployment β command {0} '
98 'returned {1}'.format(e.cmd, e.returncode))
99 sys.exit(e.returncode)
100
101 self.logger.info("Successful deployment")
102 try:
103 with io.open(timestamp_path, 'r', encoding='utf8') as inf:
104 last_deploy = datetime.strptime(inf.read().strip(), "%Y-%m-%dT%H:%M:%S.%f")
105 clean = False
106 except (IOError, Exception) as e:
107 self.logger.debug("Problem when reading `{0}`: {1}".format(timestamp_path, e))
108 last_deploy = datetime(1970, 1, 1)
109 clean = True
110
111 new_deploy = datetime.utcnow()
112 self._emit_deploy_event(last_deploy, new_deploy, clean, undeployed_posts)
113
114 os.makedirs(self.site.config['CACHE_FOLDER'])
115 # Store timestamp of successful deployment
116 with io.open(timestamp_path, 'w+', encoding='utf8') as outf:
117 outf.write(unicode_str(new_deploy.isoformat()))
118
119 def _emit_deploy_event(self, last_deploy, new_deploy, clean=False, undeployed=None):
120 """ Emit events for all timeline entries newer than last deploy.
121
122 last_deploy: datetime
123 Time stamp of the last successful deployment.
124
125 new_deploy: datetime
126 Time stamp of the current deployment.
127
128 clean: bool
129 True when it appears like deploy is being run after a clean.
130
131 """
132
133 event = {
134 'last_deploy': last_deploy,
135 'new_deploy': new_deploy,
136 'clean': clean,
137 'undeployed': undeployed
138 }
139
140 if last_deploy.tzinfo is None:
141 last_deploy = last_deploy.replace(tzinfo=gettz('UTC'))
142
143 deployed = [
144 entry for entry in self.site.timeline
145 if entry.date > last_deploy and entry not in undeployed
146 ]
147
148 event['deployed'] = deployed
149
150 if len(deployed) > 0 or len(undeployed) > 0:
151 signal('deployed').send(event)
152
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/nikola/plugins/command/deploy.py b/nikola/plugins/command/deploy.py
--- a/nikola/plugins/command/deploy.py
+++ b/nikola/plugins/command/deploy.py
@@ -36,7 +36,7 @@
from blinker import signal
from nikola.plugin_categories import Command
-from nikola.utils import get_logger, remove_file, unicode_str
+from nikola.utils import get_logger, remove_file, unicode_str, makedirs
class CommandDeploy(Command):
@@ -111,7 +111,7 @@
new_deploy = datetime.utcnow()
self._emit_deploy_event(last_deploy, new_deploy, clean, undeployed_posts)
- os.makedirs(self.site.config['CACHE_FOLDER'])
+ makedirs(self.site.config['CACHE_FOLDER'])
# Store timestamp of successful deployment
with io.open(timestamp_path, 'w+', encoding='utf8') as outf:
outf.write(unicode_str(new_deploy.isoformat()))
| {"golden_diff": "diff --git a/nikola/plugins/command/deploy.py b/nikola/plugins/command/deploy.py\n--- a/nikola/plugins/command/deploy.py\n+++ b/nikola/plugins/command/deploy.py\n@@ -36,7 +36,7 @@\n from blinker import signal\n \n from nikola.plugin_categories import Command\n-from nikola.utils import get_logger, remove_file, unicode_str\n+from nikola.utils import get_logger, remove_file, unicode_str, makedirs\n \n \n class CommandDeploy(Command):\n@@ -111,7 +111,7 @@\n new_deploy = datetime.utcnow()\n self._emit_deploy_event(last_deploy, new_deploy, clean, undeployed_posts)\n \n- os.makedirs(self.site.config['CACHE_FOLDER'])\n+ makedirs(self.site.config['CACHE_FOLDER'])\n # Store timestamp of successful deployment\n with io.open(timestamp_path, 'w+', encoding='utf8') as outf:\n outf.write(unicode_str(new_deploy.isoformat()))\n", "issue": "Cannot deploy after removing cache\nHello,\n\nthe deploy command crashes like this:\n\nIOError: [Errno 2] No such file or directory: 'cache/lastdeploy'\n\nWhy, yes, of course, I can remove \"cache\", can't I? Seems I have to \"mkdir\" it. Also I would like to disable its usage, as the point is unclear to my, using rsync does the trick to me.\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Copyright \u00a9 2012-2015 Roberto Alsina and others.\n\n# Permission is hereby granted, free of charge, to any\n# person obtaining a copy of this software and associated\n# documentation files (the \"Software\"), to deal in the\n# Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the\n# Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice\n# shall be included in all copies or substantial portions of\n# the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY\n# KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE\n# WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR\n# PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS\n# OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR\n# OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR\n# OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\nfrom __future__ import print_function\nimport io\nfrom datetime import datetime\nfrom dateutil.tz import gettz\nimport os\nimport sys\nimport subprocess\nimport time\n\nfrom blinker import signal\n\nfrom nikola.plugin_categories import Command\nfrom nikola.utils import get_logger, remove_file, unicode_str\n\n\nclass CommandDeploy(Command):\n \"\"\"Deploy site.\"\"\"\n name = \"deploy\"\n\n doc_usage = \"[[preset [preset...]]\"\n doc_purpose = \"deploy the site\"\n doc_description = \"Deploy the site by executing deploy commands from the presets listed on the command line. 
If no presets are specified, `default` is executed.\"\n logger = None\n\n def _execute(self, command, args):\n self.logger = get_logger('deploy', self.site.loghandlers)\n # Get last successful deploy date\n timestamp_path = os.path.join(self.site.config['CACHE_FOLDER'], 'lastdeploy')\n if self.site.config['COMMENT_SYSTEM_ID'] == 'nikolademo':\n self.logger.warn(\"\\nWARNING WARNING WARNING WARNING\\n\"\n \"You are deploying using the nikolademo Disqus account.\\n\"\n \"That means you will not be able to moderate the comments in your own site.\\n\"\n \"And is probably not what you want to do.\\n\"\n \"Think about it for 5 seconds, I'll wait :-)\\n\\n\")\n time.sleep(5)\n\n deploy_drafts = self.site.config.get('DEPLOY_DRAFTS', True)\n deploy_future = self.site.config.get('DEPLOY_FUTURE', False)\n undeployed_posts = []\n if not (deploy_drafts and deploy_future):\n # Remove drafts and future posts\n out_dir = self.site.config['OUTPUT_FOLDER']\n self.site.scan_posts()\n for post in self.site.timeline:\n if (not deploy_drafts and post.is_draft) or \\\n (not deploy_future and post.publish_later):\n remove_file(os.path.join(out_dir, post.destination_path()))\n remove_file(os.path.join(out_dir, post.source_path))\n undeployed_posts.append(post)\n\n if args:\n presets = args\n else:\n presets = ['default']\n\n # test for preset existence\n for preset in presets:\n try:\n self.site.config['DEPLOY_COMMANDS'][preset]\n except:\n self.logger.error('No such preset: {0}'.format(preset))\n sys.exit(255)\n\n for preset in presets:\n self.logger.info(\"=> preset '{0}'\".format(preset))\n for command in self.site.config['DEPLOY_COMMANDS'][preset]:\n self.logger.info(\"==> {0}\".format(command))\n try:\n subprocess.check_call(command, shell=True)\n except subprocess.CalledProcessError as e:\n self.logger.error('Failed deployment \u2014 command {0} '\n 'returned {1}'.format(e.cmd, e.returncode))\n sys.exit(e.returncode)\n\n self.logger.info(\"Successful deployment\")\n try:\n with io.open(timestamp_path, 'r', encoding='utf8') as inf:\n last_deploy = datetime.strptime(inf.read().strip(), \"%Y-%m-%dT%H:%M:%S.%f\")\n clean = False\n except (IOError, Exception) as e:\n self.logger.debug(\"Problem when reading `{0}`: {1}\".format(timestamp_path, e))\n last_deploy = datetime(1970, 1, 1)\n clean = True\n\n new_deploy = datetime.utcnow()\n self._emit_deploy_event(last_deploy, new_deploy, clean, undeployed_posts)\n\n os.makedirs(self.site.config['CACHE_FOLDER'])\n # Store timestamp of successful deployment\n with io.open(timestamp_path, 'w+', encoding='utf8') as outf:\n outf.write(unicode_str(new_deploy.isoformat()))\n\n def _emit_deploy_event(self, last_deploy, new_deploy, clean=False, undeployed=None):\n \"\"\" Emit events for all timeline entries newer than last deploy.\n\n last_deploy: datetime\n Time stamp of the last successful deployment.\n\n new_deploy: datetime\n Time stamp of the current deployment.\n\n clean: bool\n True when it appears like deploy is being run after a clean.\n\n \"\"\"\n\n event = {\n 'last_deploy': last_deploy,\n 'new_deploy': new_deploy,\n 'clean': clean,\n 'undeployed': undeployed\n }\n\n if last_deploy.tzinfo is None:\n last_deploy = last_deploy.replace(tzinfo=gettz('UTC'))\n\n deployed = [\n entry for entry in self.site.timeline\n if entry.date > last_deploy and entry not in undeployed\n ]\n\n event['deployed'] = deployed\n\n if len(deployed) > 0 or len(undeployed) > 0:\n signal('deployed').send(event)\n", "path": "nikola/plugins/command/deploy.py"}], "after_files": [{"content": "# -*- 
coding: utf-8 -*-\n\n# Copyright \u00a9 2012-2015 Roberto Alsina and others.\n\n# Permission is hereby granted, free of charge, to any\n# person obtaining a copy of this software and associated\n# documentation files (the \"Software\"), to deal in the\n# Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the\n# Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice\n# shall be included in all copies or substantial portions of\n# the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY\n# KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE\n# WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR\n# PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS\n# OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR\n# OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR\n# OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\nfrom __future__ import print_function\nimport io\nfrom datetime import datetime\nfrom dateutil.tz import gettz\nimport os\nimport sys\nimport subprocess\nimport time\n\nfrom blinker import signal\n\nfrom nikola.plugin_categories import Command\nfrom nikola.utils import get_logger, remove_file, unicode_str, makedirs\n\n\nclass CommandDeploy(Command):\n \"\"\"Deploy site.\"\"\"\n name = \"deploy\"\n\n doc_usage = \"[[preset [preset...]]\"\n doc_purpose = \"deploy the site\"\n doc_description = \"Deploy the site by executing deploy commands from the presets listed on the command line. If no presets are specified, `default` is executed.\"\n logger = None\n\n def _execute(self, command, args):\n self.logger = get_logger('deploy', self.site.loghandlers)\n # Get last successful deploy date\n timestamp_path = os.path.join(self.site.config['CACHE_FOLDER'], 'lastdeploy')\n if self.site.config['COMMENT_SYSTEM_ID'] == 'nikolademo':\n self.logger.warn(\"\\nWARNING WARNING WARNING WARNING\\n\"\n \"You are deploying using the nikolademo Disqus account.\\n\"\n \"That means you will not be able to moderate the comments in your own site.\\n\"\n \"And is probably not what you want to do.\\n\"\n \"Think about it for 5 seconds, I'll wait :-)\\n\\n\")\n time.sleep(5)\n\n deploy_drafts = self.site.config.get('DEPLOY_DRAFTS', True)\n deploy_future = self.site.config.get('DEPLOY_FUTURE', False)\n undeployed_posts = []\n if not (deploy_drafts and deploy_future):\n # Remove drafts and future posts\n out_dir = self.site.config['OUTPUT_FOLDER']\n self.site.scan_posts()\n for post in self.site.timeline:\n if (not deploy_drafts and post.is_draft) or \\\n (not deploy_future and post.publish_later):\n remove_file(os.path.join(out_dir, post.destination_path()))\n remove_file(os.path.join(out_dir, post.source_path))\n undeployed_posts.append(post)\n\n if args:\n presets = args\n else:\n presets = ['default']\n\n # test for preset existence\n for preset in presets:\n try:\n self.site.config['DEPLOY_COMMANDS'][preset]\n except:\n self.logger.error('No such preset: {0}'.format(preset))\n sys.exit(255)\n\n for preset in presets:\n self.logger.info(\"=> preset '{0}'\".format(preset))\n for command in self.site.config['DEPLOY_COMMANDS'][preset]:\n self.logger.info(\"==> {0}\".format(command))\n try:\n subprocess.check_call(command, shell=True)\n except subprocess.CalledProcessError 
as e:\n self.logger.error('Failed deployment \u2014 command {0} '\n 'returned {1}'.format(e.cmd, e.returncode))\n sys.exit(e.returncode)\n\n self.logger.info(\"Successful deployment\")\n try:\n with io.open(timestamp_path, 'r', encoding='utf8') as inf:\n last_deploy = datetime.strptime(inf.read().strip(), \"%Y-%m-%dT%H:%M:%S.%f\")\n clean = False\n except (IOError, Exception) as e:\n self.logger.debug(\"Problem when reading `{0}`: {1}\".format(timestamp_path, e))\n last_deploy = datetime(1970, 1, 1)\n clean = True\n\n new_deploy = datetime.utcnow()\n self._emit_deploy_event(last_deploy, new_deploy, clean, undeployed_posts)\n\n makedirs(self.site.config['CACHE_FOLDER'])\n # Store timestamp of successful deployment\n with io.open(timestamp_path, 'w+', encoding='utf8') as outf:\n outf.write(unicode_str(new_deploy.isoformat()))\n\n def _emit_deploy_event(self, last_deploy, new_deploy, clean=False, undeployed=None):\n \"\"\" Emit events for all timeline entries newer than last deploy.\n\n last_deploy: datetime\n Time stamp of the last successful deployment.\n\n new_deploy: datetime\n Time stamp of the current deployment.\n\n clean: bool\n True when it appears like deploy is being run after a clean.\n\n \"\"\"\n\n event = {\n 'last_deploy': last_deploy,\n 'new_deploy': new_deploy,\n 'clean': clean,\n 'undeployed': undeployed\n }\n\n if last_deploy.tzinfo is None:\n last_deploy = last_deploy.replace(tzinfo=gettz('UTC'))\n\n deployed = [\n entry for entry in self.site.timeline\n if entry.date > last_deploy and entry not in undeployed\n ]\n\n event['deployed'] = deployed\n\n if len(deployed) > 0 or len(undeployed) > 0:\n signal('deployed').send(event)\n", "path": "nikola/plugins/command/deploy.py"}]} | 2,006 | 209 |
gh_patches_debug_18753 | rasdani/github-patches | git_diff | Project-MONAI__MONAI-2890 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
NVTX annotation of multiple use of a transform
**Is your feature request related to a problem? Please describe.**
NVTX Range decorator currently is able to patch each transform once. Multiple use of the same transforms with `Range` will result in duplicate annotations.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `monai/utils/nvtx.py`
Content:
```
1 # Copyright 2020 - 2021 MONAI Consortium
2 # Licensed under the Apache License, Version 2.0 (the "License");
3 # you may not use this file except in compliance with the License.
4 # You may obtain a copy of the License at
5 # http://www.apache.org/licenses/LICENSE-2.0
6 # Unless required by applicable law or agreed to in writing, software
7 # distributed under the License is distributed on an "AS IS" BASIS,
8 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
9 # See the License for the specific language governing permissions and
10 # limitations under the License.
11 """
12 Decorators and context managers for NVIDIA Tools Extension to profile MONAI components
13 """
14
15 from collections import defaultdict
16 from functools import wraps
17 from typing import Any, Optional, Tuple, Union
18
19 from torch.autograd import Function
20 from torch.nn import Module
21 from torch.optim import Optimizer
22 from torch.utils.data import Dataset
23
24 from monai.utils import ensure_tuple, optional_import
25
26 _nvtx, _ = optional_import("torch._C._nvtx", descriptor="NVTX is not installed. Are you sure you have a CUDA build?")
27
28 __all__ = ["Range"]
29
30
31 class Range:
32 """
33 A decorator and context manager for NVIDIA Tools Extension (NVTX) Range for profiling.
34 When used as a decorator it encloses a specific method of the object with an NVTX Range.
35 When used as a context manager, it encloses the runtime context (created by with statement) with an NVTX Range.
36
37 Args:
38 name: the name to be associated to the range
39 methods: (only when used as decorator) the name of a method (or a list of the name of the methods)
40 to be wrapped by NVTX range.
41 If None (default), the method(s) will be inferred based on the object's type for various MONAI components,
42 such as Networks, Losses, Functions, Transforms, and Datasets.
43 Otherwise, it look up predefined methods: "forward", "__call__", "__next__", "__getitem__"
44 append_method_name: if append the name of the methods to be decorated to the range's name
45 If None (default), it appends the method's name only if we are annotating more than one method.
46
47 """
48
49 name_counter: dict = defaultdict(int)
50
51 def __init__(
52 self,
53 name: Optional[str] = None,
54 methods: Optional[Union[str, Tuple[str, ...]]] = None,
55 append_method_name: Optional[bool] = None,
56 ) -> None:
57 self.name = name
58 self.methods = methods
59 self.append_method_name = append_method_name
60
61 def __call__(self, obj: Any):
62 # Define the name to be associated to the range if not provided
63 if self.name is None:
64 name = type(obj).__name__
65 self.name_counter[name] += 1
66 self.name = f"{name}_{self.name_counter[name]}"
67
68 # Define the methods to be wrapped if not provided
69 if self.methods is None:
70 self.methods = self._get_method(obj)
71 else:
72 self.methods = ensure_tuple(self.methods)
73
74 # Check if to append method's name to the range's name
75 if self.append_method_name is None:
76 if len(self.methods) > 1:
77 self.append_method_name = True
78 else:
79 self.append_method_name = False
80
81 # Decorate the methods
82 for method in self.methods:
83 self._decorate_method(obj, method, self.append_method_name)
84
85 return obj
86
87 def _decorate_method(self, obj, method, append_method_name):
88 # Append the method's name to the range's name
89 if append_method_name:
90 name = f"{self.name}.{method}"
91 else:
92 name = self.name
93
94 # Get the class for special functions
95 if method.startswith("_"):
96 owner = type(obj)
97 else:
98 owner = obj
99
100 # Get the method to be wrapped
101 _temp_func = getattr(owner, method)
102
103 # Wrap the method with NVTX range (range push/pop)
104 @wraps(_temp_func)
105 def range_wrapper(*args, **kwargs):
106 _nvtx.rangePushA(name)
107 output = _temp_func(*args, **kwargs)
108 _nvtx.rangePop()
109 return output
110
111 # Replace the method with the wrapped version
112 setattr(owner, method, range_wrapper)
113
114 def _get_method(self, obj: Any) -> tuple:
115 if isinstance(obj, Module):
116 method_list = ["forward"]
117 elif isinstance(obj, Optimizer):
118 method_list = ["step"]
119 elif isinstance(obj, Function):
120 method_list = ["forward", "backward"]
121 elif isinstance(obj, Dataset):
122 method_list = ["__getitem__"]
123 else:
124 default_methods = ["forward", "__call__", "__next__", "__getitem__"]
125 method_list = []
126 for method in default_methods:
127 if hasattr(obj, method):
128 method_list.append(method)
129 if len(method_list) < 1:
130 raise ValueError(
131 f"The method to be wrapped for this object [{type(obj)}] is not recognized."
132 "The name of the method should be provied or the object should have one of these methods:"
133 f"{default_methods}"
134 )
135 return ensure_tuple(method_list)
136
137 def __enter__(self):
138 if self.name is None:
139 # Number the range with class variable counter to avoid duplicate names.
140 self.name_counter["context"] += 1
141 self.name = f"context_{self.name_counter['context']}"
142
143 _nvtx.rangePushA(self.name)
144
145 def __exit__(self, type, value, traceback):
146 _nvtx.rangePop()
147
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/monai/utils/nvtx.py b/monai/utils/nvtx.py
--- a/monai/utils/nvtx.py
+++ b/monai/utils/nvtx.py
@@ -92,7 +92,7 @@
name = self.name
# Get the class for special functions
- if method.startswith("_"):
+ if method.startswith("__"):
owner = type(obj)
else:
owner = obj
@@ -109,7 +109,16 @@
return output
# Replace the method with the wrapped version
- setattr(owner, method, range_wrapper)
+ if method.startswith("__"):
+ # If it is a special method, it requires special attention
+ class NVTXRangeDecoratedClass(owner):
+ ...
+
+ setattr(NVTXRangeDecoratedClass, method, range_wrapper)
+ obj.__class__ = NVTXRangeDecoratedClass
+
+ else:
+ setattr(owner, method, range_wrapper)
def _get_method(self, obj: Any) -> tuple:
if isinstance(obj, Module):
| {"golden_diff": "diff --git a/monai/utils/nvtx.py b/monai/utils/nvtx.py\n--- a/monai/utils/nvtx.py\n+++ b/monai/utils/nvtx.py\n@@ -92,7 +92,7 @@\n name = self.name\n \n # Get the class for special functions\n- if method.startswith(\"_\"):\n+ if method.startswith(\"__\"):\n owner = type(obj)\n else:\n owner = obj\n@@ -109,7 +109,16 @@\n return output\n \n # Replace the method with the wrapped version\n- setattr(owner, method, range_wrapper)\n+ if method.startswith(\"__\"):\n+ # If it is a special method, it requires special attention\n+ class NVTXRangeDecoratedClass(owner):\n+ ...\n+\n+ setattr(NVTXRangeDecoratedClass, method, range_wrapper)\n+ obj.__class__ = NVTXRangeDecoratedClass\n+\n+ else:\n+ setattr(owner, method, range_wrapper)\n \n def _get_method(self, obj: Any) -> tuple:\n if isinstance(obj, Module):\n", "issue": "NVTX annotation of multiple use of a transform\n**Is your feature request related to a problem? Please describe.**\r\nNVTX Range decorator currently is able to to patch each transform once. Multiple use of the same transforms with `Range` will result in duplicate annotations.\r\n\n", "before_files": [{"content": "# Copyright 2020 - 2021 MONAI Consortium\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n# http://www.apache.org/licenses/LICENSE-2.0\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"\nDecorators and context managers for NVIDIA Tools Extension to profile MONAI components\n\"\"\"\n\nfrom collections import defaultdict\nfrom functools import wraps\nfrom typing import Any, Optional, Tuple, Union\n\nfrom torch.autograd import Function\nfrom torch.nn import Module\nfrom torch.optim import Optimizer\nfrom torch.utils.data import Dataset\n\nfrom monai.utils import ensure_tuple, optional_import\n\n_nvtx, _ = optional_import(\"torch._C._nvtx\", descriptor=\"NVTX is not installed. 
Are you sure you have a CUDA build?\")\n\n__all__ = [\"Range\"]\n\n\nclass Range:\n \"\"\"\n A decorator and context manager for NVIDIA Tools Extension (NVTX) Range for profiling.\n When used as a decorator it encloses a specific method of the object with an NVTX Range.\n When used as a context manager, it encloses the runtime context (created by with statement) with an NVTX Range.\n\n Args:\n name: the name to be associated to the range\n methods: (only when used as decorator) the name of a method (or a list of the name of the methods)\n to be wrapped by NVTX range.\n If None (default), the method(s) will be inferred based on the object's type for various MONAI components,\n such as Networks, Losses, Functions, Transforms, and Datasets.\n Otherwise, it look up predefined methods: \"forward\", \"__call__\", \"__next__\", \"__getitem__\"\n append_method_name: if append the name of the methods to be decorated to the range's name\n If None (default), it appends the method's name only if we are annotating more than one method.\n\n \"\"\"\n\n name_counter: dict = defaultdict(int)\n\n def __init__(\n self,\n name: Optional[str] = None,\n methods: Optional[Union[str, Tuple[str, ...]]] = None,\n append_method_name: Optional[bool] = None,\n ) -> None:\n self.name = name\n self.methods = methods\n self.append_method_name = append_method_name\n\n def __call__(self, obj: Any):\n # Define the name to be associated to the range if not provided\n if self.name is None:\n name = type(obj).__name__\n self.name_counter[name] += 1\n self.name = f\"{name}_{self.name_counter[name]}\"\n\n # Define the methods to be wrapped if not provided\n if self.methods is None:\n self.methods = self._get_method(obj)\n else:\n self.methods = ensure_tuple(self.methods)\n\n # Check if to append method's name to the range's name\n if self.append_method_name is None:\n if len(self.methods) > 1:\n self.append_method_name = True\n else:\n self.append_method_name = False\n\n # Decorate the methods\n for method in self.methods:\n self._decorate_method(obj, method, self.append_method_name)\n\n return obj\n\n def _decorate_method(self, obj, method, append_method_name):\n # Append the method's name to the range's name\n if append_method_name:\n name = f\"{self.name}.{method}\"\n else:\n name = self.name\n\n # Get the class for special functions\n if method.startswith(\"_\"):\n owner = type(obj)\n else:\n owner = obj\n\n # Get the method to be wrapped\n _temp_func = getattr(owner, method)\n\n # Wrap the method with NVTX range (range push/pop)\n @wraps(_temp_func)\n def range_wrapper(*args, **kwargs):\n _nvtx.rangePushA(name)\n output = _temp_func(*args, **kwargs)\n _nvtx.rangePop()\n return output\n\n # Replace the method with the wrapped version\n setattr(owner, method, range_wrapper)\n\n def _get_method(self, obj: Any) -> tuple:\n if isinstance(obj, Module):\n method_list = [\"forward\"]\n elif isinstance(obj, Optimizer):\n method_list = [\"step\"]\n elif isinstance(obj, Function):\n method_list = [\"forward\", \"backward\"]\n elif isinstance(obj, Dataset):\n method_list = [\"__getitem__\"]\n else:\n default_methods = [\"forward\", \"__call__\", \"__next__\", \"__getitem__\"]\n method_list = []\n for method in default_methods:\n if hasattr(obj, method):\n method_list.append(method)\n if len(method_list) < 1:\n raise ValueError(\n f\"The method to be wrapped for this object [{type(obj)}] is not recognized.\"\n \"The name of the method should be provied or the object should have one of these methods:\"\n f\"{default_methods}\"\n )\n 
return ensure_tuple(method_list)\n\n def __enter__(self):\n if self.name is None:\n # Number the range with class variable counter to avoid duplicate names.\n self.name_counter[\"context\"] += 1\n self.name = f\"context_{self.name_counter['context']}\"\n\n _nvtx.rangePushA(self.name)\n\n def __exit__(self, type, value, traceback):\n _nvtx.rangePop()\n", "path": "monai/utils/nvtx.py"}], "after_files": [{"content": "# Copyright 2020 - 2021 MONAI Consortium\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n# http://www.apache.org/licenses/LICENSE-2.0\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"\nDecorators and context managers for NVIDIA Tools Extension to profile MONAI components\n\"\"\"\n\nfrom collections import defaultdict\nfrom functools import wraps\nfrom typing import Any, Optional, Tuple, Union\n\nfrom torch.autograd import Function\nfrom torch.nn import Module\nfrom torch.optim import Optimizer\nfrom torch.utils.data import Dataset\n\nfrom monai.utils import ensure_tuple, optional_import\n\n_nvtx, _ = optional_import(\"torch._C._nvtx\", descriptor=\"NVTX is not installed. Are you sure you have a CUDA build?\")\n\n__all__ = [\"Range\"]\n\n\nclass Range:\n \"\"\"\n A decorator and context manager for NVIDIA Tools Extension (NVTX) Range for profiling.\n When used as a decorator it encloses a specific method of the object with an NVTX Range.\n When used as a context manager, it encloses the runtime context (created by with statement) with an NVTX Range.\n\n Args:\n name: the name to be associated to the range\n methods: (only when used as decorator) the name of a method (or a list of the name of the methods)\n to be wrapped by NVTX range.\n If None (default), the method(s) will be inferred based on the object's type for various MONAI components,\n such as Networks, Losses, Functions, Transforms, and Datasets.\n Otherwise, it look up predefined methods: \"forward\", \"__call__\", \"__next__\", \"__getitem__\"\n append_method_name: if append the name of the methods to be decorated to the range's name\n If None (default), it appends the method's name only if we are annotating more than one method.\n\n \"\"\"\n\n name_counter: dict = defaultdict(int)\n\n def __init__(\n self,\n name: Optional[str] = None,\n methods: Optional[Union[str, Tuple[str, ...]]] = None,\n append_method_name: Optional[bool] = None,\n ) -> None:\n self.name = name\n self.methods = methods\n self.append_method_name = append_method_name\n\n def __call__(self, obj: Any):\n # Define the name to be associated to the range if not provided\n if self.name is None:\n name = type(obj).__name__\n self.name_counter[name] += 1\n self.name = f\"{name}_{self.name_counter[name]}\"\n\n # Define the methods to be wrapped if not provided\n if self.methods is None:\n self.methods = self._get_method(obj)\n else:\n self.methods = ensure_tuple(self.methods)\n\n # Check if to append method's name to the range's name\n if self.append_method_name is None:\n if len(self.methods) > 1:\n self.append_method_name = True\n else:\n self.append_method_name = False\n\n # Decorate the methods\n for method in self.methods:\n 
self._decorate_method(obj, method, self.append_method_name)\n\n return obj\n\n def _decorate_method(self, obj, method, append_method_name):\n # Append the method's name to the range's name\n if append_method_name:\n name = f\"{self.name}.{method}\"\n else:\n name = self.name\n\n # Get the class for special functions\n if method.startswith(\"__\"):\n owner = type(obj)\n else:\n owner = obj\n\n # Get the method to be wrapped\n _temp_func = getattr(owner, method)\n\n # Wrap the method with NVTX range (range push/pop)\n @wraps(_temp_func)\n def range_wrapper(*args, **kwargs):\n _nvtx.rangePushA(name)\n output = _temp_func(*args, **kwargs)\n _nvtx.rangePop()\n return output\n\n # Replace the method with the wrapped version\n if method.startswith(\"__\"):\n # If it is a special method, it requires special attention\n class NVTXRangeDecoratedClass(owner):\n ...\n\n setattr(NVTXRangeDecoratedClass, method, range_wrapper)\n obj.__class__ = NVTXRangeDecoratedClass\n\n else:\n setattr(owner, method, range_wrapper)\n\n def _get_method(self, obj: Any) -> tuple:\n if isinstance(obj, Module):\n method_list = [\"forward\"]\n elif isinstance(obj, Optimizer):\n method_list = [\"step\"]\n elif isinstance(obj, Function):\n method_list = [\"forward\", \"backward\"]\n elif isinstance(obj, Dataset):\n method_list = [\"__getitem__\"]\n else:\n default_methods = [\"forward\", \"__call__\", \"__next__\", \"__getitem__\"]\n method_list = []\n for method in default_methods:\n if hasattr(obj, method):\n method_list.append(method)\n if len(method_list) < 1:\n raise ValueError(\n f\"The method to be wrapped for this object [{type(obj)}] is not recognized.\"\n \"The name of the method should be provied or the object should have one of these methods:\"\n f\"{default_methods}\"\n )\n return ensure_tuple(method_list)\n\n def __enter__(self):\n if self.name is None:\n # Number the range with class variable counter to avoid duplicate names.\n self.name_counter[\"context\"] += 1\n self.name = f\"context_{self.name_counter['context']}\"\n\n _nvtx.rangePushA(self.name)\n\n def __exit__(self, type, value, traceback):\n _nvtx.rangePop()\n", "path": "monai/utils/nvtx.py"}]} | 1,897 | 244 |
gh_patches_debug_957 | rasdani/github-patches | git_diff | pymeasure__pymeasure-909 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Check all Channel classes for docstrings
#895 added a property docstring test. It works, however, only for the `Instrument` classes which are publicly available.
Channels (and some base instruments), which are not imported in the init files, are not checked.
This issue is about collecting all `Instrument` and `Channel` subclasses in order to check them for docstring consistencies.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pymeasure/instruments/__init__.py`
Content:
```
1 #
2 # This file is part of the PyMeasure package.
3 #
4 # Copyright (c) 2013-2023 PyMeasure Developers
5 #
6 # Permission is hereby granted, free of charge, to any person obtaining a copy
7 # of this software and associated documentation files (the "Software"), to deal
8 # in the Software without restriction, including without limitation the rights
9 # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
10 # copies of the Software, and to permit persons to whom the Software is
11 # furnished to do so, subject to the following conditions:
12 #
13 # The above copyright notice and this permission notice shall be included in
14 # all copies or substantial portions of the Software.
15 #
16 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
17 # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
18 # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
19 # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
20 # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
21 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
22 # THE SOFTWARE.
23 #
24
25 from ..errors import RangeError, RangeException
26 from .channel import Channel
27 from .instrument import Instrument
28 from .resources import list_resources
29 from .validators import discreteTruncate
30
31 from . import activetechnologies
32 from . import advantest
33 from . import agilent
34 from . import aja
35 from . import ametek
36 from . import ami
37 from . import anaheimautomation
38 from . import anapico
39 from . import andeenhagerling
40 from . import anritsu
41 from . import attocube
42 from . import bkprecision
43 from . import danfysik
44 from . import deltaelektronika
45 from . import edwards
46 from . import eurotest
47 from . import fluke
48 from . import fwbell
49 from . import hcp
50 from . import heidenhain
51 from . import hp
52 from . import ipgphotonics
53 from . import keithley
54 from . import keysight
55 from . import lakeshore
56 from . import lecroy
57 from . import mksinst
58 from . import newport
59 from . import ni
60 from . import oxfordinstruments
61 from . import parker
62 from . import pendulum
63 from . import razorbill
64 from . import rohdeschwarz
65 from . import siglenttechnologies
66 from . import signalrecovery
67 from . import srs
68 from . import tcpowerconversion
69 from . import tektronix
70 from . import temptronic
71 from . import texio
72 from . import thermotron
73 from . import thorlabs
74 from . import toptica
75 from . import velleman
76 from . import yokogawa
77
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pymeasure/instruments/__init__.py b/pymeasure/instruments/__init__.py
--- a/pymeasure/instruments/__init__.py
+++ b/pymeasure/instruments/__init__.py
@@ -67,6 +67,7 @@
from . import srs
from . import tcpowerconversion
from . import tektronix
+from . import teledyne
from . import temptronic
from . import texio
from . import thermotron
| {"golden_diff": "diff --git a/pymeasure/instruments/__init__.py b/pymeasure/instruments/__init__.py\n--- a/pymeasure/instruments/__init__.py\n+++ b/pymeasure/instruments/__init__.py\n@@ -67,6 +67,7 @@\n from . import srs\n from . import tcpowerconversion\n from . import tektronix\n+from . import teledyne\n from . import temptronic\n from . import texio\n from . import thermotron\n", "issue": "Check all Channel classes for docstrings\n#895 added a property docstring test. It works, however, only for the `Instrument` classes which are publicly available.\r\nChannels (and some base instruments), which are not imported in the init files, are not checked.\r\n\r\nThis issue is about collecting all `Instrument` and `Channel` subclasses in order to check them for docstring consistencies.\n", "before_files": [{"content": "#\n# This file is part of the PyMeasure package.\n#\n# Copyright (c) 2013-2023 PyMeasure Developers\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n# THE SOFTWARE.\n#\n\nfrom ..errors import RangeError, RangeException\nfrom .channel import Channel\nfrom .instrument import Instrument\nfrom .resources import list_resources\nfrom .validators import discreteTruncate\n\nfrom . import activetechnologies\nfrom . import advantest\nfrom . import agilent\nfrom . import aja\nfrom . import ametek\nfrom . import ami\nfrom . import anaheimautomation\nfrom . import anapico\nfrom . import andeenhagerling\nfrom . import anritsu\nfrom . import attocube\nfrom . import bkprecision\nfrom . import danfysik\nfrom . import deltaelektronika\nfrom . import edwards\nfrom . import eurotest\nfrom . import fluke\nfrom . import fwbell\nfrom . import hcp\nfrom . import heidenhain\nfrom . import hp\nfrom . import ipgphotonics\nfrom . import keithley\nfrom . import keysight\nfrom . import lakeshore\nfrom . import lecroy\nfrom . import mksinst\nfrom . import newport\nfrom . import ni\nfrom . import oxfordinstruments\nfrom . import parker\nfrom . import pendulum\nfrom . import razorbill\nfrom . import rohdeschwarz\nfrom . import siglenttechnologies\nfrom . import signalrecovery\nfrom . import srs\nfrom . import tcpowerconversion\nfrom . import tektronix\nfrom . import temptronic\nfrom . import texio\nfrom . import thermotron\nfrom . import thorlabs\nfrom . import toptica\nfrom . import velleman\nfrom . 
import yokogawa\n", "path": "pymeasure/instruments/__init__.py"}], "after_files": [{"content": "#\n# This file is part of the PyMeasure package.\n#\n# Copyright (c) 2013-2023 PyMeasure Developers\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n# THE SOFTWARE.\n#\n\nfrom ..errors import RangeError, RangeException\nfrom .channel import Channel\nfrom .instrument import Instrument\nfrom .resources import list_resources\nfrom .validators import discreteTruncate\n\nfrom . import activetechnologies\nfrom . import advantest\nfrom . import agilent\nfrom . import aja\nfrom . import ametek\nfrom . import ami\nfrom . import anaheimautomation\nfrom . import anapico\nfrom . import andeenhagerling\nfrom . import anritsu\nfrom . import attocube\nfrom . import bkprecision\nfrom . import danfysik\nfrom . import deltaelektronika\nfrom . import edwards\nfrom . import eurotest\nfrom . import fluke\nfrom . import fwbell\nfrom . import hcp\nfrom . import heidenhain\nfrom . import hp\nfrom . import ipgphotonics\nfrom . import keithley\nfrom . import keysight\nfrom . import lakeshore\nfrom . import lecroy\nfrom . import mksinst\nfrom . import newport\nfrom . import ni\nfrom . import oxfordinstruments\nfrom . import parker\nfrom . import pendulum\nfrom . import razorbill\nfrom . import rohdeschwarz\nfrom . import siglenttechnologies\nfrom . import signalrecovery\nfrom . import srs\nfrom . import tcpowerconversion\nfrom . import tektronix\nfrom . import teledyne\nfrom . import temptronic\nfrom . import texio\nfrom . import thermotron\nfrom . import thorlabs\nfrom . import toptica\nfrom . import velleman\nfrom . import yokogawa\n", "path": "pymeasure/instruments/__init__.py"}]} | 1,092 | 107 |
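The docstring-coverage issue quoted in the pymeasure entry above asks for a way to collect every `Instrument` and `Channel` subclass, including ones never imported in an `__init__.py`. A minimal sketch of that collection step follows; the module-walking approach and the helper name are assumptions for illustration, not part of the merged one-line import fix.

```python
import importlib
import inspect
import pkgutil

import pymeasure.instruments as instruments_pkg
from pymeasure.instruments import Channel, Instrument


def collect_instrument_and_channel_classes():
    """Import every pymeasure.instruments submodule and gather the relevant classes."""
    found = set()
    prefix = instruments_pkg.__name__ + "."
    for mod_info in pkgutil.walk_packages(instruments_pkg.__path__, prefix):
        # Modules are imported explicitly here so that classes defined in files
        # missing from __init__.py (e.g. a new teledyne driver) are still discovered.
        module = importlib.import_module(mod_info.name)
        for _, obj in inspect.getmembers(module, inspect.isclass):
            if issubclass(obj, (Instrument, Channel)):
                found.add(obj)
    return found
```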
gh_patches_debug_3832 | rasdani/github-patches | git_diff | jazzband__pip-tools-1035 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
No handlers could be found for logger "pip.vcs.git"
Looks like logging is not totally set up. I get this when running `pip-compile` with VCS links:
```
No handlers could be found for logger "pip.vcs.git"
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `piptools/logging.py`
Content:
```
1 # coding: utf-8
2 from __future__ import absolute_import, division, print_function, unicode_literals
3
4 from . import click
5
6
7 class LogContext(object):
8 def __init__(self, verbosity=0):
9 self.verbosity = verbosity
10
11 def log(self, *args, **kwargs):
12 kwargs.setdefault("err", True)
13 click.secho(*args, **kwargs)
14
15 def debug(self, *args, **kwargs):
16 if self.verbosity >= 1:
17 self.log(*args, **kwargs)
18
19 def info(self, *args, **kwargs):
20 if self.verbosity >= 0:
21 self.log(*args, **kwargs)
22
23 def warning(self, *args, **kwargs):
24 kwargs.setdefault("fg", "yellow")
25 self.log(*args, **kwargs)
26
27 def error(self, *args, **kwargs):
28 kwargs.setdefault("fg", "red")
29 self.log(*args, **kwargs)
30
31
32 log = LogContext()
33
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/piptools/logging.py b/piptools/logging.py
--- a/piptools/logging.py
+++ b/piptools/logging.py
@@ -1,8 +1,14 @@
# coding: utf-8
from __future__ import absolute_import, division, print_function, unicode_literals
+import logging
+
from . import click
+# Initialise the builtin logging module for other component using it.
+# Ex: pip
+logging.basicConfig()
+
class LogContext(object):
def __init__(self, verbosity=0):
| {"golden_diff": "diff --git a/piptools/logging.py b/piptools/logging.py\n--- a/piptools/logging.py\n+++ b/piptools/logging.py\n@@ -1,8 +1,14 @@\n # coding: utf-8\n from __future__ import absolute_import, division, print_function, unicode_literals\n \n+import logging\n+\n from . import click\n \n+# Initialise the builtin logging module for other component using it.\n+# Ex: pip\n+logging.basicConfig()\n+\n \n class LogContext(object):\n def __init__(self, verbosity=0):\n", "issue": "No handlers could be found for logger \"pip.vcs.git\"\nLooks like logging is not totally set up. I get this when `pip-compile` wit VCS links:\n\n```\nNo handlers could be found for logger \"pip.vcs.git\"\n```\n\n", "before_files": [{"content": "# coding: utf-8\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nfrom . import click\n\n\nclass LogContext(object):\n def __init__(self, verbosity=0):\n self.verbosity = verbosity\n\n def log(self, *args, **kwargs):\n kwargs.setdefault(\"err\", True)\n click.secho(*args, **kwargs)\n\n def debug(self, *args, **kwargs):\n if self.verbosity >= 1:\n self.log(*args, **kwargs)\n\n def info(self, *args, **kwargs):\n if self.verbosity >= 0:\n self.log(*args, **kwargs)\n\n def warning(self, *args, **kwargs):\n kwargs.setdefault(\"fg\", \"yellow\")\n self.log(*args, **kwargs)\n\n def error(self, *args, **kwargs):\n kwargs.setdefault(\"fg\", \"red\")\n self.log(*args, **kwargs)\n\n\nlog = LogContext()\n", "path": "piptools/logging.py"}], "after_files": [{"content": "# coding: utf-8\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport logging\n\nfrom . import click\n\n# Initialise the builtin logging module for other component using it.\n# Ex: pip\nlogging.basicConfig()\n\n\nclass LogContext(object):\n def __init__(self, verbosity=0):\n self.verbosity = verbosity\n\n def log(self, *args, **kwargs):\n kwargs.setdefault(\"err\", True)\n click.secho(*args, **kwargs)\n\n def debug(self, *args, **kwargs):\n if self.verbosity >= 1:\n self.log(*args, **kwargs)\n\n def info(self, *args, **kwargs):\n if self.verbosity >= 0:\n self.log(*args, **kwargs)\n\n def warning(self, *args, **kwargs):\n kwargs.setdefault(\"fg\", \"yellow\")\n self.log(*args, **kwargs)\n\n def error(self, *args, **kwargs):\n kwargs.setdefault(\"fg\", \"red\")\n self.log(*args, **kwargs)\n\n\nlog = LogContext()\n", "path": "piptools/logging.py"}]} | 580 | 115 |
gh_patches_debug_1758 | rasdani/github-patches | git_diff | facebookresearch__xformers-252 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
JIT scripting is broken
# π Bug
JIT scripting xformers (running commit 357545ae13948659db07428553155e1802ee15af) breaks with the following error:
```bash
xformers/components/attention/attention_mask.py", line 128
def __add__(self, other):
assert isinstance(other, type(self))
~~~~~~~~~ <--- HERE
return AttentionMask(self.values + other.values, is_causal=False)
'AttentionMask.__add__' is being compiled since it was called from '__torch__.xformers.components.attention.attention_mask.AttentionMask'
```
## To Reproduce
Blocks config:
```yaml
reversible: false
block_type: "encoder"
num_layers: 12
dim_model: 768
layer_norm_style: "pre"
multi_head_config:
num_heads: 12
residual_dropout: 0.0
use_rotary_embeddings: false
attention:
name: "scaled_dot_product"
dropout: 0.0
causal: false
feedforward_config:
name: "MLP"
dropout: 0.0
activation: "gelu"
hidden_layer_multiplier: 4
```
Python code to repro:
```python
import yaml
import torch
from xformers.factory import xFormer, xFormerConfig
with open(xformer_config_file, "rb") as fileptr: # above config
model_config = yaml.load(fileptr, Loader=yaml.FullLoader)
torch.jit.script(xFormer.from_config(xFormerConfig([model_config])))
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `xformers/components/attention/attention_mask.py`
Content:
```
1 # Copyright (c) Facebook, Inc. and its affiliates. All rights reserved.
2 #
3 # This source code is licensed under the BSD license found in the
4 # LICENSE file in the root directory of this source tree.
5
6
7 from typing import Optional, Type, TypeVar
8
9 import torch
10
11 Self = TypeVar("Self", bound="AttentionMask")
12
13
14 class AttentionMask:
15 """
16 Holds an attention mask, along with a couple of helpers and attributes.
17
18 .. note: this is an additive mask, meaning that coefficients which should be computed hold the '0.' value,
19 and coefficients which should be skipped hold the '-inf' value. Any other value is possible if the purpose
20 is to bias the attention computation for instance
21
22 .. note: the attention mask dimensions are expected to be `[batch, to_sequence, from_sequence]`,
23 `[to_sequence, from_sequence]`, or anything broadcastable in between
24 """
25
26 def __init__(self, additive_mask: torch.Tensor, is_causal: bool = False):
27 assert additive_mask.is_floating_point(), additive_mask.dtype
28 assert not additive_mask.requires_grad
29
30 if additive_mask.ndim == 2:
31 additive_mask = additive_mask.unsqueeze(0)
32
33 self.values = additive_mask
34 self.is_causal = is_causal
35 self.seq_len = additive_mask.shape[1]
36 self.to_seq_len = additive_mask.shape[0]
37
38 def to_bool(self) -> torch.Tensor:
39 """
40 .. warning: we assume here that True implies that the value should be computed
41 """
42 return self.values != float("-inf")
43
44 @classmethod
45 def from_bool(cls: Type[Self], x: torch.Tensor) -> Self:
46 """
47 Create an AttentionMask given a boolean pattern.
48 .. warning: we assume here that True implies that the value should be computed
49 """
50 assert x.dtype == torch.bool
51
52 additive_mask = torch.empty_like(x, dtype=torch.float, device=x.device)
53 additive_mask.masked_fill_(x, 0.0)
54 additive_mask.masked_fill_(~x, float("-inf"))
55
56 return cls(additive_mask)
57
58 @classmethod
59 def from_multiplicative(cls: Type[Self], x: torch.Tensor) -> Self:
60 """
61 Create an AttentionMask given a multiplicative attention mask.
62 """
63 assert not x.dtype == torch.bool
64
65 additive_mask = torch.empty_like(x, dtype=torch.float, device=x.device)
66 x = x.bool()
67
68 additive_mask.masked_fill_(x, 0.0)
69 additive_mask.masked_fill_(~x, float("-inf"))
70
71 return cls(additive_mask)
72
73 @classmethod
74 def make_causal(
75 cls: Type[Self],
76 seq_len: int,
77 to_seq_len: Optional[int] = None,
78 device: Optional[torch.device] = None,
79 dtype: Optional[torch.dtype] = None,
80 ) -> Self:
81 if not to_seq_len:
82 to_seq_len = seq_len
83
84 additive_mask = torch.triu(
85 torch.ones(seq_len, to_seq_len, device=device, dtype=dtype) * float("-inf"),
86 diagonal=1,
87 )
88 return cls(additive_mask=additive_mask, is_causal=True)
89
90 def make_crop(
91 self, seq_len: int, to_seq_len: Optional[int] = None
92 ) -> "AttentionMask":
93 """
94 Return a cropped attention mask, whose underlying tensor is a view of this one
95 """
96
97 if not to_seq_len:
98 to_seq_len = seq_len
99
100 return AttentionMask(
101 self.values[:, :seq_len, :to_seq_len], is_causal=self.is_causal
102 )
103
104 def __repr__(self):
105 return f"AttentionMask - causal {self.is_causal} - mask " + str(self.values)
106
107 @property
108 def device(self):
109 return self.values.device
110
111 @property
112 def is_sparse(self):
113 return False
114
115 @property
116 def ndim(self):
117 return len(self.values.shape)
118
119 @property
120 def dtype(self):
121 return self.values.dtype
122
123 @property
124 def shape(self):
125 return self.values.shape
126
127 def __add__(self, other):
128 assert isinstance(other, type(self))
129 return AttentionMask(self.values + other.values, is_causal=False)
130
131 def to(
132 self, device: Optional[torch.device] = None, dtype: Optional[torch.dtype] = None
133 ) -> "AttentionMask":
134 assert device is None or isinstance(device, torch.device)
135 assert dtype is None or isinstance(dtype, torch.dtype)
136 assert device is not None or dtype is not None
137
138 # Noop if we don't need to create another instance
139 if ((device and device == self.device) or not device) and (
140 (dtype and dtype == self.dtype) or not dtype
141 ):
142 return self
143
144 return AttentionMask(self.values.to(device=device, dtype=dtype), self.is_causal)
145
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/xformers/components/attention/attention_mask.py b/xformers/components/attention/attention_mask.py
--- a/xformers/components/attention/attention_mask.py
+++ b/xformers/components/attention/attention_mask.py
@@ -125,7 +125,6 @@
return self.values.shape
def __add__(self, other):
- assert isinstance(other, type(self))
return AttentionMask(self.values + other.values, is_causal=False)
def to(
| {"golden_diff": "diff --git a/xformers/components/attention/attention_mask.py b/xformers/components/attention/attention_mask.py\n--- a/xformers/components/attention/attention_mask.py\n+++ b/xformers/components/attention/attention_mask.py\n@@ -125,7 +125,6 @@\n return self.values.shape\n \n def __add__(self, other):\n- assert isinstance(other, type(self))\n return AttentionMask(self.values + other.values, is_causal=False)\n \n def to(\n", "issue": "JIT scripting is broken\n# \ud83d\udc1b Bug\r\nJIT scripting xformers (running commit 357545ae13948659db07428553155e1802ee15af) breaks with the following error:\r\n\r\n```bash\r\nxformers/components/attention/attention_mask.py\", line 128\r\n def __add__(self, other):\r\n assert isinstance(other, type(self))\r\n ~~~~~~~~~ <--- HERE\r\n return AttentionMask(self.values + other.values, is_causal=False)\r\n'AttentionMask.__add__' is being compiled since it was called from '__torch__.xformers.components.attention.attention_mask.AttentionMask'\r\n```\r\n\r\n## To Reproduce\r\n\r\nBlocks config:\r\n```yaml\r\nreversible: false\r\nblock_type: \"encoder\"\r\nnum_layers: 12\r\ndim_model: 768\r\nlayer_norm_style: \"pre\"\r\n\r\nmulti_head_config:\r\n num_heads: 12\r\n residual_dropout: 0.0\r\n use_rotary_embeddings: false\r\n\r\n attention:\r\n name: \"scaled_dot_product\"\r\n dropout: 0.0\r\n causal: false\r\n\r\nfeedforward_config:\r\n name: \"MLP\"\r\n dropout: 0.0\r\n activation: \"gelu\"\r\n hidden_layer_multiplier: 4\r\n```\r\n\r\nPython code to repro:\r\n```python\r\nimport yaml\r\nimport torch\r\n\r\nfrom xformers.factory import xFormer, xFormerConfig\r\n\r\nwith open(xformer_config_file, \"rb\") as fileptr: # above config\r\n model_config = yaml.load(fileptr, Loader=yaml.FullLoader)\r\n\r\ntorch.jit.script(xFormer.from_config(xFormerConfig([model_config])))\r\n```\r\n\n", "before_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. All rights reserved.\n#\n# This source code is licensed under the BSD license found in the\n# LICENSE file in the root directory of this source tree.\n\n\nfrom typing import Optional, Type, TypeVar\n\nimport torch\n\nSelf = TypeVar(\"Self\", bound=\"AttentionMask\")\n\n\nclass AttentionMask:\n \"\"\"\n Holds an attention mask, along with a couple of helpers and attributes.\n\n .. note: this is an additive mask, meaning that coefficients which should be computed hold the '0.' value,\n and coefficients which should be skipped hold the '-inf' value. Any other value is possible if the purpose\n is to bias the attention computation for instance\n\n .. note: the attention mask dimensions are expected to be `[batch, to_sequence, from_sequence]`,\n `[to_sequence, from_sequence]`, or anything broadcastable in between\n \"\"\"\n\n def __init__(self, additive_mask: torch.Tensor, is_causal: bool = False):\n assert additive_mask.is_floating_point(), additive_mask.dtype\n assert not additive_mask.requires_grad\n\n if additive_mask.ndim == 2:\n additive_mask = additive_mask.unsqueeze(0)\n\n self.values = additive_mask\n self.is_causal = is_causal\n self.seq_len = additive_mask.shape[1]\n self.to_seq_len = additive_mask.shape[0]\n\n def to_bool(self) -> torch.Tensor:\n \"\"\"\n .. warning: we assume here that True implies that the value should be computed\n \"\"\"\n return self.values != float(\"-inf\")\n\n @classmethod\n def from_bool(cls: Type[Self], x: torch.Tensor) -> Self:\n \"\"\"\n Create an AttentionMask given a boolean pattern.\n .. 
warning: we assume here that True implies that the value should be computed\n \"\"\"\n assert x.dtype == torch.bool\n\n additive_mask = torch.empty_like(x, dtype=torch.float, device=x.device)\n additive_mask.masked_fill_(x, 0.0)\n additive_mask.masked_fill_(~x, float(\"-inf\"))\n\n return cls(additive_mask)\n\n @classmethod\n def from_multiplicative(cls: Type[Self], x: torch.Tensor) -> Self:\n \"\"\"\n Create an AttentionMask given a multiplicative attention mask.\n \"\"\"\n assert not x.dtype == torch.bool\n\n additive_mask = torch.empty_like(x, dtype=torch.float, device=x.device)\n x = x.bool()\n\n additive_mask.masked_fill_(x, 0.0)\n additive_mask.masked_fill_(~x, float(\"-inf\"))\n\n return cls(additive_mask)\n\n @classmethod\n def make_causal(\n cls: Type[Self],\n seq_len: int,\n to_seq_len: Optional[int] = None,\n device: Optional[torch.device] = None,\n dtype: Optional[torch.dtype] = None,\n ) -> Self:\n if not to_seq_len:\n to_seq_len = seq_len\n\n additive_mask = torch.triu(\n torch.ones(seq_len, to_seq_len, device=device, dtype=dtype) * float(\"-inf\"),\n diagonal=1,\n )\n return cls(additive_mask=additive_mask, is_causal=True)\n\n def make_crop(\n self, seq_len: int, to_seq_len: Optional[int] = None\n ) -> \"AttentionMask\":\n \"\"\"\n Return a cropped attention mask, whose underlying tensor is a view of this one\n \"\"\"\n\n if not to_seq_len:\n to_seq_len = seq_len\n\n return AttentionMask(\n self.values[:, :seq_len, :to_seq_len], is_causal=self.is_causal\n )\n\n def __repr__(self):\n return f\"AttentionMask - causal {self.is_causal} - mask \" + str(self.values)\n\n @property\n def device(self):\n return self.values.device\n\n @property\n def is_sparse(self):\n return False\n\n @property\n def ndim(self):\n return len(self.values.shape)\n\n @property\n def dtype(self):\n return self.values.dtype\n\n @property\n def shape(self):\n return self.values.shape\n\n def __add__(self, other):\n assert isinstance(other, type(self))\n return AttentionMask(self.values + other.values, is_causal=False)\n\n def to(\n self, device: Optional[torch.device] = None, dtype: Optional[torch.dtype] = None\n ) -> \"AttentionMask\":\n assert device is None or isinstance(device, torch.device)\n assert dtype is None or isinstance(dtype, torch.dtype)\n assert device is not None or dtype is not None\n\n # Noop if we don't need to create another instance\n if ((device and device == self.device) or not device) and (\n (dtype and dtype == self.dtype) or not dtype\n ):\n return self\n\n return AttentionMask(self.values.to(device=device, dtype=dtype), self.is_causal)\n", "path": "xformers/components/attention/attention_mask.py"}], "after_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. All rights reserved.\n#\n# This source code is licensed under the BSD license found in the\n# LICENSE file in the root directory of this source tree.\n\n\nfrom typing import Optional, Type, TypeVar\n\nimport torch\n\nSelf = TypeVar(\"Self\", bound=\"AttentionMask\")\n\n\nclass AttentionMask:\n \"\"\"\n Holds an attention mask, along with a couple of helpers and attributes.\n\n .. note: this is an additive mask, meaning that coefficients which should be computed hold the '0.' value,\n and coefficients which should be skipped hold the '-inf' value. Any other value is possible if the purpose\n is to bias the attention computation for instance\n\n .. 
note: the attention mask dimensions are expected to be `[batch, to_sequence, from_sequence]`,\n `[to_sequence, from_sequence]`, or anything broadcastable in between\n \"\"\"\n\n def __init__(self, additive_mask: torch.Tensor, is_causal: bool = False):\n assert additive_mask.is_floating_point(), additive_mask.dtype\n assert not additive_mask.requires_grad\n\n if additive_mask.ndim == 2:\n additive_mask = additive_mask.unsqueeze(0)\n\n self.values = additive_mask\n self.is_causal = is_causal\n self.seq_len = additive_mask.shape[1]\n self.to_seq_len = additive_mask.shape[0]\n\n def to_bool(self) -> torch.Tensor:\n \"\"\"\n .. warning: we assume here that True implies that the value should be computed\n \"\"\"\n return self.values != float(\"-inf\")\n\n @classmethod\n def from_bool(cls: Type[Self], x: torch.Tensor) -> Self:\n \"\"\"\n Create an AttentionMask given a boolean pattern.\n .. warning: we assume here that True implies that the value should be computed\n \"\"\"\n assert x.dtype == torch.bool\n\n additive_mask = torch.empty_like(x, dtype=torch.float, device=x.device)\n additive_mask.masked_fill_(x, 0.0)\n additive_mask.masked_fill_(~x, float(\"-inf\"))\n\n return cls(additive_mask)\n\n @classmethod\n def from_multiplicative(cls: Type[Self], x: torch.Tensor) -> Self:\n \"\"\"\n Create an AttentionMask given a multiplicative attention mask.\n \"\"\"\n assert not x.dtype == torch.bool\n\n additive_mask = torch.empty_like(x, dtype=torch.float, device=x.device)\n x = x.bool()\n\n additive_mask.masked_fill_(x, 0.0)\n additive_mask.masked_fill_(~x, float(\"-inf\"))\n\n return cls(additive_mask)\n\n @classmethod\n def make_causal(\n cls: Type[Self],\n seq_len: int,\n to_seq_len: Optional[int] = None,\n device: Optional[torch.device] = None,\n dtype: Optional[torch.dtype] = None,\n ) -> Self:\n if not to_seq_len:\n to_seq_len = seq_len\n\n additive_mask = torch.triu(\n torch.ones(seq_len, to_seq_len, device=device, dtype=dtype) * float(\"-inf\"),\n diagonal=1,\n )\n return cls(additive_mask=additive_mask, is_causal=True)\n\n def make_crop(\n self, seq_len: int, to_seq_len: Optional[int] = None\n ) -> \"AttentionMask\":\n \"\"\"\n Return a cropped attention mask, whose underlying tensor is a view of this one\n \"\"\"\n\n if not to_seq_len:\n to_seq_len = seq_len\n\n return AttentionMask(\n self.values[:, :seq_len, :to_seq_len], is_causal=self.is_causal\n )\n\n def __repr__(self):\n return f\"AttentionMask - causal {self.is_causal} - mask \" + str(self.values)\n\n @property\n def device(self):\n return self.values.device\n\n @property\n def is_sparse(self):\n return False\n\n @property\n def ndim(self):\n return len(self.values.shape)\n\n @property\n def dtype(self):\n return self.values.dtype\n\n @property\n def shape(self):\n return self.values.shape\n\n def __add__(self, other):\n return AttentionMask(self.values + other.values, is_causal=False)\n\n def to(\n self, device: Optional[torch.device] = None, dtype: Optional[torch.dtype] = None\n ) -> \"AttentionMask\":\n assert device is None or isinstance(device, torch.device)\n assert dtype is None or isinstance(dtype, torch.dtype)\n assert device is not None or dtype is not None\n\n # Noop if we don't need to create another instance\n if ((device and device == self.device) or not device) and (\n (dtype and dtype == self.dtype) or not dtype\n ):\n return self\n\n return AttentionMask(self.values.to(device=device, dtype=dtype), self.is_causal)\n", "path": "xformers/components/attention/attention_mask.py"}]} | 2,040 | 110 |
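A self-contained sketch of the TorchScript limitation behind the xformers entry above. `MaskAdd` is a stand-in module rather than xformers code; the point is that a dynamic `isinstance(other, type(self))` check cannot be compiled, while relying on type annotations keeps the method scriptable — which is what removing the assert accomplishes.

```python
import torch
import torch.nn as nn


class MaskAdd(nn.Module):
    def forward(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        # Adding `assert isinstance(b, type(a))` here would be rejected by the
        # TorchScript compiler, mirroring the AttentionMask.__add__ failure;
        # the type annotations already constrain the inputs.
        return a + b


scripted = torch.jit.script(MaskAdd())
print(scripted(torch.ones(2), torch.ones(2)))
```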
gh_patches_debug_202 | rasdani/github-patches | git_diff | ocadotechnology__aimmo-232 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Test coverage
Some initial extra tests have been written to get the test coverage up a bit.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `aimmo-game-creator/setup.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 from setuptools import find_packages, setup
3
4
5 setup(
6 name='aimmo-game-creator',
7 packages=find_packages(),
8 include_package_data=True,
9 install_requires=[
10 'eventlet',
11 'pykube',
12 ],
13 tests_require=[
14 'httmock',
15 ],
16 test_suite='tests',
17 zip_safe=False,
18 )
19
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/aimmo-game-creator/setup.py b/aimmo-game-creator/setup.py
--- a/aimmo-game-creator/setup.py
+++ b/aimmo-game-creator/setup.py
@@ -12,6 +12,7 @@
],
tests_require=[
'httmock',
+ 'mock',
],
test_suite='tests',
zip_safe=False,
| {"golden_diff": "diff --git a/aimmo-game-creator/setup.py b/aimmo-game-creator/setup.py\n--- a/aimmo-game-creator/setup.py\n+++ b/aimmo-game-creator/setup.py\n@@ -12,6 +12,7 @@\n ],\n tests_require=[\n 'httmock',\n+ 'mock',\n ],\n test_suite='tests',\n zip_safe=False,\n", "issue": "Test coverage\nSome first extra tests written to get the test coverage up a bit.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nfrom setuptools import find_packages, setup\n\n\nsetup(\n name='aimmo-game-creator',\n packages=find_packages(),\n include_package_data=True,\n install_requires=[\n 'eventlet',\n 'pykube',\n ],\n tests_require=[\n 'httmock',\n ],\n test_suite='tests',\n zip_safe=False,\n)\n", "path": "aimmo-game-creator/setup.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\nfrom setuptools import find_packages, setup\n\n\nsetup(\n name='aimmo-game-creator',\n packages=find_packages(),\n include_package_data=True,\n install_requires=[\n 'eventlet',\n 'pykube',\n ],\n tests_require=[\n 'httmock',\n 'mock',\n ],\n test_suite='tests',\n zip_safe=False,\n)\n", "path": "aimmo-game-creator/setup.py"}]} | 387 | 86 |
gh_patches_debug_52863 | rasdani/github-patches | git_diff | getsentry__sentry-70706 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Issue Details: Use new assignee selector component
Update issue details to use the new component in https://github.com/getsentry/sentry/issues/69054
This should be released under a short-lived feature flag.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `fixtures/page_objects/issue_details.py`
Content:
```
1 from selenium.webdriver.common.by import By
2 from selenium.webdriver.support import expected_conditions
3 from selenium.webdriver.support.wait import WebDriverWait
4
5 from .base import BasePage
6 from .global_selection import GlobalSelectionPage
7
8
9 class IssueDetailsPage(BasePage):
10 def __init__(self, browser, client):
11 super().__init__(browser)
12 self.client = client
13 self.global_selection = GlobalSelectionPage(browser)
14
15 def visit_issue(self, org, groupid):
16 self.browser.get(f"/organizations/{org}/issues/{groupid}/")
17 self.wait_until_loaded()
18
19 def visit_issue_activity(self, org, groupid):
20 self.browser.get(f"/organizations/{org}/issues/{groupid}/activity/")
21 self.browser.wait_until_not('[data-test-id="loading-indicator"]')
22
23 def visit_issue_in_environment(self, org, groupid, environment):
24 self.browser.get(f"/organizations/{org}/issues/{groupid}/?environment={environment}")
25 self.browser.wait_until(".group-detail")
26
27 def visit_tag_values(self, org, groupid, tag):
28 self.browser.get(f"/organizations/{org}/issues/{groupid}/tags/{tag}/")
29 self.browser.wait_until('[data-test-id="group-tag-value"]')
30
31 def get_environment(self):
32 return self.browser.find_element(
33 by=By.CSS_SELECTOR, value='[data-test-id="env-label"'
34 ).text.lower()
35
36 def go_back_to_issues(self):
37 self.global_selection.go_back_to_issues()
38
39 def api_issue_get(self, groupid):
40 return self.client.get(f"/api/0/issues/{groupid}/")
41
42 def go_to_subtab(self, key):
43 tabs = self.browser.find_element(by=By.CSS_SELECTOR, value='[role="tablist"]')
44 tabs.find_element(by=By.CSS_SELECTOR, value=f'[role="tab"][data-key="{key}"]').click()
45 self.browser.wait_until_not('[data-test-id="loading-indicator"]')
46
47 def open_issue_errors(self):
48 self.browser.click(".errors-toggle")
49 self.browser.wait_until(".entries > .errors ul")
50
51 def open_curl(self):
52 self.browser.find_element(by=By.XPATH, value="//a//code[contains(text(), 'curl')]").click()
53
54 def resolve_issue(self):
55 self.browser.click('[aria-label="Resolve"]')
56 # Resolve should become unresolve
57 self.browser.wait_until('[aria-label="Resolved"]')
58
59 def archive_issue(self):
60 self.browser.click('[aria-label="Archive"]')
61 # Ignore should become unresolve
62 self.browser.wait_until('[aria-label="Archived"]')
63
64 def bookmark_issue(self):
65 self.browser.click('button[aria-label="More Actions"]')
66 self.browser.wait_until('[data-test-id="bookmark"]')
67 button = self.browser.element('[data-test-id="bookmark"]')
68 button.click()
69 self.browser.click('button[aria-label="More Actions"]')
70 self.browser.wait_until('[data-test-id="unbookmark"]')
71
72 def assign_to(self, user):
73 assignee = self.browser.find_element(
74 by=By.CSS_SELECTOR, value='[data-test-id="assigned-to"]'
75 )
76
77 # Open the assignee picker
78 assignee.find_element(
79 by=By.CSS_SELECTOR, value='[data-test-id="assignee-selector"]'
80 ).click()
81
82 # Wait for the input to be loaded
83 wait = WebDriverWait(assignee, 10)
84 wait.until(expected_conditions.presence_of_element_located((By.TAG_NAME, "input")))
85
86 assignee.find_element(by=By.TAG_NAME, value="input").send_keys(user)
87
88 # Click the member/team
89 options = assignee.find_elements(
90 by=By.CSS_SELECTOR, value='[data-test-id="assignee-option"]'
91 )
92 assert len(options) > 0, "No assignees could be found."
93 options[0].click()
94
95 self.browser.wait_until_not('[data-test-id="loading-indicator"]')
96
97 def find_comment_form(self):
98 self.browser.wait_until_test_id("note-input-form")
99 return self.browser.find_element(
100 by=By.CSS_SELECTOR, value='[data-test-id="note-input-form"]'
101 )
102
103 def has_comment(self, text):
104 element = self.browser.element('[data-test-id="activity-note-body"]')
105 return text in element.text
106
107 def wait_until_loaded(self):
108 self.browser.wait_until_not('[data-test-id="loading-indicator"]')
109 self.browser.wait_until_not('[data-test-id="event-errors-loading"]')
110 self.browser.wait_until_test_id("linked-issues")
111 self.browser.wait_until_test_id("loaded-device-name")
112 if self.browser.element_exists("#grouping-info"):
113 self.browser.wait_until_test_id("loaded-grouping-info")
114 self.browser.wait_until_not('[data-test-id="loading-placeholder"]')
115
116 def mark_reviewed(self):
117 self.browser.click('[aria-label="More Actions"]')
118 self.browser.wait_until('[data-test-id="mark-review"]')
119 self.browser.click('[data-test-id="mark-review"]')
120 self.browser.click('[aria-label="More Actions"]')
121 self.browser.wait_until('[data-test-id="mark-review"][aria-disabled="true"]')
122
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/fixtures/page_objects/issue_details.py b/fixtures/page_objects/issue_details.py
--- a/fixtures/page_objects/issue_details.py
+++ b/fixtures/page_objects/issue_details.py
@@ -86,9 +86,7 @@
assignee.find_element(by=By.TAG_NAME, value="input").send_keys(user)
# Click the member/team
- options = assignee.find_elements(
- by=By.CSS_SELECTOR, value='[data-test-id="assignee-option"]'
- )
+ options = assignee.find_elements(by=By.CSS_SELECTOR, value='[role="option"]')
assert len(options) > 0, "No assignees could be found."
options[0].click()
| {"golden_diff": "diff --git a/fixtures/page_objects/issue_details.py b/fixtures/page_objects/issue_details.py\n--- a/fixtures/page_objects/issue_details.py\n+++ b/fixtures/page_objects/issue_details.py\n@@ -86,9 +86,7 @@\n assignee.find_element(by=By.TAG_NAME, value=\"input\").send_keys(user)\n \n # Click the member/team\n- options = assignee.find_elements(\n- by=By.CSS_SELECTOR, value='[data-test-id=\"assignee-option\"]'\n- )\n+ options = assignee.find_elements(by=By.CSS_SELECTOR, value='[role=\"option\"]')\n assert len(options) > 0, \"No assignees could be found.\"\n options[0].click()\n", "issue": "Issue Details: Use new assignee selector component\nUpdate issue details to use the new component in https://github.com/getsentry/sentry/issues/69054\r\n\r\nThis should be released under a short-lived feature flag.\n", "before_files": [{"content": "from selenium.webdriver.common.by import By\nfrom selenium.webdriver.support import expected_conditions\nfrom selenium.webdriver.support.wait import WebDriverWait\n\nfrom .base import BasePage\nfrom .global_selection import GlobalSelectionPage\n\n\nclass IssueDetailsPage(BasePage):\n def __init__(self, browser, client):\n super().__init__(browser)\n self.client = client\n self.global_selection = GlobalSelectionPage(browser)\n\n def visit_issue(self, org, groupid):\n self.browser.get(f\"/organizations/{org}/issues/{groupid}/\")\n self.wait_until_loaded()\n\n def visit_issue_activity(self, org, groupid):\n self.browser.get(f\"/organizations/{org}/issues/{groupid}/activity/\")\n self.browser.wait_until_not('[data-test-id=\"loading-indicator\"]')\n\n def visit_issue_in_environment(self, org, groupid, environment):\n self.browser.get(f\"/organizations/{org}/issues/{groupid}/?environment={environment}\")\n self.browser.wait_until(\".group-detail\")\n\n def visit_tag_values(self, org, groupid, tag):\n self.browser.get(f\"/organizations/{org}/issues/{groupid}/tags/{tag}/\")\n self.browser.wait_until('[data-test-id=\"group-tag-value\"]')\n\n def get_environment(self):\n return self.browser.find_element(\n by=By.CSS_SELECTOR, value='[data-test-id=\"env-label\"'\n ).text.lower()\n\n def go_back_to_issues(self):\n self.global_selection.go_back_to_issues()\n\n def api_issue_get(self, groupid):\n return self.client.get(f\"/api/0/issues/{groupid}/\")\n\n def go_to_subtab(self, key):\n tabs = self.browser.find_element(by=By.CSS_SELECTOR, value='[role=\"tablist\"]')\n tabs.find_element(by=By.CSS_SELECTOR, value=f'[role=\"tab\"][data-key=\"{key}\"]').click()\n self.browser.wait_until_not('[data-test-id=\"loading-indicator\"]')\n\n def open_issue_errors(self):\n self.browser.click(\".errors-toggle\")\n self.browser.wait_until(\".entries > .errors ul\")\n\n def open_curl(self):\n self.browser.find_element(by=By.XPATH, value=\"//a//code[contains(text(), 'curl')]\").click()\n\n def resolve_issue(self):\n self.browser.click('[aria-label=\"Resolve\"]')\n # Resolve should become unresolve\n self.browser.wait_until('[aria-label=\"Resolved\"]')\n\n def archive_issue(self):\n self.browser.click('[aria-label=\"Archive\"]')\n # Ignore should become unresolve\n self.browser.wait_until('[aria-label=\"Archived\"]')\n\n def bookmark_issue(self):\n self.browser.click('button[aria-label=\"More Actions\"]')\n self.browser.wait_until('[data-test-id=\"bookmark\"]')\n button = self.browser.element('[data-test-id=\"bookmark\"]')\n button.click()\n self.browser.click('button[aria-label=\"More Actions\"]')\n self.browser.wait_until('[data-test-id=\"unbookmark\"]')\n\n def assign_to(self, 
user):\n assignee = self.browser.find_element(\n by=By.CSS_SELECTOR, value='[data-test-id=\"assigned-to\"]'\n )\n\n # Open the assignee picker\n assignee.find_element(\n by=By.CSS_SELECTOR, value='[data-test-id=\"assignee-selector\"]'\n ).click()\n\n # Wait for the input to be loaded\n wait = WebDriverWait(assignee, 10)\n wait.until(expected_conditions.presence_of_element_located((By.TAG_NAME, \"input\")))\n\n assignee.find_element(by=By.TAG_NAME, value=\"input\").send_keys(user)\n\n # Click the member/team\n options = assignee.find_elements(\n by=By.CSS_SELECTOR, value='[data-test-id=\"assignee-option\"]'\n )\n assert len(options) > 0, \"No assignees could be found.\"\n options[0].click()\n\n self.browser.wait_until_not('[data-test-id=\"loading-indicator\"]')\n\n def find_comment_form(self):\n self.browser.wait_until_test_id(\"note-input-form\")\n return self.browser.find_element(\n by=By.CSS_SELECTOR, value='[data-test-id=\"note-input-form\"]'\n )\n\n def has_comment(self, text):\n element = self.browser.element('[data-test-id=\"activity-note-body\"]')\n return text in element.text\n\n def wait_until_loaded(self):\n self.browser.wait_until_not('[data-test-id=\"loading-indicator\"]')\n self.browser.wait_until_not('[data-test-id=\"event-errors-loading\"]')\n self.browser.wait_until_test_id(\"linked-issues\")\n self.browser.wait_until_test_id(\"loaded-device-name\")\n if self.browser.element_exists(\"#grouping-info\"):\n self.browser.wait_until_test_id(\"loaded-grouping-info\")\n self.browser.wait_until_not('[data-test-id=\"loading-placeholder\"]')\n\n def mark_reviewed(self):\n self.browser.click('[aria-label=\"More Actions\"]')\n self.browser.wait_until('[data-test-id=\"mark-review\"]')\n self.browser.click('[data-test-id=\"mark-review\"]')\n self.browser.click('[aria-label=\"More Actions\"]')\n self.browser.wait_until('[data-test-id=\"mark-review\"][aria-disabled=\"true\"]')\n", "path": "fixtures/page_objects/issue_details.py"}], "after_files": [{"content": "from selenium.webdriver.common.by import By\nfrom selenium.webdriver.support import expected_conditions\nfrom selenium.webdriver.support.wait import WebDriverWait\n\nfrom .base import BasePage\nfrom .global_selection import GlobalSelectionPage\n\n\nclass IssueDetailsPage(BasePage):\n def __init__(self, browser, client):\n super().__init__(browser)\n self.client = client\n self.global_selection = GlobalSelectionPage(browser)\n\n def visit_issue(self, org, groupid):\n self.browser.get(f\"/organizations/{org}/issues/{groupid}/\")\n self.wait_until_loaded()\n\n def visit_issue_activity(self, org, groupid):\n self.browser.get(f\"/organizations/{org}/issues/{groupid}/activity/\")\n self.browser.wait_until_not('[data-test-id=\"loading-indicator\"]')\n\n def visit_issue_in_environment(self, org, groupid, environment):\n self.browser.get(f\"/organizations/{org}/issues/{groupid}/?environment={environment}\")\n self.browser.wait_until(\".group-detail\")\n\n def visit_tag_values(self, org, groupid, tag):\n self.browser.get(f\"/organizations/{org}/issues/{groupid}/tags/{tag}/\")\n self.browser.wait_until('[data-test-id=\"group-tag-value\"]')\n\n def get_environment(self):\n return self.browser.find_element(\n by=By.CSS_SELECTOR, value='[data-test-id=\"env-label\"'\n ).text.lower()\n\n def go_back_to_issues(self):\n self.global_selection.go_back_to_issues()\n\n def api_issue_get(self, groupid):\n return self.client.get(f\"/api/0/issues/{groupid}/\")\n\n def go_to_subtab(self, key):\n tabs = self.browser.find_element(by=By.CSS_SELECTOR, 
value='[role=\"tablist\"]')\n tabs.find_element(by=By.CSS_SELECTOR, value=f'[role=\"tab\"][data-key=\"{key}\"]').click()\n self.browser.wait_until_not('[data-test-id=\"loading-indicator\"]')\n\n def open_issue_errors(self):\n self.browser.click(\".errors-toggle\")\n self.browser.wait_until(\".entries > .errors ul\")\n\n def open_curl(self):\n self.browser.find_element(by=By.XPATH, value=\"//a//code[contains(text(), 'curl')]\").click()\n\n def resolve_issue(self):\n self.browser.click('[aria-label=\"Resolve\"]')\n # Resolve should become unresolve\n self.browser.wait_until('[aria-label=\"Resolved\"]')\n\n def archive_issue(self):\n self.browser.click('[aria-label=\"Archive\"]')\n # Ignore should become unresolve\n self.browser.wait_until('[aria-label=\"Archived\"]')\n\n def bookmark_issue(self):\n self.browser.click('button[aria-label=\"More Actions\"]')\n self.browser.wait_until('[data-test-id=\"bookmark\"]')\n button = self.browser.element('[data-test-id=\"bookmark\"]')\n button.click()\n self.browser.click('button[aria-label=\"More Actions\"]')\n self.browser.wait_until('[data-test-id=\"unbookmark\"]')\n\n def assign_to(self, user):\n assignee = self.browser.find_element(\n by=By.CSS_SELECTOR, value='[data-test-id=\"assigned-to\"]'\n )\n\n # Open the assignee picker\n assignee.find_element(\n by=By.CSS_SELECTOR, value='[data-test-id=\"assignee-selector\"]'\n ).click()\n\n # Wait for the input to be loaded\n wait = WebDriverWait(assignee, 10)\n wait.until(expected_conditions.presence_of_element_located((By.TAG_NAME, \"input\")))\n\n assignee.find_element(by=By.TAG_NAME, value=\"input\").send_keys(user)\n\n # Click the member/team\n options = assignee.find_elements(by=By.CSS_SELECTOR, value='[role=\"option\"]')\n assert len(options) > 0, \"No assignees could be found.\"\n options[0].click()\n\n self.browser.wait_until_not('[data-test-id=\"loading-indicator\"]')\n\n def find_comment_form(self):\n self.browser.wait_until_test_id(\"note-input-form\")\n return self.browser.find_element(\n by=By.CSS_SELECTOR, value='[data-test-id=\"note-input-form\"]'\n )\n\n def has_comment(self, text):\n element = self.browser.element('[data-test-id=\"activity-note-body\"]')\n return text in element.text\n\n def wait_until_loaded(self):\n self.browser.wait_until_not('[data-test-id=\"loading-indicator\"]')\n self.browser.wait_until_not('[data-test-id=\"event-errors-loading\"]')\n self.browser.wait_until_test_id(\"linked-issues\")\n self.browser.wait_until_test_id(\"loaded-device-name\")\n if self.browser.element_exists(\"#grouping-info\"):\n self.browser.wait_until_test_id(\"loaded-grouping-info\")\n self.browser.wait_until_not('[data-test-id=\"loading-placeholder\"]')\n\n def mark_reviewed(self):\n self.browser.click('[aria-label=\"More Actions\"]')\n self.browser.wait_until('[data-test-id=\"mark-review\"]')\n self.browser.click('[data-test-id=\"mark-review\"]')\n self.browser.click('[aria-label=\"More Actions\"]')\n self.browser.wait_until('[data-test-id=\"mark-review\"][aria-disabled=\"true\"]')\n", "path": "fixtures/page_objects/issue_details.py"}]} | 1,668 | 157 |
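A hedged sketch of the page-object change in the sentry entry above: with the new assignee selector the rendered entries are plain ARIA options, so the Selenium code targets `[role="option"]` instead of the old `data-test-id` hook. The function name is illustrative and the dropdown is assumed to be already open.

```python
from selenium.webdriver.common.by import By


def pick_first_assignee(assignee_dropdown):
    # `assignee_dropdown` is the already-opened selector element; the options it
    # renders no longer carry data-test-id="assignee-option", only the ARIA role.
    options = assignee_dropdown.find_elements(by=By.CSS_SELECTOR, value='[role="option"]')
    assert len(options) > 0, "No assignees could be found."
    options[0].click()
```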
gh_patches_debug_1264 | rasdani/github-patches | git_diff | microsoft__botbuilder-python-1932 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bump MSAL to the latest version
**Is your feature request related to a problem? Please describe.**
An old version of MSAL is used in [botframework-connector](https://github.com/microsoft/botbuilder-python/blob/main/libraries/botframework-connector/requirements.txt#L6) (v1.6.0)
**Describe the solution you'd like**
Upgrade to the [latest version](https://github.com/AzureAD/microsoft-authentication-library-for-python/releases) (v1.13.0 is the latest at this moment).
**Describe alternatives you've considered**
No alternatives.
**Additional context**
Please also consider not pinning this dependency (#1467).
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `libraries/botframework-connector/setup.py`
Content:
```
1 # Copyright (c) Microsoft Corporation. All rights reserved.
2 # Licensed under the MIT License.
3
4 import os
5 from setuptools import setup
6
7 NAME = "botframework-connector"
8 VERSION = os.environ["packageVersion"] if "packageVersion" in os.environ else "4.15.0"
9 REQUIRES = [
10 "msrest==0.6.19",
11 "requests>=2.23.0,<2.26",
12 "PyJWT>=1.5.3,<2.0.0",
13 "botbuilder-schema==4.15.0",
14 "msal==1.6.0",
15 ]
16
17 root = os.path.abspath(os.path.dirname(__file__))
18
19 with open(os.path.join(root, "README.rst"), encoding="utf-8") as f:
20 long_description = f.read()
21
22 setup(
23 name=NAME,
24 version=VERSION,
25 description="Microsoft Bot Framework Bot Builder SDK for Python.",
26 author="Microsoft",
27 url="https://www.github.com/Microsoft/botbuilder-python",
28 keywords=["BotFrameworkConnector", "bots", "ai", "botframework", "botbuilder"],
29 install_requires=REQUIRES,
30 packages=[
31 "botframework.connector",
32 "botframework.connector.auth",
33 "botframework.connector.async_mixin",
34 "botframework.connector.operations",
35 "botframework.connector.models",
36 "botframework.connector.aio",
37 "botframework.connector.aio.operations_async",
38 "botframework.connector.skills",
39 "botframework.connector.teams",
40 "botframework.connector.teams.operations",
41 "botframework.connector.token_api",
42 "botframework.connector.token_api.aio",
43 "botframework.connector.token_api.aio.operations_async",
44 "botframework.connector.token_api.models",
45 "botframework.connector.token_api.operations",
46 ],
47 include_package_data=True,
48 long_description=long_description,
49 long_description_content_type="text/x-rst",
50 license="MIT",
51 classifiers=[
52 "Programming Language :: Python :: 3.7",
53 "Intended Audience :: Developers",
54 "License :: OSI Approved :: MIT License",
55 "Operating System :: OS Independent",
56 "Development Status :: 5 - Production/Stable",
57 "Topic :: Scientific/Engineering :: Artificial Intelligence",
58 ],
59 )
60
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/libraries/botframework-connector/setup.py b/libraries/botframework-connector/setup.py
--- a/libraries/botframework-connector/setup.py
+++ b/libraries/botframework-connector/setup.py
@@ -11,7 +11,7 @@
"requests>=2.23.0,<2.26",
"PyJWT>=1.5.3,<2.0.0",
"botbuilder-schema==4.15.0",
- "msal==1.6.0",
+ "msal==1.17.0",
]
root = os.path.abspath(os.path.dirname(__file__))
| {"golden_diff": "diff --git a/libraries/botframework-connector/setup.py b/libraries/botframework-connector/setup.py\n--- a/libraries/botframework-connector/setup.py\n+++ b/libraries/botframework-connector/setup.py\n@@ -11,7 +11,7 @@\n \"requests>=2.23.0,<2.26\",\n \"PyJWT>=1.5.3,<2.0.0\",\n \"botbuilder-schema==4.15.0\",\n- \"msal==1.6.0\",\n+ \"msal==1.17.0\",\n ]\n \n root = os.path.abspath(os.path.dirname(__file__))\n", "issue": "Bump MSAL to the latest version\n**Is your feature request related to a problem? Please describe.**\r\nOld version of MSAL is used in [botframework-connector](https://github.com/microsoft/botbuilder-python/blob/main/libraries/botframework-connector/requirements.txt#L6) (v1.6.0)\r\n\r\n**Describe the solution you'd like**\r\nUpgrade to the [latest version](https://github.com/AzureAD/microsoft-authentication-library-for-python/releases) (v1.13.0 is the latest at this moment).\r\n\r\n**Describe alternatives you've considered**\r\nNo alternatives.\r\n\r\n**Additional context**\r\nPlease also consider to not pin this dependency (#1467).\r\n\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License.\n\nimport os\nfrom setuptools import setup\n\nNAME = \"botframework-connector\"\nVERSION = os.environ[\"packageVersion\"] if \"packageVersion\" in os.environ else \"4.15.0\"\nREQUIRES = [\n \"msrest==0.6.19\",\n \"requests>=2.23.0,<2.26\",\n \"PyJWT>=1.5.3,<2.0.0\",\n \"botbuilder-schema==4.15.0\",\n \"msal==1.6.0\",\n]\n\nroot = os.path.abspath(os.path.dirname(__file__))\n\nwith open(os.path.join(root, \"README.rst\"), encoding=\"utf-8\") as f:\n long_description = f.read()\n\nsetup(\n name=NAME,\n version=VERSION,\n description=\"Microsoft Bot Framework Bot Builder SDK for Python.\",\n author=\"Microsoft\",\n url=\"https://www.github.com/Microsoft/botbuilder-python\",\n keywords=[\"BotFrameworkConnector\", \"bots\", \"ai\", \"botframework\", \"botbuilder\"],\n install_requires=REQUIRES,\n packages=[\n \"botframework.connector\",\n \"botframework.connector.auth\",\n \"botframework.connector.async_mixin\",\n \"botframework.connector.operations\",\n \"botframework.connector.models\",\n \"botframework.connector.aio\",\n \"botframework.connector.aio.operations_async\",\n \"botframework.connector.skills\",\n \"botframework.connector.teams\",\n \"botframework.connector.teams.operations\",\n \"botframework.connector.token_api\",\n \"botframework.connector.token_api.aio\",\n \"botframework.connector.token_api.aio.operations_async\",\n \"botframework.connector.token_api.models\",\n \"botframework.connector.token_api.operations\",\n ],\n include_package_data=True,\n long_description=long_description,\n long_description_content_type=\"text/x-rst\",\n license=\"MIT\",\n classifiers=[\n \"Programming Language :: Python :: 3.7\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: MIT License\",\n \"Operating System :: OS Independent\",\n \"Development Status :: 5 - Production/Stable\",\n \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\n ],\n)\n", "path": "libraries/botframework-connector/setup.py"}], "after_files": [{"content": "# Copyright (c) Microsoft Corporation. 
All rights reserved.\n# Licensed under the MIT License.\n\nimport os\nfrom setuptools import setup\n\nNAME = \"botframework-connector\"\nVERSION = os.environ[\"packageVersion\"] if \"packageVersion\" in os.environ else \"4.15.0\"\nREQUIRES = [\n \"msrest==0.6.19\",\n \"requests>=2.23.0,<2.26\",\n \"PyJWT>=1.5.3,<2.0.0\",\n \"botbuilder-schema==4.15.0\",\n \"msal==1.17.0\",\n]\n\nroot = os.path.abspath(os.path.dirname(__file__))\n\nwith open(os.path.join(root, \"README.rst\"), encoding=\"utf-8\") as f:\n long_description = f.read()\n\nsetup(\n name=NAME,\n version=VERSION,\n description=\"Microsoft Bot Framework Bot Builder SDK for Python.\",\n author=\"Microsoft\",\n url=\"https://www.github.com/Microsoft/botbuilder-python\",\n keywords=[\"BotFrameworkConnector\", \"bots\", \"ai\", \"botframework\", \"botbuilder\"],\n install_requires=REQUIRES,\n packages=[\n \"botframework.connector\",\n \"botframework.connector.auth\",\n \"botframework.connector.async_mixin\",\n \"botframework.connector.operations\",\n \"botframework.connector.models\",\n \"botframework.connector.aio\",\n \"botframework.connector.aio.operations_async\",\n \"botframework.connector.skills\",\n \"botframework.connector.teams\",\n \"botframework.connector.teams.operations\",\n \"botframework.connector.token_api\",\n \"botframework.connector.token_api.aio\",\n \"botframework.connector.token_api.aio.operations_async\",\n \"botframework.connector.token_api.models\",\n \"botframework.connector.token_api.operations\",\n ],\n include_package_data=True,\n long_description=long_description,\n long_description_content_type=\"text/x-rst\",\n license=\"MIT\",\n classifiers=[\n \"Programming Language :: Python :: 3.7\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: MIT License\",\n \"Operating System :: OS Independent\",\n \"Development Status :: 5 - Production/Stable\",\n \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\n ],\n)\n", "path": "libraries/botframework-connector/setup.py"}]} | 998 | 144 |
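The issue's additional context points at #1467 (avoid pinning). As a sketch of what that alternative would look like in `setup.py`, a bounded range such as the one below could replace the exact pin — the range itself is hypothetical; the merged change keeps `msal==1.17.0`.

```python
REQUIRES = [
    "msrest==0.6.19",
    "requests>=2.23.0,<2.26",
    "PyJWT>=1.5.3,<2.0.0",
    "botbuilder-schema==4.15.0",
    # Hypothetical: accept any 1.x release from 1.17.0 onwards instead of pinning.
    "msal>=1.17.0,<2.0.0",
]
```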
gh_patches_debug_7019 | rasdani/github-patches | git_diff | python__peps-2541 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Change "Contents" to "Table of Contents" ? (Or delete it?)
Currently PEPs have a "mysterious" triangle pointing to the word "Contents". I don't know why, but somehow every time I see this I do a double take before I realize "oh, that's the ToC". Maybe spell "Table of Contents" in full? There should be plenty of horizontal space for that. (Not in the side bar though -- there mightn't be room for it, and there it's always expanded which provides enough context for the single word to be understood.)
Alternatively, why have this in the main body of the PEP at all when it's already in the sidebar?
(If there was a "nit" label I'd check it. :-)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pep_sphinx_extensions/pep_processor/html/pep_html_translator.py`
Content:
```
1 from __future__ import annotations
2
3 from typing import TYPE_CHECKING
4
5 from docutils import nodes
6 import sphinx.writers.html5 as html5
7
8 if TYPE_CHECKING:
9 from sphinx.builders import html
10
11
12 class PEPTranslator(html5.HTML5Translator):
13 """Custom RST -> HTML translation rules for PEPs."""
14
15 def __init__(self, document: nodes.document, builder: html.StandaloneHTMLBuilder):
16 super().__init__(document, builder)
17 self.compact_simple: bool = False
18
19 @staticmethod
20 def should_be_compact_paragraph(node: nodes.paragraph) -> bool:
21 """Check if paragraph should be compact.
22
23 Omitting <p/> tags around paragraph nodes gives visually compact lists.
24
25 """
26 # Never compact paragraphs that are children of document or compound.
27 if isinstance(node.parent, (nodes.document, nodes.compound)):
28 return False
29
30 # Check for custom attributes in paragraph.
31 for key, value in node.non_default_attributes().items():
32 # if key equals "classes", carry on
33 # if value is empty, or contains only "first", only "last", or both
34 # "first" and "last", carry on
35 # else return False
36 if any((key != "classes", not set(value) <= {"first", "last"})):
37 return False
38
39 # Only first paragraph can be compact (ignoring initial label & invisible nodes)
40 first = isinstance(node.parent[0], nodes.label)
41 visible_siblings = [child for child in node.parent.children[first:] if not isinstance(child, nodes.Invisible)]
42 if visible_siblings[0] is not node:
43 return False
44
45 # otherwise, the paragraph should be compact
46 return True
47
48 def visit_paragraph(self, node: nodes.paragraph) -> None:
49 """Remove <p> tags if possible."""
50 if self.should_be_compact_paragraph(node):
51 self.context.append("")
52 else:
53 self.body.append(self.starttag(node, "p", ""))
54 self.context.append("</p>\n")
55
56 def depart_paragraph(self, _: nodes.paragraph) -> None:
57 """Add corresponding end tag from `visit_paragraph`."""
58 self.body.append(self.context.pop())
59
60 def visit_footnote_reference(self, node):
61 self.body.append(self.starttag(node, "a", suffix="[",
62 CLASS=f"footnote-reference {self.settings.footnote_references}",
63 href=f"#{node['refid']}"
64 ))
65
66 def depart_footnote_reference(self, node):
67 self.body.append(']</a>')
68
69 def visit_label(self, node):
70 # pass parent node to get id into starttag:
71 self.body.append(self.starttag(node.parent, "dt", suffix="[", CLASS="label"))
72
73 # footnote/citation backrefs:
74 back_refs = node.parent["backrefs"]
75 if self.settings.footnote_backlinks and len(back_refs) == 1:
76 self.body.append(f'<a href="#{back_refs[0]}">')
77 self.context.append(f"</a>]")
78 else:
79 self.context.append("]")
80
81 def depart_label(self, node) -> None:
82 """PEP link/citation block cleanup with italicised backlinks."""
83 self.body.append(self.context.pop())
84 back_refs = node.parent["backrefs"]
85 if self.settings.footnote_backlinks and len(back_refs) > 1:
86 back_links = ", ".join(f"<a href='#{ref}'>{i}</a>" for i, ref in enumerate(back_refs, start=1))
87 self.body.append(f"<em> ({back_links}) </em>")
88
89 # Close the def tags
90 self.body.append("</dt>\n<dd>")
91
92 def visit_bullet_list(self, node):
93 if isinstance(node.parent, nodes.section) and "contents" in node.parent["names"]:
94 self.body.append("<details><summary>Contents</summary>")
95 self.context.append("</details>")
96 super().visit_bullet_list(node)
97
98 def depart_bullet_list(self, node):
99 super().depart_bullet_list(node)
100 if isinstance(node.parent, nodes.section) and "contents" in node.parent["names"]:
101 self.body.append(self.context.pop())
102
103 def unknown_visit(self, node: nodes.Node) -> None:
104 """No processing for unknown node types."""
105 pass
106
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pep_sphinx_extensions/pep_processor/html/pep_html_translator.py b/pep_sphinx_extensions/pep_processor/html/pep_html_translator.py
--- a/pep_sphinx_extensions/pep_processor/html/pep_html_translator.py
+++ b/pep_sphinx_extensions/pep_processor/html/pep_html_translator.py
@@ -91,7 +91,7 @@
def visit_bullet_list(self, node):
if isinstance(node.parent, nodes.section) and "contents" in node.parent["names"]:
- self.body.append("<details><summary>Contents</summary>")
+ self.body.append("<details><summary>Table of Contents</summary>")
self.context.append("</details>")
super().visit_bullet_list(node)
| {"golden_diff": "diff --git a/pep_sphinx_extensions/pep_processor/html/pep_html_translator.py b/pep_sphinx_extensions/pep_processor/html/pep_html_translator.py\n--- a/pep_sphinx_extensions/pep_processor/html/pep_html_translator.py\n+++ b/pep_sphinx_extensions/pep_processor/html/pep_html_translator.py\n@@ -91,7 +91,7 @@\n \n def visit_bullet_list(self, node):\n if isinstance(node.parent, nodes.section) and \"contents\" in node.parent[\"names\"]:\n- self.body.append(\"<details><summary>Contents</summary>\")\n+ self.body.append(\"<details><summary>Table of Contents</summary>\")\n self.context.append(\"</details>\")\n super().visit_bullet_list(node)\n", "issue": "Change \"Contents\" to \"Table of Contents\" ? (Or delete it?)\nCurrently PEPs have a \"mysterious\" triangle pointing to the word \"Contents\". I don't know why, but somehow every time I see this I do a double take before I realize \"oh, that's the ToC\". Maybe spell \"Table of Contents\" in full? There should be plenty of horizontal space for that. (Not in the side bar though -- there mightn't be room for it, and there it's always expanded which provides enough context for the single word to be understood.)\r\n\r\nAlternatively, why have this in the main body of the PEP at all when it's already in the sidebar?\r\n\r\n(If there was a \"nit\" label I'd check it. :-)\n", "before_files": [{"content": "from __future__ import annotations\n\nfrom typing import TYPE_CHECKING\n\nfrom docutils import nodes\nimport sphinx.writers.html5 as html5\n\nif TYPE_CHECKING:\n from sphinx.builders import html\n\n\nclass PEPTranslator(html5.HTML5Translator):\n \"\"\"Custom RST -> HTML translation rules for PEPs.\"\"\"\n\n def __init__(self, document: nodes.document, builder: html.StandaloneHTMLBuilder):\n super().__init__(document, builder)\n self.compact_simple: bool = False\n\n @staticmethod\n def should_be_compact_paragraph(node: nodes.paragraph) -> bool:\n \"\"\"Check if paragraph should be compact.\n\n Omitting <p/> tags around paragraph nodes gives visually compact lists.\n\n \"\"\"\n # Never compact paragraphs that are children of document or compound.\n if isinstance(node.parent, (nodes.document, nodes.compound)):\n return False\n\n # Check for custom attributes in paragraph.\n for key, value in node.non_default_attributes().items():\n # if key equals \"classes\", carry on\n # if value is empty, or contains only \"first\", only \"last\", or both\n # \"first\" and \"last\", carry on\n # else return False\n if any((key != \"classes\", not set(value) <= {\"first\", \"last\"})):\n return False\n\n # Only first paragraph can be compact (ignoring initial label & invisible nodes)\n first = isinstance(node.parent[0], nodes.label)\n visible_siblings = [child for child in node.parent.children[first:] if not isinstance(child, nodes.Invisible)]\n if visible_siblings[0] is not node:\n return False\n\n # otherwise, the paragraph should be compact\n return True\n\n def visit_paragraph(self, node: nodes.paragraph) -> None:\n \"\"\"Remove <p> tags if possible.\"\"\"\n if self.should_be_compact_paragraph(node):\n self.context.append(\"\")\n else:\n self.body.append(self.starttag(node, \"p\", \"\"))\n self.context.append(\"</p>\\n\")\n\n def depart_paragraph(self, _: nodes.paragraph) -> None:\n \"\"\"Add corresponding end tag from `visit_paragraph`.\"\"\"\n self.body.append(self.context.pop())\n\n def visit_footnote_reference(self, node):\n self.body.append(self.starttag(node, \"a\", suffix=\"[\",\n CLASS=f\"footnote-reference 
{self.settings.footnote_references}\",\n href=f\"#{node['refid']}\"\n ))\n\n def depart_footnote_reference(self, node):\n self.body.append(']</a>')\n\n def visit_label(self, node):\n # pass parent node to get id into starttag:\n self.body.append(self.starttag(node.parent, \"dt\", suffix=\"[\", CLASS=\"label\"))\n\n # footnote/citation backrefs:\n back_refs = node.parent[\"backrefs\"]\n if self.settings.footnote_backlinks and len(back_refs) == 1:\n self.body.append(f'<a href=\"#{back_refs[0]}\">')\n self.context.append(f\"</a>]\")\n else:\n self.context.append(\"]\")\n\n def depart_label(self, node) -> None:\n \"\"\"PEP link/citation block cleanup with italicised backlinks.\"\"\"\n self.body.append(self.context.pop())\n back_refs = node.parent[\"backrefs\"]\n if self.settings.footnote_backlinks and len(back_refs) > 1:\n back_links = \", \".join(f\"<a href='#{ref}'>{i}</a>\" for i, ref in enumerate(back_refs, start=1))\n self.body.append(f\"<em> ({back_links}) </em>\")\n\n # Close the def tags\n self.body.append(\"</dt>\\n<dd>\")\n\n def visit_bullet_list(self, node):\n if isinstance(node.parent, nodes.section) and \"contents\" in node.parent[\"names\"]:\n self.body.append(\"<details><summary>Contents</summary>\")\n self.context.append(\"</details>\")\n super().visit_bullet_list(node)\n\n def depart_bullet_list(self, node):\n super().depart_bullet_list(node)\n if isinstance(node.parent, nodes.section) and \"contents\" in node.parent[\"names\"]:\n self.body.append(self.context.pop())\n\n def unknown_visit(self, node: nodes.Node) -> None:\n \"\"\"No processing for unknown node types.\"\"\"\n pass\n", "path": "pep_sphinx_extensions/pep_processor/html/pep_html_translator.py"}], "after_files": [{"content": "from __future__ import annotations\n\nfrom typing import TYPE_CHECKING\n\nfrom docutils import nodes\nimport sphinx.writers.html5 as html5\n\nif TYPE_CHECKING:\n from sphinx.builders import html\n\n\nclass PEPTranslator(html5.HTML5Translator):\n \"\"\"Custom RST -> HTML translation rules for PEPs.\"\"\"\n\n def __init__(self, document: nodes.document, builder: html.StandaloneHTMLBuilder):\n super().__init__(document, builder)\n self.compact_simple: bool = False\n\n @staticmethod\n def should_be_compact_paragraph(node: nodes.paragraph) -> bool:\n \"\"\"Check if paragraph should be compact.\n\n Omitting <p/> tags around paragraph nodes gives visually compact lists.\n\n \"\"\"\n # Never compact paragraphs that are children of document or compound.\n if isinstance(node.parent, (nodes.document, nodes.compound)):\n return False\n\n # Check for custom attributes in paragraph.\n for key, value in node.non_default_attributes().items():\n # if key equals \"classes\", carry on\n # if value is empty, or contains only \"first\", only \"last\", or both\n # \"first\" and \"last\", carry on\n # else return False\n if any((key != \"classes\", not set(value) <= {\"first\", \"last\"})):\n return False\n\n # Only first paragraph can be compact (ignoring initial label & invisible nodes)\n first = isinstance(node.parent[0], nodes.label)\n visible_siblings = [child for child in node.parent.children[first:] if not isinstance(child, nodes.Invisible)]\n if visible_siblings[0] is not node:\n return False\n\n # otherwise, the paragraph should be compact\n return True\n\n def visit_paragraph(self, node: nodes.paragraph) -> None:\n \"\"\"Remove <p> tags if possible.\"\"\"\n if self.should_be_compact_paragraph(node):\n self.context.append(\"\")\n else:\n self.body.append(self.starttag(node, \"p\", \"\"))\n 
self.context.append(\"</p>\\n\")\n\n def depart_paragraph(self, _: nodes.paragraph) -> None:\n \"\"\"Add corresponding end tag from `visit_paragraph`.\"\"\"\n self.body.append(self.context.pop())\n\n def visit_footnote_reference(self, node):\n self.body.append(self.starttag(node, \"a\", suffix=\"[\",\n CLASS=f\"footnote-reference {self.settings.footnote_references}\",\n href=f\"#{node['refid']}\"\n ))\n\n def depart_footnote_reference(self, node):\n self.body.append(']</a>')\n\n def visit_label(self, node):\n # pass parent node to get id into starttag:\n self.body.append(self.starttag(node.parent, \"dt\", suffix=\"[\", CLASS=\"label\"))\n\n # footnote/citation backrefs:\n back_refs = node.parent[\"backrefs\"]\n if self.settings.footnote_backlinks and len(back_refs) == 1:\n self.body.append(f'<a href=\"#{back_refs[0]}\">')\n self.context.append(f\"</a>]\")\n else:\n self.context.append(\"]\")\n\n def depart_label(self, node) -> None:\n \"\"\"PEP link/citation block cleanup with italicised backlinks.\"\"\"\n self.body.append(self.context.pop())\n back_refs = node.parent[\"backrefs\"]\n if self.settings.footnote_backlinks and len(back_refs) > 1:\n back_links = \", \".join(f\"<a href='#{ref}'>{i}</a>\" for i, ref in enumerate(back_refs, start=1))\n self.body.append(f\"<em> ({back_links}) </em>\")\n\n # Close the def tags\n self.body.append(\"</dt>\\n<dd>\")\n\n def visit_bullet_list(self, node):\n if isinstance(node.parent, nodes.section) and \"contents\" in node.parent[\"names\"]:\n self.body.append(\"<details><summary>Table of Contents</summary>\")\n self.context.append(\"</details>\")\n super().visit_bullet_list(node)\n\n def depart_bullet_list(self, node):\n super().depart_bullet_list(node)\n if isinstance(node.parent, nodes.section) and \"contents\" in node.parent[\"names\"]:\n self.body.append(self.context.pop())\n\n def unknown_visit(self, node: nodes.Node) -> None:\n \"\"\"No processing for unknown node types.\"\"\"\n pass\n", "path": "pep_sphinx_extensions/pep_processor/html/pep_html_translator.py"}]} | 1,562 | 170 |
gh_patches_debug_26485 | rasdani/github-patches | git_diff | aws-cloudformation__cfn-lint-1405 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
AWS::Events::Rule ScheduleExpression: "cron(* 1 * * * *)"
*cfn-lint version: (cfn-lint 0.27.5)*
*Description of issue.*
```
EventRule:
Type: "AWS::Events::Rule"
Properties:
ScheduleExpression: "cron(* 1 * * * *)"
State: "ENABLED"
Targets:
- Arn: !Ref Foo
Id: "Foo"
RoleArn: !GetAtt FooArn.Arn
```
Check should be probably in:
[src/cfnlint/rules/resources/events/RuleScheduleExpression.py](https://github.com/aws-cloudformation/cfn-python-lint/blob/master/src/cfnlint/rules/resources/events/RuleScheduleExpression.py)
The above `ScheduleExpression` is invalid (need a value for minute if hour is set). For example `cron(0 1 * * ? *)`
---
[Schedule Expressions for Rules documentation](https://docs.aws.amazon.com/eventbridge/latest/userguide/scheduled-events.html)
--- END ISSUE ---
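For reference, an EventBridge `cron()` expression has six fields, and AWS requires that the Day-of-month and Day-of-week fields are not both specified — one of the two must be `?`. A rough sketch of that rule (illustrative only; the function name and field handling here are assumptions, not cfn-lint code):

```python
# Illustrative only — not cfn-lint's implementation.
# Takes the value inside ScheduleExpression's cron(...), e.g. "* 1 * * * *".
def is_valid_events_cron(cron_expression):
    fields = cron_expression.split(' ')
    if len(fields) != 6:   # Minutes Hours Day-of-month Month Day-of-week Year
        return False
    day_of_month, day_of_week = fields[2], fields[4]
    # AWS rejects expressions that specify both fields; one must be '?'.
    return day_of_month == '?' or day_of_week == '?'

is_valid_events_cron("* 1 * * * *")   # False — the expression from this report
is_valid_events_cron("0 1 * * ? *")   # True
```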
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/cfnlint/rules/resources/events/RuleScheduleExpression.py`
Content:
```
1 """
2 Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
3 SPDX-License-Identifier: MIT-0
4 """
5 from cfnlint.rules import CloudFormationLintRule
6 from cfnlint.rules import RuleMatch
7
8
9 class RuleScheduleExpression(CloudFormationLintRule):
10 """Validate AWS Events Schedule expression format"""
11 id = 'E3027'
12 shortdesc = 'Validate AWS Event ScheduleExpression format'
13 description = 'Validate the formation of the AWS::Event ScheduleExpression'
14 source_url = 'https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/ScheduledEvents.html'
15 tags = ['resources', 'events']
16
17 def initialize(self, cfn):
18 """Initialize the rule"""
19 self.resource_property_types = ['AWS::Events::Rule']
20
21 def check_rate(self, value, path):
22 """Check Rate configuration"""
23 matches = []
24 # Extract the expression from rate(XXX)
25 rate_expression = value[value.find('(')+1:value.find(')')]
26
27 if not rate_expression:
28 matches.append(RuleMatch(path, 'Rate value of ScheduleExpression cannot be empty'))
29 else:
30 # Rate format: rate(Value Unit)
31 items = rate_expression.split(' ')
32
33 if len(items) != 2:
34 message = 'Rate expression must contain 2 elements (Value Unit), rate contains {} elements'
35 matches.append(RuleMatch(path, message.format(len(items))))
36 else:
37 # Check the Value
38 if not items[0].isdigit():
39 message = 'Rate Value ({}) should be of type Integer.'
40 extra_args = {'actual_type': type(items[0]).__name__, 'expected_type': int.__name__}
41 matches.append(RuleMatch(path, message.format(items[0]), **extra_args))
42
43 return matches
44
45 def check_cron(self, value, path):
46 """Check Cron configuration"""
47 matches = []
48 # Extract the expression from cron(XXX)
49 cron_expression = value[value.find('(')+1:value.find(')')]
50
51 if not cron_expression:
52 matches.append(RuleMatch(path, 'Cron value of ScheduleExpression cannot be empty'))
53 else:
54 # Rate format: cron(Minutes Hours Day-of-month Month Day-of-week Year)
55 items = cron_expression.split(' ')
56
57 if len(items) != 6:
58 message = 'Cron expression must contain 6 elements (Minutes Hours Day-of-month Month Day-of-week Year), cron contains {} elements'
59 matches.append(RuleMatch(path, message.format(len(items))))
60
61 return matches
62
63 def check_value(self, value, path):
64 """Count ScheduledExpression value"""
65 matches = []
66
67 # Value is either "cron()" or "rate()"
68 if value.startswith('rate(') and value.endswith(')'):
69 matches.extend(self.check_rate(value, path))
70 elif value.startswith('cron(') and value.endswith(')'):
71 matches.extend(self.check_cron(value, path))
72 else:
73 message = 'Invalid ScheduledExpression specified ({}). Value has to be either cron() or rate()'
74 matches.append(RuleMatch(path, message.format(value)))
75
76 return matches
77
78 def match_resource_properties(self, properties, _, path, cfn):
79 """Check CloudFormation Properties"""
80 matches = []
81
82 matches.extend(
83 cfn.check_value(
84 obj=properties, key='ScheduleExpression',
85 path=path[:],
86 check_value=self.check_value
87 ))
88
89 return matches
90
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/cfnlint/rules/resources/events/RuleScheduleExpression.py b/src/cfnlint/rules/resources/events/RuleScheduleExpression.py
--- a/src/cfnlint/rules/resources/events/RuleScheduleExpression.py
+++ b/src/cfnlint/rules/resources/events/RuleScheduleExpression.py
@@ -37,7 +37,8 @@
# Check the Value
if not items[0].isdigit():
message = 'Rate Value ({}) should be of type Integer.'
- extra_args = {'actual_type': type(items[0]).__name__, 'expected_type': int.__name__}
+ extra_args = {'actual_type': type(
+ items[0]).__name__, 'expected_type': int.__name__}
matches.append(RuleMatch(path, message.format(items[0]), **extra_args))
return matches
@@ -57,6 +58,12 @@
if len(items) != 6:
message = 'Cron expression must contain 6 elements (Minutes Hours Day-of-month Month Day-of-week Year), cron contains {} elements'
matches.append(RuleMatch(path, message.format(len(items))))
+ return matches
+
+ _, _, day_of_month, _, day_of_week, _ = cron_expression.split(' ')
+ if day_of_month != '?' and day_of_week != '?':
+ matches.append(RuleMatch(
+ path, 'Don\'t specify the Day-of-month and Day-of-week fields in the same cron expression'))
return matches
| {"golden_diff": "diff --git a/src/cfnlint/rules/resources/events/RuleScheduleExpression.py b/src/cfnlint/rules/resources/events/RuleScheduleExpression.py\n--- a/src/cfnlint/rules/resources/events/RuleScheduleExpression.py\n+++ b/src/cfnlint/rules/resources/events/RuleScheduleExpression.py\n@@ -37,7 +37,8 @@\n # Check the Value\n if not items[0].isdigit():\n message = 'Rate Value ({}) should be of type Integer.'\n- extra_args = {'actual_type': type(items[0]).__name__, 'expected_type': int.__name__}\n+ extra_args = {'actual_type': type(\n+ items[0]).__name__, 'expected_type': int.__name__}\n matches.append(RuleMatch(path, message.format(items[0]), **extra_args))\n \n return matches\n@@ -57,6 +58,12 @@\n if len(items) != 6:\n message = 'Cron expression must contain 6 elements (Minutes Hours Day-of-month Month Day-of-week Year), cron contains {} elements'\n matches.append(RuleMatch(path, message.format(len(items))))\n+ return matches\n+\n+ _, _, day_of_month, _, day_of_week, _ = cron_expression.split(' ')\n+ if day_of_month != '?' and day_of_week != '?':\n+ matches.append(RuleMatch(\n+ path, 'Don\\'t specify the Day-of-month and Day-of-week fields in the same cron expression'))\n \n return matches\n", "issue": "AWS::Events::Rule ScheduleExpression: \"cron(* 1 * * * *)\" \n*cfn-lint version: (cfn-lint 0.27.5)*\r\n\r\n*Description of issue.*\r\n\r\n```\r\n EventRule:\r\n Type: \"AWS::Events::Rule\"\r\n Properties:\r\n ScheduleExpression: \"cron(* 1 * * * *)\" \r\n State: \"ENABLED\"\r\n Targets:\r\n - Arn: !Ref Foo\r\n Id: \"Foo\"\r\n RoleArn: !GetAtt FooArn.Arn\r\n```\r\n\r\nCheck should be probably in:\r\n\r\n[src/cfnlint/rules/resources/events/RuleScheduleExpression.py](https://github.com/aws-cloudformation/cfn-python-lint/blob/master/src/cfnlint/rules/resources/events/RuleScheduleExpression.py)\r\n\r\nThe above `ScheduleExpression` is invalid (need a value for minute if hour is set). For example `cron(0 1 * * ? *)`\r\n\r\n---\r\n\r\n[Schedule Expressions for Rules documentation](https://docs.aws.amazon.com/eventbridge/latest/userguide/scheduled-events.html)\n", "before_files": [{"content": "\"\"\"\nCopyright 2019 Amazon.com, Inc. or its affiliates. 
All Rights Reserved.\nSPDX-License-Identifier: MIT-0\n\"\"\"\nfrom cfnlint.rules import CloudFormationLintRule\nfrom cfnlint.rules import RuleMatch\n\n\nclass RuleScheduleExpression(CloudFormationLintRule):\n \"\"\"Validate AWS Events Schedule expression format\"\"\"\n id = 'E3027'\n shortdesc = 'Validate AWS Event ScheduleExpression format'\n description = 'Validate the formation of the AWS::Event ScheduleExpression'\n source_url = 'https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/ScheduledEvents.html'\n tags = ['resources', 'events']\n\n def initialize(self, cfn):\n \"\"\"Initialize the rule\"\"\"\n self.resource_property_types = ['AWS::Events::Rule']\n\n def check_rate(self, value, path):\n \"\"\"Check Rate configuration\"\"\"\n matches = []\n # Extract the expression from rate(XXX)\n rate_expression = value[value.find('(')+1:value.find(')')]\n\n if not rate_expression:\n matches.append(RuleMatch(path, 'Rate value of ScheduleExpression cannot be empty'))\n else:\n # Rate format: rate(Value Unit)\n items = rate_expression.split(' ')\n\n if len(items) != 2:\n message = 'Rate expression must contain 2 elements (Value Unit), rate contains {} elements'\n matches.append(RuleMatch(path, message.format(len(items))))\n else:\n # Check the Value\n if not items[0].isdigit():\n message = 'Rate Value ({}) should be of type Integer.'\n extra_args = {'actual_type': type(items[0]).__name__, 'expected_type': int.__name__}\n matches.append(RuleMatch(path, message.format(items[0]), **extra_args))\n\n return matches\n\n def check_cron(self, value, path):\n \"\"\"Check Cron configuration\"\"\"\n matches = []\n # Extract the expression from cron(XXX)\n cron_expression = value[value.find('(')+1:value.find(')')]\n\n if not cron_expression:\n matches.append(RuleMatch(path, 'Cron value of ScheduleExpression cannot be empty'))\n else:\n # Rate format: cron(Minutes Hours Day-of-month Month Day-of-week Year)\n items = cron_expression.split(' ')\n\n if len(items) != 6:\n message = 'Cron expression must contain 6 elements (Minutes Hours Day-of-month Month Day-of-week Year), cron contains {} elements'\n matches.append(RuleMatch(path, message.format(len(items))))\n\n return matches\n\n def check_value(self, value, path):\n \"\"\"Count ScheduledExpression value\"\"\"\n matches = []\n\n # Value is either \"cron()\" or \"rate()\"\n if value.startswith('rate(') and value.endswith(')'):\n matches.extend(self.check_rate(value, path))\n elif value.startswith('cron(') and value.endswith(')'):\n matches.extend(self.check_cron(value, path))\n else:\n message = 'Invalid ScheduledExpression specified ({}). Value has to be either cron() or rate()'\n matches.append(RuleMatch(path, message.format(value)))\n\n return matches\n\n def match_resource_properties(self, properties, _, path, cfn):\n \"\"\"Check CloudFormation Properties\"\"\"\n matches = []\n\n matches.extend(\n cfn.check_value(\n obj=properties, key='ScheduleExpression',\n path=path[:],\n check_value=self.check_value\n ))\n\n return matches\n", "path": "src/cfnlint/rules/resources/events/RuleScheduleExpression.py"}], "after_files": [{"content": "\"\"\"\nCopyright 2019 Amazon.com, Inc. or its affiliates. 
All Rights Reserved.\nSPDX-License-Identifier: MIT-0\n\"\"\"\nfrom cfnlint.rules import CloudFormationLintRule\nfrom cfnlint.rules import RuleMatch\n\n\nclass RuleScheduleExpression(CloudFormationLintRule):\n \"\"\"Validate AWS Events Schedule expression format\"\"\"\n id = 'E3027'\n shortdesc = 'Validate AWS Event ScheduleExpression format'\n description = 'Validate the formation of the AWS::Event ScheduleExpression'\n source_url = 'https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/ScheduledEvents.html'\n tags = ['resources', 'events']\n\n def initialize(self, cfn):\n \"\"\"Initialize the rule\"\"\"\n self.resource_property_types = ['AWS::Events::Rule']\n\n def check_rate(self, value, path):\n \"\"\"Check Rate configuration\"\"\"\n matches = []\n # Extract the expression from rate(XXX)\n rate_expression = value[value.find('(')+1:value.find(')')]\n\n if not rate_expression:\n matches.append(RuleMatch(path, 'Rate value of ScheduleExpression cannot be empty'))\n else:\n # Rate format: rate(Value Unit)\n items = rate_expression.split(' ')\n\n if len(items) != 2:\n message = 'Rate expression must contain 2 elements (Value Unit), rate contains {} elements'\n matches.append(RuleMatch(path, message.format(len(items))))\n else:\n # Check the Value\n if not items[0].isdigit():\n message = 'Rate Value ({}) should be of type Integer.'\n extra_args = {'actual_type': type(\n items[0]).__name__, 'expected_type': int.__name__}\n matches.append(RuleMatch(path, message.format(items[0]), **extra_args))\n\n return matches\n\n def check_cron(self, value, path):\n \"\"\"Check Cron configuration\"\"\"\n matches = []\n # Extract the expression from cron(XXX)\n cron_expression = value[value.find('(')+1:value.find(')')]\n\n if not cron_expression:\n matches.append(RuleMatch(path, 'Cron value of ScheduleExpression cannot be empty'))\n else:\n # Rate format: cron(Minutes Hours Day-of-month Month Day-of-week Year)\n items = cron_expression.split(' ')\n\n if len(items) != 6:\n message = 'Cron expression must contain 6 elements (Minutes Hours Day-of-month Month Day-of-week Year), cron contains {} elements'\n matches.append(RuleMatch(path, message.format(len(items))))\n return matches\n\n _, _, day_of_month, _, day_of_week, _ = cron_expression.split(' ')\n if day_of_month != '?' and day_of_week != '?':\n matches.append(RuleMatch(\n path, 'Don\\'t specify the Day-of-month and Day-of-week fields in the same cron expression'))\n\n return matches\n\n def check_value(self, value, path):\n \"\"\"Count ScheduledExpression value\"\"\"\n matches = []\n\n # Value is either \"cron()\" or \"rate()\"\n if value.startswith('rate(') and value.endswith(')'):\n matches.extend(self.check_rate(value, path))\n elif value.startswith('cron(') and value.endswith(')'):\n matches.extend(self.check_cron(value, path))\n else:\n message = 'Invalid ScheduledExpression specified ({}). Value has to be either cron() or rate()'\n matches.append(RuleMatch(path, message.format(value)))\n\n return matches\n\n def match_resource_properties(self, properties, _, path, cfn):\n \"\"\"Check CloudFormation Properties\"\"\"\n matches = []\n\n matches.extend(\n cfn.check_value(\n obj=properties, key='ScheduleExpression',\n path=path[:],\n check_value=self.check_value\n ))\n\n return matches\n", "path": "src/cfnlint/rules/resources/events/RuleScheduleExpression.py"}]} | 1,390 | 321 |
gh_patches_debug_20115 | rasdani/github-patches | git_diff | cloud-custodian__cloud-custodian-1473 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
policies where Resource=user-pool results in error=Missing required parameter in input: "MaxResults"
A user-pool based resource policy results in an error for missing required parameter=MaxResults
```
policies:
- name: RequiredTagsAbsentCognitoUserPool
resource: user-pool
description: |
Notify if Required tags Absent
filters:
- "tag:OwnerEmail": absent
```
2017-08-03 22:49:43,321: custodian.output:ERROR Error while executing policy
Traceback (most recent call last):
File "/mnt/home2/opensrc/cloud-custodian-public-release/.tox/py27/local/lib/python2.7/site-packages/c7n/policy.py", line 306, in run
resources = self.policy.resource_manager.resources()
File "/mnt/home2/opensrc/cloud-custodian-public-release/.tox/py27/local/lib/python2.7/site-packages/c7n/query.py", line 292, in resources
resources = self.augment(self.source.resources(query))
File "/mnt/home2/opensrc/cloud-custodian-public-release/.tox/py27/local/lib/python2.7/site-packages/c7n/query.py", line 154, in resources
resources = self.query.filter(self.manager.resource_type, **query)
File "/mnt/home2/opensrc/cloud-custodian-public-release/.tox/py27/local/lib/python2.7/site-packages/c7n/query.py", line 67, in filter
data = op(**params)
File "/mnt/home2/opensrc/cloud-custodian-public-release/.tox/py27/local/lib/python2.7/site-packages/botocore/client.py", line 310, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/mnt/home2/opensrc/cloud-custodian-public-release/.tox/py27/local/lib/python2.7/site-packages/botocore/client.py", line 573, in _make_api_call
api_params, operation_model, context=request_context)
File "/mnt/home2/opensrc/cloud-custodian-public-release/.tox/py27/local/lib/python2.7/site-packages/botocore/client.py", line 628, in _convert_to_request_dict
api_params, operation_model)
File "/mnt/home2/opensrc/cloud-custodian-public-release/.tox/py27/local/lib/python2.7/site-packages/botocore/validate.py", line 291, in serialize_to_request
raise ParamValidationError(report=report.generate_report())
ParamValidationError: Parameter validation failed:
Missing required parameter in input: "MaxResults"
--- END ISSUE ---
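For background, the underlying Cognito API calls (`ListUserPools`, `ListIdentityPools`) treat `MaxResults` as a required parameter, which is exactly what the traceback reports. A minimal boto3 sketch of calls that pass validation (illustrative only, not Cloud Custodian code):

```python
import boto3

# Illustrative only — MaxResults (1-60) is required by both list calls.
idp = boto3.client("cognito-idp")
user_pools = idp.list_user_pools(MaxResults=60)        # passes validation
# idp.list_user_pools()                                # raises ParamValidationError, as above

identity = boto3.client("cognito-identity")
identity_pools = identity.list_identity_pools(MaxResults=60)
```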
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `c7n/resources/cognito.py`
Content:
```
1 # Copyright 2016 Capital One Services, LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 from __future__ import absolute_import, division, print_function, unicode_literals
15
16 from c7n.manager import resources
17 from c7n.query import QueryResourceManager
18
19
20 @resources.register('identity-pool')
21 class CognitoIdentityPool(QueryResourceManager):
22
23 class resource_type(object):
24 service = 'cognito-identity'
25 enum_spec = ('list_identity_pools', 'IdentityPools', None)
26 detail_spec = (
27 'describe_identity_pool', 'IdentityPoolId', 'IdentityPoolId')
28 id = 'IdentityPoolId'
29 name = 'IdentityPoolName'
30 filter_name = None
31 dimension = None
32
33
34 @resources.register('user-pool')
35 class CognitoUserPool(QueryResourceManager):
36
37 class resource_type(object):
38 service = "cognito-idp"
39 enum_spec = ('list_user_pools', 'UserPools', None)
40 detail_spec = (
41 'describe_user_pool', 'UserPoolId', 'Id', 'UserPool')
42 id = 'Id'
43 name = 'Name'
44 filter_name = None
45 dimension = None
46
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/c7n/resources/cognito.py b/c7n/resources/cognito.py
--- a/c7n/resources/cognito.py
+++ b/c7n/resources/cognito.py
@@ -22,9 +22,9 @@
class resource_type(object):
service = 'cognito-identity'
- enum_spec = ('list_identity_pools', 'IdentityPools', None)
+ enum_spec = ('list_identity_pools', 'IdentityPools', {'MaxResults': 60})
detail_spec = (
- 'describe_identity_pool', 'IdentityPoolId', 'IdentityPoolId')
+ 'describe_identity_pool', 'IdentityPoolId', 'IdentityPoolId', None)
id = 'IdentityPoolId'
name = 'IdentityPoolName'
filter_name = None
@@ -36,7 +36,7 @@
class resource_type(object):
service = "cognito-idp"
- enum_spec = ('list_user_pools', 'UserPools', None)
+ enum_spec = ('list_user_pools', 'UserPools', {'MaxResults': 60})
detail_spec = (
'describe_user_pool', 'UserPoolId', 'Id', 'UserPool')
id = 'Id'
| {"golden_diff": "diff --git a/c7n/resources/cognito.py b/c7n/resources/cognito.py\n--- a/c7n/resources/cognito.py\n+++ b/c7n/resources/cognito.py\n@@ -22,9 +22,9 @@\n \n class resource_type(object):\n service = 'cognito-identity'\n- enum_spec = ('list_identity_pools', 'IdentityPools', None)\n+ enum_spec = ('list_identity_pools', 'IdentityPools', {'MaxResults': 60})\n detail_spec = (\n- 'describe_identity_pool', 'IdentityPoolId', 'IdentityPoolId')\n+ 'describe_identity_pool', 'IdentityPoolId', 'IdentityPoolId', None)\n id = 'IdentityPoolId'\n name = 'IdentityPoolName'\n filter_name = None\n@@ -36,7 +36,7 @@\n \n class resource_type(object):\n service = \"cognito-idp\"\n- enum_spec = ('list_user_pools', 'UserPools', None)\n+ enum_spec = ('list_user_pools', 'UserPools', {'MaxResults': 60})\n detail_spec = (\n 'describe_user_pool', 'UserPoolId', 'Id', 'UserPool')\n id = 'Id'\n", "issue": "policies where Resource=user-pool results in error=Missing required parameter in input: \"MaxResults\" \nA user-pool based resource policy results in an error for missing required parameter=MaxResults\r\n```\r\npolicies:\r\n - name: RequiredTagsAbsentCognitoUserPool\r\n resource: user-pool\r\n description: |\r\n Notify if Required tags Absent\r\n filters:\r\n - \"tag:OwnerEmail\": absent\r\n```\r\n2017-08-03 22:49:43,321: custodian.output:ERROR Error while executing policy\r\nTraceback (most recent call last):\r\n File \"/mnt/home2/opensrc/cloud-custodian-public-release/.tox/py27/local/lib/python2.7/site-packages/c7n/policy.py\", line 306, in run\r\n resources = self.policy.resource_manager.resources()\r\n File \"/mnt/home2/opensrc/cloud-custodian-public-release/.tox/py27/local/lib/python2.7/site-packages/c7n/query.py\", line 292, in resources\r\n resources = self.augment(self.source.resources(query))\r\n File \"/mnt/home2/opensrc/cloud-custodian-public-release/.tox/py27/local/lib/python2.7/site-packages/c7n/query.py\", line 154, in resources\r\n resources = self.query.filter(self.manager.resource_type, **query)\r\n File \"/mnt/home2/opensrc/cloud-custodian-public-release/.tox/py27/local/lib/python2.7/site-packages/c7n/query.py\", line 67, in filter\r\n data = op(**params)\r\n File \"/mnt/home2/opensrc/cloud-custodian-public-release/.tox/py27/local/lib/python2.7/site-packages/botocore/client.py\", line 310, in _api_call\r\n return self._make_api_call(operation_name, kwargs)\r\n File \"/mnt/home2/opensrc/cloud-custodian-public-release/.tox/py27/local/lib/python2.7/site-packages/botocore/client.py\", line 573, in _make_api_call\r\n api_params, operation_model, context=request_context)\r\n File \"/mnt/home2/opensrc/cloud-custodian-public-release/.tox/py27/local/lib/python2.7/site-packages/botocore/client.py\", line 628, in _convert_to_request_dict\r\n api_params, operation_model)\r\n File \"/mnt/home2/opensrc/cloud-custodian-public-release/.tox/py27/local/lib/python2.7/site-packages/botocore/validate.py\", line 291, in serialize_to_request\r\n raise ParamValidationError(report=report.generate_report())\r\nParamValidationError: Parameter validation failed:\r\nMissing required parameter in input: \"MaxResults\"\n", "before_files": [{"content": "# Copyright 2016 Capital One Services, LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the 
License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nfrom c7n.manager import resources\nfrom c7n.query import QueryResourceManager\n\n\[email protected]('identity-pool')\nclass CognitoIdentityPool(QueryResourceManager):\n\n class resource_type(object):\n service = 'cognito-identity'\n enum_spec = ('list_identity_pools', 'IdentityPools', None)\n detail_spec = (\n 'describe_identity_pool', 'IdentityPoolId', 'IdentityPoolId')\n id = 'IdentityPoolId'\n name = 'IdentityPoolName'\n filter_name = None\n dimension = None\n\n\[email protected]('user-pool')\nclass CognitoUserPool(QueryResourceManager):\n\n class resource_type(object):\n service = \"cognito-idp\"\n enum_spec = ('list_user_pools', 'UserPools', None)\n detail_spec = (\n 'describe_user_pool', 'UserPoolId', 'Id', 'UserPool')\n id = 'Id'\n name = 'Name'\n filter_name = None\n dimension = None\n", "path": "c7n/resources/cognito.py"}], "after_files": [{"content": "# Copyright 2016 Capital One Services, LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nfrom c7n.manager import resources\nfrom c7n.query import QueryResourceManager\n\n\[email protected]('identity-pool')\nclass CognitoIdentityPool(QueryResourceManager):\n\n class resource_type(object):\n service = 'cognito-identity'\n enum_spec = ('list_identity_pools', 'IdentityPools', {'MaxResults': 60})\n detail_spec = (\n 'describe_identity_pool', 'IdentityPoolId', 'IdentityPoolId', None)\n id = 'IdentityPoolId'\n name = 'IdentityPoolName'\n filter_name = None\n dimension = None\n\n\[email protected]('user-pool')\nclass CognitoUserPool(QueryResourceManager):\n\n class resource_type(object):\n service = \"cognito-idp\"\n enum_spec = ('list_user_pools', 'UserPools', {'MaxResults': 60})\n detail_spec = (\n 'describe_user_pool', 'UserPoolId', 'Id', 'UserPool')\n id = 'Id'\n name = 'Name'\n filter_name = None\n dimension = None\n", "path": "c7n/resources/cognito.py"}]} | 1,307 | 272 |
gh_patches_debug_16900 | rasdani/github-patches | git_diff | beeware__toga-867 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
On Linux, window contents are rendered behind a menubar
But only if the menubar wasn't defined in the application code. Here's a slightly modified [example code](https://toga.readthedocs.io/en/latest/tutorial/tutorial-0.html) I'm running:
```py
import toga
def build(app):
box = toga.Box()
button = toga.Button('Hello world', on_press=lambda _: print("hello"))
box.add(button)
return box
if __name__ == '__main__':
app = toga.App('First App', 'org.pybee.sample', startup=build)
app.main_loop()
```
Here's how it looks to me:

Note that the button is behind the menubar (although no menubar was requested in the code). When I click on the menubar (including the Application or Help menu items), the button is being pressed instead.
I've tried this with a few GTK themes, including Arc-OSX-dark, Mint-X, Redmond and Adwaita, and in every case it behaves this way.
--- END ISSUE ---
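The symptom is consistent with child allocations being computed without the container's own offset: once GTK reserves space for the menubar, the Fixed container's allocation no longer starts at the window origin, so each child rectangle has to be shifted by that allocation. A minimal sketch of the idea (names assumed, not the project's code):

```python
from gi.repository import Gdk

# Sketch only — child rectangles handed to size_allocate() need the
# container's own origin added, otherwise children are placed from the
# window's (0, 0) and end up underneath whatever sits above the container.
def child_rectangle(layout, container_allocation):
    rect = Gdk.Rectangle()
    rect.x = layout.absolute_content_left + container_allocation.x
    rect.y = layout.absolute_content_top + container_allocation.y
    rect.width = layout.content_width
    rect.height = layout.content_height
    return rect
```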
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/gtk/toga_gtk/widgets/box.py`
Content:
```
1 from ..libs import Gtk, Gdk
2 from .base import Widget
3
4
5 class TogaBox(Gtk.Fixed):
6 def __init__(self, impl):
7 super().__init__()
8 self._impl = impl
9 self.interface = self._impl.interface
10
11 def do_get_preferred_width(self):
12 # Calculate the minimum and natural width of the container.
13 # print("GET PREFERRED WIDTH", self._impl.native)
14 width = self._impl.interface.layout.width
15 min_width = self._impl.min_width
16 if min_width is None:
17 min_width = 0
18 elif min_width > width:
19 width = min_width
20
21 # print(min_width, width)
22 return min_width, width
23
24 def do_get_preferred_height(self):
25 # Calculate the minimum and natural height of the container.
26 # height = self._impl.layout.height
27 # print("GET PREFERRED HEIGHT", self._impl.native)
28 height = self._impl.interface.layout.height
29 min_height = self._impl.min_height
30 if min_height is None:
31 min_height = 0
32 elif min_height > height:
33 height = min_height
34 # print(min_height, height)
35 return min_height, height
36
37 def do_size_allocate(self, allocation):
38 # print(self._impl, "Container layout", allocation.width, 'x', allocation.height, ' @ ', allocation.x, 'x', allocation.y)
39 if self._impl.viewport is not None:
40 self.set_allocation(allocation)
41 self.interface.refresh()
42
43 # WARNING! This list of children is *not* the same
44 # as the list provided by the interface!
45 # For GTK's layout purposes, all widgets in the tree
46 # are children of the *container* - that is, the impl
47 # object of the root object in the tree of widgets.
48 for widget in self.get_children():
49 if not widget.get_visible():
50 # print("CHILD NOT VISIBLE", widget.interface)
51 pass
52 else:
53 # print("update ", widget.interface, widget.interface.layout)
54 widget.interface._impl.rehint()
55 widget_allocation = Gdk.Rectangle()
56 widget_allocation.x = widget.interface.layout.absolute_content_left
57 widget_allocation.y = widget.interface.layout.absolute_content_top
58 widget_allocation.width = widget.interface.layout.content_width
59 widget_allocation.height = widget.interface.layout.content_height
60
61 widget.size_allocate(widget_allocation)
62
63
64 class Box(Widget):
65 def create(self):
66 self.min_width = None
67 self.min_height = None
68 self.native = TogaBox(self)
69
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/gtk/toga_gtk/widgets/box.py b/src/gtk/toga_gtk/widgets/box.py
--- a/src/gtk/toga_gtk/widgets/box.py
+++ b/src/gtk/toga_gtk/widgets/box.py
@@ -53,8 +53,8 @@
# print("update ", widget.interface, widget.interface.layout)
widget.interface._impl.rehint()
widget_allocation = Gdk.Rectangle()
- widget_allocation.x = widget.interface.layout.absolute_content_left
- widget_allocation.y = widget.interface.layout.absolute_content_top
+ widget_allocation.x = widget.interface.layout.absolute_content_left + allocation.x
+ widget_allocation.y = widget.interface.layout.absolute_content_top + allocation.y
widget_allocation.width = widget.interface.layout.content_width
widget_allocation.height = widget.interface.layout.content_height
| {"golden_diff": "diff --git a/src/gtk/toga_gtk/widgets/box.py b/src/gtk/toga_gtk/widgets/box.py\n--- a/src/gtk/toga_gtk/widgets/box.py\n+++ b/src/gtk/toga_gtk/widgets/box.py\n@@ -53,8 +53,8 @@\n # print(\"update \", widget.interface, widget.interface.layout)\n widget.interface._impl.rehint()\n widget_allocation = Gdk.Rectangle()\n- widget_allocation.x = widget.interface.layout.absolute_content_left\n- widget_allocation.y = widget.interface.layout.absolute_content_top\n+ widget_allocation.x = widget.interface.layout.absolute_content_left + allocation.x\n+ widget_allocation.y = widget.interface.layout.absolute_content_top + allocation.y\n widget_allocation.width = widget.interface.layout.content_width\n widget_allocation.height = widget.interface.layout.content_height\n", "issue": "On Linux, window contents are rendered behind a menubar\nBut only if the menubar wasn't defined in the application code. Here's a slightly modified [example code](https://toga.readthedocs.io/en/latest/tutorial/tutorial-0.html) I'm running:\r\n\r\n```py\r\nimport toga\r\n\r\ndef build(app):\r\n box = toga.Box()\r\n button = toga.Button('Hello world', on_press=lambda _: print(\"hello\"))\r\n box.add(button)\r\n return box\r\n\r\nif __name__ == '__main__':\r\n app = toga.App('First App', 'org.pybee.sample', startup=build)\r\n app.main_loop()\r\n```\r\n\r\nHere's how it looks like to me:\r\n\r\n\r\n\r\nNote that the button is behind the menubar (although no menubar was requested in the code). When I click on the menubar (including the Application or Help menu items), the button is being pressed instead.\r\n\r\nI've tried this with a few GTK themes, including Arc-OSX-dark, Mint-X, Redmond and Adwaita, and in every case it behaves this way.\r\n\n", "before_files": [{"content": "from ..libs import Gtk, Gdk\nfrom .base import Widget\n\n\nclass TogaBox(Gtk.Fixed):\n def __init__(self, impl):\n super().__init__()\n self._impl = impl\n self.interface = self._impl.interface\n\n def do_get_preferred_width(self):\n # Calculate the minimum and natural width of the container.\n # print(\"GET PREFERRED WIDTH\", self._impl.native)\n width = self._impl.interface.layout.width\n min_width = self._impl.min_width\n if min_width is None:\n min_width = 0\n elif min_width > width:\n width = min_width\n\n # print(min_width, width)\n return min_width, width\n\n def do_get_preferred_height(self):\n # Calculate the minimum and natural height of the container.\n # height = self._impl.layout.height\n # print(\"GET PREFERRED HEIGHT\", self._impl.native)\n height = self._impl.interface.layout.height\n min_height = self._impl.min_height\n if min_height is None:\n min_height = 0\n elif min_height > height:\n height = min_height\n # print(min_height, height)\n return min_height, height\n\n def do_size_allocate(self, allocation):\n # print(self._impl, \"Container layout\", allocation.width, 'x', allocation.height, ' @ ', allocation.x, 'x', allocation.y)\n if self._impl.viewport is not None:\n self.set_allocation(allocation)\n self.interface.refresh()\n\n # WARNING! 
This list of children is *not* the same\n # as the list provided by the interface!\n # For GTK's layout purposes, all widgets in the tree\n # are children of the *container* - that is, the impl\n # object of the root object in the tree of widgets.\n for widget in self.get_children():\n if not widget.get_visible():\n # print(\"CHILD NOT VISIBLE\", widget.interface)\n pass\n else:\n # print(\"update \", widget.interface, widget.interface.layout)\n widget.interface._impl.rehint()\n widget_allocation = Gdk.Rectangle()\n widget_allocation.x = widget.interface.layout.absolute_content_left\n widget_allocation.y = widget.interface.layout.absolute_content_top\n widget_allocation.width = widget.interface.layout.content_width\n widget_allocation.height = widget.interface.layout.content_height\n\n widget.size_allocate(widget_allocation)\n\n\nclass Box(Widget):\n def create(self):\n self.min_width = None\n self.min_height = None\n self.native = TogaBox(self)\n", "path": "src/gtk/toga_gtk/widgets/box.py"}], "after_files": [{"content": "from ..libs import Gtk, Gdk\nfrom .base import Widget\n\n\nclass TogaBox(Gtk.Fixed):\n def __init__(self, impl):\n super().__init__()\n self._impl = impl\n self.interface = self._impl.interface\n\n def do_get_preferred_width(self):\n # Calculate the minimum and natural width of the container.\n # print(\"GET PREFERRED WIDTH\", self._impl.native)\n width = self._impl.interface.layout.width\n min_width = self._impl.min_width\n if min_width is None:\n min_width = 0\n elif min_width > width:\n width = min_width\n\n # print(min_width, width)\n return min_width, width\n\n def do_get_preferred_height(self):\n # Calculate the minimum and natural height of the container.\n # height = self._impl.layout.height\n # print(\"GET PREFERRED HEIGHT\", self._impl.native)\n height = self._impl.interface.layout.height\n min_height = self._impl.min_height\n if min_height is None:\n min_height = 0\n elif min_height > height:\n height = min_height\n # print(min_height, height)\n return min_height, height\n\n def do_size_allocate(self, allocation):\n # print(self._impl, \"Container layout\", allocation.width, 'x', allocation.height, ' @ ', allocation.x, 'x', allocation.y)\n if self._impl.viewport is not None:\n self.set_allocation(allocation)\n self.interface.refresh()\n\n # WARNING! This list of children is *not* the same\n # as the list provided by the interface!\n # For GTK's layout purposes, all widgets in the tree\n # are children of the *container* - that is, the impl\n # object of the root object in the tree of widgets.\n for widget in self.get_children():\n if not widget.get_visible():\n # print(\"CHILD NOT VISIBLE\", widget.interface)\n pass\n else:\n # print(\"update \", widget.interface, widget.interface.layout)\n widget.interface._impl.rehint()\n widget_allocation = Gdk.Rectangle()\n widget_allocation.x = widget.interface.layout.absolute_content_left + allocation.x\n widget_allocation.y = widget.interface.layout.absolute_content_top + allocation.y\n widget_allocation.width = widget.interface.layout.content_width\n widget_allocation.height = widget.interface.layout.content_height\n\n widget.size_allocate(widget_allocation)\n\n\nclass Box(Widget):\n def create(self):\n self.min_width = None\n self.min_height = None\n self.native = TogaBox(self)\n", "path": "src/gtk/toga_gtk/widgets/box.py"}]} | 1,223 | 176 |
gh_patches_debug_6488 | rasdani/github-patches | git_diff | GeotrekCE__Geotrek-admin-1048 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Send an email on feedback record creation
- rel https://github.com/makinacorpus/Geotrek-rando/issues/132
- Send to managers
- Import test send email from Geotrek-rando
--- END ISSUE ---
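Since the notification hangs off a Django `post_save` signal, the receiver is called on every save; the `created` flag is what distinguishes a brand-new report from a later edit. A simplified sketch of the intended behaviour (error handling omitted; `Report` and `send_report_managers` are the app's own objects):

```python
from django.db.models.signals import post_save
from django.dispatch import receiver

@receiver(post_save, sender=Report, dispatch_uid="on_report_created")
def on_report_saved(sender, instance, created, **kwargs):
    if not created:          # notify only when the report is first created,
        return               # not on every subsequent save
    send_report_managers(instance)
```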
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `geotrek/feedback/models.py`
Content:
```
1 import logging
2
3 from django.conf import settings
4 from django.db import models
5 from django.contrib.gis.db import models as gis_models
6 from django.contrib.contenttypes.generic import GenericForeignKey
7 from django.contrib.contenttypes.models import ContentType
8 from django.utils.translation import ugettext_lazy as _
9 from django.db.models.signals import post_save
10 from django.dispatch import receiver
11 from mapentity.models import MapEntityMixin
12
13 from geotrek.common.models import TimeStampedModel
14
15 from .helpers import send_report_managers
16
17
18 logger = logging.getLogger(__name__)
19
20
21 class Report(MapEntityMixin, TimeStampedModel):
22 """ User reports, mainly submitted via *Geotrek-rando*.
23 """
24 name = models.CharField(verbose_name=_(u"Name"), max_length=256)
25 email = models.EmailField(verbose_name=_(u"Email"))
26 comment = models.TextField(blank=True,
27 default="",
28 verbose_name=_(u"Comment"))
29 category = models.ForeignKey('ReportCategory',
30 null=True,
31 blank=True,
32 default=None,
33 verbose_name=_(u"Category"))
34 geom = gis_models.PointField(null=True,
35 blank=True,
36 default=None,
37 verbose_name=_(u"Location"),
38 srid=settings.SRID)
39 context_content_type = models.ForeignKey(ContentType,
40 null=True,
41 blank=True,
42 editable=False)
43 context_object_id = models.PositiveIntegerField(null=True,
44 blank=True,
45 editable=False)
46 context_object = GenericForeignKey('context_content_type',
47 'context_object_id')
48
49 objects = gis_models.GeoManager()
50
51 class Meta:
52 db_table = 'f_t_signalement'
53 verbose_name = _(u"Report")
54 verbose_name_plural = _(u"Reports")
55 ordering = ['-date_insert']
56
57 def __unicode__(self):
58 return self.name
59
60 @property
61 def name_display(self):
62 return u'<a data-pk="%s" href="%s" title="%s" >%s</a>' % (self.pk,
63 self.get_detail_url(),
64 self,
65 self)
66
67
68 @receiver(post_save, sender=Report, dispatch_uid="on_report_created")
69 def on_report_created(sender, instance, created, **kwargs):
70 """ Send an email to managers when a report is created.
71 """
72 try:
73 send_report_managers(instance)
74 except Exception as e:
75 logger.error('Email could not be sent to managers.')
76 logger.exception(e) # This sends an email to admins :)
77
78
79 class ReportCategory(models.Model):
80 category = models.CharField(verbose_name=_(u"Category"),
81 max_length=128)
82
83 class Meta:
84 db_table = 'f_b_categorie'
85 verbose_name = _(u"Category")
86 verbose_name_plural = _(u"Categories")
87
88 def __unicode__(self):
89 return self.category
90
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/geotrek/feedback/models.py b/geotrek/feedback/models.py
--- a/geotrek/feedback/models.py
+++ b/geotrek/feedback/models.py
@@ -66,9 +66,11 @@
@receiver(post_save, sender=Report, dispatch_uid="on_report_created")
-def on_report_created(sender, instance, created, **kwargs):
+def on_report_saved(sender, instance, created, **kwargs):
""" Send an email to managers when a report is created.
"""
+ if not created:
+ return
try:
send_report_managers(instance)
except Exception as e:
| {"golden_diff": "diff --git a/geotrek/feedback/models.py b/geotrek/feedback/models.py\n--- a/geotrek/feedback/models.py\n+++ b/geotrek/feedback/models.py\n@@ -66,9 +66,11 @@\n \n \n @receiver(post_save, sender=Report, dispatch_uid=\"on_report_created\")\n-def on_report_created(sender, instance, created, **kwargs):\n+def on_report_saved(sender, instance, created, **kwargs):\n \"\"\" Send an email to managers when a report is created.\n \"\"\"\n+ if not created:\n+ return\n try:\n send_report_managers(instance)\n except Exception as e:\n", "issue": "Send an email on feeback record creation\n- rel https://github.com/makinacorpus/Geotrek-rando/issues/132\n- Send to managers\n- Import test send email from Geotrek-rando\n\n", "before_files": [{"content": "import logging\n\nfrom django.conf import settings\nfrom django.db import models\nfrom django.contrib.gis.db import models as gis_models\nfrom django.contrib.contenttypes.generic import GenericForeignKey\nfrom django.contrib.contenttypes.models import ContentType\nfrom django.utils.translation import ugettext_lazy as _\nfrom django.db.models.signals import post_save\nfrom django.dispatch import receiver\nfrom mapentity.models import MapEntityMixin\n\nfrom geotrek.common.models import TimeStampedModel\n\nfrom .helpers import send_report_managers\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass Report(MapEntityMixin, TimeStampedModel):\n \"\"\" User reports, mainly submitted via *Geotrek-rando*.\n \"\"\"\n name = models.CharField(verbose_name=_(u\"Name\"), max_length=256)\n email = models.EmailField(verbose_name=_(u\"Email\"))\n comment = models.TextField(blank=True,\n default=\"\",\n verbose_name=_(u\"Comment\"))\n category = models.ForeignKey('ReportCategory',\n null=True,\n blank=True,\n default=None,\n verbose_name=_(u\"Category\"))\n geom = gis_models.PointField(null=True,\n blank=True,\n default=None,\n verbose_name=_(u\"Location\"),\n srid=settings.SRID)\n context_content_type = models.ForeignKey(ContentType,\n null=True,\n blank=True,\n editable=False)\n context_object_id = models.PositiveIntegerField(null=True,\n blank=True,\n editable=False)\n context_object = GenericForeignKey('context_content_type',\n 'context_object_id')\n\n objects = gis_models.GeoManager()\n\n class Meta:\n db_table = 'f_t_signalement'\n verbose_name = _(u\"Report\")\n verbose_name_plural = _(u\"Reports\")\n ordering = ['-date_insert']\n\n def __unicode__(self):\n return self.name\n\n @property\n def name_display(self):\n return u'<a data-pk=\"%s\" href=\"%s\" title=\"%s\" >%s</a>' % (self.pk,\n self.get_detail_url(),\n self,\n self)\n\n\n@receiver(post_save, sender=Report, dispatch_uid=\"on_report_created\")\ndef on_report_created(sender, instance, created, **kwargs):\n \"\"\" Send an email to managers when a report is created.\n \"\"\"\n try:\n send_report_managers(instance)\n except Exception as e:\n logger.error('Email could not be sent to managers.')\n logger.exception(e) # This sends an email to admins :)\n\n\nclass ReportCategory(models.Model):\n category = models.CharField(verbose_name=_(u\"Category\"),\n max_length=128)\n\n class Meta:\n db_table = 'f_b_categorie'\n verbose_name = _(u\"Category\")\n verbose_name_plural = _(u\"Categories\")\n\n def __unicode__(self):\n return self.category\n", "path": "geotrek/feedback/models.py"}], "after_files": [{"content": "import logging\n\nfrom django.conf import settings\nfrom django.db import models\nfrom django.contrib.gis.db import models as gis_models\nfrom django.contrib.contenttypes.generic import 
GenericForeignKey\nfrom django.contrib.contenttypes.models import ContentType\nfrom django.utils.translation import ugettext_lazy as _\nfrom django.db.models.signals import post_save\nfrom django.dispatch import receiver\nfrom mapentity.models import MapEntityMixin\n\nfrom geotrek.common.models import TimeStampedModel\n\nfrom .helpers import send_report_managers\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass Report(MapEntityMixin, TimeStampedModel):\n \"\"\" User reports, mainly submitted via *Geotrek-rando*.\n \"\"\"\n name = models.CharField(verbose_name=_(u\"Name\"), max_length=256)\n email = models.EmailField(verbose_name=_(u\"Email\"))\n comment = models.TextField(blank=True,\n default=\"\",\n verbose_name=_(u\"Comment\"))\n category = models.ForeignKey('ReportCategory',\n null=True,\n blank=True,\n default=None,\n verbose_name=_(u\"Category\"))\n geom = gis_models.PointField(null=True,\n blank=True,\n default=None,\n verbose_name=_(u\"Location\"),\n srid=settings.SRID)\n context_content_type = models.ForeignKey(ContentType,\n null=True,\n blank=True,\n editable=False)\n context_object_id = models.PositiveIntegerField(null=True,\n blank=True,\n editable=False)\n context_object = GenericForeignKey('context_content_type',\n 'context_object_id')\n\n objects = gis_models.GeoManager()\n\n class Meta:\n db_table = 'f_t_signalement'\n verbose_name = _(u\"Report\")\n verbose_name_plural = _(u\"Reports\")\n ordering = ['-date_insert']\n\n def __unicode__(self):\n return self.name\n\n @property\n def name_display(self):\n return u'<a data-pk=\"%s\" href=\"%s\" title=\"%s\" >%s</a>' % (self.pk,\n self.get_detail_url(),\n self,\n self)\n\n\n@receiver(post_save, sender=Report, dispatch_uid=\"on_report_created\")\ndef on_report_saved(sender, instance, created, **kwargs):\n \"\"\" Send an email to managers when a report is created.\n \"\"\"\n if not created:\n return\n try:\n send_report_managers(instance)\n except Exception as e:\n logger.error('Email could not be sent to managers.')\n logger.exception(e) # This sends an email to admins :)\n\n\nclass ReportCategory(models.Model):\n category = models.CharField(verbose_name=_(u\"Category\"),\n max_length=128)\n\n class Meta:\n db_table = 'f_b_categorie'\n verbose_name = _(u\"Category\")\n verbose_name_plural = _(u\"Categories\")\n\n def __unicode__(self):\n return self.category\n", "path": "geotrek/feedback/models.py"}]} | 1,084 | 138 |
gh_patches_debug_18736 | rasdani/github-patches | git_diff | sublimelsp__LSP-341 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Use case-insensitive Regexps for matching syntaxes
### Suggestion:
When using `settings.get('syntax')` to filter configurations, the configured syntaxes have to match the view's syntax path exactly. There is often more than one package for any given syntax (and oftentimes packages that haven't been updated in some time, whose syntax files have moved or whose extension changed from `tmLanguage` to `sublime-syntax`). This can leave you configuring server languages by "guessing" which syntax files are actually in use.
Real-world examples of such:
**Markdown**:
1. `Packages/Markdown/Markdown.sublime-syntax`
2. `Packages/Markdown Extended/Syntaxes/Markdown Extended.sublime-syntax`
3. `Packages/Markdown Extended/Syntaxes/Markdown Extended.tmLanguage`
**SCSS**:
1. `Packages/Sass/Syntaxes/SCSS.sublime-syntax`
2. `Packages/SCSS/SCSS.tmLanguage`
**JSON**:
1. `Packages/JavaScript/JSON.sublime-syntax`
2. `Packages/Babel/JSON (Babel).sublime-syntax`
3. `Packages/PackageDev/Package/Sublime Text Settings/Sublime Text Settings.sublime-syntax`
It could well be better to have a set of case-insensitive regexps that match the language from the syntax path (after all, scopes are used later to confirm). That way we could have, for the examples above, the following `syntaxes` entries, each applied with `re.search()`.
```json
"syntaxes": [
"Markdown"
]
```
```json
"syntaxes": [
"SCSS"
]
```
```json
"syntaxes": [
"JSON",
"Sublime Text Settings"
]
```
_Alternatively_, we could add `languages` as the list of regexps that match, keeping `syntaxes` for exact matches and using `languages` as fallbacks to figure out the language configuration.
--- END ISSUE ---
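A minimal sketch of the matching this suggestion describes, assuming each configured entry is treated as a regular expression searched case-insensitively against the syntax path (the helper name is illustrative, not LSP's actual API):

```python
import re


def syntax_matches(syntax_path, patterns):
    # True if any configured pattern is found anywhere in the syntax path,
    # ignoring case; e.g. "Markdown" matches both the stock Markdown package
    # and "Packages/Markdown Extended/Syntaxes/Markdown Extended.tmLanguage".
    return any(re.search(p, syntax_path, re.IGNORECASE) for p in patterns)


assert syntax_matches("Packages/Markdown/Markdown.sublime-syntax", ["Markdown"])
assert syntax_matches("Packages/SCSS/SCSS.tmLanguage", ["scss"])
assert not syntax_matches("Packages/Python/Python.sublime-syntax", ["Markdown", "SCSS"])
```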
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `plugin/core/configurations.py`
Content:
```
1 import sublime
2
3 from .settings import ClientConfig, client_configs
4 from .logging import debug
5 from .workspace import get_project_config
6
7 assert ClientConfig
8
9 try:
10 from typing import Any, List, Dict, Tuple, Callable, Optional
11 assert Any and List and Dict and Tuple and Callable and Optional
12 except ImportError:
13 pass
14
15
16 window_client_configs = dict() # type: Dict[int, List[ClientConfig]]
17
18
19 def get_scope_client_config(view: 'sublime.View', configs: 'List[ClientConfig]') -> 'Optional[ClientConfig]':
20 for config in configs:
21 for scope in config.scopes:
22 if len(view.sel()) > 0:
23 if view.match_selector(view.sel()[0].begin(), scope):
24 return config
25
26 return None
27
28
29 def register_client_config(config: ClientConfig) -> None:
30 window_client_configs.clear()
31 client_configs.add_external_config(config)
32
33
34 def get_global_client_config(view: sublime.View) -> 'Optional[ClientConfig]':
35 return get_scope_client_config(view, client_configs.all)
36
37
38 def get_default_client_config(view: sublime.View) -> 'Optional[ClientConfig]':
39 return get_scope_client_config(view, client_configs.defaults)
40
41
42 def get_window_client_config(view: sublime.View) -> 'Optional[ClientConfig]':
43 window = view.window()
44 if window:
45 configs_for_window = window_client_configs.get(window.id(), [])
46 return get_scope_client_config(view, configs_for_window)
47 else:
48 return None
49
50
51 def config_for_scope(view: sublime.View) -> 'Optional[ClientConfig]':
52 # check window_client_config first
53 window_client_config = get_window_client_config(view)
54 if not window_client_config:
55 global_client_config = get_global_client_config(view)
56
57 if global_client_config:
58 window = view.window()
59 if window:
60 window_client_config = apply_window_settings(global_client_config, view)
61 add_window_client_config(window, window_client_config)
62 return window_client_config
63 else:
64 # always return a client config even if the view has no window anymore
65 return global_client_config
66
67 return window_client_config
68
69
70 def add_window_client_config(window: 'sublime.Window', config: 'ClientConfig'):
71 global window_client_configs
72 window_client_configs.setdefault(window.id(), []).append(config)
73
74
75 def clear_window_client_configs(window: 'sublime.Window'):
76 global window_client_configs
77 if window.id() in window_client_configs:
78 del window_client_configs[window.id()]
79
80
81 def apply_window_settings(client_config: 'ClientConfig', view: 'sublime.View') -> 'ClientConfig':
82 window = view.window()
83 if window:
84 window_config = get_project_config(window)
85
86 if client_config.name in window_config:
87 overrides = window_config[client_config.name]
88 debug('window has override for', client_config.name, overrides)
89 return ClientConfig(
90 client_config.name,
91 overrides.get("command", client_config.binary_args),
92 overrides.get("tcp_port", client_config.tcp_port),
93 overrides.get("scopes", client_config.scopes),
94 overrides.get("syntaxes", client_config.syntaxes),
95 overrides.get("languageId", client_config.languageId),
96 overrides.get("enabled", client_config.enabled),
97 overrides.get("initializationOptions", client_config.init_options),
98 overrides.get("settings", client_config.settings),
99 overrides.get("env", client_config.env)
100 )
101
102 return client_config
103
104
105 def is_supportable_syntax(syntax: str) -> bool:
106 # TODO: filter out configs disabled by the user.
107 for config in client_configs.defaults:
108 if syntax in config.syntaxes:
109 return True
110 return False
111
112
113 def is_supported_syntax(syntax: str) -> bool:
114 for config in client_configs.all:
115 if syntax in config.syntaxes:
116 return True
117 return False
118
119
120 def is_supported_view(view: sublime.View) -> bool:
121 # TODO: perhaps make this check for a client instead of a config
122 if config_for_scope(view):
123 return True
124 else:
125 return False
126
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/plugin/core/configurations.py b/plugin/core/configurations.py
--- a/plugin/core/configurations.py
+++ b/plugin/core/configurations.py
@@ -1,3 +1,4 @@
+import re
import sublime
from .settings import ClientConfig, client_configs
@@ -105,14 +106,14 @@
def is_supportable_syntax(syntax: str) -> bool:
# TODO: filter out configs disabled by the user.
for config in client_configs.defaults:
- if syntax in config.syntaxes:
+ if re.search(r'|'.join(r'\b%s\b' % re.escape(s) for s in config.syntaxes), syntax, re.IGNORECASE):
return True
return False
def is_supported_syntax(syntax: str) -> bool:
for config in client_configs.all:
- if syntax in config.syntaxes:
+ if re.search(r'|'.join(r'\b%s\b' % re.escape(s) for s in config.syntaxes), syntax, re.IGNORECASE):
return True
return False
| {"golden_diff": "diff --git a/plugin/core/configurations.py b/plugin/core/configurations.py\n--- a/plugin/core/configurations.py\n+++ b/plugin/core/configurations.py\n@@ -1,3 +1,4 @@\n+import re\n import sublime\n \n from .settings import ClientConfig, client_configs\n@@ -105,14 +106,14 @@\n def is_supportable_syntax(syntax: str) -> bool:\n # TODO: filter out configs disabled by the user.\n for config in client_configs.defaults:\n- if syntax in config.syntaxes:\n+ if re.search(r'|'.join(r'\\b%s\\b' % re.escape(s) for s in config.syntaxes), syntax, re.IGNORECASE):\n return True\n return False\n \n \n def is_supported_syntax(syntax: str) -> bool:\n for config in client_configs.all:\n- if syntax in config.syntaxes:\n+ if re.search(r'|'.join(r'\\b%s\\b' % re.escape(s) for s in config.syntaxes), syntax, re.IGNORECASE):\n return True\n return False\n", "issue": "Use case-insensitive Regexps for matching syntaxes\n### Suggestion:\r\n\r\nWhen using `settings.get('syntax')` to filter configurations, it's needed such configurations to match exactly. Many times there are more than one single package for any given syntax (and oftentimes there are packages that haven't been updated in some time and for which syntax files have moved or extension definition changed from `tmLanguage` to `sublime-syntax`). This can lead in the need to configure server languages \"guessing\" what syntax files are configured for it.\r\n\r\nReal-world examples of such:\r\n\r\n**Markdown**:\r\n1. `Packages/Markdown/Markdown.sublime-syntax`\r\n2. `Packages/Markdown Extended/Syntaxes/Markdown Extended.sublime-syntax`\r\n3. `Packages/Markdown Extended/Syntaxes/Markdown Extended.tmLanguage`\r\n\r\n**SCSS**:\r\n1. `Packages/Sass/Syntaxes/SCSS.sublime-syntax`\r\n2. `Packages/SCSS/SCSS.tmLanguage`\r\n\r\n**JSON**:\r\n1. `Packages/JavaScript/JSON.sublime-syntax`\r\n2. `Packages/Babel/JSON (Babel).sublime-syntax`\r\n3. `Packages/PackageDev/Package/Sublime Text Settings/Sublime Text Settings.sublime-syntax`\r\n\r\nIt could very well be better to have a set of case insensitive regexps to match the language from the syntax (after all scopes are later used to confirm). 
That way we could have, accordingly, the following syntaxes that would do a `re.search()`.\r\n\r\n```json\r\n \"syntaxes\": [\r\n \"Markdown\"\r\n ]\r\n```\r\n\r\n```json\r\n \"syntaxes\": [\r\n \"SCSS\"\r\n ]\r\n```\r\n\r\n```json\r\n \"syntaxes\": [\r\n \"JSON\",\r\n \"Sublime Text Settings\"\r\n ]\r\n```\r\n\r\n_Alternatively_, we could add `languages` as the list of regexps that match, keeping `syntaxes` for exact matches and using `languages` as fallbacks to figure out the language configuration.\n", "before_files": [{"content": "import sublime\n\nfrom .settings import ClientConfig, client_configs\nfrom .logging import debug\nfrom .workspace import get_project_config\n\nassert ClientConfig\n\ntry:\n from typing import Any, List, Dict, Tuple, Callable, Optional\n assert Any and List and Dict and Tuple and Callable and Optional\nexcept ImportError:\n pass\n\n\nwindow_client_configs = dict() # type: Dict[int, List[ClientConfig]]\n\n\ndef get_scope_client_config(view: 'sublime.View', configs: 'List[ClientConfig]') -> 'Optional[ClientConfig]':\n for config in configs:\n for scope in config.scopes:\n if len(view.sel()) > 0:\n if view.match_selector(view.sel()[0].begin(), scope):\n return config\n\n return None\n\n\ndef register_client_config(config: ClientConfig) -> None:\n window_client_configs.clear()\n client_configs.add_external_config(config)\n\n\ndef get_global_client_config(view: sublime.View) -> 'Optional[ClientConfig]':\n return get_scope_client_config(view, client_configs.all)\n\n\ndef get_default_client_config(view: sublime.View) -> 'Optional[ClientConfig]':\n return get_scope_client_config(view, client_configs.defaults)\n\n\ndef get_window_client_config(view: sublime.View) -> 'Optional[ClientConfig]':\n window = view.window()\n if window:\n configs_for_window = window_client_configs.get(window.id(), [])\n return get_scope_client_config(view, configs_for_window)\n else:\n return None\n\n\ndef config_for_scope(view: sublime.View) -> 'Optional[ClientConfig]':\n # check window_client_config first\n window_client_config = get_window_client_config(view)\n if not window_client_config:\n global_client_config = get_global_client_config(view)\n\n if global_client_config:\n window = view.window()\n if window:\n window_client_config = apply_window_settings(global_client_config, view)\n add_window_client_config(window, window_client_config)\n return window_client_config\n else:\n # always return a client config even if the view has no window anymore\n return global_client_config\n\n return window_client_config\n\n\ndef add_window_client_config(window: 'sublime.Window', config: 'ClientConfig'):\n global window_client_configs\n window_client_configs.setdefault(window.id(), []).append(config)\n\n\ndef clear_window_client_configs(window: 'sublime.Window'):\n global window_client_configs\n if window.id() in window_client_configs:\n del window_client_configs[window.id()]\n\n\ndef apply_window_settings(client_config: 'ClientConfig', view: 'sublime.View') -> 'ClientConfig':\n window = view.window()\n if window:\n window_config = get_project_config(window)\n\n if client_config.name in window_config:\n overrides = window_config[client_config.name]\n debug('window has override for', client_config.name, overrides)\n return ClientConfig(\n client_config.name,\n overrides.get(\"command\", client_config.binary_args),\n overrides.get(\"tcp_port\", client_config.tcp_port),\n overrides.get(\"scopes\", client_config.scopes),\n overrides.get(\"syntaxes\", client_config.syntaxes),\n overrides.get(\"languageId\", 
client_config.languageId),\n overrides.get(\"enabled\", client_config.enabled),\n overrides.get(\"initializationOptions\", client_config.init_options),\n overrides.get(\"settings\", client_config.settings),\n overrides.get(\"env\", client_config.env)\n )\n\n return client_config\n\n\ndef is_supportable_syntax(syntax: str) -> bool:\n # TODO: filter out configs disabled by the user.\n for config in client_configs.defaults:\n if syntax in config.syntaxes:\n return True\n return False\n\n\ndef is_supported_syntax(syntax: str) -> bool:\n for config in client_configs.all:\n if syntax in config.syntaxes:\n return True\n return False\n\n\ndef is_supported_view(view: sublime.View) -> bool:\n # TODO: perhaps make this check for a client instead of a config\n if config_for_scope(view):\n return True\n else:\n return False\n", "path": "plugin/core/configurations.py"}], "after_files": [{"content": "import re\nimport sublime\n\nfrom .settings import ClientConfig, client_configs\nfrom .logging import debug\nfrom .workspace import get_project_config\n\nassert ClientConfig\n\ntry:\n from typing import Any, List, Dict, Tuple, Callable, Optional\n assert Any and List and Dict and Tuple and Callable and Optional\nexcept ImportError:\n pass\n\n\nwindow_client_configs = dict() # type: Dict[int, List[ClientConfig]]\n\n\ndef get_scope_client_config(view: 'sublime.View', configs: 'List[ClientConfig]') -> 'Optional[ClientConfig]':\n for config in configs:\n for scope in config.scopes:\n if len(view.sel()) > 0:\n if view.match_selector(view.sel()[0].begin(), scope):\n return config\n\n return None\n\n\ndef register_client_config(config: ClientConfig) -> None:\n window_client_configs.clear()\n client_configs.add_external_config(config)\n\n\ndef get_global_client_config(view: sublime.View) -> 'Optional[ClientConfig]':\n return get_scope_client_config(view, client_configs.all)\n\n\ndef get_default_client_config(view: sublime.View) -> 'Optional[ClientConfig]':\n return get_scope_client_config(view, client_configs.defaults)\n\n\ndef get_window_client_config(view: sublime.View) -> 'Optional[ClientConfig]':\n window = view.window()\n if window:\n configs_for_window = window_client_configs.get(window.id(), [])\n return get_scope_client_config(view, configs_for_window)\n else:\n return None\n\n\ndef config_for_scope(view: sublime.View) -> 'Optional[ClientConfig]':\n # check window_client_config first\n window_client_config = get_window_client_config(view)\n if not window_client_config:\n global_client_config = get_global_client_config(view)\n\n if global_client_config:\n window = view.window()\n if window:\n window_client_config = apply_window_settings(global_client_config, view)\n add_window_client_config(window, window_client_config)\n return window_client_config\n else:\n # always return a client config even if the view has no window anymore\n return global_client_config\n\n return window_client_config\n\n\ndef add_window_client_config(window: 'sublime.Window', config: 'ClientConfig'):\n global window_client_configs\n window_client_configs.setdefault(window.id(), []).append(config)\n\n\ndef clear_window_client_configs(window: 'sublime.Window'):\n global window_client_configs\n if window.id() in window_client_configs:\n del window_client_configs[window.id()]\n\n\ndef apply_window_settings(client_config: 'ClientConfig', view: 'sublime.View') -> 'ClientConfig':\n window = view.window()\n if window:\n window_config = get_project_config(window)\n\n if client_config.name in window_config:\n overrides = 
window_config[client_config.name]\n debug('window has override for', client_config.name, overrides)\n return ClientConfig(\n client_config.name,\n overrides.get(\"command\", client_config.binary_args),\n overrides.get(\"tcp_port\", client_config.tcp_port),\n overrides.get(\"scopes\", client_config.scopes),\n overrides.get(\"syntaxes\", client_config.syntaxes),\n overrides.get(\"languageId\", client_config.languageId),\n overrides.get(\"enabled\", client_config.enabled),\n overrides.get(\"initializationOptions\", client_config.init_options),\n overrides.get(\"settings\", client_config.settings),\n overrides.get(\"env\", client_config.env)\n )\n\n return client_config\n\n\ndef is_supportable_syntax(syntax: str) -> bool:\n # TODO: filter out configs disabled by the user.\n for config in client_configs.defaults:\n if re.search(r'|'.join(r'\\b%s\\b' % re.escape(s) for s in config.syntaxes), syntax, re.IGNORECASE):\n return True\n return False\n\n\ndef is_supported_syntax(syntax: str) -> bool:\n for config in client_configs.all:\n if re.search(r'|'.join(r'\\b%s\\b' % re.escape(s) for s in config.syntaxes), syntax, re.IGNORECASE):\n return True\n return False\n\n\ndef is_supported_view(view: sublime.View) -> bool:\n # TODO: perhaps make this check for a client instead of a config\n if config_for_scope(view):\n return True\n else:\n return False\n", "path": "plugin/core/configurations.py"}]} | 1,808 | 229 |
gh_patches_debug_13576 | rasdani/github-patches | git_diff | hartwork__wnpp.debian.net-956 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Tracker link

Both of [these columns](https://github.com/hartwork/wnpp.debian.net/blob/67b24c56c33aeac0238962a0480339a5820b3d79/wnpp_debian_net/templates/wnpp_debian_net/debianwnpp_list.html#L287-L288) link to the same bug report page. Would it be a good idea to change the href on the project name to the tracker.debian.org page of that package?
--- END ISSUE ---
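One possible shape for the missing link helper, written in the style of the existing `wnpp_issue_url` template tag shown below (the tag name and URL pattern are assumptions based on tracker.debian.org's package pages):

```python
from django import template

register = template.Library()


@register.simple_tag
def debian_package_tracker_url(package_name):
    # Package tracker pages live at https://tracker.debian.org/pkg/<source-package>.
    return f'https://tracker.debian.org/pkg/{package_name}'
```

The project-name column could then link to this URL while the issue column keeps linking to the bug report.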
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `wnpp_debian_net/templatetags/debian_urls.py`
Content:
```
1 # Copyright (C) 2021 Sebastian Pipping <[email protected]>
2 # Licensed under GNU Affero GPL v3 or later
3
4 from django import template
5
6 register = template.Library()
7
8
9 @register.simple_tag
10 def wnpp_issue_url(issue_id):
11 return f'https://bugs.debian.org/cgi-bin/bugreport.cgi?bug={issue_id}'
12
```
Path: `wnpp_debian_net/models.py`
Content:
```
1 # Copyright (C) 2021 Sebastian Pipping <[email protected]>
2 # Licensed under GNU Affero GPL v3 or later
3
4 from django.db import models
5 from django.db.models import CASCADE, DO_NOTHING, ForeignKey, OneToOneField, TextChoices
6 from django.utils.timezone import now
7 from django.utils.translation import gettext_lazy as _
8
9
10 class IssueKind(TextChoices):
11 ITA = 'ITA', _('ITA (Intent to adopt)')
12 ITP = 'ITP', _('ITP (Intent to package)')
13 O_ = 'O', _('O (Orphaned)')
14 RFA = 'RFA', _('RFA (Request for adoption)')
15 RFH = 'RFH', _('RFH (Request for help)')
16 RFP = 'RFP', _('RFP (request for packaging)')
17
18
19 class EventKind(TextChoices):
20 MODIFIED = 'MOD', _('modified')
21 OPENED = 'OPEN', _('opened')
22 CLOSED = 'CLOSE', _('closed')
23
24
25 class DebianLogIndex(models.Model):
26 log_id = models.AutoField(primary_key=True)
27 ident = models.IntegerField(blank=True, null=True)
28 kind = models.CharField(max_length=3,
29 choices=IssueKind.choices,
30 blank=True,
31 null=True,
32 db_column='type')
33 project = models.CharField(max_length=255, blank=True, null=True)
34 description = models.CharField(max_length=255, blank=True, null=True)
35 log_stamp = models.DateTimeField(blank=True, null=True)
36 event = models.CharField(max_length=5, choices=EventKind.choices)
37 event_stamp = models.DateTimeField(blank=True, null=True)
38
39 class Meta:
40 db_table = 'debian_log_index'
41
42
43 class DebianLogMods(models.Model):
44 log = OneToOneField(DebianLogIndex,
45 on_delete=CASCADE,
46 primary_key=True,
47 related_name='kind_change',
48 related_query_name='kind_change')
49 old_kind = models.CharField(max_length=3,
50 choices=IssueKind.choices,
51 blank=True,
52 null=True,
53 db_column='before_type')
54 new_kind = models.CharField(max_length=3,
55 choices=IssueKind.choices,
56 blank=True,
57 null=True,
58 db_column='after_type')
59
60 class Meta:
61 db_table = 'debian_log_mods'
62
63
64 class DebianPopcon(models.Model):
65 package = models.CharField(primary_key=True, max_length=255)
66 inst = models.IntegerField(blank=True, null=True)
67 vote = models.IntegerField(blank=True, null=True)
68 old = models.IntegerField(blank=True, null=True)
69 recent = models.IntegerField(blank=True, null=True)
70 nofiles = models.IntegerField(blank=True, null=True)
71
72 class Meta:
73 db_table = 'debian_popcon'
74
75
76 class DebianWnpp(models.Model):
77 ident = models.IntegerField(primary_key=True)
78 open_person = models.CharField(max_length=255, blank=True, null=True)
79 open_stamp = models.DateTimeField(blank=True, null=True)
80 mod_stamp = models.DateTimeField(blank=True, null=True)
81 kind = models.CharField(max_length=3, choices=IssueKind.choices, db_column='type')
82 # Original was: project = models.CharField(max_length=255, blank=True, null=True)
83 popcon = ForeignKey(DebianPopcon, on_delete=DO_NOTHING, null=True, db_column='project')
84 description = models.CharField(max_length=255, blank=True, null=True)
85 charge_person = models.CharField(max_length=255, blank=True, null=True)
86 cron_stamp = models.DateTimeField()
87 has_smaller_sibling = models.BooleanField(default=False)
88
89 class Meta:
90 db_table = 'debian_wnpp'
91
92 def age_days(self, until=None) -> int:
93 if until is None:
94 until = now()
95 return (until - self.open_stamp).days
96
97 def dust_days(self, until=None) -> int:
98 if until is None:
99 until = now()
100 return (until - self.mod_stamp).days
101
102 def __lt__(self, other):
103 return self.ident < other.ident
104
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/wnpp_debian_net/models.py b/wnpp_debian_net/models.py
--- a/wnpp_debian_net/models.py
+++ b/wnpp_debian_net/models.py
@@ -99,5 +99,8 @@
until = now()
return (until - self.mod_stamp).days
+ def has_existing_package(self):
+ return IssueKind(self.kind) not in (IssueKind.ITP, IssueKind.RFP)
+
def __lt__(self, other):
return self.ident < other.ident
diff --git a/wnpp_debian_net/templatetags/debian_urls.py b/wnpp_debian_net/templatetags/debian_urls.py
--- a/wnpp_debian_net/templatetags/debian_urls.py
+++ b/wnpp_debian_net/templatetags/debian_urls.py
@@ -9,3 +9,8 @@
@register.simple_tag
def wnpp_issue_url(issue_id):
return f'https://bugs.debian.org/cgi-bin/bugreport.cgi?bug={issue_id}'
+
+
[email protected]_tag
+def debian_package_tracker_url(package_name):
+ return f'https://tracker.debian.org/pkg/{package_name}'
| {"golden_diff": "diff --git a/wnpp_debian_net/models.py b/wnpp_debian_net/models.py\n--- a/wnpp_debian_net/models.py\n+++ b/wnpp_debian_net/models.py\n@@ -99,5 +99,8 @@\n until = now()\n return (until - self.mod_stamp).days\n \n+ def has_existing_package(self):\n+ return IssueKind(self.kind) not in (IssueKind.ITP, IssueKind.RFP)\n+\n def __lt__(self, other):\n return self.ident < other.ident\ndiff --git a/wnpp_debian_net/templatetags/debian_urls.py b/wnpp_debian_net/templatetags/debian_urls.py\n--- a/wnpp_debian_net/templatetags/debian_urls.py\n+++ b/wnpp_debian_net/templatetags/debian_urls.py\n@@ -9,3 +9,8 @@\n @register.simple_tag\n def wnpp_issue_url(issue_id):\n return f'https://bugs.debian.org/cgi-bin/bugreport.cgi?bug={issue_id}'\n+\n+\[email protected]_tag\n+def debian_package_tracker_url(package_name):\n+ return f'https://tracker.debian.org/pkg/{package_name}'\n", "issue": "Tracker link \n\r\n\r\n\r\nBoth of [these columns](https://github.com/hartwork/wnpp.debian.net/blob/67b24c56c33aeac0238962a0480339a5820b3d79/wnpp_debian_net/templates/wnpp_debian_net/debianwnpp_list.html#L287-L288) link to the same bug report page. Would it be a good idea to change the href on the project name to the tracker.debian.org page of that package?\nTracker link \n\r\n\r\n\r\nBoth of [these columns](https://github.com/hartwork/wnpp.debian.net/blob/67b24c56c33aeac0238962a0480339a5820b3d79/wnpp_debian_net/templates/wnpp_debian_net/debianwnpp_list.html#L287-L288) link to the same bug report page. Would it be a good idea to change the href on the project name to the tracker.debian.org page of that package?\n", "before_files": [{"content": "# Copyright (C) 2021 Sebastian Pipping <[email protected]>\n# Licensed under GNU Affero GPL v3 or later\n\nfrom django import template\n\nregister = template.Library()\n\n\[email protected]_tag\ndef wnpp_issue_url(issue_id):\n return f'https://bugs.debian.org/cgi-bin/bugreport.cgi?bug={issue_id}'\n", "path": "wnpp_debian_net/templatetags/debian_urls.py"}, {"content": "# Copyright (C) 2021 Sebastian Pipping <[email protected]>\n# Licensed under GNU Affero GPL v3 or later\n\nfrom django.db import models\nfrom django.db.models import CASCADE, DO_NOTHING, ForeignKey, OneToOneField, TextChoices\nfrom django.utils.timezone import now\nfrom django.utils.translation import gettext_lazy as _\n\n\nclass IssueKind(TextChoices):\n ITA = 'ITA', _('ITA (Intent to adopt)')\n ITP = 'ITP', _('ITP (Intent to package)')\n O_ = 'O', _('O (Orphaned)')\n RFA = 'RFA', _('RFA (Request for adoption)')\n RFH = 'RFH', _('RFH (Request for help)')\n RFP = 'RFP', _('RFP (request for packaging)')\n\n\nclass EventKind(TextChoices):\n MODIFIED = 'MOD', _('modified')\n OPENED = 'OPEN', _('opened')\n CLOSED = 'CLOSE', _('closed')\n\n\nclass DebianLogIndex(models.Model):\n log_id = models.AutoField(primary_key=True)\n ident = models.IntegerField(blank=True, null=True)\n kind = models.CharField(max_length=3,\n choices=IssueKind.choices,\n blank=True,\n null=True,\n db_column='type')\n project = models.CharField(max_length=255, blank=True, null=True)\n description = models.CharField(max_length=255, blank=True, null=True)\n log_stamp = models.DateTimeField(blank=True, null=True)\n event = models.CharField(max_length=5, choices=EventKind.choices)\n event_stamp = models.DateTimeField(blank=True, null=True)\n\n class Meta:\n db_table = 'debian_log_index'\n\n\nclass DebianLogMods(models.Model):\n log = OneToOneField(DebianLogIndex,\n on_delete=CASCADE,\n primary_key=True,\n related_name='kind_change',\n 
related_query_name='kind_change')\n old_kind = models.CharField(max_length=3,\n choices=IssueKind.choices,\n blank=True,\n null=True,\n db_column='before_type')\n new_kind = models.CharField(max_length=3,\n choices=IssueKind.choices,\n blank=True,\n null=True,\n db_column='after_type')\n\n class Meta:\n db_table = 'debian_log_mods'\n\n\nclass DebianPopcon(models.Model):\n package = models.CharField(primary_key=True, max_length=255)\n inst = models.IntegerField(blank=True, null=True)\n vote = models.IntegerField(blank=True, null=True)\n old = models.IntegerField(blank=True, null=True)\n recent = models.IntegerField(blank=True, null=True)\n nofiles = models.IntegerField(blank=True, null=True)\n\n class Meta:\n db_table = 'debian_popcon'\n\n\nclass DebianWnpp(models.Model):\n ident = models.IntegerField(primary_key=True)\n open_person = models.CharField(max_length=255, blank=True, null=True)\n open_stamp = models.DateTimeField(blank=True, null=True)\n mod_stamp = models.DateTimeField(blank=True, null=True)\n kind = models.CharField(max_length=3, choices=IssueKind.choices, db_column='type')\n # Original was: project = models.CharField(max_length=255, blank=True, null=True)\n popcon = ForeignKey(DebianPopcon, on_delete=DO_NOTHING, null=True, db_column='project')\n description = models.CharField(max_length=255, blank=True, null=True)\n charge_person = models.CharField(max_length=255, blank=True, null=True)\n cron_stamp = models.DateTimeField()\n has_smaller_sibling = models.BooleanField(default=False)\n\n class Meta:\n db_table = 'debian_wnpp'\n\n def age_days(self, until=None) -> int:\n if until is None:\n until = now()\n return (until - self.open_stamp).days\n\n def dust_days(self, until=None) -> int:\n if until is None:\n until = now()\n return (until - self.mod_stamp).days\n\n def __lt__(self, other):\n return self.ident < other.ident\n", "path": "wnpp_debian_net/models.py"}], "after_files": [{"content": "# Copyright (C) 2021 Sebastian Pipping <[email protected]>\n# Licensed under GNU Affero GPL v3 or later\n\nfrom django import template\n\nregister = template.Library()\n\n\[email protected]_tag\ndef wnpp_issue_url(issue_id):\n return f'https://bugs.debian.org/cgi-bin/bugreport.cgi?bug={issue_id}'\n\n\[email protected]_tag\ndef debian_package_tracker_url(package_name):\n return f'https://tracker.debian.org/pkg/{package_name}'\n", "path": "wnpp_debian_net/templatetags/debian_urls.py"}, {"content": "# Copyright (C) 2021 Sebastian Pipping <[email protected]>\n# Licensed under GNU Affero GPL v3 or later\n\nfrom django.db import models\nfrom django.db.models import CASCADE, DO_NOTHING, ForeignKey, OneToOneField, TextChoices\nfrom django.utils.timezone import now\nfrom django.utils.translation import gettext_lazy as _\n\n\nclass IssueKind(TextChoices):\n ITA = 'ITA', _('ITA (Intent to adopt)')\n ITP = 'ITP', _('ITP (Intent to package)')\n O_ = 'O', _('O (Orphaned)')\n RFA = 'RFA', _('RFA (Request for adoption)')\n RFH = 'RFH', _('RFH (Request for help)')\n RFP = 'RFP', _('RFP (request for packaging)')\n\n\nclass EventKind(TextChoices):\n MODIFIED = 'MOD', _('modified')\n OPENED = 'OPEN', _('opened')\n CLOSED = 'CLOSE', _('closed')\n\n\nclass DebianLogIndex(models.Model):\n log_id = models.AutoField(primary_key=True)\n ident = models.IntegerField(blank=True, null=True)\n kind = models.CharField(max_length=3,\n choices=IssueKind.choices,\n blank=True,\n null=True,\n db_column='type')\n project = models.CharField(max_length=255, blank=True, null=True)\n description = models.CharField(max_length=255, 
blank=True, null=True)\n log_stamp = models.DateTimeField(blank=True, null=True)\n event = models.CharField(max_length=5, choices=EventKind.choices)\n event_stamp = models.DateTimeField(blank=True, null=True)\n\n class Meta:\n db_table = 'debian_log_index'\n\n\nclass DebianLogMods(models.Model):\n log = OneToOneField(DebianLogIndex,\n on_delete=CASCADE,\n primary_key=True,\n related_name='kind_change',\n related_query_name='kind_change')\n old_kind = models.CharField(max_length=3,\n choices=IssueKind.choices,\n blank=True,\n null=True,\n db_column='before_type')\n new_kind = models.CharField(max_length=3,\n choices=IssueKind.choices,\n blank=True,\n null=True,\n db_column='after_type')\n\n class Meta:\n db_table = 'debian_log_mods'\n\n\nclass DebianPopcon(models.Model):\n package = models.CharField(primary_key=True, max_length=255)\n inst = models.IntegerField(blank=True, null=True)\n vote = models.IntegerField(blank=True, null=True)\n old = models.IntegerField(blank=True, null=True)\n recent = models.IntegerField(blank=True, null=True)\n nofiles = models.IntegerField(blank=True, null=True)\n\n class Meta:\n db_table = 'debian_popcon'\n\n\nclass DebianWnpp(models.Model):\n ident = models.IntegerField(primary_key=True)\n open_person = models.CharField(max_length=255, blank=True, null=True)\n open_stamp = models.DateTimeField(blank=True, null=True)\n mod_stamp = models.DateTimeField(blank=True, null=True)\n kind = models.CharField(max_length=3, choices=IssueKind.choices, db_column='type')\n # Original was: project = models.CharField(max_length=255, blank=True, null=True)\n popcon = ForeignKey(DebianPopcon, on_delete=DO_NOTHING, null=True, db_column='project')\n description = models.CharField(max_length=255, blank=True, null=True)\n charge_person = models.CharField(max_length=255, blank=True, null=True)\n cron_stamp = models.DateTimeField()\n has_smaller_sibling = models.BooleanField(default=False)\n\n class Meta:\n db_table = 'debian_wnpp'\n\n def age_days(self, until=None) -> int:\n if until is None:\n until = now()\n return (until - self.open_stamp).days\n\n def dust_days(self, until=None) -> int:\n if until is None:\n until = now()\n return (until - self.mod_stamp).days\n\n def has_existing_package(self):\n return IssueKind(self.kind) not in (IssueKind.ITP, IssueKind.RFP)\n\n def __lt__(self, other):\n return self.ident < other.ident\n", "path": "wnpp_debian_net/models.py"}]} | 1,829 | 265 |
gh_patches_debug_10515 | rasdani/github-patches | git_diff | bokeh__bokeh-4686 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Draw Legend After WebGL glyphs
<img width="906" alt="screen shot 2016-06-24 at 6 50 29 pm" src="https://cloud.githubusercontent.com/assets/433221/16357521/c6f00a1c-3abe-11e6-8835-0e4bb17550d4.png">
--- END ISSUE ---
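A minimal reproduction sketch, assuming the same `webgl=True` and `legend=` arguments used in the example file below; the data values are arbitrary:

```python
from bokeh.plotting import figure, show, output_file

p = figure(title="Legend over WebGL glyphs", webgl=True)
p.circle([1, 2, 3], [4, 5, 6], size=20, fill_alpha=0.3, legend='circles')
p.diamond([1, 2, 3], [6, 5, 4], size=20, fill_alpha=0.3, legend='diamonds')

output_file("legend_webgl.html")
show(p)
```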
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `examples/webgl/iris_blend.py`
Content:
```
1 """ The iris dataset, drawn twice with semi-transparent markers. This is
2 an interesting use-case to test blending, because several samples itself
3 overlap, and by drawing the set twice with different colors, we realize
4 even more interesting blending. Also note how this makes use of
5 different ways to specify (css) colors. This example is a good reference
6 to test WebGL blending.
7
8 """
9
10 from bokeh.plotting import figure, show, output_file
11 from bokeh.sampledata.iris import flowers
12
13 colormap1 = {'setosa': 'rgb(255, 0, 0)',
14 'versicolor': 'rgb(0, 255, 0)',
15 'virginica': 'rgb(0, 0, 255)'}
16 colors1 = [colormap1[x] for x in flowers['species']]
17
18 colormap2 = {'setosa': '#0f0', 'versicolor': '#0f0', 'virginica': '#f00'}
19 colors2 = [colormap2[x] for x in flowers['species']]
20
21 p = figure(title = "Iris Morphology", webgl=True)
22 p.xaxis.axis_label = 'Petal Length'
23 p.yaxis.axis_label = 'Petal Width'
24
25 p.diamond(flowers["petal_length"], flowers["petal_width"],
26 color=colors1, line_alpha=0.5, fill_alpha=0.2, size=25)
27
28 p.circle(flowers["petal_length"], flowers["petal_width"],
29 color=colors2, line_alpha=0.5, fill_alpha=0.2, size=10)
30
31 output_file("iris_blend.html", title="iris_blend.py example")
32
33 show(p)
34
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/examples/webgl/iris_blend.py b/examples/webgl/iris_blend.py
--- a/examples/webgl/iris_blend.py
+++ b/examples/webgl/iris_blend.py
@@ -23,10 +23,10 @@
p.yaxis.axis_label = 'Petal Width'
p.diamond(flowers["petal_length"], flowers["petal_width"],
- color=colors1, line_alpha=0.5, fill_alpha=0.2, size=25)
+ color=colors1, line_alpha=0.5, fill_alpha=0.2, size=25, legend='diamonds')
p.circle(flowers["petal_length"], flowers["petal_width"],
- color=colors2, line_alpha=0.5, fill_alpha=0.2, size=10)
+ color=colors2, line_alpha=0.5, fill_alpha=0.2, size=10, legend='circles')
output_file("iris_blend.html", title="iris_blend.py example")
| {"golden_diff": "diff --git a/examples/webgl/iris_blend.py b/examples/webgl/iris_blend.py\n--- a/examples/webgl/iris_blend.py\n+++ b/examples/webgl/iris_blend.py\n@@ -23,10 +23,10 @@\n p.yaxis.axis_label = 'Petal Width'\n \n p.diamond(flowers[\"petal_length\"], flowers[\"petal_width\"],\n- color=colors1, line_alpha=0.5, fill_alpha=0.2, size=25)\n+ color=colors1, line_alpha=0.5, fill_alpha=0.2, size=25, legend='diamonds')\n \n p.circle(flowers[\"petal_length\"], flowers[\"petal_width\"],\n- color=colors2, line_alpha=0.5, fill_alpha=0.2, size=10)\n+ color=colors2, line_alpha=0.5, fill_alpha=0.2, size=10, legend='circles')\n \n output_file(\"iris_blend.html\", title=\"iris_blend.py example\")\n", "issue": "Draw Legend After WebGL glyphs\n<img width=\"906\" alt=\"screen shot 2016-06-24 at 6 50 29 pm\" src=\"https://cloud.githubusercontent.com/assets/433221/16357521/c6f00a1c-3abe-11e6-8835-0e4bb17550d4.png\">\n\n", "before_files": [{"content": "\"\"\" The iris dataset, drawn twice with semi-transparent markers. This is\nan interesting use-case to test blending, because several samples itself\noverlap, and by drawing the set twice with different colors, we realize\neven more interesting blending. Also note how this makes use of\ndifferent ways to specify (css) colors. This example is a good reference\nto test WebGL blending.\n\n\"\"\"\n\nfrom bokeh.plotting import figure, show, output_file\nfrom bokeh.sampledata.iris import flowers\n\ncolormap1 = {'setosa': 'rgb(255, 0, 0)',\n 'versicolor': 'rgb(0, 255, 0)',\n 'virginica': 'rgb(0, 0, 255)'}\ncolors1 = [colormap1[x] for x in flowers['species']]\n\ncolormap2 = {'setosa': '#0f0', 'versicolor': '#0f0', 'virginica': '#f00'}\ncolors2 = [colormap2[x] for x in flowers['species']]\n\np = figure(title = \"Iris Morphology\", webgl=True)\np.xaxis.axis_label = 'Petal Length'\np.yaxis.axis_label = 'Petal Width'\n\np.diamond(flowers[\"petal_length\"], flowers[\"petal_width\"],\n color=colors1, line_alpha=0.5, fill_alpha=0.2, size=25)\n\np.circle(flowers[\"petal_length\"], flowers[\"petal_width\"],\n color=colors2, line_alpha=0.5, fill_alpha=0.2, size=10)\n\noutput_file(\"iris_blend.html\", title=\"iris_blend.py example\")\n\nshow(p)\n", "path": "examples/webgl/iris_blend.py"}], "after_files": [{"content": "\"\"\" The iris dataset, drawn twice with semi-transparent markers. This is\nan interesting use-case to test blending, because several samples itself\noverlap, and by drawing the set twice with different colors, we realize\neven more interesting blending. Also note how this makes use of\ndifferent ways to specify (css) colors. 
This example is a good reference\nto test WebGL blending.\n\n\"\"\"\n\nfrom bokeh.plotting import figure, show, output_file\nfrom bokeh.sampledata.iris import flowers\n\ncolormap1 = {'setosa': 'rgb(255, 0, 0)',\n 'versicolor': 'rgb(0, 255, 0)',\n 'virginica': 'rgb(0, 0, 255)'}\ncolors1 = [colormap1[x] for x in flowers['species']]\n\ncolormap2 = {'setosa': '#0f0', 'versicolor': '#0f0', 'virginica': '#f00'}\ncolors2 = [colormap2[x] for x in flowers['species']]\n\np = figure(title = \"Iris Morphology\", webgl=True)\np.xaxis.axis_label = 'Petal Length'\np.yaxis.axis_label = 'Petal Width'\n\np.diamond(flowers[\"petal_length\"], flowers[\"petal_width\"],\n color=colors1, line_alpha=0.5, fill_alpha=0.2, size=25, legend='diamonds')\n\np.circle(flowers[\"petal_length\"], flowers[\"petal_width\"],\n color=colors2, line_alpha=0.5, fill_alpha=0.2, size=10, legend='circles')\n\noutput_file(\"iris_blend.html\", title=\"iris_blend.py example\")\n\nshow(p)\n", "path": "examples/webgl/iris_blend.py"}]} | 783 | 227 |
gh_patches_debug_15391 | rasdani/github-patches | git_diff | mlflow__mlflow-5627 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add `MlflowException.invalid_parameter_value`
## Motivation
We frequently construct an `MlflowException` instance with `error_code=INVALID_PARAMETER_VALUE`:
```python
from mlflow.exceptions import MlflowException
from mlflow.protos.databricks_pb2 import INVALID_PARAMETER_VALUE
raise MlflowException(
"error message",
error_code=INVALID_PARAMETER_VALUE,
)
```
If we had a class method `invalid_parameter_value`:
```python
class MlflowException(...):
@classmethod
def invalid_parameter_value(cls, message, **kwargs):
return cls(message, error_code=INVALID_PARAMETER_VALUE, **kwargs)
```
we could simplify the code above to:
```python
from mlflow.exceptions import MlflowException
raise MlflowException.invalid_parameter_value("error message")
```
which is shorter and has fewer import statements.
## Notes
- We don't need to replace existing `MlflowException("error message", error_code=INVALID_PARAMETER_VALUE)` calls for now (we may in the future).
--- END ISSUE ---
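Assuming the proposed classmethod is added, a quick usage sketch consistent with the error-code mappings in `mlflow/exceptions.py` below (the message text is made up):

```python
from mlflow.exceptions import MlflowException

exc = MlflowException.invalid_parameter_value("`max_depth` must be a positive integer")
assert exc.error_code == "INVALID_PARAMETER_VALUE"
assert exc.get_http_status_code() == 400  # INVALID_PARAMETER_VALUE maps to HTTP 400
```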
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mlflow/exceptions.py`
Content:
```
1 import json
2
3 from mlflow.protos.databricks_pb2 import (
4 INTERNAL_ERROR,
5 TEMPORARILY_UNAVAILABLE,
6 ENDPOINT_NOT_FOUND,
7 PERMISSION_DENIED,
8 REQUEST_LIMIT_EXCEEDED,
9 BAD_REQUEST,
10 INVALID_PARAMETER_VALUE,
11 RESOURCE_DOES_NOT_EXIST,
12 INVALID_STATE,
13 RESOURCE_ALREADY_EXISTS,
14 ErrorCode,
15 )
16
17 ERROR_CODE_TO_HTTP_STATUS = {
18 ErrorCode.Name(INTERNAL_ERROR): 500,
19 ErrorCode.Name(INVALID_STATE): 500,
20 ErrorCode.Name(TEMPORARILY_UNAVAILABLE): 503,
21 ErrorCode.Name(REQUEST_LIMIT_EXCEEDED): 429,
22 ErrorCode.Name(ENDPOINT_NOT_FOUND): 404,
23 ErrorCode.Name(RESOURCE_DOES_NOT_EXIST): 404,
24 ErrorCode.Name(PERMISSION_DENIED): 403,
25 ErrorCode.Name(BAD_REQUEST): 400,
26 ErrorCode.Name(RESOURCE_ALREADY_EXISTS): 400,
27 ErrorCode.Name(INVALID_PARAMETER_VALUE): 400,
28 }
29
30
31 class MlflowException(Exception):
32 """
33 Generic exception thrown to surface failure information about external-facing operations.
34 The error message associated with this exception may be exposed to clients in HTTP responses
35 for debugging purposes. If the error text is sensitive, raise a generic `Exception` object
36 instead.
37 """
38
39 def __init__(self, message, error_code=INTERNAL_ERROR, **kwargs):
40 """
41 :param message: The message describing the error that occured. This will be included in the
42 exception's serialized JSON representation.
43 :param error_code: An appropriate error code for the error that occured; it will be included
44 in the exception's serialized JSON representation. This should be one of
45 the codes listed in the `mlflow.protos.databricks_pb2` proto.
46 :param kwargs: Additional key-value pairs to include in the serialized JSON representation
47 of the MlflowException.
48 """
49 try:
50 self.error_code = ErrorCode.Name(error_code)
51 except (ValueError, TypeError):
52 self.error_code = ErrorCode.Name(INTERNAL_ERROR)
53 self.message = message
54 self.json_kwargs = kwargs
55 super().__init__(message)
56
57 def serialize_as_json(self):
58 exception_dict = {"error_code": self.error_code, "message": self.message}
59 exception_dict.update(self.json_kwargs)
60 return json.dumps(exception_dict)
61
62 def get_http_status_code(self):
63 return ERROR_CODE_TO_HTTP_STATUS.get(self.error_code, 500)
64
65
66 class RestException(MlflowException):
67 """Exception thrown on non 200-level responses from the REST API"""
68
69 def __init__(self, json):
70 error_code = json.get("error_code", ErrorCode.Name(INTERNAL_ERROR))
71 message = "%s: %s" % (
72 error_code,
73 json["message"] if "message" in json else "Response: " + str(json),
74 )
75 super().__init__(message, error_code=ErrorCode.Value(error_code))
76 self.json = json
77
78
79 class ExecutionException(MlflowException):
80 """Exception thrown when executing a project fails"""
81
82 pass
83
84
85 class MissingConfigException(MlflowException):
86 """Exception thrown when expected configuration file/directory not found"""
87
88 pass
89
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mlflow/exceptions.py b/mlflow/exceptions.py
--- a/mlflow/exceptions.py
+++ b/mlflow/exceptions.py
@@ -62,6 +62,18 @@
def get_http_status_code(self):
return ERROR_CODE_TO_HTTP_STATUS.get(self.error_code, 500)
+ @classmethod
+ def invalid_parameter_value(cls, message, **kwargs):
+ """
+ Constructs an `MlflowException` object with the `INVALID_PARAMETER_VALUE` error code.
+
+ :param message: The message describing the error that occured. This will be included in the
+ exception's serialized JSON representation.
+ :param kwargs: Additional key-value pairs to include in the serialized JSON representation
+ of the MlflowException.
+ """
+ return cls(message, error_code=INVALID_PARAMETER_VALUE, **kwargs)
+
class RestException(MlflowException):
"""Exception thrown on non 200-level responses from the REST API"""
| {"golden_diff": "diff --git a/mlflow/exceptions.py b/mlflow/exceptions.py\n--- a/mlflow/exceptions.py\n+++ b/mlflow/exceptions.py\n@@ -62,6 +62,18 @@\n def get_http_status_code(self):\n return ERROR_CODE_TO_HTTP_STATUS.get(self.error_code, 500)\n \n+ @classmethod\n+ def invalid_parameter_value(cls, message, **kwargs):\n+ \"\"\"\n+ Constructs an `MlflowException` object with the `INVALID_PARAMETER_VALUE` error code.\n+\n+ :param message: The message describing the error that occured. This will be included in the\n+ exception's serialized JSON representation.\n+ :param kwargs: Additional key-value pairs to include in the serialized JSON representation\n+ of the MlflowException.\n+ \"\"\"\n+ return cls(message, error_code=INVALID_PARAMETER_VALUE, **kwargs)\n+\n \n class RestException(MlflowException):\n \"\"\"Exception thrown on non 200-level responses from the REST API\"\"\"\n", "issue": "Add `MlflowException.invalid_parameter_value`\n## Motivation\r\n\r\nWe frequently construct an `MlflowException` instance with `error_code=INVALID_PARAMETER_VALUE`:\r\n\r\n\r\n```python\r\nimport mlflow.exceptions from MlflowException\r\nfrom mlflow.protos.databricks_pb2 import INVALID_PARAMETER_VALUE\r\n\r\nraise MlflowException(\r\n \"error message\",\r\n error_code=INVALID_PARAMETER_VALUE,\r\n)\r\n```\r\n\r\nIf we had a class method `invalid_parameter_value`:\r\n\r\n```python\r\nclass MlflowException(...):\r\n @classmethod\r\n def invalid_parameter_value(cls, message, **kwargs):\r\n return cls(message, error_code=INVALID_PARAMETER_VALUE, **kwargs)\r\n```\r\n\r\nwe could simplify the code above to:\r\n\r\n```python\r\nimport mlflow.exceptions from MlflowException\r\n\r\nraise MlflowException.invalid_parameter_value(\"error message\")\r\n```\r\n\r\nwhich is shorter and has fewer import statements.\r\n\r\n## Notes\r\n\r\n- We don't need to replace existing `MlflowException(\"error message\",error_code=INVALID_PARAMETER_VALUE)` for now (we may in the future).\n", "before_files": [{"content": "import json\n\nfrom mlflow.protos.databricks_pb2 import (\n INTERNAL_ERROR,\n TEMPORARILY_UNAVAILABLE,\n ENDPOINT_NOT_FOUND,\n PERMISSION_DENIED,\n REQUEST_LIMIT_EXCEEDED,\n BAD_REQUEST,\n INVALID_PARAMETER_VALUE,\n RESOURCE_DOES_NOT_EXIST,\n INVALID_STATE,\n RESOURCE_ALREADY_EXISTS,\n ErrorCode,\n)\n\nERROR_CODE_TO_HTTP_STATUS = {\n ErrorCode.Name(INTERNAL_ERROR): 500,\n ErrorCode.Name(INVALID_STATE): 500,\n ErrorCode.Name(TEMPORARILY_UNAVAILABLE): 503,\n ErrorCode.Name(REQUEST_LIMIT_EXCEEDED): 429,\n ErrorCode.Name(ENDPOINT_NOT_FOUND): 404,\n ErrorCode.Name(RESOURCE_DOES_NOT_EXIST): 404,\n ErrorCode.Name(PERMISSION_DENIED): 403,\n ErrorCode.Name(BAD_REQUEST): 400,\n ErrorCode.Name(RESOURCE_ALREADY_EXISTS): 400,\n ErrorCode.Name(INVALID_PARAMETER_VALUE): 400,\n}\n\n\nclass MlflowException(Exception):\n \"\"\"\n Generic exception thrown to surface failure information about external-facing operations.\n The error message associated with this exception may be exposed to clients in HTTP responses\n for debugging purposes. If the error text is sensitive, raise a generic `Exception` object\n instead.\n \"\"\"\n\n def __init__(self, message, error_code=INTERNAL_ERROR, **kwargs):\n \"\"\"\n :param message: The message describing the error that occured. This will be included in the\n exception's serialized JSON representation.\n :param error_code: An appropriate error code for the error that occured; it will be included\n in the exception's serialized JSON representation. 
This should be one of\n the codes listed in the `mlflow.protos.databricks_pb2` proto.\n :param kwargs: Additional key-value pairs to include in the serialized JSON representation\n of the MlflowException.\n \"\"\"\n try:\n self.error_code = ErrorCode.Name(error_code)\n except (ValueError, TypeError):\n self.error_code = ErrorCode.Name(INTERNAL_ERROR)\n self.message = message\n self.json_kwargs = kwargs\n super().__init__(message)\n\n def serialize_as_json(self):\n exception_dict = {\"error_code\": self.error_code, \"message\": self.message}\n exception_dict.update(self.json_kwargs)\n return json.dumps(exception_dict)\n\n def get_http_status_code(self):\n return ERROR_CODE_TO_HTTP_STATUS.get(self.error_code, 500)\n\n\nclass RestException(MlflowException):\n \"\"\"Exception thrown on non 200-level responses from the REST API\"\"\"\n\n def __init__(self, json):\n error_code = json.get(\"error_code\", ErrorCode.Name(INTERNAL_ERROR))\n message = \"%s: %s\" % (\n error_code,\n json[\"message\"] if \"message\" in json else \"Response: \" + str(json),\n )\n super().__init__(message, error_code=ErrorCode.Value(error_code))\n self.json = json\n\n\nclass ExecutionException(MlflowException):\n \"\"\"Exception thrown when executing a project fails\"\"\"\n\n pass\n\n\nclass MissingConfigException(MlflowException):\n \"\"\"Exception thrown when expected configuration file/directory not found\"\"\"\n\n pass\n", "path": "mlflow/exceptions.py"}], "after_files": [{"content": "import json\n\nfrom mlflow.protos.databricks_pb2 import (\n INTERNAL_ERROR,\n TEMPORARILY_UNAVAILABLE,\n ENDPOINT_NOT_FOUND,\n PERMISSION_DENIED,\n REQUEST_LIMIT_EXCEEDED,\n BAD_REQUEST,\n INVALID_PARAMETER_VALUE,\n RESOURCE_DOES_NOT_EXIST,\n INVALID_STATE,\n RESOURCE_ALREADY_EXISTS,\n ErrorCode,\n)\n\nERROR_CODE_TO_HTTP_STATUS = {\n ErrorCode.Name(INTERNAL_ERROR): 500,\n ErrorCode.Name(INVALID_STATE): 500,\n ErrorCode.Name(TEMPORARILY_UNAVAILABLE): 503,\n ErrorCode.Name(REQUEST_LIMIT_EXCEEDED): 429,\n ErrorCode.Name(ENDPOINT_NOT_FOUND): 404,\n ErrorCode.Name(RESOURCE_DOES_NOT_EXIST): 404,\n ErrorCode.Name(PERMISSION_DENIED): 403,\n ErrorCode.Name(BAD_REQUEST): 400,\n ErrorCode.Name(RESOURCE_ALREADY_EXISTS): 400,\n ErrorCode.Name(INVALID_PARAMETER_VALUE): 400,\n}\n\n\nclass MlflowException(Exception):\n \"\"\"\n Generic exception thrown to surface failure information about external-facing operations.\n The error message associated with this exception may be exposed to clients in HTTP responses\n for debugging purposes. If the error text is sensitive, raise a generic `Exception` object\n instead.\n \"\"\"\n\n def __init__(self, message, error_code=INTERNAL_ERROR, **kwargs):\n \"\"\"\n :param message: The message describing the error that occured. This will be included in the\n exception's serialized JSON representation.\n :param error_code: An appropriate error code for the error that occured; it will be included\n in the exception's serialized JSON representation. 
This should be one of\n the codes listed in the `mlflow.protos.databricks_pb2` proto.\n :param kwargs: Additional key-value pairs to include in the serialized JSON representation\n of the MlflowException.\n \"\"\"\n try:\n self.error_code = ErrorCode.Name(error_code)\n except (ValueError, TypeError):\n self.error_code = ErrorCode.Name(INTERNAL_ERROR)\n self.message = message\n self.json_kwargs = kwargs\n super().__init__(message)\n\n def serialize_as_json(self):\n exception_dict = {\"error_code\": self.error_code, \"message\": self.message}\n exception_dict.update(self.json_kwargs)\n return json.dumps(exception_dict)\n\n def get_http_status_code(self):\n return ERROR_CODE_TO_HTTP_STATUS.get(self.error_code, 500)\n\n @classmethod\n def invalid_parameter_value(cls, message, **kwargs):\n \"\"\"\n Constructs an `MlflowException` object with the `INVALID_PARAMETER_VALUE` error code.\n\n :param message: The message describing the error that occured. This will be included in the\n exception's serialized JSON representation.\n :param kwargs: Additional key-value pairs to include in the serialized JSON representation\n of the MlflowException.\n \"\"\"\n return cls(message, error_code=INVALID_PARAMETER_VALUE, **kwargs)\n\n\nclass RestException(MlflowException):\n \"\"\"Exception thrown on non 200-level responses from the REST API\"\"\"\n\n def __init__(self, json):\n error_code = json.get(\"error_code\", ErrorCode.Name(INTERNAL_ERROR))\n message = \"%s: %s\" % (\n error_code,\n json[\"message\"] if \"message\" in json else \"Response: \" + str(json),\n )\n super().__init__(message, error_code=ErrorCode.Value(error_code))\n self.json = json\n\n\nclass ExecutionException(MlflowException):\n \"\"\"Exception thrown when executing a project fails\"\"\"\n\n pass\n\n\nclass MissingConfigException(MlflowException):\n \"\"\"Exception thrown when expected configuration file/directory not found\"\"\"\n\n pass\n", "path": "mlflow/exceptions.py"}]} | 1,340 | 216 |
gh_patches_debug_11292 | rasdani/github-patches | git_diff | optuna__optuna-889 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Use OrderedDict to keep the order of intermediate values.
This PR addresses #886.
I think it has two issues.
1. `dict` does not keep the order of keys
The current implementation of `FrozenTrial.intermediate_values` employs `dict`. The order of keys is not ensured if the users use Python 3.6 or older. This PR uses `OrderedDict` for `FrozenTrial.intermediate_values`.
2. RDBs do not ensure the order of results if a query has no `order by` clause.
This PR arranges the intermediate values when they are retrieved from databases.
Please note that the results are not necessarily ordered by primary keys when queries do not contain `order by`. (c.f., ['When no 'Order by' is specified, what order does a query choose for your record set?'](https://stackoverflow.com/questions/20050341/when-no-order-by-is-specified-what-order-does-a-query-choose-for-your-record))
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `optuna/visualization/intermediate_values.py`
Content:
```
1 from optuna.logging import get_logger
2 from optuna.structs import TrialState
3 from optuna import type_checking
4 from optuna.visualization.utils import _check_plotly_availability
5 from optuna.visualization.utils import is_available
6
7 if type_checking.TYPE_CHECKING:
8 from optuna.study import Study # NOQA
9
10 if is_available():
11 from optuna.visualization.plotly_imports import go
12
13 logger = get_logger(__name__)
14
15
16 def plot_intermediate_values(study):
17 # type: (Study) -> go.Figure
18 """Plot intermediate values of all trials in a study.
19
20 Example:
21
22 The following code snippet shows how to plot intermediate values.
23
24 .. code::
25
26 import optuna
27
28 def objective(trial):
29 # Intermediate values are supposed to be reported inside the objective function.
30 ...
31
32 study = optuna.create_study()
33 study.optimize(objective, n_trials=100)
34
35 optuna.visualization.plot_intermediate_values(study)
36
37 Args:
38 study:
39 A :class:`~optuna.study.Study` object whose trials are plotted for their intermediate
40 values.
41
42 Returns:
43 A :class:`plotly.graph_objs.Figure` object.
44 """
45
46 _check_plotly_availability()
47 return _get_intermediate_plot(study)
48
49
50 def _get_intermediate_plot(study):
51 # type: (Study) -> go.Figure
52
53 layout = go.Layout(
54 title='Intermediate Values Plot',
55 xaxis={'title': 'Step'},
56 yaxis={'title': 'Intermediate Value'},
57 showlegend=False
58 )
59
60 target_state = [TrialState.PRUNED, TrialState.COMPLETE, TrialState.RUNNING]
61 trials = [trial for trial in study.trials if trial.state in target_state]
62
63 if len(trials) == 0:
64 logger.warning('Study instance does not contain trials.')
65 return go.Figure(data=[], layout=layout)
66
67 traces = []
68 for trial in trials:
69 if trial.intermediate_values:
70 trace = go.Scatter(
71 x=tuple(trial.intermediate_values.keys()),
72 y=tuple(trial.intermediate_values.values()),
73 mode='lines+markers',
74 marker={
75 'maxdisplayed': 10
76 },
77 name='Trial{}'.format(trial.number)
78 )
79 traces.append(trace)
80
81 if not traces:
82 logger.warning(
83 'You need to set up the pruning feature to utilize `plot_intermediate_values()`')
84 return go.Figure(data=[], layout=layout)
85
86 figure = go.Figure(data=traces, layout=layout)
87
88 return figure
89
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/optuna/visualization/intermediate_values.py b/optuna/visualization/intermediate_values.py
--- a/optuna/visualization/intermediate_values.py
+++ b/optuna/visualization/intermediate_values.py
@@ -67,9 +67,10 @@
traces = []
for trial in trials:
if trial.intermediate_values:
+ sorted_intermediate_values = sorted(trial.intermediate_values.items())
trace = go.Scatter(
- x=tuple(trial.intermediate_values.keys()),
- y=tuple(trial.intermediate_values.values()),
+ x=tuple((x for x, _ in sorted_intermediate_values)),
+ y=tuple((y for _, y in sorted_intermediate_values)),
mode='lines+markers',
marker={
'maxdisplayed': 10
| {"golden_diff": "diff --git a/optuna/visualization/intermediate_values.py b/optuna/visualization/intermediate_values.py\n--- a/optuna/visualization/intermediate_values.py\n+++ b/optuna/visualization/intermediate_values.py\n@@ -67,9 +67,10 @@\n traces = []\n for trial in trials:\n if trial.intermediate_values:\n+ sorted_intermediate_values = sorted(trial.intermediate_values.items())\n trace = go.Scatter(\n- x=tuple(trial.intermediate_values.keys()),\n- y=tuple(trial.intermediate_values.values()),\n+ x=tuple((x for x, _ in sorted_intermediate_values)),\n+ y=tuple((y for _, y in sorted_intermediate_values)),\n mode='lines+markers',\n marker={\n 'maxdisplayed': 10\n", "issue": "Use OrderedDict to keep the order of intermediate values.\nThis PR addresses #886. \r\n\r\nI think it has two issues.\r\n\r\n1. `dict` does not keep the order of keys\r\n\r\nThe current implementation of `FrozenTrial.intermediate_values` employs `dict`. The order of keys is not ensured if the users use Python 3.6 or older. This PR uses `OrderedDict` for `FrozenTrial.intermediate_values`.\r\n\r\n2. RDBs do not ensure the order of results if a query has no `order by` clause.\r\n\r\nThis PR arranges the intermediate values when they are retrieved from databases.\r\nPlease note that the results are not necessarily ordered by primary keys when queries do not contain `order by`. (c.f., ['When no 'Order by' is specified, what order does a query choose for your record set?'](https://stackoverflow.com/questions/20050341/when-no-order-by-is-specified-what-order-does-a-query-choose-for-your-record))\n", "before_files": [{"content": "from optuna.logging import get_logger\nfrom optuna.structs import TrialState\nfrom optuna import type_checking\nfrom optuna.visualization.utils import _check_plotly_availability\nfrom optuna.visualization.utils import is_available\n\nif type_checking.TYPE_CHECKING:\n from optuna.study import Study # NOQA\n\nif is_available():\n from optuna.visualization.plotly_imports import go\n\nlogger = get_logger(__name__)\n\n\ndef plot_intermediate_values(study):\n # type: (Study) -> go.Figure\n \"\"\"Plot intermediate values of all trials in a study.\n\n Example:\n\n The following code snippet shows how to plot intermediate values.\n\n .. 
code::\n\n import optuna\n\n def objective(trial):\n # Intermediate values are supposed to be reported inside the objective function.\n ...\n\n study = optuna.create_study()\n study.optimize(objective, n_trials=100)\n\n optuna.visualization.plot_intermediate_values(study)\n\n Args:\n study:\n A :class:`~optuna.study.Study` object whose trials are plotted for their intermediate\n values.\n\n Returns:\n A :class:`plotly.graph_objs.Figure` object.\n \"\"\"\n\n _check_plotly_availability()\n return _get_intermediate_plot(study)\n\n\ndef _get_intermediate_plot(study):\n # type: (Study) -> go.Figure\n\n layout = go.Layout(\n title='Intermediate Values Plot',\n xaxis={'title': 'Step'},\n yaxis={'title': 'Intermediate Value'},\n showlegend=False\n )\n\n target_state = [TrialState.PRUNED, TrialState.COMPLETE, TrialState.RUNNING]\n trials = [trial for trial in study.trials if trial.state in target_state]\n\n if len(trials) == 0:\n logger.warning('Study instance does not contain trials.')\n return go.Figure(data=[], layout=layout)\n\n traces = []\n for trial in trials:\n if trial.intermediate_values:\n trace = go.Scatter(\n x=tuple(trial.intermediate_values.keys()),\n y=tuple(trial.intermediate_values.values()),\n mode='lines+markers',\n marker={\n 'maxdisplayed': 10\n },\n name='Trial{}'.format(trial.number)\n )\n traces.append(trace)\n\n if not traces:\n logger.warning(\n 'You need to set up the pruning feature to utilize `plot_intermediate_values()`')\n return go.Figure(data=[], layout=layout)\n\n figure = go.Figure(data=traces, layout=layout)\n\n return figure\n", "path": "optuna/visualization/intermediate_values.py"}], "after_files": [{"content": "from optuna.logging import get_logger\nfrom optuna.structs import TrialState\nfrom optuna import type_checking\nfrom optuna.visualization.utils import _check_plotly_availability\nfrom optuna.visualization.utils import is_available\n\nif type_checking.TYPE_CHECKING:\n from optuna.study import Study # NOQA\n\nif is_available():\n from optuna.visualization.plotly_imports import go\n\nlogger = get_logger(__name__)\n\n\ndef plot_intermediate_values(study):\n # type: (Study) -> go.Figure\n \"\"\"Plot intermediate values of all trials in a study.\n\n Example:\n\n The following code snippet shows how to plot intermediate values.\n\n .. 
code::\n\n import optuna\n\n def objective(trial):\n # Intermediate values are supposed to be reported inside the objective function.\n ...\n\n study = optuna.create_study()\n study.optimize(objective, n_trials=100)\n\n optuna.visualization.plot_intermediate_values(study)\n\n Args:\n study:\n A :class:`~optuna.study.Study` object whose trials are plotted for their intermediate\n values.\n\n Returns:\n A :class:`plotly.graph_objs.Figure` object.\n \"\"\"\n\n _check_plotly_availability()\n return _get_intermediate_plot(study)\n\n\ndef _get_intermediate_plot(study):\n # type: (Study) -> go.Figure\n\n layout = go.Layout(\n title='Intermediate Values Plot',\n xaxis={'title': 'Step'},\n yaxis={'title': 'Intermediate Value'},\n showlegend=False\n )\n\n target_state = [TrialState.PRUNED, TrialState.COMPLETE, TrialState.RUNNING]\n trials = [trial for trial in study.trials if trial.state in target_state]\n\n if len(trials) == 0:\n logger.warning('Study instance does not contain trials.')\n return go.Figure(data=[], layout=layout)\n\n traces = []\n for trial in trials:\n if trial.intermediate_values:\n sorted_intermediate_values = sorted(trial.intermediate_values.items())\n trace = go.Scatter(\n x=tuple((x for x, _ in sorted_intermediate_values)),\n y=tuple((y for _, y in sorted_intermediate_values)),\n mode='lines+markers',\n marker={\n 'maxdisplayed': 10\n },\n name='Trial{}'.format(trial.number)\n )\n traces.append(trace)\n\n if not traces:\n logger.warning(\n 'You need to set up the pruning feature to utilize `plot_intermediate_values()`')\n return go.Figure(data=[], layout=layout)\n\n figure = go.Figure(data=traces, layout=layout)\n\n return figure\n", "path": "optuna/visualization/intermediate_values.py"}]} | 1,204 | 177 |
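The ordering technique applied in the optuna golden diff above can be exercised in isolation. This is a minimal sketch with made-up step/value pairs and no optuna dependency: it sorts a step-to-value mapping before splitting it into the x/y tuples the plot consumes, mirroring `sorted(trial.intermediate_values.items())` in the patch.

```python
# Hypothetical intermediate values that arrive out of order (e.g. from an RDB
# query without ORDER BY, or from a dict on Pythons that do not keep key order).
intermediate_values = {3: 0.9, 1: 0.5, 2: 0.7}

sorted_items = sorted(intermediate_values.items())      # sort by step
x = tuple(step for step, _ in sorted_items)
y = tuple(value for _, value in sorted_items)

assert x == (1, 2, 3)
assert y == (0.5, 0.7, 0.9)
```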
gh_patches_debug_21961 | rasdani/github-patches | git_diff | pantsbuild__pants-6405 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
The python_requirements macro retains no handle to the requirements file.
This means changes in the requirements file do not result in changes in a live target graph (pantsd).
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/python/pants/backend/python/python_requirements.py`
Content:
```
1 # coding=utf-8
2 # Copyright 2014 Pants project contributors (see CONTRIBUTORS.md).
3 # Licensed under the Apache License, Version 2.0 (see LICENSE).
4
5 from __future__ import absolute_import, division, print_function, unicode_literals
6
7 import os
8 from builtins import object, open
9
10
11 class PythonRequirements(object):
12 """Translates a pip requirements file into an equivalent set of python_requirements
13
14 If the ``requirements.txt`` file has lines ``foo>=3.14`` and ``bar>=2.7``,
15 then this is roughly like::
16
17 python_requirement_library(name="foo", requirements=[
18 python_requirement("foo>=3.14"),
19 ])
20 python_requirement_library(name="bar", requirements=[
21 python_requirement("bar>=2.7"),
22 ])
23
24 NB some requirements files can't be unambiguously translated; ie: multiple
25 find links. For these files a ValueError will be raised that points out the issue.
26
27 See the requirements file spec here:
28 https://pip.pypa.io/en/latest/reference/pip_install.html#requirements-file-format
29 """
30
31 def __init__(self, parse_context):
32 self._parse_context = parse_context
33
34 def __call__(self, requirements_relpath='requirements.txt'):
35 """
36 :param string requirements_relpath: The relpath from this BUILD file to the requirements file.
37 Defaults to a `requirements.txt` file sibling to the BUILD file.
38 """
39
40 requirements = []
41 repository = None
42
43 requirements_path = os.path.join(self._parse_context.rel_path, requirements_relpath)
44 with open(requirements_path, 'r') as fp:
45 for line in fp:
46 line = line.strip()
47 if line and not line.startswith('#'):
48 if not line.startswith('-'):
49 requirements.append(line)
50 else:
51 # handle flags we know about
52 flag_value = line.split(' ', 1)
53 if len(flag_value) == 2:
54 flag = flag_value[0].strip()
55 value = flag_value[1].strip()
56 if flag in ('-f', '--find-links'):
57 if repository is not None:
58 raise ValueError('Only 1 --find-links url is supported per requirements file')
59 repository = value
60
61 for requirement in requirements:
62 req = self._parse_context.create_object('python_requirement', requirement,
63 repository=repository)
64 self._parse_context.create_object('python_requirement_library',
65 name=req.project_name,
66 requirements=[req])
67
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/python/pants/backend/python/python_requirements.py b/src/python/pants/backend/python/python_requirements.py
--- a/src/python/pants/backend/python/python_requirements.py
+++ b/src/python/pants/backend/python/python_requirements.py
@@ -58,9 +58,16 @@
raise ValueError('Only 1 --find-links url is supported per requirements file')
repository = value
+ requirements_file_target_name = requirements_relpath
+ self._parse_context.create_object('files',
+ name=requirements_file_target_name,
+ sources=[requirements_relpath])
+ requirements_dep = ':{}'.format(requirements_file_target_name)
+
for requirement in requirements:
req = self._parse_context.create_object('python_requirement', requirement,
repository=repository)
self._parse_context.create_object('python_requirement_library',
name=req.project_name,
- requirements=[req])
+ requirements=[req],
+ dependencies=[requirements_dep])
| {"golden_diff": "diff --git a/src/python/pants/backend/python/python_requirements.py b/src/python/pants/backend/python/python_requirements.py\n--- a/src/python/pants/backend/python/python_requirements.py\n+++ b/src/python/pants/backend/python/python_requirements.py\n@@ -58,9 +58,16 @@\n raise ValueError('Only 1 --find-links url is supported per requirements file')\n repository = value\n \n+ requirements_file_target_name = requirements_relpath\n+ self._parse_context.create_object('files',\n+ name=requirements_file_target_name,\n+ sources=[requirements_relpath])\n+ requirements_dep = ':{}'.format(requirements_file_target_name)\n+\n for requirement in requirements:\n req = self._parse_context.create_object('python_requirement', requirement,\n repository=repository)\n self._parse_context.create_object('python_requirement_library',\n name=req.project_name,\n- requirements=[req])\n+ requirements=[req],\n+ dependencies=[requirements_dep])\n", "issue": "The python_requirements macro retains no handle to the requirements file.\nThis means changes in the requirements file do not result in changes in a live target graph (pantsd).\n", "before_files": [{"content": "# coding=utf-8\n# Copyright 2014 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport os\nfrom builtins import object, open\n\n\nclass PythonRequirements(object):\n \"\"\"Translates a pip requirements file into an equivalent set of python_requirements\n\n If the ``requirements.txt`` file has lines ``foo>=3.14`` and ``bar>=2.7``,\n then this is roughly like::\n\n python_requirement_library(name=\"foo\", requirements=[\n python_requirement(\"foo>=3.14\"),\n ])\n python_requirement_library(name=\"bar\", requirements=[\n python_requirement(\"bar>=2.7\"),\n ])\n\n NB some requirements files can't be unambiguously translated; ie: multiple\n find links. 
For these files a ValueError will be raised that points out the issue.\n\n See the requirements file spec here:\n https://pip.pypa.io/en/latest/reference/pip_install.html#requirements-file-format\n \"\"\"\n\n def __init__(self, parse_context):\n self._parse_context = parse_context\n\n def __call__(self, requirements_relpath='requirements.txt'):\n \"\"\"\n :param string requirements_relpath: The relpath from this BUILD file to the requirements file.\n Defaults to a `requirements.txt` file sibling to the BUILD file.\n \"\"\"\n\n requirements = []\n repository = None\n\n requirements_path = os.path.join(self._parse_context.rel_path, requirements_relpath)\n with open(requirements_path, 'r') as fp:\n for line in fp:\n line = line.strip()\n if line and not line.startswith('#'):\n if not line.startswith('-'):\n requirements.append(line)\n else:\n # handle flags we know about\n flag_value = line.split(' ', 1)\n if len(flag_value) == 2:\n flag = flag_value[0].strip()\n value = flag_value[1].strip()\n if flag in ('-f', '--find-links'):\n if repository is not None:\n raise ValueError('Only 1 --find-links url is supported per requirements file')\n repository = value\n\n for requirement in requirements:\n req = self._parse_context.create_object('python_requirement', requirement,\n repository=repository)\n self._parse_context.create_object('python_requirement_library',\n name=req.project_name,\n requirements=[req])\n", "path": "src/python/pants/backend/python/python_requirements.py"}], "after_files": [{"content": "# coding=utf-8\n# Copyright 2014 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport os\nfrom builtins import object, open\n\n\nclass PythonRequirements(object):\n \"\"\"Translates a pip requirements file into an equivalent set of python_requirements\n\n If the ``requirements.txt`` file has lines ``foo>=3.14`` and ``bar>=2.7``,\n then this is roughly like::\n\n python_requirement_library(name=\"foo\", requirements=[\n python_requirement(\"foo>=3.14\"),\n ])\n python_requirement_library(name=\"bar\", requirements=[\n python_requirement(\"bar>=2.7\"),\n ])\n\n NB some requirements files can't be unambiguously translated; ie: multiple\n find links. 
For these files a ValueError will be raised that points out the issue.\n\n See the requirements file spec here:\n https://pip.pypa.io/en/latest/reference/pip_install.html#requirements-file-format\n \"\"\"\n\n def __init__(self, parse_context):\n self._parse_context = parse_context\n\n def __call__(self, requirements_relpath='requirements.txt'):\n \"\"\"\n :param string requirements_relpath: The relpath from this BUILD file to the requirements file.\n Defaults to a `requirements.txt` file sibling to the BUILD file.\n \"\"\"\n\n requirements = []\n repository = None\n\n requirements_path = os.path.join(self._parse_context.rel_path, requirements_relpath)\n with open(requirements_path, 'r') as fp:\n for line in fp:\n line = line.strip()\n if line and not line.startswith('#'):\n if not line.startswith('-'):\n requirements.append(line)\n else:\n # handle flags we know about\n flag_value = line.split(' ', 1)\n if len(flag_value) == 2:\n flag = flag_value[0].strip()\n value = flag_value[1].strip()\n if flag in ('-f', '--find-links'):\n if repository is not None:\n raise ValueError('Only 1 --find-links url is supported per requirements file')\n repository = value\n\n requirements_file_target_name = requirements_relpath\n self._parse_context.create_object('files',\n name=requirements_file_target_name,\n sources=[requirements_relpath])\n requirements_dep = ':{}'.format(requirements_file_target_name)\n\n for requirement in requirements:\n req = self._parse_context.create_object('python_requirement', requirement,\n repository=repository)\n self._parse_context.create_object('python_requirement_library',\n name=req.project_name,\n requirements=[req],\n dependencies=[requirements_dep])\n", "path": "src/python/pants/backend/python/python_requirements.py"}]} | 958 | 202 |
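The pants record above centers on the `python_requirements` macro; its actual fix (registering a `files` target so pantsd invalidates when `requirements.txt` changes) only runs inside a Pants BUILD context, but the requirements parsing it performs can be reproduced standalone. The sketch below re-creates that parsing pass over an in-memory list instead of a real file; the entries and URL are placeholders.

```python
# Standalone re-creation of the macro's parsing loop (illustrative only).
lines = [
    "# pinned deps",
    "foo>=3.14",
    "--find-links https://example.com/wheels",
    "bar>=2.7",
]

requirements = []
repository = None
for line in lines:
    line = line.strip()
    if not line or line.startswith("#"):
        continue
    if not line.startswith("-"):
        requirements.append(line)
        continue
    flag, _, value = line.partition(" ")
    if flag in ("-f", "--find-links"):
        if repository is not None:
            raise ValueError("Only 1 --find-links url is supported per requirements file")
        repository = value.strip()

assert requirements == ["foo>=3.14", "bar>=2.7"]
assert repository == "https://example.com/wheels"
```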
gh_patches_debug_780 | rasdani/github-patches | git_diff | graspologic-org__graspologic-428 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
update requirements to scipy>=1.4
Scipy 1.4
- has much faster linear assignment problem, making FAQ way faster
- has MGC, which we eventually want for new nonpar, signal subgraph
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 import os
2 import sys
3 from setuptools import setup, find_packages
4 from sys import platform
5
6 PACKAGE_NAME = "graspy"
7 DESCRIPTION = "A set of python modules for graph statistics"
8 with open("README.md", "r") as f:
9 LONG_DESCRIPTION = f.read()
10 AUTHOR = ("Eric Bridgeford, Jaewon Chung, Benjamin Pedigo, Bijan Varjavand",)
11 AUTHOR_EMAIL = "[email protected]"
12 URL = "https://github.com/neurodata/graspy"
13 MINIMUM_PYTHON_VERSION = 3, 6 # Minimum of Python 3.5
14 REQUIRED_PACKAGES = [
15 "networkx>=2.1",
16 "numpy>=1.8.1",
17 "scikit-learn>=0.19.1",
18 "scipy>=1.1.0",
19 "seaborn>=0.9.0",
20 "matplotlib>=3.0.0",
21 "hyppo>=0.1.3",
22 ]
23
24
25 # Find GraSPy version.
26 PROJECT_PATH = os.path.dirname(os.path.abspath(__file__))
27 for line in open(os.path.join(PROJECT_PATH, "graspy", "__init__.py")):
28 if line.startswith("__version__ = "):
29 VERSION = line.strip().split()[2][1:-1]
30
31
32 def check_python_version():
33 """Exit when the Python version is too low."""
34 if sys.version_info < MINIMUM_PYTHON_VERSION:
35 sys.exit("Python {}.{}+ is required.".format(*MINIMUM_PYTHON_VERSION))
36
37
38 check_python_version()
39
40 setup(
41 name=PACKAGE_NAME,
42 version=VERSION,
43 description=DESCRIPTION,
44 long_description=LONG_DESCRIPTION,
45 long_description_content_type="text/markdown",
46 author=AUTHOR,
47 author_email=AUTHOR_EMAIL,
48 install_requires=REQUIRED_PACKAGES,
49 url=URL,
50 license="Apache License 2.0",
51 classifiers=[
52 "Development Status :: 3 - Alpha",
53 "Intended Audience :: Science/Research",
54 "Topic :: Scientific/Engineering :: Mathematics",
55 "License :: OSI Approved :: Apache Software License",
56 "Programming Language :: Python :: 3",
57 "Programming Language :: Python :: 3.6",
58 "Programming Language :: Python :: 3.7",
59 ],
60 packages=find_packages(),
61 include_package_data=True,
62 )
63
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -15,7 +15,7 @@
"networkx>=2.1",
"numpy>=1.8.1",
"scikit-learn>=0.19.1",
- "scipy>=1.1.0",
+ "scipy>=1.4.0",
"seaborn>=0.9.0",
"matplotlib>=3.0.0",
"hyppo>=0.1.3",
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -15,7 +15,7 @@\n \"networkx>=2.1\",\n \"numpy>=1.8.1\",\n \"scikit-learn>=0.19.1\",\n- \"scipy>=1.1.0\",\n+ \"scipy>=1.4.0\",\n \"seaborn>=0.9.0\",\n \"matplotlib>=3.0.0\",\n \"hyppo>=0.1.3\",\n", "issue": "update requirements to scipy>=1.4\nScipy 1.4\r\n- has much faster linear assignment problem, making FAQ way faster\r\n- has MGC, which we eventually want for new nonpar, signal subgraph\n", "before_files": [{"content": "import os\nimport sys\nfrom setuptools import setup, find_packages\nfrom sys import platform\n\nPACKAGE_NAME = \"graspy\"\nDESCRIPTION = \"A set of python modules for graph statistics\"\nwith open(\"README.md\", \"r\") as f:\n LONG_DESCRIPTION = f.read()\nAUTHOR = (\"Eric Bridgeford, Jaewon Chung, Benjamin Pedigo, Bijan Varjavand\",)\nAUTHOR_EMAIL = \"[email protected]\"\nURL = \"https://github.com/neurodata/graspy\"\nMINIMUM_PYTHON_VERSION = 3, 6 # Minimum of Python 3.5\nREQUIRED_PACKAGES = [\n \"networkx>=2.1\",\n \"numpy>=1.8.1\",\n \"scikit-learn>=0.19.1\",\n \"scipy>=1.1.0\",\n \"seaborn>=0.9.0\",\n \"matplotlib>=3.0.0\",\n \"hyppo>=0.1.3\",\n]\n\n\n# Find GraSPy version.\nPROJECT_PATH = os.path.dirname(os.path.abspath(__file__))\nfor line in open(os.path.join(PROJECT_PATH, \"graspy\", \"__init__.py\")):\n if line.startswith(\"__version__ = \"):\n VERSION = line.strip().split()[2][1:-1]\n\n\ndef check_python_version():\n \"\"\"Exit when the Python version is too low.\"\"\"\n if sys.version_info < MINIMUM_PYTHON_VERSION:\n sys.exit(\"Python {}.{}+ is required.\".format(*MINIMUM_PYTHON_VERSION))\n\n\ncheck_python_version()\n\nsetup(\n name=PACKAGE_NAME,\n version=VERSION,\n description=DESCRIPTION,\n long_description=LONG_DESCRIPTION,\n long_description_content_type=\"text/markdown\",\n author=AUTHOR,\n author_email=AUTHOR_EMAIL,\n install_requires=REQUIRED_PACKAGES,\n url=URL,\n license=\"Apache License 2.0\",\n classifiers=[\n \"Development Status :: 3 - Alpha\",\n \"Intended Audience :: Science/Research\",\n \"Topic :: Scientific/Engineering :: Mathematics\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n ],\n packages=find_packages(),\n include_package_data=True,\n)\n", "path": "setup.py"}], "after_files": [{"content": "import os\nimport sys\nfrom setuptools import setup, find_packages\nfrom sys import platform\n\nPACKAGE_NAME = \"graspy\"\nDESCRIPTION = \"A set of python modules for graph statistics\"\nwith open(\"README.md\", \"r\") as f:\n LONG_DESCRIPTION = f.read()\nAUTHOR = (\"Eric Bridgeford, Jaewon Chung, Benjamin Pedigo, Bijan Varjavand\",)\nAUTHOR_EMAIL = \"[email protected]\"\nURL = \"https://github.com/neurodata/graspy\"\nMINIMUM_PYTHON_VERSION = 3, 6 # Minimum of Python 3.5\nREQUIRED_PACKAGES = [\n \"networkx>=2.1\",\n \"numpy>=1.8.1\",\n \"scikit-learn>=0.19.1\",\n \"scipy>=1.4.0\",\n \"seaborn>=0.9.0\",\n \"matplotlib>=3.0.0\",\n \"hyppo>=0.1.3\",\n]\n\n\n# Find GraSPy version.\nPROJECT_PATH = os.path.dirname(os.path.abspath(__file__))\nfor line in open(os.path.join(PROJECT_PATH, \"graspy\", \"__init__.py\")):\n if line.startswith(\"__version__ = \"):\n VERSION = line.strip().split()[2][1:-1]\n\n\ndef check_python_version():\n \"\"\"Exit when the Python version is too low.\"\"\"\n if sys.version_info < MINIMUM_PYTHON_VERSION:\n sys.exit(\"Python {}.{}+ is 
required.\".format(*MINIMUM_PYTHON_VERSION))\n\n\ncheck_python_version()\n\nsetup(\n name=PACKAGE_NAME,\n version=VERSION,\n description=DESCRIPTION,\n long_description=LONG_DESCRIPTION,\n long_description_content_type=\"text/markdown\",\n author=AUTHOR,\n author_email=AUTHOR_EMAIL,\n install_requires=REQUIRED_PACKAGES,\n url=URL,\n license=\"Apache License 2.0\",\n classifiers=[\n \"Development Status :: 3 - Alpha\",\n \"Intended Audience :: Science/Research\",\n \"Topic :: Scientific/Engineering :: Mathematics\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n ],\n packages=find_packages(),\n include_package_data=True,\n)\n", "path": "setup.py"}]} | 919 | 122 |
gh_patches_debug_1137 | rasdani/github-patches | git_diff | hylang__hy-1201 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
fix setup.py
at least hy.extra is missing from package data
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2 # Copyright (c) 2012, 2013 Paul Tagliamonte <[email protected]>
3 #
4 # Permission is hereby granted, free of charge, to any person obtaining a
5 # copy of this software and associated documentation files (the "Software"),
6 # to deal in the Software without restriction, including without limitation
7 # the rights to use, copy, modify, merge, publish, distribute, sublicense,
8 # and/or sell copies of the Software, and to permit persons to whom the
9 # Software is furnished to do so, subject to the following conditions:
10 #
11 # The above copyright notice and this permission notice shall be included in
12 # all copies or substantial portions of the Software.
13 #
14 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
15 # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
16 # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
17 # THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
18 # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
19 # FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
20 # DEALINGS IN THE SOFTWARE.
21
22 import os
23 import re
24 import sys
25 import subprocess
26
27 from setuptools import find_packages, setup
28
29 os.chdir(os.path.split(os.path.abspath(__file__))[0])
30
31 PKG = "hy"
32 VERSIONFILE = os.path.join(PKG, "version.py")
33 try:
34 __version__ = (subprocess.check_output
35 (["git", "describe", "--tags", "--dirty"])
36 .decode('ASCII').strip()
37 .replace('-', '+', 1).replace('-', '.'))
38 with open(VERSIONFILE, "wt") as o:
39 o.write("__version__ = {!r}\n".format(__version__))
40 except subprocess.CalledProcessError:
41 __version__ = "unknown"
42
43 long_description = """Hy is a Python <--> Lisp layer. It helps
44 make things work nicer, and lets Python and the Hy lisp variant play
45 nice together. """
46
47 install_requires = ['rply>=0.7.0', 'astor>=0.5', 'clint>=0.4']
48 if sys.version_info[:2] < (2, 7):
49 install_requires.append('argparse>=1.2.1')
50 install_requires.append('importlib>=1.0.2')
51 if os.name == 'nt':
52 install_requires.append('pyreadline>=2.1')
53
54 ver = sys.version_info[0]
55
56 setup(
57 name=PKG,
58 version=__version__,
59 install_requires=install_requires,
60 entry_points={
61 'console_scripts': [
62 'hy = hy.cmdline:hy_main',
63 'hy%d = hy.cmdline:hy_main' % ver,
64 'hyc = hy.cmdline:hyc_main',
65 'hyc%d = hy.cmdline:hyc_main' % ver,
66 'hy2py = hy.cmdline:hy2py_main',
67 'hy2py%d = hy.cmdline:hy2py_main' % ver,
68 ]
69 },
70 packages=find_packages(exclude=['tests*']),
71 package_data={
72 'hy.contrib': ['*.hy'],
73 'hy.core': ['*.hy'],
74 },
75 author="Paul Tagliamonte",
76 author_email="[email protected]",
77 long_description=long_description,
78 description='Lisp and Python love each other.',
79 license="Expat",
80 url="http://hylang.org/",
81 platforms=['any'],
82 classifiers=[
83 "Development Status :: 4 - Beta",
84 "Intended Audience :: Developers",
85 "License :: DFSG approved",
86 "License :: OSI Approved :: MIT License", # Really "Expat". Ugh.
87 "Operating System :: OS Independent",
88 "Programming Language :: Lisp",
89 "Programming Language :: Python",
90 "Programming Language :: Python :: 2",
91 "Programming Language :: Python :: 2.6",
92 "Programming Language :: Python :: 2.7",
93 "Programming Language :: Python :: 3",
94 "Programming Language :: Python :: 3.3",
95 "Programming Language :: Python :: 3.4",
96 "Programming Language :: Python :: 3.5",
97 "Topic :: Software Development :: Code Generators",
98 "Topic :: Software Development :: Compilers",
99 "Topic :: Software Development :: Libraries",
100 ]
101 )
102
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -71,6 +71,7 @@
package_data={
'hy.contrib': ['*.hy'],
'hy.core': ['*.hy'],
+ 'hy.extra': ['*.hy'],
},
author="Paul Tagliamonte",
author_email="[email protected]",
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -71,6 +71,7 @@\n package_data={\n 'hy.contrib': ['*.hy'],\n 'hy.core': ['*.hy'],\n+ 'hy.extra': ['*.hy'],\n },\n author=\"Paul Tagliamonte\",\n author_email=\"[email protected]\",\n", "issue": "fix setup.py\nat least hy.extra is missing from package data\n", "before_files": [{"content": "#!/usr/bin/env python\n# Copyright (c) 2012, 2013 Paul Tagliamonte <[email protected]>\n#\n# Permission is hereby granted, free of charge, to any person obtaining a\n# copy of this software and associated documentation files (the \"Software\"),\n# to deal in the Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish, distribute, sublicense,\n# and/or sell copies of the Software, and to permit persons to whom the\n# Software is furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL\n# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n# DEALINGS IN THE SOFTWARE.\n\nimport os\nimport re\nimport sys\nimport subprocess\n\nfrom setuptools import find_packages, setup\n\nos.chdir(os.path.split(os.path.abspath(__file__))[0])\n\nPKG = \"hy\"\nVERSIONFILE = os.path.join(PKG, \"version.py\")\ntry:\n __version__ = (subprocess.check_output\n ([\"git\", \"describe\", \"--tags\", \"--dirty\"])\n .decode('ASCII').strip()\n .replace('-', '+', 1).replace('-', '.'))\n with open(VERSIONFILE, \"wt\") as o:\n o.write(\"__version__ = {!r}\\n\".format(__version__))\nexcept subprocess.CalledProcessError:\n __version__ = \"unknown\"\n\nlong_description = \"\"\"Hy is a Python <--> Lisp layer. It helps\nmake things work nicer, and lets Python and the Hy lisp variant play\nnice together. \"\"\"\n\ninstall_requires = ['rply>=0.7.0', 'astor>=0.5', 'clint>=0.4']\nif sys.version_info[:2] < (2, 7):\n install_requires.append('argparse>=1.2.1')\n install_requires.append('importlib>=1.0.2')\nif os.name == 'nt':\n install_requires.append('pyreadline>=2.1')\n\nver = sys.version_info[0]\n\nsetup(\n name=PKG,\n version=__version__,\n install_requires=install_requires,\n entry_points={\n 'console_scripts': [\n 'hy = hy.cmdline:hy_main',\n 'hy%d = hy.cmdline:hy_main' % ver,\n 'hyc = hy.cmdline:hyc_main',\n 'hyc%d = hy.cmdline:hyc_main' % ver,\n 'hy2py = hy.cmdline:hy2py_main',\n 'hy2py%d = hy.cmdline:hy2py_main' % ver,\n ]\n },\n packages=find_packages(exclude=['tests*']),\n package_data={\n 'hy.contrib': ['*.hy'],\n 'hy.core': ['*.hy'],\n },\n author=\"Paul Tagliamonte\",\n author_email=\"[email protected]\",\n long_description=long_description,\n description='Lisp and Python love each other.',\n license=\"Expat\",\n url=\"http://hylang.org/\",\n platforms=['any'],\n classifiers=[\n \"Development Status :: 4 - Beta\",\n \"Intended Audience :: Developers\",\n \"License :: DFSG approved\",\n \"License :: OSI Approved :: MIT License\", # Really \"Expat\". 
Ugh.\n \"Operating System :: OS Independent\",\n \"Programming Language :: Lisp\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.6\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.3\",\n \"Programming Language :: Python :: 3.4\",\n \"Programming Language :: Python :: 3.5\",\n \"Topic :: Software Development :: Code Generators\",\n \"Topic :: Software Development :: Compilers\",\n \"Topic :: Software Development :: Libraries\",\n ]\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# Copyright (c) 2012, 2013 Paul Tagliamonte <[email protected]>\n#\n# Permission is hereby granted, free of charge, to any person obtaining a\n# copy of this software and associated documentation files (the \"Software\"),\n# to deal in the Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish, distribute, sublicense,\n# and/or sell copies of the Software, and to permit persons to whom the\n# Software is furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL\n# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n# DEALINGS IN THE SOFTWARE.\n\nimport os\nimport re\nimport sys\nimport subprocess\n\nfrom setuptools import find_packages, setup\n\nos.chdir(os.path.split(os.path.abspath(__file__))[0])\n\nPKG = \"hy\"\nVERSIONFILE = os.path.join(PKG, \"version.py\")\ntry:\n __version__ = (subprocess.check_output\n ([\"git\", \"describe\", \"--tags\", \"--dirty\"])\n .decode('ASCII').strip()\n .replace('-', '+', 1).replace('-', '.'))\n with open(VERSIONFILE, \"wt\") as o:\n o.write(\"__version__ = {!r}\\n\".format(__version__))\nexcept subprocess.CalledProcessError:\n __version__ = \"unknown\"\n\nlong_description = \"\"\"Hy is a Python <--> Lisp layer. It helps\nmake things work nicer, and lets Python and the Hy lisp variant play\nnice together. 
\"\"\"\n\ninstall_requires = ['rply>=0.7.0', 'astor>=0.5', 'clint>=0.4']\nif sys.version_info[:2] < (2, 7):\n install_requires.append('argparse>=1.2.1')\n install_requires.append('importlib>=1.0.2')\nif os.name == 'nt':\n install_requires.append('pyreadline>=2.1')\n\nver = sys.version_info[0]\n\nsetup(\n name=PKG,\n version=__version__,\n install_requires=install_requires,\n entry_points={\n 'console_scripts': [\n 'hy = hy.cmdline:hy_main',\n 'hy%d = hy.cmdline:hy_main' % ver,\n 'hyc = hy.cmdline:hyc_main',\n 'hyc%d = hy.cmdline:hyc_main' % ver,\n 'hy2py = hy.cmdline:hy2py_main',\n 'hy2py%d = hy.cmdline:hy2py_main' % ver,\n ]\n },\n packages=find_packages(exclude=['tests*']),\n package_data={\n 'hy.contrib': ['*.hy'],\n 'hy.core': ['*.hy'],\n 'hy.extra': ['*.hy'],\n },\n author=\"Paul Tagliamonte\",\n author_email=\"[email protected]\",\n long_description=long_description,\n description='Lisp and Python love each other.',\n license=\"Expat\",\n url=\"http://hylang.org/\",\n platforms=['any'],\n classifiers=[\n \"Development Status :: 4 - Beta\",\n \"Intended Audience :: Developers\",\n \"License :: DFSG approved\",\n \"License :: OSI Approved :: MIT License\", # Really \"Expat\". Ugh.\n \"Operating System :: OS Independent\",\n \"Programming Language :: Lisp\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.6\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.3\",\n \"Programming Language :: Python :: 3.4\",\n \"Programming Language :: Python :: 3.5\",\n \"Topic :: Software Development :: Code Generators\",\n \"Topic :: Software Development :: Compilers\",\n \"Topic :: Software Development :: Libraries\",\n ]\n)\n", "path": "setup.py"}]} | 1,409 | 85 |
gh_patches_debug_1145 | rasdani/github-patches | git_diff | cocotb__cocotb-1298 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Change setup.py to list the version as 1.x-dev for versions installed from github
As suggested by @themperek, it would be neat if cocotb behaved like this:
```
> pip install git+https://github.com/cocotb/cocotb
> python -c "import cocotb; print(cocotb.__version__)"
1.4.0-dev
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `cocotb/_version.py`
Content:
```
1 # Package versioning solution originally found here:
2 # http://stackoverflow.com/q/458550
3
4 # Store the version here so:
5 # 1) we don't load dependencies by storing it in __init__.py
6 # 2) we can import it in setup.py for the same reason
7 # 3) we can import it into your module
8 __version__ = '1.3.0'
9
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/cocotb/_version.py b/cocotb/_version.py
--- a/cocotb/_version.py
+++ b/cocotb/_version.py
@@ -5,4 +5,4 @@
# 1) we don't load dependencies by storing it in __init__.py
# 2) we can import it in setup.py for the same reason
# 3) we can import it into your module
-__version__ = '1.3.0'
+__version__ = '1.4.0.dev0'
| {"golden_diff": "diff --git a/cocotb/_version.py b/cocotb/_version.py\n--- a/cocotb/_version.py\n+++ b/cocotb/_version.py\n@@ -5,4 +5,4 @@\n # 1) we don't load dependencies by storing it in __init__.py\n # 2) we can import it in setup.py for the same reason\n # 3) we can import it into your module\n-__version__ = '1.3.0'\n+__version__ = '1.4.0.dev0'\n", "issue": "Change setup.py to list the version as 1.x-dev for versions installed from github\nAs suggested by @themperek, it would be neat if cocotb behaved like this:\r\n```\r\n> pip install git+https://github.com/cocotb/cocotb\r\n> python -c \"import cocotb; print(cocotb.__version__)\"\r\n1.4.0-dev\r\n```\n", "before_files": [{"content": "# Package versioning solution originally found here:\n# http://stackoverflow.com/q/458550\n\n# Store the version here so:\n# 1) we don't load dependencies by storing it in __init__.py\n# 2) we can import it in setup.py for the same reason\n# 3) we can import it into your module\n__version__ = '1.3.0'\n", "path": "cocotb/_version.py"}], "after_files": [{"content": "# Package versioning solution originally found here:\n# http://stackoverflow.com/q/458550\n\n# Store the version here so:\n# 1) we don't load dependencies by storing it in __init__.py\n# 2) we can import it in setup.py for the same reason\n# 3) we can import it into your module\n__version__ = '1.4.0.dev0'\n", "path": "cocotb/_version.py"}]} | 443 | 122 |
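The cocotb golden diff above uses the PEP 440 spelling `1.4.0.dev0`, which is the normalized form of the `1.4.0-dev` string requested in the issue. The sketch below shows how such a dev version sorts relative to releases; it assumes the third-party `packaging` library is available, which is not part of cocotb itself.

```python
# How a ".dev0" version compares under PEP 440 (requires the 'packaging' library).
from packaging.version import Version

assert Version("1.3.0") < Version("1.4.0.dev0") < Version("1.4.0")
print(Version("1.4.0.dev0").is_devrelease)  # True
```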
gh_patches_debug_13157 | rasdani/github-patches | git_diff | doccano__doccano-2201 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Include Example number in the Project Comments view
Feature description
---------
On the backend API and in the dataset annotation interface, the Comments are associated with specific Examples. But in the Project Comments view, the Example association is unclear--all the comments are grouped together.
Can the Project Comments view tab be improved to detail Examples, maybe even sort or group by Example?
Thanks!
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `backend/examples/views/comment.py`
Content:
```
1 from django_filters.rest_framework import DjangoFilterBackend
2 from rest_framework import filters, generics, status
3 from rest_framework.permissions import IsAuthenticated
4 from rest_framework.response import Response
5
6 from examples.models import Comment
7 from examples.permissions import IsOwnComment
8 from examples.serializers import CommentSerializer
9 from projects.permissions import IsProjectMember
10
11
12 class CommentList(generics.ListCreateAPIView):
13 permission_classes = [IsAuthenticated & IsProjectMember]
14 serializer_class = CommentSerializer
15 filter_backends = (DjangoFilterBackend, filters.SearchFilter)
16 filterset_fields = ["example"]
17 search_fields = ("text",)
18
19 def get_queryset(self):
20 queryset = Comment.objects.filter(example__project_id=self.kwargs["project_id"])
21 return queryset
22
23 def perform_create(self, serializer):
24 serializer.save(example_id=self.request.query_params.get("example"), user=self.request.user)
25
26 def delete(self, request, *args, **kwargs):
27 delete_ids = request.data["ids"]
28 Comment.objects.filter(user=request.user, pk__in=delete_ids).delete()
29 return Response(status=status.HTTP_204_NO_CONTENT)
30
31
32 class CommentDetail(generics.RetrieveUpdateDestroyAPIView):
33 queryset = Comment.objects.all()
34 serializer_class = CommentSerializer
35 lookup_url_kwarg = "comment_id"
36 permission_classes = [IsAuthenticated & IsProjectMember & IsOwnComment]
37
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/backend/examples/views/comment.py b/backend/examples/views/comment.py
--- a/backend/examples/views/comment.py
+++ b/backend/examples/views/comment.py
@@ -12,9 +12,10 @@
class CommentList(generics.ListCreateAPIView):
permission_classes = [IsAuthenticated & IsProjectMember]
serializer_class = CommentSerializer
- filter_backends = (DjangoFilterBackend, filters.SearchFilter)
+ filter_backends = (DjangoFilterBackend, filters.SearchFilter, filters.OrderingFilter)
filterset_fields = ["example"]
search_fields = ("text",)
+ ordering_fields = ("created_at", "example")
def get_queryset(self):
queryset = Comment.objects.filter(example__project_id=self.kwargs["project_id"])
| {"golden_diff": "diff --git a/backend/examples/views/comment.py b/backend/examples/views/comment.py\n--- a/backend/examples/views/comment.py\n+++ b/backend/examples/views/comment.py\n@@ -12,9 +12,10 @@\n class CommentList(generics.ListCreateAPIView):\n permission_classes = [IsAuthenticated & IsProjectMember]\n serializer_class = CommentSerializer\n- filter_backends = (DjangoFilterBackend, filters.SearchFilter)\n+ filter_backends = (DjangoFilterBackend, filters.SearchFilter, filters.OrderingFilter)\n filterset_fields = [\"example\"]\n search_fields = (\"text\",)\n+ ordering_fields = (\"created_at\", \"example\")\n \n def get_queryset(self):\n queryset = Comment.objects.filter(example__project_id=self.kwargs[\"project_id\"])\n", "issue": "Include Example number in the Project Comments view\nFeature description\r\n---------\r\nOn the backend API and in the dataset annotation interface, the Comments are associated with specific Examples. But in the Project Comments view, the Example association is unclear--all the comments are grouped together.\r\n\r\nCan the Project Comments view tab be improved to detail Examples, maybe even sort or group by Example?\r\n\r\nThanks!\n", "before_files": [{"content": "from django_filters.rest_framework import DjangoFilterBackend\nfrom rest_framework import filters, generics, status\nfrom rest_framework.permissions import IsAuthenticated\nfrom rest_framework.response import Response\n\nfrom examples.models import Comment\nfrom examples.permissions import IsOwnComment\nfrom examples.serializers import CommentSerializer\nfrom projects.permissions import IsProjectMember\n\n\nclass CommentList(generics.ListCreateAPIView):\n permission_classes = [IsAuthenticated & IsProjectMember]\n serializer_class = CommentSerializer\n filter_backends = (DjangoFilterBackend, filters.SearchFilter)\n filterset_fields = [\"example\"]\n search_fields = (\"text\",)\n\n def get_queryset(self):\n queryset = Comment.objects.filter(example__project_id=self.kwargs[\"project_id\"])\n return queryset\n\n def perform_create(self, serializer):\n serializer.save(example_id=self.request.query_params.get(\"example\"), user=self.request.user)\n\n def delete(self, request, *args, **kwargs):\n delete_ids = request.data[\"ids\"]\n Comment.objects.filter(user=request.user, pk__in=delete_ids).delete()\n return Response(status=status.HTTP_204_NO_CONTENT)\n\n\nclass CommentDetail(generics.RetrieveUpdateDestroyAPIView):\n queryset = Comment.objects.all()\n serializer_class = CommentSerializer\n lookup_url_kwarg = \"comment_id\"\n permission_classes = [IsAuthenticated & IsProjectMember & IsOwnComment]\n", "path": "backend/examples/views/comment.py"}], "after_files": [{"content": "from django_filters.rest_framework import DjangoFilterBackend\nfrom rest_framework import filters, generics, status\nfrom rest_framework.permissions import IsAuthenticated\nfrom rest_framework.response import Response\n\nfrom examples.models import Comment\nfrom examples.permissions import IsOwnComment\nfrom examples.serializers import CommentSerializer\nfrom projects.permissions import IsProjectMember\n\n\nclass CommentList(generics.ListCreateAPIView):\n permission_classes = [IsAuthenticated & IsProjectMember]\n serializer_class = CommentSerializer\n filter_backends = (DjangoFilterBackend, filters.SearchFilter, filters.OrderingFilter)\n filterset_fields = [\"example\"]\n search_fields = (\"text\",)\n ordering_fields = (\"created_at\", \"example\")\n\n def get_queryset(self):\n queryset = 
Comment.objects.filter(example__project_id=self.kwargs[\"project_id\"])\n return queryset\n\n def perform_create(self, serializer):\n serializer.save(example_id=self.request.query_params.get(\"example\"), user=self.request.user)\n\n def delete(self, request, *args, **kwargs):\n delete_ids = request.data[\"ids\"]\n Comment.objects.filter(user=request.user, pk__in=delete_ids).delete()\n return Response(status=status.HTTP_204_NO_CONTENT)\n\n\nclass CommentDetail(generics.RetrieveUpdateDestroyAPIView):\n queryset = Comment.objects.all()\n serializer_class = CommentSerializer\n lookup_url_kwarg = \"comment_id\"\n permission_classes = [IsAuthenticated & IsProjectMember & IsOwnComment]\n", "path": "backend/examples/views/comment.py"}]} | 689 | 162 |
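With the doccano patch above, DRF's `OrderingFilter` is wired into the comment list endpoint, so clients can pass the standard `ordering` query parameter to sort by `example` or `created_at`. The request below is purely illustrative: the host, project id, route shape, and token are all placeholders rather than values taken from doccano.

```python
# Hypothetical client call exercising the new ordering support.
import requests

resp = requests.get(
    "http://localhost:8000/v1/projects/1/comments",
    params={"ordering": "example"},            # or "-created_at" for newest first
    headers={"Authorization": "Token <api-token>"},
)
print(resp.status_code)
print(resp.json())
```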
gh_patches_debug_17054 | rasdani/github-patches | git_diff | elastic__elasticsearch-py-206 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
max_retries is ignored when using thrift
connection/thrift.py does not catch socket.error in perform_request, which causes max_retries to be ignored in the transport.
One can argue that this socket.error exception should be translated into a TException in the official Thrift library, but I think it's better to have it included here as well.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `elasticsearch/connection/thrift.py`
Content:
```
1 from __future__ import absolute_import
2 from socket import timeout as SocketTimeout
3 import time
4 import logging
5
6 try:
7 from .esthrift import Rest
8 from .esthrift.ttypes import Method, RestRequest
9
10 from thrift.transport import TTransport, TSocket, TSSLSocket
11 from thrift.protocol import TBinaryProtocol
12 from thrift.Thrift import TException
13 THRIFT_AVAILABLE = True
14 except ImportError:
15 THRIFT_AVAILABLE = False
16
17 from ..exceptions import ConnectionError, ImproperlyConfigured, ConnectionTimeout
18 from .pooling import PoolingConnection
19
20 logger = logging.getLogger('elasticsearch')
21
22 class ThriftConnection(PoolingConnection):
23 """
24 Connection using the `thrift` protocol to communicate with elasticsearch.
25
26 See https://github.com/elasticsearch/elasticsearch-transport-thrift for additional info.
27 """
28 transport_schema = 'thrift'
29
30 def __init__(self, host='localhost', port=9500, framed_transport=False, use_ssl=False, **kwargs):
31 """
32 :arg framed_transport: use `TTransport.TFramedTransport` instead of
33 `TTransport.TBufferedTransport`
34 """
35 if not THRIFT_AVAILABLE:
36 raise ImproperlyConfigured("Thrift is not available.")
37
38 super(ThriftConnection, self).__init__(host=host, port=port, **kwargs)
39 self._framed_transport = framed_transport
40 self._tsocket_class = TSocket.TSocket
41 if use_ssl:
42 self._tsocket_class = TSSLSocket.TSSLSocket
43 self._tsocket_args = (host, port)
44
45 def _make_connection(self):
46 socket = self._tsocket_class(*self._tsocket_args)
47 socket.setTimeout(self.timeout * 1000.0)
48 if self._framed_transport:
49 transport = TTransport.TFramedTransport(socket)
50 else:
51 transport = TTransport.TBufferedTransport(socket)
52
53 protocol = TBinaryProtocol.TBinaryProtocolAccelerated(transport)
54 client = Rest.Client(protocol)
55 client.transport = transport
56 transport.open()
57 return client
58
59 def perform_request(self, method, url, params=None, body=None, timeout=None, ignore=()):
60 request = RestRequest(method=Method._NAMES_TO_VALUES[method.upper()], uri=url,
61 parameters=params, body=body)
62
63 start = time.time()
64 tclient = None
65 try:
66 tclient = self._get_connection()
67 response = tclient.execute(request)
68 duration = time.time() - start
69 except SocketTimeout as e:
70 self.log_request_fail(method, url, body, time.time() - start, exception=e)
71 raise ConnectionTimeout('TIMEOUT', str(e), e)
72 except (TException, SocketTimeout) as e:
73 self.log_request_fail(method, url, body, time.time() - start, exception=e)
74 if tclient:
75 try:
76 # try closing transport socket
77 tclient.transport.close()
78 except Exception as e:
79 logger.warning(
80 'Exception %s occured when closing a failed thrift connection.',
81 e, exc_info=True
82 )
83 raise ConnectionError('N/A', str(e), e)
84
85 self._release_connection(tclient)
86
87 if not (200 <= response.status < 300) and response.status not in ignore:
88 self.log_request_fail(method, url, body, duration, response.status)
89 self._raise_error(response.status, response.body)
90
91 self.log_request_success(method, url, url, body, response.status,
92 response.body, duration)
93
94 headers = {}
95 if response.headers:
96 headers = dict((k.lower(), v) for k, v in response.headers.items())
97 return response.status, headers, response.body or ''
98
99
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/elasticsearch/connection/thrift.py b/elasticsearch/connection/thrift.py
--- a/elasticsearch/connection/thrift.py
+++ b/elasticsearch/connection/thrift.py
@@ -1,5 +1,6 @@
from __future__ import absolute_import
from socket import timeout as SocketTimeout
+from socket import error as SocketError
import time
import logging
@@ -69,7 +70,7 @@
except SocketTimeout as e:
self.log_request_fail(method, url, body, time.time() - start, exception=e)
raise ConnectionTimeout('TIMEOUT', str(e), e)
- except (TException, SocketTimeout) as e:
+ except (TException, SocketError) as e:
self.log_request_fail(method, url, body, time.time() - start, exception=e)
if tclient:
try:
| {"golden_diff": "diff --git a/elasticsearch/connection/thrift.py b/elasticsearch/connection/thrift.py\n--- a/elasticsearch/connection/thrift.py\n+++ b/elasticsearch/connection/thrift.py\n@@ -1,5 +1,6 @@\n from __future__ import absolute_import\n from socket import timeout as SocketTimeout\n+from socket import error as SocketError\n import time\n import logging\n \n@@ -69,7 +70,7 @@\n except SocketTimeout as e:\n self.log_request_fail(method, url, body, time.time() - start, exception=e)\n raise ConnectionTimeout('TIMEOUT', str(e), e)\n- except (TException, SocketTimeout) as e:\n+ except (TException, SocketError) as e:\n self.log_request_fail(method, url, body, time.time() - start, exception=e)\n if tclient:\n try:\n", "issue": "max_retries is ignored when using thrift\nconnection/thrift.py does not catch socket.error in perform_request which cause max_retries to be ignored in the transport.\n\nOne can argue that this socket.error exception should be translated into a TException in the offical thrift-library but I think it's better to have it included here aswell.\n\n", "before_files": [{"content": "from __future__ import absolute_import\nfrom socket import timeout as SocketTimeout\nimport time\nimport logging\n\ntry:\n from .esthrift import Rest\n from .esthrift.ttypes import Method, RestRequest\n\n from thrift.transport import TTransport, TSocket, TSSLSocket\n from thrift.protocol import TBinaryProtocol\n from thrift.Thrift import TException\n THRIFT_AVAILABLE = True\nexcept ImportError:\n THRIFT_AVAILABLE = False\n\nfrom ..exceptions import ConnectionError, ImproperlyConfigured, ConnectionTimeout\nfrom .pooling import PoolingConnection\n\nlogger = logging.getLogger('elasticsearch')\n\nclass ThriftConnection(PoolingConnection):\n \"\"\"\n Connection using the `thrift` protocol to communicate with elasticsearch.\n\n See https://github.com/elasticsearch/elasticsearch-transport-thrift for additional info.\n \"\"\"\n transport_schema = 'thrift'\n\n def __init__(self, host='localhost', port=9500, framed_transport=False, use_ssl=False, **kwargs):\n \"\"\"\n :arg framed_transport: use `TTransport.TFramedTransport` instead of\n `TTransport.TBufferedTransport`\n \"\"\"\n if not THRIFT_AVAILABLE:\n raise ImproperlyConfigured(\"Thrift is not available.\")\n\n super(ThriftConnection, self).__init__(host=host, port=port, **kwargs)\n self._framed_transport = framed_transport\n self._tsocket_class = TSocket.TSocket\n if use_ssl:\n self._tsocket_class = TSSLSocket.TSSLSocket \n self._tsocket_args = (host, port)\n\n def _make_connection(self):\n socket = self._tsocket_class(*self._tsocket_args)\n socket.setTimeout(self.timeout * 1000.0)\n if self._framed_transport:\n transport = TTransport.TFramedTransport(socket)\n else:\n transport = TTransport.TBufferedTransport(socket)\n\n protocol = TBinaryProtocol.TBinaryProtocolAccelerated(transport)\n client = Rest.Client(protocol)\n client.transport = transport\n transport.open()\n return client\n\n def perform_request(self, method, url, params=None, body=None, timeout=None, ignore=()):\n request = RestRequest(method=Method._NAMES_TO_VALUES[method.upper()], uri=url,\n parameters=params, body=body)\n\n start = time.time()\n tclient = None\n try:\n tclient = self._get_connection()\n response = tclient.execute(request)\n duration = time.time() - start\n except SocketTimeout as e:\n self.log_request_fail(method, url, body, time.time() - start, exception=e)\n raise ConnectionTimeout('TIMEOUT', str(e), e)\n except (TException, SocketTimeout) as e:\n 
self.log_request_fail(method, url, body, time.time() - start, exception=e)\n if tclient:\n try:\n # try closing transport socket\n tclient.transport.close()\n except Exception as e:\n logger.warning(\n 'Exception %s occured when closing a failed thrift connection.',\n e, exc_info=True\n )\n raise ConnectionError('N/A', str(e), e)\n\n self._release_connection(tclient)\n\n if not (200 <= response.status < 300) and response.status not in ignore:\n self.log_request_fail(method, url, body, duration, response.status)\n self._raise_error(response.status, response.body)\n\n self.log_request_success(method, url, url, body, response.status,\n response.body, duration)\n\n headers = {}\n if response.headers:\n headers = dict((k.lower(), v) for k, v in response.headers.items())\n return response.status, headers, response.body or ''\n\n", "path": "elasticsearch/connection/thrift.py"}], "after_files": [{"content": "from __future__ import absolute_import\nfrom socket import timeout as SocketTimeout\nfrom socket import error as SocketError\nimport time\nimport logging\n\ntry:\n from .esthrift import Rest\n from .esthrift.ttypes import Method, RestRequest\n\n from thrift.transport import TTransport, TSocket, TSSLSocket\n from thrift.protocol import TBinaryProtocol\n from thrift.Thrift import TException\n THRIFT_AVAILABLE = True\nexcept ImportError:\n THRIFT_AVAILABLE = False\n\nfrom ..exceptions import ConnectionError, ImproperlyConfigured, ConnectionTimeout\nfrom .pooling import PoolingConnection\n\nlogger = logging.getLogger('elasticsearch')\n\nclass ThriftConnection(PoolingConnection):\n \"\"\"\n Connection using the `thrift` protocol to communicate with elasticsearch.\n\n See https://github.com/elasticsearch/elasticsearch-transport-thrift for additional info.\n \"\"\"\n transport_schema = 'thrift'\n\n def __init__(self, host='localhost', port=9500, framed_transport=False, use_ssl=False, **kwargs):\n \"\"\"\n :arg framed_transport: use `TTransport.TFramedTransport` instead of\n `TTransport.TBufferedTransport`\n \"\"\"\n if not THRIFT_AVAILABLE:\n raise ImproperlyConfigured(\"Thrift is not available.\")\n\n super(ThriftConnection, self).__init__(host=host, port=port, **kwargs)\n self._framed_transport = framed_transport\n self._tsocket_class = TSocket.TSocket\n if use_ssl:\n self._tsocket_class = TSSLSocket.TSSLSocket \n self._tsocket_args = (host, port)\n\n def _make_connection(self):\n socket = self._tsocket_class(*self._tsocket_args)\n socket.setTimeout(self.timeout * 1000.0)\n if self._framed_transport:\n transport = TTransport.TFramedTransport(socket)\n else:\n transport = TTransport.TBufferedTransport(socket)\n\n protocol = TBinaryProtocol.TBinaryProtocolAccelerated(transport)\n client = Rest.Client(protocol)\n client.transport = transport\n transport.open()\n return client\n\n def perform_request(self, method, url, params=None, body=None, timeout=None, ignore=()):\n request = RestRequest(method=Method._NAMES_TO_VALUES[method.upper()], uri=url,\n parameters=params, body=body)\n\n start = time.time()\n tclient = None\n try:\n tclient = self._get_connection()\n response = tclient.execute(request)\n duration = time.time() - start\n except SocketTimeout as e:\n self.log_request_fail(method, url, body, time.time() - start, exception=e)\n raise ConnectionTimeout('TIMEOUT', str(e), e)\n except (TException, SocketError) as e:\n self.log_request_fail(method, url, body, time.time() - start, exception=e)\n if tclient:\n try:\n # try closing transport socket\n tclient.transport.close()\n except Exception as 
e:\n logger.warning(\n 'Exception %s occured when closing a failed thrift connection.',\n e, exc_info=True\n )\n raise ConnectionError('N/A', str(e), e)\n\n self._release_connection(tclient)\n\n if not (200 <= response.status < 300) and response.status not in ignore:\n self.log_request_fail(method, url, body, duration, response.status)\n self._raise_error(response.status, response.body)\n\n self.log_request_success(method, url, url, body, response.status,\n response.body, duration)\n\n headers = {}\n if response.headers:\n headers = dict((k.lower(), v) for k, v in response.headers.items())\n return response.status, headers, response.body or ''\n\n", "path": "elasticsearch/connection/thrift.py"}]} | 1,331 | 183 |
gh_patches_debug_35486 | rasdani/github-patches | git_diff | archlinux__archinstall-1021 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
invalid package names png++
Hi,
I am trying to install some packages with the "package" list in the config file, but if I use the package name png++ (link below), it is reported as an invalid package name.
https://archlinux.org/packages/community/any/png++/
--- END ISSUE ---
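As an illustration of the underlying problem, a bare `+` in a URL query string is usually decoded by the server as a space, so a name such as `png++` has to be percent-encoded before it is sent. A hedged, standalone sketch (the helper name is made up, not archinstall's API):
```python
from urllib.parse import urlencode

BASE_URL = "https://archlinux.org/packages/search/json/"

def build_search_url(package: str) -> str:
    # urlencode() escapes '+' as '%2B'; pasted bare into the URL it would be
    # decoded by the server as a space, turning "png++" into "png  ".
    return f"{BASE_URL}?{urlencode({'name': package})}"

print(build_search_url("png++"))
# -> https://archlinux.org/packages/search/json/?name=png%2B%2B
```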
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `archinstall/lib/packages/packages.py`
Content:
```
1 import json
2 import ssl
3 import urllib.request
4 from typing import Dict, Any, Tuple, List
5
6 from ..exceptions import PackageError, SysCallError
7 from ..models.dataclasses import PackageSearch, PackageSearchResult, LocalPackage
8 from ..pacman import run_pacman
9
10 BASE_URL_PKG_SEARCH = 'https://archlinux.org/packages/search/json/?name={package}'
11 # BASE_URL_PKG_CONTENT = 'https://archlinux.org/packages/search/json/'
12 BASE_GROUP_URL = 'https://archlinux.org/groups/search/json/?name={group}'
13
14
15 def group_search(name :str) -> List[PackageSearchResult]:
16 # TODO UPSTREAM: Implement /json/ for the groups search
17 ssl_context = ssl.create_default_context()
18 ssl_context.check_hostname = False
19 ssl_context.verify_mode = ssl.CERT_NONE
20 try:
21 response = urllib.request.urlopen(BASE_GROUP_URL.format(group=name), context=ssl_context)
22 except urllib.error.HTTPError as err:
23 if err.code == 404:
24 return []
25 else:
26 raise err
27
28 # Just to be sure some code didn't slip through the exception
29 data = response.read().decode('UTF-8')
30
31 return [PackageSearchResult(**package) for package in json.loads(data)['results']]
32
33
34 def package_search(package :str) -> PackageSearch:
35 """
36 Finds a specific package via the package database.
37 It makes a simple web-request, which might be a bit slow.
38 """
39 # TODO UPSTREAM: Implement bulk search, either support name=X&name=Y or split on space (%20 or ' ')
40 # TODO: utilize pacman cache first, upstream second.
41 ssl_context = ssl.create_default_context()
42 ssl_context.check_hostname = False
43 ssl_context.verify_mode = ssl.CERT_NONE
44 response = urllib.request.urlopen(BASE_URL_PKG_SEARCH.format(package=package), context=ssl_context)
45
46 if response.code != 200:
47 raise PackageError(f"Could not locate package: [{response.code}] {response}")
48
49 data = response.read().decode('UTF-8')
50
51 return PackageSearch(**json.loads(data))
52
53
54 def find_package(package :str) -> List[PackageSearchResult]:
55 data = package_search(package)
56 results = []
57
58 for result in data.results:
59 if result.pkgname == package:
60 results.append(result)
61
62 # If we didn't find the package in the search results,
63 # odds are it's a group package
64 if not results:
65 # Check if the package is actually a group
66 for result in group_search(package):
67 results.append(result)
68
69 return results
70
71
72 def find_packages(*names :str) -> Dict[str, Any]:
73 """
74 This function returns the search results for many packages.
75 The function itself is rather slow, so consider not sending to
76 many packages to the search query.
77 """
78 result = {}
79 for package in names:
80 for found_package in find_package(package):
81 result[package] = found_package
82
83 return result
84
85
86 def validate_package_list(packages :list) -> Tuple[list, list]:
87 """
88 Validates a list of given packages.
89 return: Tuple of lists containing valid packavges in the first and invalid
90 packages in the second entry
91 """
92 valid_packages = {package for package in packages if find_package(package)}
93 invalid_packages = set(packages) - valid_packages
94
95 return list(valid_packages), list(invalid_packages)
96
97
98 def installed_package(package :str) -> LocalPackage:
99 package_info = {}
100 try:
101 for line in run_pacman(f"-Q --info {package}"):
102 if b':' in line:
103 key, value = line.decode().split(':', 1)
104 package_info[key.strip().lower().replace(' ', '_')] = value.strip()
105 except SysCallError:
106 pass
107
108 return LocalPackage(**package_info)
109
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/archinstall/lib/packages/packages.py b/archinstall/lib/packages/packages.py
--- a/archinstall/lib/packages/packages.py
+++ b/archinstall/lib/packages/packages.py
@@ -1,25 +1,35 @@
import json
import ssl
-import urllib.request
from typing import Dict, Any, Tuple, List
+from urllib.error import HTTPError
+from urllib.parse import urlencode
+from urllib.request import urlopen
from ..exceptions import PackageError, SysCallError
from ..models.dataclasses import PackageSearch, PackageSearchResult, LocalPackage
from ..pacman import run_pacman
-BASE_URL_PKG_SEARCH = 'https://archlinux.org/packages/search/json/?name={package}'
+BASE_URL_PKG_SEARCH = 'https://archlinux.org/packages/search/json/'
# BASE_URL_PKG_CONTENT = 'https://archlinux.org/packages/search/json/'
-BASE_GROUP_URL = 'https://archlinux.org/groups/search/json/?name={group}'
+BASE_GROUP_URL = 'https://archlinux.org/groups/search/json/'
-def group_search(name :str) -> List[PackageSearchResult]:
- # TODO UPSTREAM: Implement /json/ for the groups search
+def _make_request(url: str, params: Dict) -> Any:
ssl_context = ssl.create_default_context()
ssl_context.check_hostname = False
ssl_context.verify_mode = ssl.CERT_NONE
+
+ encoded = urlencode(params)
+ full_url = f'{url}?{encoded}'
+
+ return urlopen(full_url, context=ssl_context)
+
+
+def group_search(name :str) -> List[PackageSearchResult]:
+ # TODO UPSTREAM: Implement /json/ for the groups search
try:
- response = urllib.request.urlopen(BASE_GROUP_URL.format(group=name), context=ssl_context)
- except urllib.error.HTTPError as err:
+ response = _make_request(BASE_GROUP_URL, {'name': name})
+ except HTTPError as err:
if err.code == 404:
return []
else:
@@ -38,10 +48,7 @@
"""
# TODO UPSTREAM: Implement bulk search, either support name=X&name=Y or split on space (%20 or ' ')
# TODO: utilize pacman cache first, upstream second.
- ssl_context = ssl.create_default_context()
- ssl_context.check_hostname = False
- ssl_context.verify_mode = ssl.CERT_NONE
- response = urllib.request.urlopen(BASE_URL_PKG_SEARCH.format(package=package), context=ssl_context)
+ response = _make_request(BASE_URL_PKG_SEARCH, {'name': package})
if response.code != 200:
raise PackageError(f"Could not locate package: [{response.code}] {response}")
| {"golden_diff": "diff --git a/archinstall/lib/packages/packages.py b/archinstall/lib/packages/packages.py\n--- a/archinstall/lib/packages/packages.py\n+++ b/archinstall/lib/packages/packages.py\n@@ -1,25 +1,35 @@\n import json\n import ssl\n-import urllib.request\n from typing import Dict, Any, Tuple, List\n+from urllib.error import HTTPError\n+from urllib.parse import urlencode\n+from urllib.request import urlopen\n \n from ..exceptions import PackageError, SysCallError\n from ..models.dataclasses import PackageSearch, PackageSearchResult, LocalPackage\n from ..pacman import run_pacman\n \n-BASE_URL_PKG_SEARCH = 'https://archlinux.org/packages/search/json/?name={package}'\n+BASE_URL_PKG_SEARCH = 'https://archlinux.org/packages/search/json/'\n # BASE_URL_PKG_CONTENT = 'https://archlinux.org/packages/search/json/'\n-BASE_GROUP_URL = 'https://archlinux.org/groups/search/json/?name={group}'\n+BASE_GROUP_URL = 'https://archlinux.org/groups/search/json/'\n \n \n-def group_search(name :str) -> List[PackageSearchResult]:\n-\t# TODO UPSTREAM: Implement /json/ for the groups search\n+def _make_request(url: str, params: Dict) -> Any:\n \tssl_context = ssl.create_default_context()\n \tssl_context.check_hostname = False\n \tssl_context.verify_mode = ssl.CERT_NONE\n+\n+\tencoded = urlencode(params)\n+\tfull_url = f'{url}?{encoded}'\n+\n+\treturn urlopen(full_url, context=ssl_context)\n+\n+\n+def group_search(name :str) -> List[PackageSearchResult]:\n+\t# TODO UPSTREAM: Implement /json/ for the groups search\n \ttry:\n-\t\tresponse = urllib.request.urlopen(BASE_GROUP_URL.format(group=name), context=ssl_context)\n-\texcept urllib.error.HTTPError as err:\n+\t\tresponse = _make_request(BASE_GROUP_URL, {'name': name})\n+\texcept HTTPError as err:\n \t\tif err.code == 404:\n \t\t\treturn []\n \t\telse:\n@@ -38,10 +48,7 @@\n \t\"\"\"\n \t# TODO UPSTREAM: Implement bulk search, either support name=X&name=Y or split on space (%20 or ' ')\n \t# TODO: utilize pacman cache first, upstream second.\n-\tssl_context = ssl.create_default_context()\n-\tssl_context.check_hostname = False\n-\tssl_context.verify_mode = ssl.CERT_NONE\n-\tresponse = urllib.request.urlopen(BASE_URL_PKG_SEARCH.format(package=package), context=ssl_context)\n+\tresponse = _make_request(BASE_URL_PKG_SEARCH, {'name': package})\n \n \tif response.code != 200:\n \t\traise PackageError(f\"Could not locate package: [{response.code}] {response}\")\n", "issue": "invalid package names png++\nHi,\r\n\r\nI try to install some package with the \"package\" list in the config file, but if i use the package name png++ (i put the link under the message) it return as invalid package names.\r\n\r\nhttps://archlinux.org/packages/community/any/png++/\r\n\ninvalid package names png++\nHi,\r\n\r\nI try to install some package with the \"package\" list in the config file, but if i use the package name png++ (i put the link under the message) it return as invalid package names.\r\n\r\nhttps://archlinux.org/packages/community/any/png++/\r\n\n", "before_files": [{"content": "import json\nimport ssl\nimport urllib.request\nfrom typing import Dict, Any, Tuple, List\n\nfrom ..exceptions import PackageError, SysCallError\nfrom ..models.dataclasses import PackageSearch, PackageSearchResult, LocalPackage\nfrom ..pacman import run_pacman\n\nBASE_URL_PKG_SEARCH = 'https://archlinux.org/packages/search/json/?name={package}'\n# BASE_URL_PKG_CONTENT = 'https://archlinux.org/packages/search/json/'\nBASE_GROUP_URL = 'https://archlinux.org/groups/search/json/?name={group}'\n\n\ndef 
group_search(name :str) -> List[PackageSearchResult]:\n\t# TODO UPSTREAM: Implement /json/ for the groups search\n\tssl_context = ssl.create_default_context()\n\tssl_context.check_hostname = False\n\tssl_context.verify_mode = ssl.CERT_NONE\n\ttry:\n\t\tresponse = urllib.request.urlopen(BASE_GROUP_URL.format(group=name), context=ssl_context)\n\texcept urllib.error.HTTPError as err:\n\t\tif err.code == 404:\n\t\t\treturn []\n\t\telse:\n\t\t\traise err\n\n\t# Just to be sure some code didn't slip through the exception\n\tdata = response.read().decode('UTF-8')\n\n\treturn [PackageSearchResult(**package) for package in json.loads(data)['results']]\n\n\ndef package_search(package :str) -> PackageSearch:\n\t\"\"\"\n\tFinds a specific package via the package database.\n\tIt makes a simple web-request, which might be a bit slow.\n\t\"\"\"\n\t# TODO UPSTREAM: Implement bulk search, either support name=X&name=Y or split on space (%20 or ' ')\n\t# TODO: utilize pacman cache first, upstream second.\n\tssl_context = ssl.create_default_context()\n\tssl_context.check_hostname = False\n\tssl_context.verify_mode = ssl.CERT_NONE\n\tresponse = urllib.request.urlopen(BASE_URL_PKG_SEARCH.format(package=package), context=ssl_context)\n\n\tif response.code != 200:\n\t\traise PackageError(f\"Could not locate package: [{response.code}] {response}\")\n\n\tdata = response.read().decode('UTF-8')\n\n\treturn PackageSearch(**json.loads(data))\n\n\ndef find_package(package :str) -> List[PackageSearchResult]:\n\tdata = package_search(package)\n\tresults = []\n\n\tfor result in data.results:\n\t\tif result.pkgname == package:\n\t\t\tresults.append(result)\n\n\t# If we didn't find the package in the search results,\n\t# odds are it's a group package\n\tif not results:\n\t\t# Check if the package is actually a group\n\t\tfor result in group_search(package):\n\t\t\tresults.append(result)\n\n\treturn results\n\n\ndef find_packages(*names :str) -> Dict[str, Any]:\n\t\"\"\"\n\tThis function returns the search results for many packages.\n\tThe function itself is rather slow, so consider not sending to\n\tmany packages to the search query.\n\t\"\"\"\n\tresult = {}\n\tfor package in names:\n\t\tfor found_package in find_package(package):\n\t\t\tresult[package] = found_package\n\n\treturn result\n\n\ndef validate_package_list(packages :list) -> Tuple[list, list]:\n\t\"\"\"\n\tValidates a list of given packages.\n\treturn: Tuple of lists containing valid packavges in the first and invalid\n\tpackages in the second entry\n\t\"\"\"\n\tvalid_packages = {package for package in packages if find_package(package)}\n\tinvalid_packages = set(packages) - valid_packages\n\n\treturn list(valid_packages), list(invalid_packages)\n\n\ndef installed_package(package :str) -> LocalPackage:\n\tpackage_info = {}\n\ttry:\n\t\tfor line in run_pacman(f\"-Q --info {package}\"):\n\t\t\tif b':' in line:\n\t\t\t\tkey, value = line.decode().split(':', 1)\n\t\t\t\tpackage_info[key.strip().lower().replace(' ', '_')] = value.strip()\n\texcept SysCallError:\n\t\tpass\n\n\treturn LocalPackage(**package_info)\n", "path": "archinstall/lib/packages/packages.py"}], "after_files": [{"content": "import json\nimport ssl\nfrom typing import Dict, Any, Tuple, List\nfrom urllib.error import HTTPError\nfrom urllib.parse import urlencode\nfrom urllib.request import urlopen\n\nfrom ..exceptions import PackageError, SysCallError\nfrom ..models.dataclasses import PackageSearch, PackageSearchResult, LocalPackage\nfrom ..pacman import run_pacman\n\nBASE_URL_PKG_SEARCH = 
'https://archlinux.org/packages/search/json/'\n# BASE_URL_PKG_CONTENT = 'https://archlinux.org/packages/search/json/'\nBASE_GROUP_URL = 'https://archlinux.org/groups/search/json/'\n\n\ndef _make_request(url: str, params: Dict) -> Any:\n\tssl_context = ssl.create_default_context()\n\tssl_context.check_hostname = False\n\tssl_context.verify_mode = ssl.CERT_NONE\n\n\tencoded = urlencode(params)\n\tfull_url = f'{url}?{encoded}'\n\n\treturn urlopen(full_url, context=ssl_context)\n\n\ndef group_search(name :str) -> List[PackageSearchResult]:\n\t# TODO UPSTREAM: Implement /json/ for the groups search\n\ttry:\n\t\tresponse = _make_request(BASE_GROUP_URL, {'name': name})\n\texcept HTTPError as err:\n\t\tif err.code == 404:\n\t\t\treturn []\n\t\telse:\n\t\t\traise err\n\n\t# Just to be sure some code didn't slip through the exception\n\tdata = response.read().decode('UTF-8')\n\n\treturn [PackageSearchResult(**package) for package in json.loads(data)['results']]\n\n\ndef package_search(package :str) -> PackageSearch:\n\t\"\"\"\n\tFinds a specific package via the package database.\n\tIt makes a simple web-request, which might be a bit slow.\n\t\"\"\"\n\t# TODO UPSTREAM: Implement bulk search, either support name=X&name=Y or split on space (%20 or ' ')\n\t# TODO: utilize pacman cache first, upstream second.\n\tresponse = _make_request(BASE_URL_PKG_SEARCH, {'name': package})\n\n\tif response.code != 200:\n\t\traise PackageError(f\"Could not locate package: [{response.code}] {response}\")\n\n\tdata = response.read().decode('UTF-8')\n\n\treturn PackageSearch(**json.loads(data))\n\n\ndef find_package(package :str) -> List[PackageSearchResult]:\n\tdata = package_search(package)\n\tresults = []\n\n\tfor result in data.results:\n\t\tif result.pkgname == package:\n\t\t\tresults.append(result)\n\n\t# If we didn't find the package in the search results,\n\t# odds are it's a group package\n\tif not results:\n\t\t# Check if the package is actually a group\n\t\tfor result in group_search(package):\n\t\t\tresults.append(result)\n\n\treturn results\n\n\ndef find_packages(*names :str) -> Dict[str, Any]:\n\t\"\"\"\n\tThis function returns the search results for many packages.\n\tThe function itself is rather slow, so consider not sending to\n\tmany packages to the search query.\n\t\"\"\"\n\tresult = {}\n\tfor package in names:\n\t\tfor found_package in find_package(package):\n\t\t\tresult[package] = found_package\n\n\treturn result\n\n\ndef validate_package_list(packages :list) -> Tuple[list, list]:\n\t\"\"\"\n\tValidates a list of given packages.\n\treturn: Tuple of lists containing valid packavges in the first and invalid\n\tpackages in the second entry\n\t\"\"\"\n\tvalid_packages = {package for package in packages if find_package(package)}\n\tinvalid_packages = set(packages) - valid_packages\n\n\treturn list(valid_packages), list(invalid_packages)\n\n\ndef installed_package(package :str) -> LocalPackage:\n\tpackage_info = {}\n\ttry:\n\t\tfor line in run_pacman(f\"-Q --info {package}\"):\n\t\t\tif b':' in line:\n\t\t\t\tkey, value = line.decode().split(':', 1)\n\t\t\t\tpackage_info[key.strip().lower().replace(' ', '_')] = value.strip()\n\texcept SysCallError:\n\t\tpass\n\n\treturn LocalPackage(**package_info)\n", "path": "archinstall/lib/packages/packages.py"}]} | 1,457 | 580 |
gh_patches_debug_11945 | rasdani/github-patches | git_diff | mdn__kuma-7185 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Django User admin not working related to stripe_customer_id
**Summary**
<img width="1450" alt="Screen Shot 2020-05-20 at 1 01 52 PM" src="https://user-images.githubusercontent.com/26739/82475221-28e5a080-9a9a-11ea-948f-be97bf2c15d6.png">
There *exists* a user with that username.
The user is there. I can manually type in the URL http://localhost.org:8000/admin/users/user/1311/change/ and it works.
It's related to `IsStripeCustomer` in `kuma/users/admin.py`.
**Steps To Reproduce (STR)**
1. Have a user with and without a `stripe_customer_id`
2. Go to http://localhost.org:8000/admin/users/user/
3. Is the user appearing?
**Actual behavior**
_What actually happened?_
**Expected behavior**
_What did you expect to happen?_
**Additional context**
Pretty sure it's the `IsStripeCustomer` thing that's filtering on the empty string even if you didn't filter on it.
--- END ISSUE ---
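For reference, the usual Django pattern for a `SimpleListFilter` that must not restrict results when no value is selected is to fall through to the unmodified queryset. A minimal, self-contained sketch using string lookup values; the field name mirrors the quoted admin code but the class is illustrative:
```python
from django.contrib import admin

class IsStripeCustomer(admin.SimpleListFilter):
    title = "is Stripe customer"
    parameter_name = "is_stripe_customer"

    def lookups(self, request, model_admin):
        # Use string values: the selection round-trips through the URL
        # querystring, so booleans would arrive as text anyway.
        return (("yes", "Yes"), ("no", "No"))

    def queryset(self, request, queryset):
        value = self.value()              # None when the filter is not applied
        if value == "yes":
            return queryset.exclude(stripe_customer_id="")
        if value == "no":
            return queryset.filter(stripe_customer_id="")
        return queryset                   # no selection: show every user
```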
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kuma/users/admin.py`
Content:
```
1 from django.contrib import admin
2 from django.contrib.auth.admin import UserAdmin as BaseUserAdmin
3 from django.utils.html import format_html
4
5 from kuma.core.urlresolvers import reverse
6 from kuma.core.utils import urlparams
7
8 from .models import User, UserBan, UserSubscription
9
10
11 @admin.register(UserBan)
12 class UserBanAdmin(admin.ModelAdmin):
13 fields = ("user", "by", "reason", "is_active")
14 list_display = ("user", "by", "reason", "is_active")
15 list_editable = ("is_active",)
16 list_filter = ("is_active",)
17 raw_id_fields = ("user", "by")
18 search_fields = ("user__username", "reason", "by__username")
19
20
21 class IsStripeCustomer(admin.SimpleListFilter):
22 title = "is Stripe customer"
23 parameter_name = "is_stripe_customer"
24
25 def lookups(self, request, model_admin):
26 return (
27 (True, "Yes"),
28 (False, "No"),
29 )
30
31 def queryset(self, request, queryset):
32 if self.value():
33 return queryset.exclude(stripe_customer_id="")
34 else:
35 return queryset.filter(stripe_customer_id="")
36
37
38 @admin.register(User)
39 class UserAdmin(BaseUserAdmin):
40 """
41 Extends the admin view of users to show date_joined field
42 add a filter on the field too
43 """
44
45 fieldsets = BaseUserAdmin.fieldsets + (
46 ("Subscription", {"fields": ("stripe_customer_id", "subscriber_number")}),
47 )
48 readonly_fields = BaseUserAdmin.readonly_fields + (
49 "stripe_customer_id",
50 "subscriber_number",
51 )
52
53 list_display = (
54 "username",
55 "fullname",
56 "email",
57 "revisions",
58 "date_joined",
59 "is_staff",
60 "is_active",
61 )
62 list_filter = (
63 "is_staff",
64 "is_superuser",
65 "is_active",
66 "date_joined",
67 "groups",
68 IsStripeCustomer,
69 )
70 ordering = ("-date_joined",)
71 search_fields = (
72 "username",
73 "title",
74 "fullname",
75 "organization",
76 "location",
77 "email",
78 )
79
80 def revisions(self, obj):
81 """HTML link to user's revisions with count"""
82 link = urlparams(reverse("dashboards.revisions"), user=obj.username)
83 count = obj.created_revisions.count()
84 return format_html('<a href="{}"><strong>{}</strong></a>', link, count)
85
86
87 @admin.register(UserSubscription)
88 class UserSubscriptionAdmin(admin.ModelAdmin):
89 readonly_fields = ("user", "updated", "created", "stripe_subscription_id")
90 list_display = ("user", "canceled", "updated", "created")
91 search_fields = ("user__username",)
92 list_filter = ("canceled", "created")
93 ordering = ("updated",)
94
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/kuma/users/admin.py b/kuma/users/admin.py
--- a/kuma/users/admin.py
+++ b/kuma/users/admin.py
@@ -24,15 +24,18 @@
def lookups(self, request, model_admin):
return (
- (True, "Yes"),
- (False, "No"),
+ ("yes", "Yes"),
+ ("no", "No"),
)
def queryset(self, request, queryset):
- if self.value():
+ value = self.value()
+ if value == "yes":
return queryset.exclude(stripe_customer_id="")
- else:
+ elif value == "no":
return queryset.filter(stripe_customer_id="")
+ else:
+ return queryset
@admin.register(User)
| {"golden_diff": "diff --git a/kuma/users/admin.py b/kuma/users/admin.py\n--- a/kuma/users/admin.py\n+++ b/kuma/users/admin.py\n@@ -24,15 +24,18 @@\n \n def lookups(self, request, model_admin):\n return (\n- (True, \"Yes\"),\n- (False, \"No\"),\n+ (\"yes\", \"Yes\"),\n+ (\"no\", \"No\"),\n )\n \n def queryset(self, request, queryset):\n- if self.value():\n+ value = self.value()\n+ if value == \"yes\":\n return queryset.exclude(stripe_customer_id=\"\")\n- else:\n+ elif value == \"no\":\n return queryset.filter(stripe_customer_id=\"\")\n+ else:\n+ return queryset\n \n \n @admin.register(User)\n", "issue": "Django User admin not working related to stripe_customer_id\n**Summary**\r\n<img width=\"1450\" alt=\"Screen Shot 2020-05-20 at 1 01 52 PM\" src=\"https://user-images.githubusercontent.com/26739/82475221-28e5a080-9a9a-11ea-948f-be97bf2c15d6.png\">\r\n\r\nThere *exists* a user with that username. \r\nThe user is there. I can manually type in the URL http://localhost.org:8000/admin/users/user/1311/change/ and it works. \r\n\r\nIt's related to `IsStripeCustomer` in `kuma/users/admin.py`.\r\n\r\n**Steps To Reproduce (STR)**\r\n\r\n1. Have a user with and without a `stripe_customer_id`\r\n2. Go to http://localhost.org:8000/admin/users/user/\r\n3. Is the user appearing?\r\n\r\n\r\n**Actual behavior**\r\n_What actually happened?_\r\n\r\n\r\n**Expected behavior**\r\n_What did you expect to happen?_\r\n\r\n\r\n**Additional context**\r\nPretty sure it's the `IsStripeCustomer` thing that's filtering on the empty string even if you didn't filter on it. \n", "before_files": [{"content": "from django.contrib import admin\nfrom django.contrib.auth.admin import UserAdmin as BaseUserAdmin\nfrom django.utils.html import format_html\n\nfrom kuma.core.urlresolvers import reverse\nfrom kuma.core.utils import urlparams\n\nfrom .models import User, UserBan, UserSubscription\n\n\[email protected](UserBan)\nclass UserBanAdmin(admin.ModelAdmin):\n fields = (\"user\", \"by\", \"reason\", \"is_active\")\n list_display = (\"user\", \"by\", \"reason\", \"is_active\")\n list_editable = (\"is_active\",)\n list_filter = (\"is_active\",)\n raw_id_fields = (\"user\", \"by\")\n search_fields = (\"user__username\", \"reason\", \"by__username\")\n\n\nclass IsStripeCustomer(admin.SimpleListFilter):\n title = \"is Stripe customer\"\n parameter_name = \"is_stripe_customer\"\n\n def lookups(self, request, model_admin):\n return (\n (True, \"Yes\"),\n (False, \"No\"),\n )\n\n def queryset(self, request, queryset):\n if self.value():\n return queryset.exclude(stripe_customer_id=\"\")\n else:\n return queryset.filter(stripe_customer_id=\"\")\n\n\[email protected](User)\nclass UserAdmin(BaseUserAdmin):\n \"\"\"\n Extends the admin view of users to show date_joined field\n add a filter on the field too\n \"\"\"\n\n fieldsets = BaseUserAdmin.fieldsets + (\n (\"Subscription\", {\"fields\": (\"stripe_customer_id\", \"subscriber_number\")}),\n )\n readonly_fields = BaseUserAdmin.readonly_fields + (\n \"stripe_customer_id\",\n \"subscriber_number\",\n )\n\n list_display = (\n \"username\",\n \"fullname\",\n \"email\",\n \"revisions\",\n \"date_joined\",\n \"is_staff\",\n \"is_active\",\n )\n list_filter = (\n \"is_staff\",\n \"is_superuser\",\n \"is_active\",\n \"date_joined\",\n \"groups\",\n IsStripeCustomer,\n )\n ordering = (\"-date_joined\",)\n search_fields = (\n \"username\",\n \"title\",\n \"fullname\",\n \"organization\",\n \"location\",\n \"email\",\n )\n\n def revisions(self, obj):\n \"\"\"HTML link to user's revisions with 
count\"\"\"\n link = urlparams(reverse(\"dashboards.revisions\"), user=obj.username)\n count = obj.created_revisions.count()\n return format_html('<a href=\"{}\"><strong>{}</strong></a>', link, count)\n\n\[email protected](UserSubscription)\nclass UserSubscriptionAdmin(admin.ModelAdmin):\n readonly_fields = (\"user\", \"updated\", \"created\", \"stripe_subscription_id\")\n list_display = (\"user\", \"canceled\", \"updated\", \"created\")\n search_fields = (\"user__username\",)\n list_filter = (\"canceled\", \"created\")\n ordering = (\"updated\",)\n", "path": "kuma/users/admin.py"}], "after_files": [{"content": "from django.contrib import admin\nfrom django.contrib.auth.admin import UserAdmin as BaseUserAdmin\nfrom django.utils.html import format_html\n\nfrom kuma.core.urlresolvers import reverse\nfrom kuma.core.utils import urlparams\n\nfrom .models import User, UserBan, UserSubscription\n\n\[email protected](UserBan)\nclass UserBanAdmin(admin.ModelAdmin):\n fields = (\"user\", \"by\", \"reason\", \"is_active\")\n list_display = (\"user\", \"by\", \"reason\", \"is_active\")\n list_editable = (\"is_active\",)\n list_filter = (\"is_active\",)\n raw_id_fields = (\"user\", \"by\")\n search_fields = (\"user__username\", \"reason\", \"by__username\")\n\n\nclass IsStripeCustomer(admin.SimpleListFilter):\n title = \"is Stripe customer\"\n parameter_name = \"is_stripe_customer\"\n\n def lookups(self, request, model_admin):\n return (\n (\"yes\", \"Yes\"),\n (\"no\", \"No\"),\n )\n\n def queryset(self, request, queryset):\n value = self.value()\n if value == \"yes\":\n return queryset.exclude(stripe_customer_id=\"\")\n elif value == \"no\":\n return queryset.filter(stripe_customer_id=\"\")\n else:\n return queryset\n\n\[email protected](User)\nclass UserAdmin(BaseUserAdmin):\n \"\"\"\n Extends the admin view of users to show date_joined field\n add a filter on the field too\n \"\"\"\n\n fieldsets = BaseUserAdmin.fieldsets + (\n (\"Subscription\", {\"fields\": (\"stripe_customer_id\", \"subscriber_number\")}),\n )\n readonly_fields = BaseUserAdmin.readonly_fields + (\n \"stripe_customer_id\",\n \"subscriber_number\",\n )\n\n list_display = (\n \"username\",\n \"fullname\",\n \"email\",\n \"revisions\",\n \"date_joined\",\n \"is_staff\",\n \"is_active\",\n )\n list_filter = (\n \"is_staff\",\n \"is_superuser\",\n \"is_active\",\n \"date_joined\",\n \"groups\",\n IsStripeCustomer,\n )\n ordering = (\"-date_joined\",)\n search_fields = (\n \"username\",\n \"title\",\n \"fullname\",\n \"organization\",\n \"location\",\n \"email\",\n )\n\n def revisions(self, obj):\n \"\"\"HTML link to user's revisions with count\"\"\"\n link = urlparams(reverse(\"dashboards.revisions\"), user=obj.username)\n count = obj.created_revisions.count()\n return format_html('<a href=\"{}\"><strong>{}</strong></a>', link, count)\n\n\[email protected](UserSubscription)\nclass UserSubscriptionAdmin(admin.ModelAdmin):\n readonly_fields = (\"user\", \"updated\", \"created\", \"stripe_subscription_id\")\n list_display = (\"user\", \"canceled\", \"updated\", \"created\")\n search_fields = (\"user__username\",)\n list_filter = (\"canceled\", \"created\")\n ordering = (\"updated\",)\n", "path": "kuma/users/admin.py"}]} | 1,325 | 171 |
gh_patches_debug_15191 | rasdani/github-patches | git_diff | coqui-ai__TTS-611 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Bug] `tts_models/en/vctk/sc-glow-tts` fails to find `speakers.json` file
**Describe the bug**
`tts_models/en/vctk/sc-glow-tts` fails to find `speakers.json` file.
**To Reproduce**
```
$ tts --model_name tts_models/en/vctk/sc-glow-tts --text "test" --out_path tmp.wav
> tts_models/en/vctk/sc-glow-tts is already downloaded.
> vocoder_models/en/vctk/hifigan_v2 is already downloaded.
> Using model: glow_tts
Traceback (most recent call last):
File "/usr/local/bin/tts", line 8, in <module>
sys.exit(main())
File "/usr/local/lib/python3.8/dist-packages/TTS/bin/synthesize.py", line 226, in main
synthesizer = Synthesizer(
File "/usr/local/lib/python3.8/dist-packages/TTS/utils/synthesizer.py", line 73, in __init__
self._load_tts(tts_checkpoint, tts_config_path, use_cuda)
File "/usr/local/lib/python3.8/dist-packages/TTS/utils/synthesizer.py", line 135, in _load_tts
self.tts_model = setup_tts_model(config=self.tts_config)
File "/usr/local/lib/python3.8/dist-packages/TTS/tts/models/__init__.py", line 27, in setup_model
model = MyModel(config)
File "/usr/local/lib/python3.8/dist-packages/TTS/tts/models/glow_tts.py", line 57, in __init__
self.init_multispeaker(config)
File "/usr/local/lib/python3.8/dist-packages/TTS/tts/models/glow_tts.py", line 106, in init_multispeaker
self.speaker_manager = get_speaker_manager(config, data=data)
File "/usr/local/lib/python3.8/dist-packages/TTS/tts/utils/speakers.py", line 354, in get_speaker_manager
speaker_manager.set_d_vectors_from_file(c.d_vector_file)
File "/usr/local/lib/python3.8/dist-packages/TTS/tts/utils/speakers.py", line 161, in set_d_vectors_from_file
self.d_vectors = self._load_json(file_path)
File "/usr/local/lib/python3.8/dist-packages/TTS/tts/utils/speakers.py", line 85, in _load_json
with open(json_file_path) as f:
FileNotFoundError: [Errno 2] No such file or directory: '/home/erogol/.local/share/tts/tts_models--en--vctk--sc-glow-tts/speakers.json'
```
The problem appears to be on line 143 of the `config.json` within the [model ZIP file](https://github.com/coqui-ai/TTS/releases/download/v0.1.0/tts_models--en--vctk--sc-glow-tts.zip):
```
"d_vector_file": "/home/erogol/.local/share/tts/tts_models--en--vctk--sc-glow-tts/speakers.json"
```
If I change that line in `~/.local/share/tts/tts_models--en--vctk--sc-glow-tts/config.json` to my own local path, then it works. I haven't been able to figure out how to make a relative path work.
**Environment (please complete the following information):**
Using the following Dockerfile:
```
FROM ubuntu:focal
ENV DEBIAN_FRONTEND=noninteractive
RUN apt update && \
apt install -y ca-certificates && update-ca-certificates && \
apt install -y espeak-ng git libsndfile1 python3 python3-pip
RUN python3 -m pip install git+git://github.com/coqui-ai/[email protected]
```
--- END ISSUE ---
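A hedged sketch of the manual workaround described above: rewriting the absolute `d_vector_file` entry in the downloaded `config.json` so it points at the local `speakers.json`. The paths and the key name are taken from the issue text; this is a local patch, not an official fix.
```python
import json
from pathlib import Path

model_dir = Path.home() / ".local/share/tts/tts_models--en--vctk--sc-glow-tts"
config_path = model_dir / "config.json"

config = json.loads(config_path.read_text())
# Replace the absolute path baked into the released config with the local copy.
config["d_vector_file"] = str(model_dir / "speakers.json")
config_path.write_text(json.dumps(config, indent=4))
```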
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `TTS/utils/trainer_utils.py`
Content:
```
1 import importlib
2 from typing import Dict
3
4 import torch
5
6 from TTS.utils.training import NoamLR
7
8
9 def is_apex_available():
10 return importlib.util.find_spec("apex") is not None
11
12
13 def setup_torch_training_env(cudnn_enable, cudnn_benchmark):
14 torch.backends.cudnn.enabled = cudnn_enable
15 torch.backends.cudnn.benchmark = cudnn_benchmark
16 torch.manual_seed(54321)
17 use_cuda = torch.cuda.is_available()
18 num_gpus = torch.cuda.device_count()
19 print(" > Using CUDA: ", use_cuda)
20 print(" > Number of GPUs: ", num_gpus)
21 return use_cuda, num_gpus
22
23
24 def get_scheduler(
25 lr_scheduler: str, lr_scheduler_params: Dict, optimizer: torch.optim.Optimizer
26 ) -> torch.optim.lr_scheduler._LRScheduler: # pylint: disable=protected-access
27 """Find, initialize and return a scheduler.
28
29 Args:
30 lr_scheduler (str): Scheduler name.
31 lr_scheduler_params (Dict): Scheduler parameters.
32 optimizer (torch.optim.Optimizer): Optimizer to pass to the scheduler.
33
34 Returns:
35 torch.optim.lr_scheduler._LRScheduler: Functional scheduler.
36 """
37 if lr_scheduler is None:
38 return None
39 if lr_scheduler.lower() == "noamlr":
40 scheduler = NoamLR
41 else:
42 scheduler = getattr(torch.optim.lr_scheduler, lr_scheduler)
43 return scheduler(optimizer, **lr_scheduler_params)
44
45
46 def get_optimizer(
47 optimizer_name: str, optimizer_params: dict, lr: float, model: torch.nn.Module
48 ) -> torch.optim.Optimizer:
49 """Find, initialize and return a optimizer.
50
51 Args:
52 optimizer_name (str): Optimizer name.
53 optimizer_params (dict): Optimizer parameters.
54 lr (float): Initial learning rate.
55 model (torch.nn.Module): Model to pass to the optimizer.
56
57 Returns:
58 torch.optim.Optimizer: Functional optimizer.
59 """
60 if optimizer_name.lower() == "radam":
61 module = importlib.import_module("TTS.utils.radam")
62 optimizer = getattr(module, "RAdam")
63 else:
64 optimizer = getattr(torch.optim, optimizer_name)
65 return optimizer(model.parameters(), lr=lr, **optimizer_params)
66
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/TTS/utils/trainer_utils.py b/TTS/utils/trainer_utils.py
--- a/TTS/utils/trainer_utils.py
+++ b/TTS/utils/trainer_utils.py
@@ -11,11 +11,15 @@
def setup_torch_training_env(cudnn_enable, cudnn_benchmark):
+ num_gpus = torch.cuda.device_count()
+ if num_gpus > 1:
+ raise RuntimeError(
+ f" [!] {num_gpus} active GPUs. Define the target GPU by `CUDA_VISIBLE_DEVICES`. For multi-gpu training use `TTS/bin/distribute.py`."
+ )
torch.backends.cudnn.enabled = cudnn_enable
torch.backends.cudnn.benchmark = cudnn_benchmark
torch.manual_seed(54321)
use_cuda = torch.cuda.is_available()
- num_gpus = torch.cuda.device_count()
print(" > Using CUDA: ", use_cuda)
print(" > Number of GPUs: ", num_gpus)
return use_cuda, num_gpus
| {"golden_diff": "diff --git a/TTS/utils/trainer_utils.py b/TTS/utils/trainer_utils.py\n--- a/TTS/utils/trainer_utils.py\n+++ b/TTS/utils/trainer_utils.py\n@@ -11,11 +11,15 @@\n \n \n def setup_torch_training_env(cudnn_enable, cudnn_benchmark):\n+ num_gpus = torch.cuda.device_count()\n+ if num_gpus > 1:\n+ raise RuntimeError(\n+ f\" [!] {num_gpus} active GPUs. Define the target GPU by `CUDA_VISIBLE_DEVICES`. For multi-gpu training use `TTS/bin/distribute.py`.\"\n+ )\n torch.backends.cudnn.enabled = cudnn_enable\n torch.backends.cudnn.benchmark = cudnn_benchmark\n torch.manual_seed(54321)\n use_cuda = torch.cuda.is_available()\n- num_gpus = torch.cuda.device_count()\n print(\" > Using CUDA: \", use_cuda)\n print(\" > Number of GPUs: \", num_gpus)\n return use_cuda, num_gpus\n", "issue": "[Bug] `tts_models/en/vctk/sc-glow-tts` fails to find `speakers.json` file\n**Describe the bug**\r\n`tts_models/en/vctk/sc-glow-tts` fails to find `speakers.json` file.\r\n\r\n**To Reproduce**\r\n```\r\n$ tts --model_name tts_models/en/vctk/sc-glow-tts --text \"test\" --out_path tmp.wav\r\n > tts_models/en/vctk/sc-glow-tts is already downloaded.\r\n > vocoder_models/en/vctk/hifigan_v2 is already downloaded.\r\n > Using model: glow_tts\r\nTraceback (most recent call last):\r\n File \"/usr/local/bin/tts\", line 8, in <module>\r\n sys.exit(main())\r\n File \"/usr/local/lib/python3.8/dist-packages/TTS/bin/synthesize.py\", line 226, in main\r\n synthesizer = Synthesizer(\r\n File \"/usr/local/lib/python3.8/dist-packages/TTS/utils/synthesizer.py\", line 73, in __init__\r\n self._load_tts(tts_checkpoint, tts_config_path, use_cuda)\r\n File \"/usr/local/lib/python3.8/dist-packages/TTS/utils/synthesizer.py\", line 135, in _load_tts\r\n self.tts_model = setup_tts_model(config=self.tts_config)\r\n File \"/usr/local/lib/python3.8/dist-packages/TTS/tts/models/__init__.py\", line 27, in setup_model\r\n model = MyModel(config)\r\n File \"/usr/local/lib/python3.8/dist-packages/TTS/tts/models/glow_tts.py\", line 57, in __init__\r\n self.init_multispeaker(config)\r\n File \"/usr/local/lib/python3.8/dist-packages/TTS/tts/models/glow_tts.py\", line 106, in init_multispeaker\r\n self.speaker_manager = get_speaker_manager(config, data=data)\r\n File \"/usr/local/lib/python3.8/dist-packages/TTS/tts/utils/speakers.py\", line 354, in get_speaker_manager\r\n speaker_manager.set_d_vectors_from_file(c.d_vector_file)\r\n File \"/usr/local/lib/python3.8/dist-packages/TTS/tts/utils/speakers.py\", line 161, in set_d_vectors_from_file\r\n self.d_vectors = self._load_json(file_path)\r\n File \"/usr/local/lib/python3.8/dist-packages/TTS/tts/utils/speakers.py\", line 85, in _load_json\r\n with open(json_file_path) as f:\r\nFileNotFoundError: [Errno 2] No such file or directory: '/home/erogol/.local/share/tts/tts_models--en--vctk--sc-glow-tts/speakers.json'\r\n```\r\n\r\nThe problem appears to be on line 143 of the `config.json` within the [model ZIP file](https://github.com/coqui-ai/TTS/releases/download/v0.1.0/tts_models--en--vctk--sc-glow-tts.zip):\r\n```\r\n\"d_vector_file\": \"/home/erogol/.local/share/tts/tts_models--en--vctk--sc-glow-tts/speakers.json\"\r\n```\r\n\r\nIf I change that line in `~/.local/share/tts/tts_models--en--vctk--sc-glow-tts/config.json` to my own local path, then it works. 
I haven't been able to figure out how to make a relative path work.\r\n\r\n**Environment (please complete the following information):**\r\nUsing the following Dockerfile:\r\n```\r\nFROM ubuntu:focal\r\nENV DEBIAN_FRONTEND=noninteractive \r\nRUN apt update && \\\r\n apt install -y ca-certificates && update-ca-certificates && \\\r\n apt install -y espeak-ng git libsndfile1 python3 python3-pip\r\nRUN python3 -m pip install git+git://github.com/coqui-ai/[email protected]\r\n```\r\n\n", "before_files": [{"content": "import importlib\nfrom typing import Dict\n\nimport torch\n\nfrom TTS.utils.training import NoamLR\n\n\ndef is_apex_available():\n return importlib.util.find_spec(\"apex\") is not None\n\n\ndef setup_torch_training_env(cudnn_enable, cudnn_benchmark):\n torch.backends.cudnn.enabled = cudnn_enable\n torch.backends.cudnn.benchmark = cudnn_benchmark\n torch.manual_seed(54321)\n use_cuda = torch.cuda.is_available()\n num_gpus = torch.cuda.device_count()\n print(\" > Using CUDA: \", use_cuda)\n print(\" > Number of GPUs: \", num_gpus)\n return use_cuda, num_gpus\n\n\ndef get_scheduler(\n lr_scheduler: str, lr_scheduler_params: Dict, optimizer: torch.optim.Optimizer\n) -> torch.optim.lr_scheduler._LRScheduler: # pylint: disable=protected-access\n \"\"\"Find, initialize and return a scheduler.\n\n Args:\n lr_scheduler (str): Scheduler name.\n lr_scheduler_params (Dict): Scheduler parameters.\n optimizer (torch.optim.Optimizer): Optimizer to pass to the scheduler.\n\n Returns:\n torch.optim.lr_scheduler._LRScheduler: Functional scheduler.\n \"\"\"\n if lr_scheduler is None:\n return None\n if lr_scheduler.lower() == \"noamlr\":\n scheduler = NoamLR\n else:\n scheduler = getattr(torch.optim.lr_scheduler, lr_scheduler)\n return scheduler(optimizer, **lr_scheduler_params)\n\n\ndef get_optimizer(\n optimizer_name: str, optimizer_params: dict, lr: float, model: torch.nn.Module\n) -> torch.optim.Optimizer:\n \"\"\"Find, initialize and return a optimizer.\n\n Args:\n optimizer_name (str): Optimizer name.\n optimizer_params (dict): Optimizer parameters.\n lr (float): Initial learning rate.\n model (torch.nn.Module): Model to pass to the optimizer.\n\n Returns:\n torch.optim.Optimizer: Functional optimizer.\n \"\"\"\n if optimizer_name.lower() == \"radam\":\n module = importlib.import_module(\"TTS.utils.radam\")\n optimizer = getattr(module, \"RAdam\")\n else:\n optimizer = getattr(torch.optim, optimizer_name)\n return optimizer(model.parameters(), lr=lr, **optimizer_params)\n", "path": "TTS/utils/trainer_utils.py"}], "after_files": [{"content": "import importlib\nfrom typing import Dict\n\nimport torch\n\nfrom TTS.utils.training import NoamLR\n\n\ndef is_apex_available():\n return importlib.util.find_spec(\"apex\") is not None\n\n\ndef setup_torch_training_env(cudnn_enable, cudnn_benchmark):\n num_gpus = torch.cuda.device_count()\n if num_gpus > 1:\n raise RuntimeError(\n f\" [!] {num_gpus} active GPUs. Define the target GPU by `CUDA_VISIBLE_DEVICES`. 
For multi-gpu training use `TTS/bin/distribute.py`.\"\n )\n torch.backends.cudnn.enabled = cudnn_enable\n torch.backends.cudnn.benchmark = cudnn_benchmark\n torch.manual_seed(54321)\n use_cuda = torch.cuda.is_available()\n print(\" > Using CUDA: \", use_cuda)\n print(\" > Number of GPUs: \", num_gpus)\n return use_cuda, num_gpus\n\n\ndef get_scheduler(\n lr_scheduler: str, lr_scheduler_params: Dict, optimizer: torch.optim.Optimizer\n) -> torch.optim.lr_scheduler._LRScheduler: # pylint: disable=protected-access\n \"\"\"Find, initialize and return a scheduler.\n\n Args:\n lr_scheduler (str): Scheduler name.\n lr_scheduler_params (Dict): Scheduler parameters.\n optimizer (torch.optim.Optimizer): Optimizer to pass to the scheduler.\n\n Returns:\n torch.optim.lr_scheduler._LRScheduler: Functional scheduler.\n \"\"\"\n if lr_scheduler is None:\n return None\n if lr_scheduler.lower() == \"noamlr\":\n scheduler = NoamLR\n else:\n scheduler = getattr(torch.optim.lr_scheduler, lr_scheduler)\n return scheduler(optimizer, **lr_scheduler_params)\n\n\ndef get_optimizer(\n optimizer_name: str, optimizer_params: dict, lr: float, model: torch.nn.Module\n) -> torch.optim.Optimizer:\n \"\"\"Find, initialize and return a optimizer.\n\n Args:\n optimizer_name (str): Optimizer name.\n optimizer_params (dict): Optimizer parameters.\n lr (float): Initial learning rate.\n model (torch.nn.Module): Model to pass to the optimizer.\n\n Returns:\n torch.optim.Optimizer: Functional optimizer.\n \"\"\"\n if optimizer_name.lower() == \"radam\":\n module = importlib.import_module(\"TTS.utils.radam\")\n optimizer = getattr(module, \"RAdam\")\n else:\n optimizer = getattr(torch.optim, optimizer_name)\n return optimizer(model.parameters(), lr=lr, **optimizer_params)\n", "path": "TTS/utils/trainer_utils.py"}]} | 1,742 | 229 |
gh_patches_debug_18697 | rasdani/github-patches | git_diff | graspologic-org__graspologic-345 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
deprecate python 3.5
scikit-learn 0.23 and above no longer support Python 3.5.
--- END ISSUE ---
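For completeness, a hedged sketch of how a minimum Python version is usually enforced at install time via `python_requires`, so pip refuses to install on older interpreters. This snippet is illustrative and is not taken from graspy's actual `setup.py`:
```python
from setuptools import setup, find_packages

setup(
    name="graspy",
    version="0.0.0",                 # placeholder version
    packages=find_packages(),
    python_requires=">=3.6",         # pip rejects installation on Python 3.5 and below
    classifiers=[
        "Programming Language :: Python :: 3",
        "Programming Language :: Python :: 3.6",
        "Programming Language :: Python :: 3.7",
    ],
)
```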
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 import os
2 import sys
3 from setuptools import setup, find_packages
4 from sys import platform
5
6 PACKAGE_NAME = "graspy"
7 DESCRIPTION = "A set of python modules for graph statistics"
8 with open("README.md", "r") as f:
9 LONG_DESCRIPTION = f.read()
10 AUTHOR = ("Eric Bridgeford, Jaewon Chung, Benjamin Pedigo, Bijan Varjavand",)
11 AUTHOR_EMAIL = "[email protected]"
12 URL = "https://github.com/neurodata/graspy"
13 MINIMUM_PYTHON_VERSION = 3, 5 # Minimum of Python 3.5
14 REQUIRED_PACKAGES = [
15 "networkx>=2.1",
16 "numpy>=1.8.1",
17 "scikit-learn>=0.19.1",
18 "scipy>=1.1.0",
19 "seaborn>=0.9.0",
20 "matplotlib>=3.0.0",
21 ]
22
23 # Find GraSPy version.
24 PROJECT_PATH = os.path.dirname(os.path.abspath(__file__))
25 for line in open(os.path.join(PROJECT_PATH, "graspy", "__init__.py")):
26 if line.startswith("__version__ = "):
27 VERSION = line.strip().split()[2][1:-1]
28
29
30 def check_python_version():
31 """Exit when the Python version is too low."""
32 if sys.version_info < MINIMUM_PYTHON_VERSION:
33 sys.exit("Python {}.{}+ is required.".format(*MINIMUM_PYTHON_VERSION))
34
35
36 check_python_version()
37
38 setup(
39 name=PACKAGE_NAME,
40 version=VERSION,
41 description=DESCRIPTION,
42 long_description=LONG_DESCRIPTION,
43 long_description_content_type="text/markdown",
44 author=AUTHOR,
45 author_email=AUTHOR_EMAIL,
46 install_requires=REQUIRED_PACKAGES,
47 url=URL,
48 license="Apache License 2.0",
49 classifiers=[
50 "Development Status :: 3 - Alpha",
51 "Intended Audience :: Science/Research",
52 "Topic :: Scientific/Engineering :: Mathematics",
53 "License :: OSI Approved :: Apache Software License",
54 "Programming Language :: Python :: 3",
55 "Programming Language :: Python :: 3.5",
56 "Programming Language :: Python :: 3.6",
57 "Programming Language :: Python :: 3.7",
58 ],
59 packages=find_packages(),
60 include_package_data=True,
61 )
62
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -10,7 +10,7 @@
AUTHOR = ("Eric Bridgeford, Jaewon Chung, Benjamin Pedigo, Bijan Varjavand",)
AUTHOR_EMAIL = "[email protected]"
URL = "https://github.com/neurodata/graspy"
-MINIMUM_PYTHON_VERSION = 3, 5 # Minimum of Python 3.5
+MINIMUM_PYTHON_VERSION = 3, 6 # Minimum of Python 3.5
REQUIRED_PACKAGES = [
"networkx>=2.1",
"numpy>=1.8.1",
@@ -52,7 +52,6 @@
"Topic :: Scientific/Engineering :: Mathematics",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
- "Programming Language :: Python :: 3.5",
"Programming Language :: Python :: 3.6",
"Programming Language :: Python :: 3.7",
],
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -10,7 +10,7 @@\n AUTHOR = (\"Eric Bridgeford, Jaewon Chung, Benjamin Pedigo, Bijan Varjavand\",)\n AUTHOR_EMAIL = \"[email protected]\"\n URL = \"https://github.com/neurodata/graspy\"\n-MINIMUM_PYTHON_VERSION = 3, 5 # Minimum of Python 3.5\n+MINIMUM_PYTHON_VERSION = 3, 6 # Minimum of Python 3.5\n REQUIRED_PACKAGES = [\n \"networkx>=2.1\",\n \"numpy>=1.8.1\",\n@@ -52,7 +52,6 @@\n \"Topic :: Scientific/Engineering :: Mathematics\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Programming Language :: Python :: 3\",\n- \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n ],\n", "issue": "deprecate python 3.5\nsklearn 0.23 and above don't support 3.5\n", "before_files": [{"content": "import os\nimport sys\nfrom setuptools import setup, find_packages\nfrom sys import platform\n\nPACKAGE_NAME = \"graspy\"\nDESCRIPTION = \"A set of python modules for graph statistics\"\nwith open(\"README.md\", \"r\") as f:\n LONG_DESCRIPTION = f.read()\nAUTHOR = (\"Eric Bridgeford, Jaewon Chung, Benjamin Pedigo, Bijan Varjavand\",)\nAUTHOR_EMAIL = \"[email protected]\"\nURL = \"https://github.com/neurodata/graspy\"\nMINIMUM_PYTHON_VERSION = 3, 5 # Minimum of Python 3.5\nREQUIRED_PACKAGES = [\n \"networkx>=2.1\",\n \"numpy>=1.8.1\",\n \"scikit-learn>=0.19.1\",\n \"scipy>=1.1.0\",\n \"seaborn>=0.9.0\",\n \"matplotlib>=3.0.0\",\n]\n\n# Find GraSPy version.\nPROJECT_PATH = os.path.dirname(os.path.abspath(__file__))\nfor line in open(os.path.join(PROJECT_PATH, \"graspy\", \"__init__.py\")):\n if line.startswith(\"__version__ = \"):\n VERSION = line.strip().split()[2][1:-1]\n\n\ndef check_python_version():\n \"\"\"Exit when the Python version is too low.\"\"\"\n if sys.version_info < MINIMUM_PYTHON_VERSION:\n sys.exit(\"Python {}.{}+ is required.\".format(*MINIMUM_PYTHON_VERSION))\n\n\ncheck_python_version()\n\nsetup(\n name=PACKAGE_NAME,\n version=VERSION,\n description=DESCRIPTION,\n long_description=LONG_DESCRIPTION,\n long_description_content_type=\"text/markdown\",\n author=AUTHOR,\n author_email=AUTHOR_EMAIL,\n install_requires=REQUIRED_PACKAGES,\n url=URL,\n license=\"Apache License 2.0\",\n classifiers=[\n \"Development Status :: 3 - Alpha\",\n \"Intended Audience :: Science/Research\",\n \"Topic :: Scientific/Engineering :: Mathematics\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n ],\n packages=find_packages(),\n include_package_data=True,\n)\n", "path": "setup.py"}], "after_files": [{"content": "import os\nimport sys\nfrom setuptools import setup, find_packages\nfrom sys import platform\n\nPACKAGE_NAME = \"graspy\"\nDESCRIPTION = \"A set of python modules for graph statistics\"\nwith open(\"README.md\", \"r\") as f:\n LONG_DESCRIPTION = f.read()\nAUTHOR = (\"Eric Bridgeford, Jaewon Chung, Benjamin Pedigo, Bijan Varjavand\",)\nAUTHOR_EMAIL = \"[email protected]\"\nURL = \"https://github.com/neurodata/graspy\"\nMINIMUM_PYTHON_VERSION = 3, 6 # Minimum of Python 3.5\nREQUIRED_PACKAGES = [\n \"networkx>=2.1\",\n \"numpy>=1.8.1\",\n \"scikit-learn>=0.19.1\",\n \"scipy>=1.1.0\",\n \"seaborn>=0.9.0\",\n \"matplotlib>=3.0.0\",\n]\n\n# Find GraSPy version.\nPROJECT_PATH = os.path.dirname(os.path.abspath(__file__))\nfor line in 
open(os.path.join(PROJECT_PATH, \"graspy\", \"__init__.py\")):\n if line.startswith(\"__version__ = \"):\n VERSION = line.strip().split()[2][1:-1]\n\n\ndef check_python_version():\n \"\"\"Exit when the Python version is too low.\"\"\"\n if sys.version_info < MINIMUM_PYTHON_VERSION:\n sys.exit(\"Python {}.{}+ is required.\".format(*MINIMUM_PYTHON_VERSION))\n\n\ncheck_python_version()\n\nsetup(\n name=PACKAGE_NAME,\n version=VERSION,\n description=DESCRIPTION,\n long_description=LONG_DESCRIPTION,\n long_description_content_type=\"text/markdown\",\n author=AUTHOR,\n author_email=AUTHOR_EMAIL,\n install_requires=REQUIRED_PACKAGES,\n url=URL,\n license=\"Apache License 2.0\",\n classifiers=[\n \"Development Status :: 3 - Alpha\",\n \"Intended Audience :: Science/Research\",\n \"Topic :: Scientific/Engineering :: Mathematics\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n ],\n packages=find_packages(),\n include_package_data=True,\n)\n", "path": "setup.py"}]} | 896 | 231 |
gh_patches_debug_4061 | rasdani/github-patches | git_diff | dbt-labs__dbt-core-8052 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
pin click
resolves #8048
### Description
Pin main to `click>=8.1.1,<8.1.4`
### Checklist
- [ ] I have read [the contributing guide](https://github.com/dbt-labs/dbt-core/blob/main/CONTRIBUTING.md) and understand what's expected of me
- [ ] I have signed the [CLA](https://docs.getdbt.com/docs/contributor-license-agreements)
- [ ] I have run this code in development and it appears to resolve the stated issue
- [ ] This PR includes tests, or tests are not required/relevant for this PR
- [ ] I have [opened an issue to add/update docs](https://github.com/dbt-labs/docs.getdbt.com/issues/new/choose), or docs changes are not required/relevant for this PR
- [ ] I have run `changie new` to [create a changelog entry](https://github.com/dbt-labs/dbt-core/blob/main/CONTRIBUTING.md#adding-a-changelog-entry)
--- END ISSUE ---
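As a quick, hypothetical sanity check of the proposed range, the `packaging` library can evaluate candidate `click` versions against the pin; this snippet is illustrative and not part of the repository.
```python
from packaging.specifiers import SpecifierSet
from packaging.version import Version

# The range quoted in the PR description above.
proposed_pin = SpecifierSet(">=8.1.1,<8.1.4")

for candidate in ("8.1.0", "8.1.1", "8.1.3", "8.1.4"):
    # Expect 8.1.1 through 8.1.3 to be accepted and 8.1.0 / 8.1.4 to be rejected.
    print(candidate, Version(candidate) in proposed_pin)
```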
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `core/setup.py`
Content:
```
1 #!/usr/bin/env python
2 import os
3 import sys
4
5 if sys.version_info < (3, 7, 2):
6 print("Error: dbt does not support this version of Python.")
7 print("Please upgrade to Python 3.7.2 or higher.")
8 sys.exit(1)
9
10
11 from setuptools import setup
12
13 try:
14 from setuptools import find_namespace_packages
15 except ImportError:
16 # the user has a downlevel version of setuptools.
17 print("Error: dbt requires setuptools v40.1.0 or higher.")
18 print('Please upgrade setuptools with "pip install --upgrade setuptools" ' "and try again")
19 sys.exit(1)
20
21
22 this_directory = os.path.abspath(os.path.dirname(__file__))
23 with open(os.path.join(this_directory, "README.md")) as f:
24 long_description = f.read()
25
26
27 package_name = "dbt-core"
28 package_version = "1.5.2"
29 description = """With dbt, data analysts and engineers can build analytics \
30 the way engineers build applications."""
31
32
33 setup(
34 name=package_name,
35 version=package_version,
36 description=description,
37 long_description=long_description,
38 long_description_content_type="text/markdown",
39 author="dbt Labs",
40 author_email="[email protected]",
41 url="https://github.com/dbt-labs/dbt-core",
42 packages=find_namespace_packages(include=["dbt", "dbt.*"]),
43 include_package_data=True,
44 test_suite="test",
45 entry_points={
46 "console_scripts": ["dbt = dbt.cli.main:cli"],
47 },
48 install_requires=[
49 "Jinja2==3.1.2",
50 "agate>=1.6,<1.7.1",
51 "click>=7.0,<9",
52 "colorama>=0.3.9,<0.4.7",
53 "hologram>=0.0.14,<=0.0.16",
54 "isodate>=0.6,<0.7",
55 "logbook>=1.5,<1.6",
56 "mashumaro[msgpack]==3.6",
57 "minimal-snowplow-tracker==0.0.2",
58 "networkx>=2.3,<2.8.1;python_version<'3.8'",
59 "networkx>=2.3,<3;python_version>='3.8'",
60 "packaging>20.9",
61 "sqlparse>=0.2.3",
62 "dbt-extractor~=0.4.1",
63 "typing-extensions>=3.7.4",
64 "werkzeug>=1,<3",
65 "pathspec>=0.9,<0.12",
66 "protobuf>=4.0.0",
67 "pytz>=2015.7",
68 # the following are all to match snowflake-connector-python
69 "requests<3.0.0",
70 "idna>=2.5,<4",
71 "cffi>=1.9,<2.0.0",
72 "pyyaml>=6.0",
73 ],
74 zip_safe=False,
75 classifiers=[
76 "Development Status :: 5 - Production/Stable",
77 "License :: OSI Approved :: Apache Software License",
78 "Operating System :: Microsoft :: Windows",
79 "Operating System :: MacOS :: MacOS X",
80 "Operating System :: POSIX :: Linux",
81 "Programming Language :: Python :: 3.7",
82 "Programming Language :: Python :: 3.8",
83 "Programming Language :: Python :: 3.9",
84 "Programming Language :: Python :: 3.10",
85 "Programming Language :: Python :: 3.11",
86 ],
87 python_requires=">=3.7.2",
88 )
89
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/core/setup.py b/core/setup.py
--- a/core/setup.py
+++ b/core/setup.py
@@ -48,7 +48,8 @@
install_requires=[
"Jinja2==3.1.2",
"agate>=1.6,<1.7.1",
- "click>=7.0,<9",
+ # temporarily pinning click for mypy failures: https://github.com/pallets/click/issues/2558
+ "click>=7.0,<8.1.4",
"colorama>=0.3.9,<0.4.7",
"hologram>=0.0.14,<=0.0.16",
"isodate>=0.6,<0.7",
| {"golden_diff": "diff --git a/core/setup.py b/core/setup.py\n--- a/core/setup.py\n+++ b/core/setup.py\n@@ -48,7 +48,8 @@\n install_requires=[\n \"Jinja2==3.1.2\",\n \"agate>=1.6,<1.7.1\",\n- \"click>=7.0,<9\",\n+ # temporarily pinning click for mypy failures: https://github.com/pallets/click/issues/2558\n+ \"click>=7.0,<8.1.4\",\n \"colorama>=0.3.9,<0.4.7\",\n \"hologram>=0.0.14,<=0.0.16\",\n \"isodate>=0.6,<0.7\",\n", "issue": "pin click\nresolves #8048 \r\n\r\n### Description\r\n\r\nPin main to `click>=8.1.1,<8.1.4`\r\n\r\n### Checklist\r\n\r\n- [ ] I have read [the contributing guide](https://github.com/dbt-labs/dbt-core/blob/main/CONTRIBUTING.md) and understand what's expected of me\r\n- [ ] I have signed the [CLA](https://docs.getdbt.com/docs/contributor-license-agreements)\r\n- [ ] I have run this code in development and it appears to resolve the stated issue\r\n- [ ] This PR includes tests, or tests are not required/relevant for this PR\r\n- [ ] I have [opened an issue to add/update docs](https://github.com/dbt-labs/docs.getdbt.com/issues/new/choose), or docs changes are not required/relevant for this PR\r\n- [ ] I have run `changie new` to [create a changelog entry](https://github.com/dbt-labs/dbt-core/blob/main/CONTRIBUTING.md#adding-a-changelog-entry)\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\nimport os\nimport sys\n\nif sys.version_info < (3, 7, 2):\n print(\"Error: dbt does not support this version of Python.\")\n print(\"Please upgrade to Python 3.7.2 or higher.\")\n sys.exit(1)\n\n\nfrom setuptools import setup\n\ntry:\n from setuptools import find_namespace_packages\nexcept ImportError:\n # the user has a downlevel version of setuptools.\n print(\"Error: dbt requires setuptools v40.1.0 or higher.\")\n print('Please upgrade setuptools with \"pip install --upgrade setuptools\" ' \"and try again\")\n sys.exit(1)\n\n\nthis_directory = os.path.abspath(os.path.dirname(__file__))\nwith open(os.path.join(this_directory, \"README.md\")) as f:\n long_description = f.read()\n\n\npackage_name = \"dbt-core\"\npackage_version = \"1.5.2\"\ndescription = \"\"\"With dbt, data analysts and engineers can build analytics \\\nthe way engineers build applications.\"\"\"\n\n\nsetup(\n name=package_name,\n version=package_version,\n description=description,\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n author=\"dbt Labs\",\n author_email=\"[email protected]\",\n url=\"https://github.com/dbt-labs/dbt-core\",\n packages=find_namespace_packages(include=[\"dbt\", \"dbt.*\"]),\n include_package_data=True,\n test_suite=\"test\",\n entry_points={\n \"console_scripts\": [\"dbt = dbt.cli.main:cli\"],\n },\n install_requires=[\n \"Jinja2==3.1.2\",\n \"agate>=1.6,<1.7.1\",\n \"click>=7.0,<9\",\n \"colorama>=0.3.9,<0.4.7\",\n \"hologram>=0.0.14,<=0.0.16\",\n \"isodate>=0.6,<0.7\",\n \"logbook>=1.5,<1.6\",\n \"mashumaro[msgpack]==3.6\",\n \"minimal-snowplow-tracker==0.0.2\",\n \"networkx>=2.3,<2.8.1;python_version<'3.8'\",\n \"networkx>=2.3,<3;python_version>='3.8'\",\n \"packaging>20.9\",\n \"sqlparse>=0.2.3\",\n \"dbt-extractor~=0.4.1\",\n \"typing-extensions>=3.7.4\",\n \"werkzeug>=1,<3\",\n \"pathspec>=0.9,<0.12\",\n \"protobuf>=4.0.0\",\n \"pytz>=2015.7\",\n # the following are all to match snowflake-connector-python\n \"requests<3.0.0\",\n \"idna>=2.5,<4\",\n \"cffi>=1.9,<2.0.0\",\n \"pyyaml>=6.0\",\n ],\n zip_safe=False,\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"License :: OSI Approved :: Apache Software 
License\",\n \"Operating System :: Microsoft :: Windows\",\n \"Operating System :: MacOS :: MacOS X\",\n \"Operating System :: POSIX :: Linux\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Programming Language :: Python :: 3.11\",\n ],\n python_requires=\">=3.7.2\",\n)\n", "path": "core/setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\nimport os\nimport sys\n\nif sys.version_info < (3, 7, 2):\n print(\"Error: dbt does not support this version of Python.\")\n print(\"Please upgrade to Python 3.7.2 or higher.\")\n sys.exit(1)\n\n\nfrom setuptools import setup\n\ntry:\n from setuptools import find_namespace_packages\nexcept ImportError:\n # the user has a downlevel version of setuptools.\n print(\"Error: dbt requires setuptools v40.1.0 or higher.\")\n print('Please upgrade setuptools with \"pip install --upgrade setuptools\" ' \"and try again\")\n sys.exit(1)\n\n\nthis_directory = os.path.abspath(os.path.dirname(__file__))\nwith open(os.path.join(this_directory, \"README.md\")) as f:\n long_description = f.read()\n\n\npackage_name = \"dbt-core\"\npackage_version = \"1.5.2\"\ndescription = \"\"\"With dbt, data analysts and engineers can build analytics \\\nthe way engineers build applications.\"\"\"\n\n\nsetup(\n name=package_name,\n version=package_version,\n description=description,\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n author=\"dbt Labs\",\n author_email=\"[email protected]\",\n url=\"https://github.com/dbt-labs/dbt-core\",\n packages=find_namespace_packages(include=[\"dbt\", \"dbt.*\"]),\n include_package_data=True,\n test_suite=\"test\",\n entry_points={\n \"console_scripts\": [\"dbt = dbt.cli.main:cli\"],\n },\n install_requires=[\n \"Jinja2==3.1.2\",\n \"agate>=1.6,<1.7.1\",\n # temporarily pinning click for mypy failures: https://github.com/pallets/click/issues/2558\n \"click>=7.0,<8.1.4\",\n \"colorama>=0.3.9,<0.4.7\",\n \"hologram>=0.0.14,<=0.0.16\",\n \"isodate>=0.6,<0.7\",\n \"logbook>=1.5,<1.6\",\n \"mashumaro[msgpack]==3.6\",\n \"minimal-snowplow-tracker==0.0.2\",\n \"networkx>=2.3,<2.8.1;python_version<'3.8'\",\n \"networkx>=2.3,<3;python_version>='3.8'\",\n \"packaging>20.9\",\n \"sqlparse>=0.2.3\",\n \"dbt-extractor~=0.4.1\",\n \"typing-extensions>=3.7.4\",\n \"werkzeug>=1,<3\",\n \"pathspec>=0.9,<0.12\",\n \"protobuf>=4.0.0\",\n \"pytz>=2015.7\",\n # the following are all to match snowflake-connector-python\n \"requests<3.0.0\",\n \"idna>=2.5,<4\",\n \"cffi>=1.9,<2.0.0\",\n \"pyyaml>=6.0\",\n ],\n zip_safe=False,\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Operating System :: Microsoft :: Windows\",\n \"Operating System :: MacOS :: MacOS X\",\n \"Operating System :: POSIX :: Linux\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Programming Language :: Python :: 3.11\",\n ],\n python_requires=\">=3.7.2\",\n)\n", "path": "core/setup.py"}]} | 1,456 | 171 |
gh_patches_debug_10922 | rasdani/github-patches | git_diff | aws-cloudformation__cfn-lint-1571 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
E2004 issue with AWS::SSM::Parameter::Value<String> Parameters
*cfn-lint version: (`cfn-lint --version`)*
0.33.0
*Description of issue.*
When using an SSM parameter store lookup to get a CIDR range to use in a security group, cfn-lint raised an E2004 error.
```yaml
Parameters:
VpcID:
Description: ID of VPC
Type: "AWS::SSM::Parameter::Value<String>"
Default: VpcId
AllowedValues:
- VpcId
VpcCidrRange:
Description: CIDR range of VPC
Type: "AWS::SSM::Parameter::Value<String>"
Default: VpcCIDRRange
AllowedValues:
- VpcCIDRRange
Resources:
SecurityGroup:
Type: AWS::EC2::SecurityGroup
Properties:
GroupDescription: Allow HTTP egress
VpcId: !Ref VpcID
SecurityGroupEgress:
- IpProtocol: tcp
FromPort: 80
ToPort: 80
CidrIp: !Ref VpcCidrRange
```
Error is `E2004 Cidr should be a Cidr Range based string for VpcCIDRRange product.template.yaml:9:3`
If I'm not misunderstanding, cfn-lint seems to be complaining that the string `VpcCIDRRange` is not a valid CIDR, when in fact at runtime CloudFormation will look up the value of `VpcCIDRRange` in SSM Parameter Store and use that instead.
I guess a sensible approach here would be to disable E2004 for parameters that have a type of `AWS::SSM::Parameter::Value<String>` maybe?
--- END ISSUE ---
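A minimal sketch of the guard suggested above — skipping the CIDR allowed-values check when a parameter is backed by SSM Parameter Store — could look like the following; this is an illustration, not the project's actual patch.
```python
SSM_STRING_TYPE = "AWS::SSM::Parameter::Value<String>"


def should_skip_cidr_check(parameter: dict) -> bool:
    # For SSM-backed parameters the allowed values are parameter-store key names,
    # not literal CIDR ranges, so validating them against a CIDR regex is meaningless.
    return parameter.get("Type", "") == SSM_STRING_TYPE
```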
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/cfnlint/rules/parameters/CidrAllowedValues.py`
Content:
```
1 """
2 Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
3 SPDX-License-Identifier: MIT-0
4 """
5 import re
6 from cfnlint.rules import CloudFormationLintRule
7 from cfnlint.rules import RuleMatch
8
9 from cfnlint.helpers import REGEX_CIDR
10
11
12 class CidrAllowedValues(CloudFormationLintRule):
13 """CIDR checks"""
14 id = 'E2004'
15 shortdesc = 'CIDR Allowed Values should be a Cidr Range'
16 description = 'Check if a parameter is being used as a CIDR. ' \
17 'If it is make sure allowed values are proper CIDRs'
18 source_url = 'https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/parameters-section-structure.html'
19 tags = ['parameters', 'cidr']
20
21 def __init__(self):
22 """Init"""
23 super(CidrAllowedValues, self).__init__()
24 resource_type_specs = [
25 'AWS::EC2::ClientVpnAuthorizationRule',
26 'AWS::EC2::ClientVpnEndpoint',
27 'AWS::EC2::ClientVpnRoute',
28 'AWS::EC2::NetworkAclEntry',
29 'AWS::EC2::Route',
30 'AWS::EC2::SecurityGroupEgress',
31 'AWS::EC2::SecurityGroupIngress',
32 'AWS::EC2::Subnet',
33 'AWS::EC2::TransitGatewayRoute',
34 'AWS::EC2::VPC',
35 'AWS::EC2::VPCCidrBlock',
36 'AWS::EC2::VPNConnectionRoute',
37 'AWS::RDS::DBSecurityGroupIngress',
38 'AWS::Redshift::ClusterSecurityGroupIngress',
39 ]
40
41 property_type_specs = [
42 'AWS::EC2::SecurityGroup.Egress',
43 'AWS::EC2::SecurityGroup.Ingress',
44 'AWS::EC2::VPNConnection.VpnTunnelOptionsSpecification',
45 'AWS::MediaLive::InputSecurityGroup.InputWhitelistRuleCidr',
46 'AWS::RDS::DBSecurityGroup.Ingress',
47 'AWS::SES::ReceiptFilter.IpFilter',
48 ]
49
50 for resource_type_spec in resource_type_specs:
51 self.resource_property_types.append(resource_type_spec)
52 for property_type_spec in property_type_specs:
53 self.resource_sub_property_types.append(property_type_spec)
54
55 # pylint: disable=W0613
56 def check_cidr_ref(self, value, path, parameters, resources):
57 matches = []
58
59 if value in parameters:
60 parameter = parameters.get(value, {})
61 allowed_values = parameter.get('AllowedValues', None)
62 if allowed_values:
63 for cidr in allowed_values:
64 if not re.match(REGEX_CIDR, cidr):
65 cidr_path = ['Parameters', value]
66 message = 'Cidr should be a Cidr Range based string for {0}'
67 matches.append(RuleMatch(cidr_path, message.format(cidr)))
68
69 return matches
70
71 def check(self, properties, resource_type, path, cfn):
72 """Check itself"""
73 matches = []
74
75 for cidrString in [
76 'CIDRIP',
77 'Cidr',
78 'CidrBlock',
79 'CidrIp',
80 'ClientCidrBlock',
81 'DestinationCidrBlock',
82 'TargetNetworkCidr',
83 'TunnelInsideCidr',
84 ]:
85 matches.extend(
86 cfn.check_value(
87 properties, cidrString, path,
88 check_value=None, check_ref=self.check_cidr_ref,
89 check_find_in_map=None, check_split=None, check_join=None
90 )
91 )
92
93 return matches
94
95 def match_resource_sub_properties(self, properties, property_type, path, cfn):
96 """Match for sub properties"""
97 matches = []
98
99 matches.extend(self.check(properties, property_type, path, cfn))
100
101 return matches
102
103 def match_resource_properties(self, properties, resource_type, path, cfn):
104 """Check CloudFormation Properties"""
105 matches = []
106
107 matches.extend(self.check(properties, resource_type, path, cfn))
108
109 return matches
110
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/cfnlint/rules/parameters/CidrAllowedValues.py b/src/cfnlint/rules/parameters/CidrAllowedValues.py
--- a/src/cfnlint/rules/parameters/CidrAllowedValues.py
+++ b/src/cfnlint/rules/parameters/CidrAllowedValues.py
@@ -58,6 +58,11 @@
if value in parameters:
parameter = parameters.get(value, {})
+ parameter_type = parameters.get(value, {}).get('Type', '')
+ if parameter_type == 'AWS::SSM::Parameter::Value<String>':
+ # If we are reading from the Parameter store we can no longer
+ # check the value of the allowed values
+ return matches
allowed_values = parameter.get('AllowedValues', None)
if allowed_values:
for cidr in allowed_values:
| {"golden_diff": "diff --git a/src/cfnlint/rules/parameters/CidrAllowedValues.py b/src/cfnlint/rules/parameters/CidrAllowedValues.py\n--- a/src/cfnlint/rules/parameters/CidrAllowedValues.py\n+++ b/src/cfnlint/rules/parameters/CidrAllowedValues.py\n@@ -58,6 +58,11 @@\n \n if value in parameters:\n parameter = parameters.get(value, {})\n+ parameter_type = parameters.get(value, {}).get('Type', '')\n+ if parameter_type == 'AWS::SSM::Parameter::Value<String>':\n+ # If we are reading from the Parameter store we can no longer\n+ # check the value of the allowed values\n+ return matches\n allowed_values = parameter.get('AllowedValues', None)\n if allowed_values:\n for cidr in allowed_values:\n", "issue": "E2004 issue with AWS::SSM::Parameter::Value<String> Parameters\n*cfn-lint version: (`cfn-lint --version`)*\r\n0.33.0\r\n\r\n*Description of issue.*\r\nWhen using an SSM paramter store lookup to get a CIDR Range to use in a Security group, cfn-lint raised an E2004 error.\r\n\r\n```yaml\r\nParameters:\r\n VpcID:\r\n Description: ID of VPC\r\n Type: \"AWS::SSM::Parameter::Value<String>\"\r\n Default: VpcId\r\n AllowedValues:\r\n - VpcId\r\n\r\n VpcCidrRange:\r\n Description: CIDR range of VPC\r\n Type: \"AWS::SSM::Parameter::Value<String>\"\r\n Default: VpcCIDRRange\r\n AllowedValues:\r\n - VpcCIDRRange\r\n\r\nResources:\r\n SecurityGroup:\r\n Type: AWS::EC2::SecurityGroup\r\n Properties:\r\n GroupDescription: Allow HTTP egress\r\n VpcId: !Ref VpcID\r\n SecurityGroupEgress:\r\n - IpProtocol: tcp\r\n FromPort: 80\r\n ToPort: 80\r\n CidrIp: !Ref VpcCidrRange\r\n```\r\n\r\nError is `E2004 Cidr should be a Cidr Range based string for VpcCIDRRange product.template.yaml:9:3`\r\n\r\nIf I'm not misunderstanding, cfn-lint seems to be complaining that the String `VpcCIDRRange` is not a valid CIDR, where in fact during runtime CloudFormation will lookup the value of `VpcCIDRRange` in SSM Parameter Store and use that instead.\r\n\r\nI guess a sensible approach here would be to disable E2004 for parameters that have a type of `AWS::SSM::Parameter::Value<String>` maybe?\n", "before_files": [{"content": "\"\"\"\nCopyright Amazon.com, Inc. or its affiliates. All Rights Reserved.\nSPDX-License-Identifier: MIT-0\n\"\"\"\nimport re\nfrom cfnlint.rules import CloudFormationLintRule\nfrom cfnlint.rules import RuleMatch\n\nfrom cfnlint.helpers import REGEX_CIDR\n\n\nclass CidrAllowedValues(CloudFormationLintRule):\n \"\"\"CIDR checks\"\"\"\n id = 'E2004'\n shortdesc = 'CIDR Allowed Values should be a Cidr Range'\n description = 'Check if a parameter is being used as a CIDR. 
' \\\n 'If it is make sure allowed values are proper CIDRs'\n source_url = 'https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/parameters-section-structure.html'\n tags = ['parameters', 'cidr']\n\n def __init__(self):\n \"\"\"Init\"\"\"\n super(CidrAllowedValues, self).__init__()\n resource_type_specs = [\n 'AWS::EC2::ClientVpnAuthorizationRule',\n 'AWS::EC2::ClientVpnEndpoint',\n 'AWS::EC2::ClientVpnRoute',\n 'AWS::EC2::NetworkAclEntry',\n 'AWS::EC2::Route',\n 'AWS::EC2::SecurityGroupEgress',\n 'AWS::EC2::SecurityGroupIngress',\n 'AWS::EC2::Subnet',\n 'AWS::EC2::TransitGatewayRoute',\n 'AWS::EC2::VPC',\n 'AWS::EC2::VPCCidrBlock',\n 'AWS::EC2::VPNConnectionRoute',\n 'AWS::RDS::DBSecurityGroupIngress',\n 'AWS::Redshift::ClusterSecurityGroupIngress',\n ]\n\n property_type_specs = [\n 'AWS::EC2::SecurityGroup.Egress',\n 'AWS::EC2::SecurityGroup.Ingress',\n 'AWS::EC2::VPNConnection.VpnTunnelOptionsSpecification',\n 'AWS::MediaLive::InputSecurityGroup.InputWhitelistRuleCidr',\n 'AWS::RDS::DBSecurityGroup.Ingress',\n 'AWS::SES::ReceiptFilter.IpFilter',\n ]\n\n for resource_type_spec in resource_type_specs:\n self.resource_property_types.append(resource_type_spec)\n for property_type_spec in property_type_specs:\n self.resource_sub_property_types.append(property_type_spec)\n\n # pylint: disable=W0613\n def check_cidr_ref(self, value, path, parameters, resources):\n matches = []\n\n if value in parameters:\n parameter = parameters.get(value, {})\n allowed_values = parameter.get('AllowedValues', None)\n if allowed_values:\n for cidr in allowed_values:\n if not re.match(REGEX_CIDR, cidr):\n cidr_path = ['Parameters', value]\n message = 'Cidr should be a Cidr Range based string for {0}'\n matches.append(RuleMatch(cidr_path, message.format(cidr)))\n\n return matches\n\n def check(self, properties, resource_type, path, cfn):\n \"\"\"Check itself\"\"\"\n matches = []\n\n for cidrString in [\n 'CIDRIP',\n 'Cidr',\n 'CidrBlock',\n 'CidrIp',\n 'ClientCidrBlock',\n 'DestinationCidrBlock',\n 'TargetNetworkCidr',\n 'TunnelInsideCidr',\n ]:\n matches.extend(\n cfn.check_value(\n properties, cidrString, path,\n check_value=None, check_ref=self.check_cidr_ref,\n check_find_in_map=None, check_split=None, check_join=None\n )\n )\n\n return matches\n\n def match_resource_sub_properties(self, properties, property_type, path, cfn):\n \"\"\"Match for sub properties\"\"\"\n matches = []\n\n matches.extend(self.check(properties, property_type, path, cfn))\n\n return matches\n\n def match_resource_properties(self, properties, resource_type, path, cfn):\n \"\"\"Check CloudFormation Properties\"\"\"\n matches = []\n\n matches.extend(self.check(properties, resource_type, path, cfn))\n\n return matches\n", "path": "src/cfnlint/rules/parameters/CidrAllowedValues.py"}], "after_files": [{"content": "\"\"\"\nCopyright Amazon.com, Inc. or its affiliates. All Rights Reserved.\nSPDX-License-Identifier: MIT-0\n\"\"\"\nimport re\nfrom cfnlint.rules import CloudFormationLintRule\nfrom cfnlint.rules import RuleMatch\n\nfrom cfnlint.helpers import REGEX_CIDR\n\n\nclass CidrAllowedValues(CloudFormationLintRule):\n \"\"\"CIDR checks\"\"\"\n id = 'E2004'\n shortdesc = 'CIDR Allowed Values should be a Cidr Range'\n description = 'Check if a parameter is being used as a CIDR. 
' \\\n 'If it is make sure allowed values are proper CIDRs'\n source_url = 'https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/parameters-section-structure.html'\n tags = ['parameters', 'cidr']\n\n def __init__(self):\n \"\"\"Init\"\"\"\n super(CidrAllowedValues, self).__init__()\n resource_type_specs = [\n 'AWS::EC2::ClientVpnAuthorizationRule',\n 'AWS::EC2::ClientVpnEndpoint',\n 'AWS::EC2::ClientVpnRoute',\n 'AWS::EC2::NetworkAclEntry',\n 'AWS::EC2::Route',\n 'AWS::EC2::SecurityGroupEgress',\n 'AWS::EC2::SecurityGroupIngress',\n 'AWS::EC2::Subnet',\n 'AWS::EC2::TransitGatewayRoute',\n 'AWS::EC2::VPC',\n 'AWS::EC2::VPCCidrBlock',\n 'AWS::EC2::VPNConnectionRoute',\n 'AWS::RDS::DBSecurityGroupIngress',\n 'AWS::Redshift::ClusterSecurityGroupIngress',\n ]\n\n property_type_specs = [\n 'AWS::EC2::SecurityGroup.Egress',\n 'AWS::EC2::SecurityGroup.Ingress',\n 'AWS::EC2::VPNConnection.VpnTunnelOptionsSpecification',\n 'AWS::MediaLive::InputSecurityGroup.InputWhitelistRuleCidr',\n 'AWS::RDS::DBSecurityGroup.Ingress',\n 'AWS::SES::ReceiptFilter.IpFilter',\n ]\n\n for resource_type_spec in resource_type_specs:\n self.resource_property_types.append(resource_type_spec)\n for property_type_spec in property_type_specs:\n self.resource_sub_property_types.append(property_type_spec)\n\n # pylint: disable=W0613\n def check_cidr_ref(self, value, path, parameters, resources):\n matches = []\n\n if value in parameters:\n parameter = parameters.get(value, {})\n parameter_type = parameters.get(value, {}).get('Type', '')\n if parameter_type == 'AWS::SSM::Parameter::Value<String>':\n # If we are reading from the Parameter store we can no longer\n # check the value of the allowed values\n return matches\n allowed_values = parameter.get('AllowedValues', None)\n if allowed_values:\n for cidr in allowed_values:\n if not re.match(REGEX_CIDR, cidr):\n cidr_path = ['Parameters', value]\n message = 'Cidr should be a Cidr Range based string for {0}'\n matches.append(RuleMatch(cidr_path, message.format(cidr)))\n\n return matches\n\n def check(self, properties, resource_type, path, cfn):\n \"\"\"Check itself\"\"\"\n matches = []\n\n for cidrString in [\n 'CIDRIP',\n 'Cidr',\n 'CidrBlock',\n 'CidrIp',\n 'ClientCidrBlock',\n 'DestinationCidrBlock',\n 'TargetNetworkCidr',\n 'TunnelInsideCidr',\n ]:\n matches.extend(\n cfn.check_value(\n properties, cidrString, path,\n check_value=None, check_ref=self.check_cidr_ref,\n check_find_in_map=None, check_split=None, check_join=None\n )\n )\n\n return matches\n\n def match_resource_sub_properties(self, properties, property_type, path, cfn):\n \"\"\"Match for sub properties\"\"\"\n matches = []\n\n matches.extend(self.check(properties, property_type, path, cfn))\n\n return matches\n\n def match_resource_properties(self, properties, resource_type, path, cfn):\n \"\"\"Check CloudFormation Properties\"\"\"\n matches = []\n\n matches.extend(self.check(properties, resource_type, path, cfn))\n\n return matches\n", "path": "src/cfnlint/rules/parameters/CidrAllowedValues.py"}]} | 1,780 | 182 |
gh_patches_debug_19104 | rasdani/github-patches | git_diff | fossasia__open-event-server-5678 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Replace all 3rd party links to reliable resource
**Describe the bug**
Many links that are used for testing or for mainstream access via `urllib` are often blocked, either because of the site's security settings or for some other reason.
**Expected behavior**
All links must point to a reliable resource, such as FOSSASIA's website, to ensure dependable access.
**Stacktrace**
One of the plausible errors which might occur is `urllib.error.HTTPError: HTTP Error 403: Forbidden`
**Additional Info**:
Working on it
--- END ISSUE ---
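The failure mode described above can be reproduced with a plain `urllib` request against a host that rejects scripted clients; the snippet below is a hypothetical illustration and the URL is a placeholder, not one of the project's links.
```python
import urllib.error
import urllib.request

url = "https://example.com/protected-resource"  # placeholder URL

try:
    urllib.request.urlopen(url)
except urllib.error.HTTPError as exc:
    # A host that blocks the default Python user agent typically answers 403 Forbidden;
    # the placeholder above may return a different status code.
    print(exc)
```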
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `app/api/helpers/utilities.py`
Content:
```
1 # PLEASE PUT ALL FUNCTIONS WHICH PERFORM GENERAL FORMATTING ON ANY DATATYPE WITHOUT USING ANY
2 # MODULES RELATED TO THE EVENT-SYSTEM i.e FUNCTIONS SPECIFIC TO DB MODELS E.G A FUNCTION JUST FOR ROLE_INVITES
3 import random
4 import string
5 import sys
6
7 import bleach
8 from itsdangerous import Serializer
9 import requests
10 import re
11
12 from app.api.helpers.exceptions import UnprocessableEntity
13
14 from flask import current_app
15
16
17 def dasherize(text):
18 return text.replace('_', '-')
19
20
21 def require_relationship(resource_list, data):
22 for resource in resource_list:
23 if resource not in data:
24 raise UnprocessableEntity({'pointer': '/data/relationships/{}'.format(resource)},
25 "A valid relationship with {} resource is required".format(resource))
26
27
28 def string_empty(value):
29 is_not_str_type = type(value) is not str
30 if sys.version_info[0] < 3:
31 is_not_str_type = is_not_str_type and type(value) is not unicode
32 if type(value) is not value and is_not_str_type:
33 return False
34 return not (value and value.strip() and value != u'' and value != u' ')
35
36
37 def strip_tags(html):
38 if html is None:
39 return None
40 return bleach.clean(html, tags=[], attributes={}, styles=[], strip=True)
41
42
43 def get_serializer(secret_key='secret_key'):
44 return Serializer(secret_key)
45
46
47 def str_generator(size=6, chars=string.ascii_uppercase + string.digits):
48 return ''.join(random.choice(chars) for _ in range(size))
49
50
51 # From http://stackoverflow.com/a/3425124
52 def monthdelta(date, delta):
53 m, y = (date.month + delta) % 12, date.year + (date.month + delta - 1) // 12
54 if not m:
55 m = 12
56 d = min(date.day, [31,
57 29 if y % 4 == 0 and not y % 400 == 0 else 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31][m - 1])
58 return date.replace(day=d, month=m, year=y)
59
60
61 def represents_int(value):
62 try:
63 int(value)
64 return True
65 except:
66 return False
67
68
69 def is_downloadable(url):
70 """
71 Does the url contain a downloadable resource
72 """
73 h = requests.head(url, allow_redirects=True)
74 header = h.headers
75 content_type = header.get('content-type')
76 # content_length = header.get('content-length', 1e10)
77 if 'text' in content_type.lower():
78 return False
79 if 'html' in content_type.lower():
80 return False
81 return True
82
83
84 def get_filename_from_cd(cd):
85 """
86 Get filename and ext from content-disposition
87 """
88 if not cd:
89 return '', ''
90 fname = re.findall('filename=(.+)', cd)
91 if len(fname) == 0:
92 return '', ''
93 fn = fname[0].rsplit('.', 1)
94 return fn[0], '' if len(fn) == 1 else ('.' + fn[1])
95
96
97 def write_file(file, data):
98 """simple write to file"""
99 fp = open(file, 'w')
100 fp.write(str(data, 'utf-8'))
101 fp.close()
102
103
104 def update_state(task_handle, state, result=None):
105 """
106 Update state of celery task
107 """
108 if result is None:
109 result = {}
110 if not current_app.config.get('CELERY_ALWAYS_EAGER'):
111 task_handle.update_state(
112 state=state, meta=result
113 )
114
115
116 # store task results in case of testing
117 # state and info
118 TASK_RESULTS = {}
119
120
121 class EmptyObject(object):
122 pass
123
```
Path: `app/factories/common.py`
Content:
```
1 import factory
2
3 # use camelCase for naming variables
4
5 string_ = 'example'
6 email_ = factory.Sequence(lambda n: 'user{0}@example.com'.format(n))
7 integer_ = 25
8 url_ = 'http://example.com'
9 imageUrl_ = 'https://www.w3schools.com/html/pic_mountain.jpg'
10 date_ = '2016-12-13T23:59:59.123456+00:00'
11 dateFuture_ = '2099-12-13T23:59:59.123456+00:00'
12 dateEndFuture_ = '2099-12-14T23:59:59.123456+00:00'
13 dateEnd_ = '2020-12-14T23:59:59.123456+00:00'
14 country_ = 'US'
15 currency_ = 'USD'
16 int_ = '1'
17 float_ = '1.23456789'
18 timezone_ = 'UTC'
19 environment_ = 'production'
20 secret_ = 'ABCDefghIJKLmnop'
21 fee_ = '1.23'
22 slug_ = factory.Sequence(lambda n: 'example_slug{0}'.format(n))
23
24
25 def socialUrl_(name):
26 return 'https://{}.com/{}'.format(name, name)
27
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/app/api/helpers/utilities.py b/app/api/helpers/utilities.py
--- a/app/api/helpers/utilities.py
+++ b/app/api/helpers/utilities.py
@@ -112,6 +112,8 @@
state=state, meta=result
)
+static_page = 'https://eventyay.com/'
+image_link = 'https://www.gstatic.com/webp/gallery/1.jpg'
# store task results in case of testing
# state and info
diff --git a/app/factories/common.py b/app/factories/common.py
--- a/app/factories/common.py
+++ b/app/factories/common.py
@@ -1,12 +1,12 @@
import factory
-
+from app.api.helpers.utilities import static_page, image_link
# use camelCase for naming variables
string_ = 'example'
email_ = factory.Sequence(lambda n: 'user{0}@example.com'.format(n))
integer_ = 25
-url_ = 'http://example.com'
-imageUrl_ = 'https://www.w3schools.com/html/pic_mountain.jpg'
+url_ = static_page
+imageUrl_ = image_link
date_ = '2016-12-13T23:59:59.123456+00:00'
dateFuture_ = '2099-12-13T23:59:59.123456+00:00'
dateEndFuture_ = '2099-12-14T23:59:59.123456+00:00'
| {"golden_diff": "diff --git a/app/api/helpers/utilities.py b/app/api/helpers/utilities.py\n--- a/app/api/helpers/utilities.py\n+++ b/app/api/helpers/utilities.py\n@@ -112,6 +112,8 @@\n state=state, meta=result\n )\n \n+static_page = 'https://eventyay.com/'\n+image_link = 'https://www.gstatic.com/webp/gallery/1.jpg'\n \n # store task results in case of testing\n # state and info\ndiff --git a/app/factories/common.py b/app/factories/common.py\n--- a/app/factories/common.py\n+++ b/app/factories/common.py\n@@ -1,12 +1,12 @@\n import factory\n-\n+from app.api.helpers.utilities import static_page, image_link\n # use camelCase for naming variables\n \n string_ = 'example'\n email_ = factory.Sequence(lambda n: 'user{0}@example.com'.format(n))\n integer_ = 25\n-url_ = 'http://example.com'\n-imageUrl_ = 'https://www.w3schools.com/html/pic_mountain.jpg'\n+url_ = static_page\n+imageUrl_ = image_link\n date_ = '2016-12-13T23:59:59.123456+00:00'\n dateFuture_ = '2099-12-13T23:59:59.123456+00:00'\n dateEndFuture_ = '2099-12-14T23:59:59.123456+00:00'\n", "issue": "Replace all 3rd party links to reliable resource\n**Describe the bug**\r\nMany links which have either been used for testing/mainstream access using `urllib` are often blocked due to some issue with the site security or some other reason. \r\n\r\n**Expected behavior**\r\nAll links must point to a certain reliable resource like FOSSASIA's website to ensure reliable access.\r\n\r\n**Stacktrace**\r\nOne of the plausible errors which might occur `urllib.error.HTTPError: HTTP Error 403: Forbidden`\r\n\r\n**Additional Info**:\r\nWorking on it\n", "before_files": [{"content": "# PLEASE PUT ALL FUNCTIONS WHICH PERFORM GENERAL FORMATTING ON ANY DATATYPE WITHOUT USING ANY\n# MODULES RELATED TO THE EVENT-SYSTEM i.e FUNCTIONS SPECIFIC TO DB MODELS E.G A FUNCTION JUST FOR ROLE_INVITES\nimport random\nimport string\nimport sys\n\nimport bleach\nfrom itsdangerous import Serializer\nimport requests\nimport re\n\nfrom app.api.helpers.exceptions import UnprocessableEntity\n\nfrom flask import current_app\n\n\ndef dasherize(text):\n return text.replace('_', '-')\n\n\ndef require_relationship(resource_list, data):\n for resource in resource_list:\n if resource not in data:\n raise UnprocessableEntity({'pointer': '/data/relationships/{}'.format(resource)},\n \"A valid relationship with {} resource is required\".format(resource))\n\n\ndef string_empty(value):\n is_not_str_type = type(value) is not str\n if sys.version_info[0] < 3:\n is_not_str_type = is_not_str_type and type(value) is not unicode\n if type(value) is not value and is_not_str_type:\n return False\n return not (value and value.strip() and value != u'' and value != u' ')\n\n\ndef strip_tags(html):\n if html is None:\n return None\n return bleach.clean(html, tags=[], attributes={}, styles=[], strip=True)\n\n\ndef get_serializer(secret_key='secret_key'):\n return Serializer(secret_key)\n\n\ndef str_generator(size=6, chars=string.ascii_uppercase + string.digits):\n return ''.join(random.choice(chars) for _ in range(size))\n\n\n# From http://stackoverflow.com/a/3425124\ndef monthdelta(date, delta):\n m, y = (date.month + delta) % 12, date.year + (date.month + delta - 1) // 12\n if not m:\n m = 12\n d = min(date.day, [31,\n 29 if y % 4 == 0 and not y % 400 == 0 else 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31][m - 1])\n return date.replace(day=d, month=m, year=y)\n\n\ndef represents_int(value):\n try:\n int(value)\n return True\n except:\n return False\n\n\ndef is_downloadable(url):\n \"\"\"\n Does the url 
contain a downloadable resource\n \"\"\"\n h = requests.head(url, allow_redirects=True)\n header = h.headers\n content_type = header.get('content-type')\n # content_length = header.get('content-length', 1e10)\n if 'text' in content_type.lower():\n return False\n if 'html' in content_type.lower():\n return False\n return True\n\n\ndef get_filename_from_cd(cd):\n \"\"\"\n Get filename and ext from content-disposition\n \"\"\"\n if not cd:\n return '', ''\n fname = re.findall('filename=(.+)', cd)\n if len(fname) == 0:\n return '', ''\n fn = fname[0].rsplit('.', 1)\n return fn[0], '' if len(fn) == 1 else ('.' + fn[1])\n\n\ndef write_file(file, data):\n \"\"\"simple write to file\"\"\"\n fp = open(file, 'w')\n fp.write(str(data, 'utf-8'))\n fp.close()\n\n\ndef update_state(task_handle, state, result=None):\n \"\"\"\n Update state of celery task\n \"\"\"\n if result is None:\n result = {}\n if not current_app.config.get('CELERY_ALWAYS_EAGER'):\n task_handle.update_state(\n state=state, meta=result\n )\n\n\n# store task results in case of testing\n# state and info\nTASK_RESULTS = {}\n\n\nclass EmptyObject(object):\n pass\n", "path": "app/api/helpers/utilities.py"}, {"content": "import factory\n\n# use camelCase for naming variables\n\nstring_ = 'example'\nemail_ = factory.Sequence(lambda n: 'user{0}@example.com'.format(n))\ninteger_ = 25\nurl_ = 'http://example.com'\nimageUrl_ = 'https://www.w3schools.com/html/pic_mountain.jpg'\ndate_ = '2016-12-13T23:59:59.123456+00:00'\ndateFuture_ = '2099-12-13T23:59:59.123456+00:00'\ndateEndFuture_ = '2099-12-14T23:59:59.123456+00:00'\ndateEnd_ = '2020-12-14T23:59:59.123456+00:00'\ncountry_ = 'US'\ncurrency_ = 'USD'\nint_ = '1'\nfloat_ = '1.23456789'\ntimezone_ = 'UTC'\nenvironment_ = 'production'\nsecret_ = 'ABCDefghIJKLmnop'\nfee_ = '1.23'\nslug_ = factory.Sequence(lambda n: 'example_slug{0}'.format(n))\n\n\ndef socialUrl_(name):\n return 'https://{}.com/{}'.format(name, name)\n", "path": "app/factories/common.py"}], "after_files": [{"content": "# PLEASE PUT ALL FUNCTIONS WHICH PERFORM GENERAL FORMATTING ON ANY DATATYPE WITHOUT USING ANY\n# MODULES RELATED TO THE EVENT-SYSTEM i.e FUNCTIONS SPECIFIC TO DB MODELS E.G A FUNCTION JUST FOR ROLE_INVITES\nimport random\nimport string\nimport sys\n\nimport bleach\nfrom itsdangerous import Serializer\nimport requests\nimport re\n\nfrom app.api.helpers.exceptions import UnprocessableEntity\n\nfrom flask import current_app\n\n\ndef dasherize(text):\n return text.replace('_', '-')\n\n\ndef require_relationship(resource_list, data):\n for resource in resource_list:\n if resource not in data:\n raise UnprocessableEntity({'pointer': '/data/relationships/{}'.format(resource)},\n \"A valid relationship with {} resource is required\".format(resource))\n\n\ndef string_empty(value):\n is_not_str_type = type(value) is not str\n if sys.version_info[0] < 3:\n is_not_str_type = is_not_str_type and type(value) is not unicode\n if type(value) is not value and is_not_str_type:\n return False\n return not (value and value.strip() and value != u'' and value != u' ')\n\n\ndef strip_tags(html):\n if html is None:\n return None\n return bleach.clean(html, tags=[], attributes={}, styles=[], strip=True)\n\n\ndef get_serializer(secret_key='secret_key'):\n return Serializer(secret_key)\n\n\ndef str_generator(size=6, chars=string.ascii_uppercase + string.digits):\n return ''.join(random.choice(chars) for _ in range(size))\n\n\n# From http://stackoverflow.com/a/3425124\ndef monthdelta(date, delta):\n m, y = (date.month + delta) % 12, date.year 
+ (date.month + delta - 1) // 12\n if not m:\n m = 12\n d = min(date.day, [31,\n 29 if y % 4 == 0 and not y % 400 == 0 else 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31][m - 1])\n return date.replace(day=d, month=m, year=y)\n\n\ndef represents_int(value):\n try:\n int(value)\n return True\n except:\n return False\n\n\ndef is_downloadable(url):\n \"\"\"\n Does the url contain a downloadable resource\n \"\"\"\n h = requests.head(url, allow_redirects=True)\n header = h.headers\n content_type = header.get('content-type')\n # content_length = header.get('content-length', 1e10)\n if 'text' in content_type.lower():\n return False\n if 'html' in content_type.lower():\n return False\n return True\n\n\ndef get_filename_from_cd(cd):\n \"\"\"\n Get filename and ext from content-disposition\n \"\"\"\n if not cd:\n return '', ''\n fname = re.findall('filename=(.+)', cd)\n if len(fname) == 0:\n return '', ''\n fn = fname[0].rsplit('.', 1)\n return fn[0], '' if len(fn) == 1 else ('.' + fn[1])\n\n\ndef write_file(file, data):\n \"\"\"simple write to file\"\"\"\n fp = open(file, 'w')\n fp.write(str(data, 'utf-8'))\n fp.close()\n\n\ndef update_state(task_handle, state, result=None):\n \"\"\"\n Update state of celery task\n \"\"\"\n if result is None:\n result = {}\n if not current_app.config.get('CELERY_ALWAYS_EAGER'):\n task_handle.update_state(\n state=state, meta=result\n )\n\nstatic_page = 'https://eventyay.com/'\nimage_link = 'https://www.gstatic.com/webp/gallery/1.jpg'\n\n# store task results in case of testing\n# state and info\nTASK_RESULTS = {}\n\n\nclass EmptyObject(object):\n pass\n", "path": "app/api/helpers/utilities.py"}, {"content": "import factory\nfrom app.api.helpers.utilities import static_page, image_link\n# use camelCase for naming variables\n\nstring_ = 'example'\nemail_ = factory.Sequence(lambda n: 'user{0}@example.com'.format(n))\ninteger_ = 25\nurl_ = static_page\nimageUrl_ = image_link\ndate_ = '2016-12-13T23:59:59.123456+00:00'\ndateFuture_ = '2099-12-13T23:59:59.123456+00:00'\ndateEndFuture_ = '2099-12-14T23:59:59.123456+00:00'\ndateEnd_ = '2020-12-14T23:59:59.123456+00:00'\ncountry_ = 'US'\ncurrency_ = 'USD'\nint_ = '1'\nfloat_ = '1.23456789'\ntimezone_ = 'UTC'\nenvironment_ = 'production'\nsecret_ = 'ABCDefghIJKLmnop'\nfee_ = '1.23'\nslug_ = factory.Sequence(lambda n: 'example_slug{0}'.format(n))\n\n\ndef socialUrl_(name):\n return 'https://{}.com/{}'.format(name, name)\n", "path": "app/factories/common.py"}]} | 1,874 | 362 |
gh_patches_debug_33773 | rasdani/github-patches | git_diff | netbox-community__netbox-14131 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Removing Module Bays with Children Power Ports breaks Power Ports count
### NetBox version
v3.6.4
### Python version
3.8
### Steps to Reproduce
1. Create module types for the device (for example, PWR-MX960-4100-AC with 2 power ports).
2. In Devices, add a PEM module to the chassis device (for example, an mx960).
3. Check the Power Ports count.
4. Delete the module from the device.
### Expected Behavior
For the Power Ports count to be correct
### Observed Behavior
The port counter has not decreased
--- END ISSUE ---
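The bookkeeping at the heart of this report is a cached counter that is adjusted with a database-side `F()` expression; a simplified sketch of that mechanism, adapted from the counters module shown below, is:
```python
from django.db.models import F


def update_counter(model, pk, counter_name, value):
    # Positive values increment and negative values decrement the cached counter,
    # performing the arithmetic in the database to avoid read-modify-write races.
    model.objects.filter(pk=pk).update(**{counter_name: F(counter_name) + value})
```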
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `netbox/utilities/counters.py`
Content:
```
1 from django.apps import apps
2 from django.db.models import F, Count, OuterRef, Subquery
3 from django.db.models.signals import post_delete, post_save
4
5 from netbox.registry import registry
6 from .fields import CounterCacheField
7
8
9 def get_counters_for_model(model):
10 """
11 Return field mappings for all counters registered to the given model.
12 """
13 return registry['counter_fields'][model].items()
14
15
16 def update_counter(model, pk, counter_name, value):
17 """
18 Increment or decrement a counter field on an object identified by its model and primary key (PK). Positive values
19 will increment; negative values will decrement.
20 """
21 model.objects.filter(pk=pk).update(
22 **{counter_name: F(counter_name) + value}
23 )
24
25
26 def update_counts(model, field_name, related_query):
27 """
28 Perform a bulk update for the given model and counter field. For example,
29
30 update_counts(Device, '_interface_count', 'interfaces')
31
32 will effectively set
33
34 Device.objects.update(_interface_count=Count('interfaces'))
35 """
36 subquery = Subquery(
37 model.objects.filter(pk=OuterRef('pk')).annotate(_count=Count(related_query)).values('_count')
38 )
39 return model.objects.update(**{
40 field_name: subquery
41 })
42
43
44 #
45 # Signal handlers
46 #
47
48 def post_save_receiver(sender, instance, created, **kwargs):
49 """
50 Update counter fields on related objects when a TrackingModelMixin subclass is created or modified.
51 """
52 for field_name, counter_name in get_counters_for_model(sender):
53 parent_model = sender._meta.get_field(field_name).related_model
54 new_pk = getattr(instance, field_name, None)
55 has_old_field = field_name in instance.tracker
56 old_pk = instance.tracker.get(field_name) if has_old_field else None
57
58 # Update the counters on the old and/or new parents as needed
59 if old_pk is not None:
60 update_counter(parent_model, old_pk, counter_name, -1)
61 if new_pk is not None and (has_old_field or created):
62 update_counter(parent_model, new_pk, counter_name, 1)
63
64
65 def post_delete_receiver(sender, instance, origin, **kwargs):
66 """
67 Update counter fields on related objects when a TrackingModelMixin subclass is deleted.
68 """
69 for field_name, counter_name in get_counters_for_model(sender):
70 parent_model = sender._meta.get_field(field_name).related_model
71 parent_pk = getattr(instance, field_name, None)
72
73 # Decrement the parent's counter by one
74 if parent_pk is not None:
75 # MPTT sends two delete signals for child elements so guard against multiple decrements
76 if not origin or origin == instance:
77 update_counter(parent_model, parent_pk, counter_name, -1)
78
79
80 #
81 # Registration
82 #
83
84 def connect_counters(*models):
85 """
86 Register counter fields and connect post_save & post_delete signal handlers for the affected models.
87 """
88 for model in models:
89
90 # Find all CounterCacheFields on the model
91 counter_fields = [
92 field for field in model._meta.get_fields() if type(field) is CounterCacheField
93 ]
94
95 for field in counter_fields:
96 to_model = apps.get_model(field.to_model_name)
97
98 # Register the counter in the registry
99 change_tracking_fields = registry['counter_fields'][to_model]
100 change_tracking_fields[f"{field.to_field_name}_id"] = field.name
101
102 # Connect the post_save and post_delete handlers
103 post_save.connect(
104 post_save_receiver,
105 sender=to_model,
106 weak=False,
107 dispatch_uid=f'{model._meta.label}.{field.name}'
108 )
109 post_delete.connect(
110 post_delete_receiver,
111 sender=to_model,
112 weak=False,
113 dispatch_uid=f'{model._meta.label}.{field.name}'
114 )
115
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/netbox/utilities/counters.py b/netbox/utilities/counters.py
--- a/netbox/utilities/counters.py
+++ b/netbox/utilities/counters.py
@@ -1,6 +1,6 @@
from django.apps import apps
from django.db.models import F, Count, OuterRef, Subquery
-from django.db.models.signals import post_delete, post_save
+from django.db.models.signals import post_delete, post_save, pre_delete
from netbox.registry import registry
from .fields import CounterCacheField
@@ -62,6 +62,12 @@
update_counter(parent_model, new_pk, counter_name, 1)
+def pre_delete_receiver(sender, instance, origin, **kwargs):
+ model = instance._meta.model
+ if not model.objects.filter(pk=instance.pk).exists():
+ instance._previously_removed = True
+
+
def post_delete_receiver(sender, instance, origin, **kwargs):
"""
Update counter fields on related objects when a TrackingModelMixin subclass is deleted.
@@ -71,10 +77,8 @@
parent_pk = getattr(instance, field_name, None)
# Decrement the parent's counter by one
- if parent_pk is not None:
- # MPTT sends two delete signals for child elements so guard against multiple decrements
- if not origin or origin == instance:
- update_counter(parent_model, parent_pk, counter_name, -1)
+ if parent_pk is not None and not hasattr(instance, "_previously_removed"):
+ update_counter(parent_model, parent_pk, counter_name, -1)
#
@@ -106,6 +110,12 @@
weak=False,
dispatch_uid=f'{model._meta.label}.{field.name}'
)
+ pre_delete.connect(
+ pre_delete_receiver,
+ sender=to_model,
+ weak=False,
+ dispatch_uid=f'{model._meta.label}.{field.name}'
+ )
post_delete.connect(
post_delete_receiver,
sender=to_model,
| {"golden_diff": "diff --git a/netbox/utilities/counters.py b/netbox/utilities/counters.py\n--- a/netbox/utilities/counters.py\n+++ b/netbox/utilities/counters.py\n@@ -1,6 +1,6 @@\n from django.apps import apps\n from django.db.models import F, Count, OuterRef, Subquery\n-from django.db.models.signals import post_delete, post_save\n+from django.db.models.signals import post_delete, post_save, pre_delete\n \n from netbox.registry import registry\n from .fields import CounterCacheField\n@@ -62,6 +62,12 @@\n update_counter(parent_model, new_pk, counter_name, 1)\n \n \n+def pre_delete_receiver(sender, instance, origin, **kwargs):\n+ model = instance._meta.model\n+ if not model.objects.filter(pk=instance.pk).exists():\n+ instance._previously_removed = True\n+\n+\n def post_delete_receiver(sender, instance, origin, **kwargs):\n \"\"\"\n Update counter fields on related objects when a TrackingModelMixin subclass is deleted.\n@@ -71,10 +77,8 @@\n parent_pk = getattr(instance, field_name, None)\n \n # Decrement the parent's counter by one\n- if parent_pk is not None:\n- # MPTT sends two delete signals for child elements so guard against multiple decrements\n- if not origin or origin == instance:\n- update_counter(parent_model, parent_pk, counter_name, -1)\n+ if parent_pk is not None and not hasattr(instance, \"_previously_removed\"):\n+ update_counter(parent_model, parent_pk, counter_name, -1)\n \n \n #\n@@ -106,6 +110,12 @@\n weak=False,\n dispatch_uid=f'{model._meta.label}.{field.name}'\n )\n+ pre_delete.connect(\n+ pre_delete_receiver,\n+ sender=to_model,\n+ weak=False,\n+ dispatch_uid=f'{model._meta.label}.{field.name}'\n+ )\n post_delete.connect(\n post_delete_receiver,\n sender=to_model,\n", "issue": "Removing Module Bays with Children Power Ports breaks Power Ports count\n### NetBox version\r\n\r\nv3.6.4\r\n\r\n### Python version\r\n\r\n3.8\r\n\r\n### Steps to Reproduce\r\n\r\n1. Create module types for device (for example PWR-MX960-4100-AC with 2 power ports):\r\n\r\n2. In Devices add to chassis device (for example mx960) PEM module:\r\n\r\n3. Check count Power Ports:\r\n\r\n4. Delete module from device:\r\n\r\n\r\n\r\n### Expected Behavior\r\n\r\nFor the Power Ports count to be correct\r\n\r\n### Observed Behavior\r\n\r\nThe port counter has not decreased\n", "before_files": [{"content": "from django.apps import apps\nfrom django.db.models import F, Count, OuterRef, Subquery\nfrom django.db.models.signals import post_delete, post_save\n\nfrom netbox.registry import registry\nfrom .fields import CounterCacheField\n\n\ndef get_counters_for_model(model):\n \"\"\"\n Return field mappings for all counters registered to the given model.\n \"\"\"\n return registry['counter_fields'][model].items()\n\n\ndef update_counter(model, pk, counter_name, value):\n \"\"\"\n Increment or decrement a counter field on an object identified by its model and primary key (PK). Positive values\n will increment; negative values will decrement.\n \"\"\"\n model.objects.filter(pk=pk).update(\n **{counter_name: F(counter_name) + value}\n )\n\n\ndef update_counts(model, field_name, related_query):\n \"\"\"\n Perform a bulk update for the given model and counter field. 
For example,\n\n update_counts(Device, '_interface_count', 'interfaces')\n\n will effectively set\n\n Device.objects.update(_interface_count=Count('interfaces'))\n \"\"\"\n subquery = Subquery(\n model.objects.filter(pk=OuterRef('pk')).annotate(_count=Count(related_query)).values('_count')\n )\n return model.objects.update(**{\n field_name: subquery\n })\n\n\n#\n# Signal handlers\n#\n\ndef post_save_receiver(sender, instance, created, **kwargs):\n \"\"\"\n Update counter fields on related objects when a TrackingModelMixin subclass is created or modified.\n \"\"\"\n for field_name, counter_name in get_counters_for_model(sender):\n parent_model = sender._meta.get_field(field_name).related_model\n new_pk = getattr(instance, field_name, None)\n has_old_field = field_name in instance.tracker\n old_pk = instance.tracker.get(field_name) if has_old_field else None\n\n # Update the counters on the old and/or new parents as needed\n if old_pk is not None:\n update_counter(parent_model, old_pk, counter_name, -1)\n if new_pk is not None and (has_old_field or created):\n update_counter(parent_model, new_pk, counter_name, 1)\n\n\ndef post_delete_receiver(sender, instance, origin, **kwargs):\n \"\"\"\n Update counter fields on related objects when a TrackingModelMixin subclass is deleted.\n \"\"\"\n for field_name, counter_name in get_counters_for_model(sender):\n parent_model = sender._meta.get_field(field_name).related_model\n parent_pk = getattr(instance, field_name, None)\n\n # Decrement the parent's counter by one\n if parent_pk is not None:\n # MPTT sends two delete signals for child elements so guard against multiple decrements\n if not origin or origin == instance:\n update_counter(parent_model, parent_pk, counter_name, -1)\n\n\n#\n# Registration\n#\n\ndef connect_counters(*models):\n \"\"\"\n Register counter fields and connect post_save & post_delete signal handlers for the affected models.\n \"\"\"\n for model in models:\n\n # Find all CounterCacheFields on the model\n counter_fields = [\n field for field in model._meta.get_fields() if type(field) is CounterCacheField\n ]\n\n for field in counter_fields:\n to_model = apps.get_model(field.to_model_name)\n\n # Register the counter in the registry\n change_tracking_fields = registry['counter_fields'][to_model]\n change_tracking_fields[f\"{field.to_field_name}_id\"] = field.name\n\n # Connect the post_save and post_delete handlers\n post_save.connect(\n post_save_receiver,\n sender=to_model,\n weak=False,\n dispatch_uid=f'{model._meta.label}.{field.name}'\n )\n post_delete.connect(\n post_delete_receiver,\n sender=to_model,\n weak=False,\n dispatch_uid=f'{model._meta.label}.{field.name}'\n )\n", "path": "netbox/utilities/counters.py"}], "after_files": [{"content": "from django.apps import apps\nfrom django.db.models import F, Count, OuterRef, Subquery\nfrom django.db.models.signals import post_delete, post_save, pre_delete\n\nfrom netbox.registry import registry\nfrom .fields import CounterCacheField\n\n\ndef get_counters_for_model(model):\n \"\"\"\n Return field mappings for all counters registered to the given model.\n \"\"\"\n return registry['counter_fields'][model].items()\n\n\ndef update_counter(model, pk, counter_name, value):\n \"\"\"\n Increment or decrement a counter field on an object identified by its model and primary key (PK). 
Positive values\n will increment; negative values will decrement.\n \"\"\"\n model.objects.filter(pk=pk).update(\n **{counter_name: F(counter_name) + value}\n )\n\n\ndef update_counts(model, field_name, related_query):\n \"\"\"\n Perform a bulk update for the given model and counter field. For example,\n\n update_counts(Device, '_interface_count', 'interfaces')\n\n will effectively set\n\n Device.objects.update(_interface_count=Count('interfaces'))\n \"\"\"\n subquery = Subquery(\n model.objects.filter(pk=OuterRef('pk')).annotate(_count=Count(related_query)).values('_count')\n )\n return model.objects.update(**{\n field_name: subquery\n })\n\n\n#\n# Signal handlers\n#\n\ndef post_save_receiver(sender, instance, created, **kwargs):\n \"\"\"\n Update counter fields on related objects when a TrackingModelMixin subclass is created or modified.\n \"\"\"\n for field_name, counter_name in get_counters_for_model(sender):\n parent_model = sender._meta.get_field(field_name).related_model\n new_pk = getattr(instance, field_name, None)\n has_old_field = field_name in instance.tracker\n old_pk = instance.tracker.get(field_name) if has_old_field else None\n\n # Update the counters on the old and/or new parents as needed\n if old_pk is not None:\n update_counter(parent_model, old_pk, counter_name, -1)\n if new_pk is not None and (has_old_field or created):\n update_counter(parent_model, new_pk, counter_name, 1)\n\n\ndef pre_delete_receiver(sender, instance, origin, **kwargs):\n model = instance._meta.model\n if not model.objects.filter(pk=instance.pk).exists():\n instance._previously_removed = True\n\n\ndef post_delete_receiver(sender, instance, origin, **kwargs):\n \"\"\"\n Update counter fields on related objects when a TrackingModelMixin subclass is deleted.\n \"\"\"\n for field_name, counter_name in get_counters_for_model(sender):\n parent_model = sender._meta.get_field(field_name).related_model\n parent_pk = getattr(instance, field_name, None)\n\n # Decrement the parent's counter by one\n if parent_pk is not None and not hasattr(instance, \"_previously_removed\"):\n update_counter(parent_model, parent_pk, counter_name, -1)\n\n\n#\n# Registration\n#\n\ndef connect_counters(*models):\n \"\"\"\n Register counter fields and connect post_save & post_delete signal handlers for the affected models.\n \"\"\"\n for model in models:\n\n # Find all CounterCacheFields on the model\n counter_fields = [\n field for field in model._meta.get_fields() if type(field) is CounterCacheField\n ]\n\n for field in counter_fields:\n to_model = apps.get_model(field.to_model_name)\n\n # Register the counter in the registry\n change_tracking_fields = registry['counter_fields'][to_model]\n change_tracking_fields[f\"{field.to_field_name}_id\"] = field.name\n\n # Connect the post_save and post_delete handlers\n post_save.connect(\n post_save_receiver,\n sender=to_model,\n weak=False,\n dispatch_uid=f'{model._meta.label}.{field.name}'\n )\n pre_delete.connect(\n pre_delete_receiver,\n sender=to_model,\n weak=False,\n dispatch_uid=f'{model._meta.label}.{field.name}'\n )\n post_delete.connect(\n post_delete_receiver,\n sender=to_model,\n weak=False,\n dispatch_uid=f'{model._meta.label}.{field.name}'\n )\n", "path": "netbox/utilities/counters.py"}]} | 1,676 | 442 |
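The accepted patch above pairs a `pre_delete` receiver, which flags instances whose database row is already gone (MPTT fires delete signals twice for child elements), with a `post_delete` receiver that skips the cached-counter decrement for flagged instances. Below is a minimal sketch of that handshake outside NetBox; the `Rack` and `Device` models and the `device_count` field are illustrative assumptions, and it presumes a configured Django app:

```python
from django.db import models
from django.db.models import F
from django.db.models.signals import post_delete, pre_delete
from django.dispatch import receiver


class Rack(models.Model):
    device_count = models.PositiveIntegerField(default=0)   # cached counter


class Device(models.Model):
    rack = models.ForeignKey(Rack, null=True, on_delete=models.CASCADE)


@receiver(pre_delete, sender=Device)
def flag_duplicate_delete(sender, instance, **kwargs):
    # If the row no longer exists, this signal is a duplicate delivery
    # (e.g. a cascading tree delete); remember that on the instance.
    if not sender.objects.filter(pk=instance.pk).exists():
        instance._previously_removed = True


@receiver(post_delete, sender=Device)
def decrement_rack_counter(sender, instance, **kwargs):
    # Decrement the parent's cached counter only once per real deletion.
    if instance.rack_id is not None and not hasattr(instance, "_previously_removed"):
        Rack.objects.filter(pk=instance.rack_id).update(device_count=F("device_count") - 1)
```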
gh_patches_debug_1630 | rasdani/github-patches | git_diff | python-telegram-bot__python-telegram-bot-377 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Execfile does not exist in py3k
<!--
Thanks for reporting issues of python-telegram-bot!
To make it easier for us to help you please enter detailed information below.
Please note, we only support the latest version of python-telegram-bot and
master branch. Please make sure to upgrade & recreate the issue on the latest
version prior to opening an issue.
-->
### Steps to reproduce
1. Use python 3
2. Try to install from git:
`$ pip install -e git+https://github.com/python-telegram-bot/python-telegram-bot.git@555e36ee8036a179f157f60dcb0c3fcf958146f4#egg=telegram`
### Expected behaviour
The library should be installed.
### Actual behaviour
NameError due to `execfile` not being a thing in python 3.
See here for alternatives: https://stackoverflow.com/a/437857
I would fix it myself, but I am unable to actually find the execfile call anywhere .-.
### Configuration
**Operating System:**
Windows 10 Education
**Version of Python, python-telegram-bot & dependencies:**
Python 3.5.2 |Continuum Analytics, Inc.| (default, Jul 5 2016, 11:41:13) [MSC v.1900 64 bit (AMD64)]
### Logs
``````
$ pip install -e git+https://github.com/python-telegram-bot/python-telegram-bot.git@555e36ee8036a179f157f60dcb0c3fcf958146f4#egg=telegram
Obtaining telegram from git+https://github.com/python-telegram-bot/python-telegram-bot.git@555e36ee8036a179f157f60dcb0c3fcf958146f4#egg=telegram
Skipping because already up-to-date.
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Development\telegram\VocaBot2\src\telegram\setup.py", line 20, in <module>
execfile(os.path.join('telegram', 'version.py'))
NameError: name 'execfile' is not defined
Command "python setup.py egg_info" failed with error code 1```
``````
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2 """The setup and build script for the python-telegram-bot library."""
3
4 import codecs
5 import os
6 from setuptools import setup, find_packages
7
8
9 def requirements():
10 """Build the requirements list for this project"""
11 requirements_list = []
12
13 with open('requirements.txt') as requirements:
14 for install in requirements:
15 requirements_list.append(install.strip())
16
17 return requirements_list
18
19 with codecs.open('README.rst', 'r', 'utf-8') as fd:
20 execfile(os.path.join('telegram', 'version.py'))
21
22 setup(name='python-telegram-bot',
23 version=__version__,
24 author='Leandro Toledo',
25 author_email='[email protected]',
26 license='LGPLv3',
27 url='https://github.com/python-telegram-bot/python-telegram-bot',
28 keywords='python telegram bot api wrapper',
29 description='Not just a Python wrapper around the Telegram Bot API',
30 long_description=fd.read(),
31 packages=find_packages(exclude=['tests*']),
32 install_requires=requirements(),
33 include_package_data=True,
34 classifiers=[
35 'Development Status :: 5 - Production/Stable',
36 'Intended Audience :: Developers',
37 'License :: OSI Approved :: GNU Lesser General Public License v3 (LGPLv3)',
38 'Operating System :: OS Independent',
39 'Topic :: Software Development :: Libraries :: Python Modules',
40 'Topic :: Communications :: Chat',
41 'Topic :: Internet',
42 'Programming Language :: Python',
43 'Programming Language :: Python :: 2',
44 'Programming Language :: Python :: 2.6',
45 'Programming Language :: Python :: 2.7',
46 'Programming Language :: Python :: 3',
47 'Programming Language :: Python :: 3.3',
48 'Programming Language :: Python :: 3.4',
49 'Programming Language :: Python :: 3.5',
50 ],)
51
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -16,6 +16,13 @@
return requirements_list
+
+def execfile(fn):
+ with open(fn) as f:
+ code = compile(f.read(), fn, 'exec')
+ exec(code)
+
+
with codecs.open('README.rst', 'r', 'utf-8') as fd:
execfile(os.path.join('telegram', 'version.py'))
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -16,6 +16,13 @@\n \n return requirements_list\n \n+\n+def execfile(fn):\n+ with open(fn) as f:\n+ code = compile(f.read(), fn, 'exec')\n+ exec(code)\n+\n+\n with codecs.open('README.rst', 'r', 'utf-8') as fd:\n execfile(os.path.join('telegram', 'version.py'))\n", "issue": "Execfile does not exist in py3k\n<!--\nThanks for reporting issues of python-telegram-bot!\nTo make it easier for us to help you please enter detailed information below.\n\nPlease note, we only support the latest version of python-telegram-bot and\nmaster branch. Please make sure to upgrade & recreate the issue on the latest\nversion prior to opening an issue.\n-->\n### Steps to reproduce\n1. Use python 3\n2. Try to install from git:\n `$ pip install -e git+https://github.com/python-telegram-bot/python-telegram-bot.git@555e36ee8036a179f157f60dcb0c3fcf958146f4#egg=telegram`\n### Expected behaviour\n\nThe library should be installed.\n### Actual behaviour\n\nNameError due to `execfile` not being a thing in python 3.\nSee here for alternatives: https://stackoverflow.com/a/437857\nI would fix it myself, but I am unable to actually find the execfile call anywhere .-.\n### Configuration\n\n**Operating System:**\nWindows 10 Education\n\n**Version of Python, python-telegram-bot & dependencies:**\nPython 3.5.2 |Continuum Analytics, Inc.| (default, Jul 5 2016, 11:41:13) [MSC v.1900 64 bit (AMD64)]\n### Logs\n\n``````\n$ pip install -e git+https://github.com/python-telegram-bot/python-telegram-bot.git@555e36ee8036a179f157f60dcb0c3fcf958146f4#egg=telegram\nObtaining telegram from git+https://github.com/python-telegram-bot/python-telegram-bot.git@555e36ee8036a179f157f60dcb0c3fcf958146f4#egg=telegram\n Skipping because already up-to-date.\n Complete output from command python setup.py egg_info:\n Traceback (most recent call last):\n File \"<string>\", line 1, in <module>\n File \"C:\\Development\\telegram\\VocaBot2\\src\\telegram\\setup.py\", line 20, in <module>\n execfile(os.path.join('telegram', 'version.py'))\n NameError: name 'execfile' is not defined\nCommand \"python setup.py egg_info\" failed with error code 1```\n``````\n\n", "before_files": [{"content": "#!/usr/bin/env python\n\"\"\"The setup and build script for the python-telegram-bot library.\"\"\"\n\nimport codecs\nimport os\nfrom setuptools import setup, find_packages\n\n\ndef requirements():\n \"\"\"Build the requirements list for this project\"\"\"\n requirements_list = []\n\n with open('requirements.txt') as requirements:\n for install in requirements:\n requirements_list.append(install.strip())\n\n return requirements_list\n\nwith codecs.open('README.rst', 'r', 'utf-8') as fd:\n execfile(os.path.join('telegram', 'version.py'))\n\n setup(name='python-telegram-bot',\n version=__version__,\n author='Leandro Toledo',\n author_email='[email protected]',\n license='LGPLv3',\n url='https://github.com/python-telegram-bot/python-telegram-bot',\n keywords='python telegram bot api wrapper',\n description='Not just a Python wrapper around the Telegram Bot API',\n long_description=fd.read(),\n packages=find_packages(exclude=['tests*']),\n install_requires=requirements(),\n include_package_data=True,\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: GNU Lesser General Public License v3 (LGPLv3)',\n 'Operating System :: OS Independent',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n 'Topic :: 
Communications :: Chat',\n 'Topic :: Internet',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.6',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n ],)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\n\"\"\"The setup and build script for the python-telegram-bot library.\"\"\"\n\nimport codecs\nimport os\nfrom setuptools import setup, find_packages\n\n\ndef requirements():\n \"\"\"Build the requirements list for this project\"\"\"\n requirements_list = []\n\n with open('requirements.txt') as requirements:\n for install in requirements:\n requirements_list.append(install.strip())\n\n return requirements_list\n\n\ndef execfile(fn):\n with open(fn) as f:\n code = compile(f.read(), fn, 'exec')\n exec(code)\n\n\nwith codecs.open('README.rst', 'r', 'utf-8') as fd:\n execfile(os.path.join('telegram', 'version.py'))\n\n setup(name='python-telegram-bot',\n version=__version__,\n author='Leandro Toledo',\n author_email='[email protected]',\n license='LGPLv3',\n url='https://github.com/python-telegram-bot/python-telegram-bot',\n keywords='python telegram bot api wrapper',\n description='Not just a Python wrapper around the Telegram Bot API',\n long_description=fd.read(),\n packages=find_packages(exclude=['tests*']),\n install_requires=requirements(),\n include_package_data=True,\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: GNU Lesser General Public License v3 (LGPLv3)',\n 'Operating System :: OS Independent',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n 'Topic :: Communications :: Chat',\n 'Topic :: Internet',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.6',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n ],)\n", "path": "setup.py"}]} | 1,305 | 107 |
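The fix above recreates `execfile()` with `compile()` plus `exec()` so the Python-2-only builtin is no longer needed. A common variant of the same idea passes an explicit namespace and reads the result back instead of relying on the calling scope; the sketch below assumes a `telegram/version.py` defining `__version__` exists on disk, and it is an alternative illustration, not the merged change:

```python
import os


def load_version(path):
    """Execute a version.py file and return the __version__ it defines."""
    namespace = {}
    with open(path) as fh:
        exec(compile(fh.read(), path, "exec"), namespace)
    return namespace["__version__"]


__version__ = load_version(os.path.join("telegram", "version.py"))
```

Reading the value out of an explicit dict keeps the executed assignments away from the installer's own globals, which is one reason this shape shows up often in setup scripts.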
gh_patches_debug_32541 | rasdani/github-patches | git_diff | opensearch-project__opensearch-build-324 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Signer client does not support files with '.' in filenames
Many of our artifacts contain '.' in the artifact names.
`client-benchmarks-1.0.0-javadoc.jar`
These files need to be signed properly.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bundle-workflow/src/sign.py`
Content:
```
1 #!/usr/bin/env python
2
3 # SPDX-License-Identifier: Apache-2.0
4 #
5 # The OpenSearch Contributors require contributions made to
6 # this file be licensed under the Apache-2.0 license or a
7 # compatible open source license.
8
9 import argparse
10
11 from manifests.build_manifest import BuildManifest
12 from signing_workflow.signer import Signer
13
14 parser = argparse.ArgumentParser(description="Sign artifacts")
15 parser.add_argument(
16 "manifest", type=argparse.FileType("r"), help="Path to local manifest file."
17 )
18 parser.add_argument("--component", nargs="?", help="Component name")
19 parser.add_argument("--type", nargs="?", help="Artifact type")
20 args = parser.parse_args()
21
22 manifest = BuildManifest.from_file(args.manifest)
23 signer = Signer()
24
25 for component in manifest.components:
26
27 if args.component and args.component != component.name:
28 print(f"\nSkipping {component.name}")
29 continue
30
31 print(f"\nSigning {component.name}")
32 for artifact_type in component.artifacts:
33
34 if args.type and args.type != artifact_type:
35 continue
36
37 signer.sign(component.artifacts[artifact_type])
38
39 print("Done.")
40
```
Path: `bundle-workflow/src/signing_workflow/signer.py`
Content:
```
1 #!/usr/bin/env python
2
3 # SPDX-License-Identifier: Apache-2.0
4 #
5 # The OpenSearch Contributors require contributions made to
6 # this file be licensed under the Apache-2.0 license or a
7 # compatible open source license.
8
9 import os
10 import pathlib
11
12 from git.git_repository import GitRepository
13
14 """
15 This class is responsible for signing an artifact using the OpenSearch-signer-client and verifying its signature.
16 The signed artifacts will be found in the same location as the original artifacts.
17 """
18
19
20 class Signer:
21
22 ACCEPTED_FILE_TYPES = [".zip", ".jar", ".war", ".pom", ".module", ".tar.gz"]
23
24 def __init__(self):
25 self.git_repo = GitRepository(self.get_repo_url(), "HEAD")
26 self.git_repo.execute("./bootstrap", subdirname="src")
27 self.git_repo.execute("rm config.cfg", subdirname="src")
28
29 def sign_artifacts(self, artifacts, basepath):
30 for artifact in artifacts:
31 if self.is_invalid_file_type(artifact):
32 print(f"Skipping signing of file ${artifact}")
33 continue
34 location = os.path.join(basepath, artifact)
35 self.sign(location)
36 self.verify(location + ".asc")
37
38 def is_invalid_file_type(self, file_name):
39 return (
40 "".join(pathlib.Path(file_name).suffixes) not in Signer.ACCEPTED_FILE_TYPES
41 )
42
43 def get_repo_url(self):
44 if "GITHUB_TOKEN" in os.environ:
45 return "https://${GITHUB_TOKEN}@github.com/opensearch-project/opensearch-signer-client.git"
46 return "https://github.com/opensearch-project/opensearch-signer-client.git"
47
48 def sign(self, filename):
49 signature_file = filename + ".asc"
50 signing_cmd = [
51 "./opensearch-signer-client",
52 "-i",
53 filename,
54 "-o",
55 signature_file,
56 "-p",
57 "pgp",
58 ]
59 self.git_repo.execute(" ".join(signing_cmd), subdirname="src")
60
61 def verify(self, filename):
62 verify_cmd = ["gpg", "--verify-files", filename]
63 self.git_repo.execute(" ".join(verify_cmd))
64
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/bundle-workflow/src/sign.py b/bundle-workflow/src/sign.py
--- a/bundle-workflow/src/sign.py
+++ b/bundle-workflow/src/sign.py
@@ -7,6 +7,7 @@
# compatible open source license.
import argparse
+import os
from manifests.build_manifest import BuildManifest
from signing_workflow.signer import Signer
@@ -20,6 +21,7 @@
args = parser.parse_args()
manifest = BuildManifest.from_file(args.manifest)
+basepath = os.path.dirname(os.path.abspath(manifest.name))
signer = Signer()
for component in manifest.components:
@@ -34,6 +36,6 @@
if args.type and args.type != artifact_type:
continue
- signer.sign(component.artifacts[artifact_type])
+ signer.sign_artifacts(component.artifacts[artifact_type], basepath)
print("Done.")
diff --git a/bundle-workflow/src/signing_workflow/signer.py b/bundle-workflow/src/signing_workflow/signer.py
--- a/bundle-workflow/src/signing_workflow/signer.py
+++ b/bundle-workflow/src/signing_workflow/signer.py
@@ -28,17 +28,15 @@
def sign_artifacts(self, artifacts, basepath):
for artifact in artifacts:
- if self.is_invalid_file_type(artifact):
+ if not self.is_valid_file_type(artifact):
print(f"Skipping signing of file ${artifact}")
continue
location = os.path.join(basepath, artifact)
self.sign(location)
self.verify(location + ".asc")
- def is_invalid_file_type(self, file_name):
- return (
- "".join(pathlib.Path(file_name).suffixes) not in Signer.ACCEPTED_FILE_TYPES
- )
+ def is_valid_file_type(self, file_name):
+ return any(x in [pathlib.Path(file_name).suffix, "".join(pathlib.Path(file_name).suffixes)] for x in Signer.ACCEPTED_FILE_TYPES)
def get_repo_url(self):
if "GITHUB_TOKEN" in os.environ:
| {"golden_diff": "diff --git a/bundle-workflow/src/sign.py b/bundle-workflow/src/sign.py\n--- a/bundle-workflow/src/sign.py\n+++ b/bundle-workflow/src/sign.py\n@@ -7,6 +7,7 @@\n # compatible open source license.\n \n import argparse\n+import os\n \n from manifests.build_manifest import BuildManifest\n from signing_workflow.signer import Signer\n@@ -20,6 +21,7 @@\n args = parser.parse_args()\n \n manifest = BuildManifest.from_file(args.manifest)\n+basepath = os.path.dirname(os.path.abspath(manifest.name))\n signer = Signer()\n \n for component in manifest.components:\n@@ -34,6 +36,6 @@\n if args.type and args.type != artifact_type:\n continue\n \n- signer.sign(component.artifacts[artifact_type])\n+ signer.sign_artifacts(component.artifacts[artifact_type], basepath)\n \n print(\"Done.\")\ndiff --git a/bundle-workflow/src/signing_workflow/signer.py b/bundle-workflow/src/signing_workflow/signer.py\n--- a/bundle-workflow/src/signing_workflow/signer.py\n+++ b/bundle-workflow/src/signing_workflow/signer.py\n@@ -28,17 +28,15 @@\n \n def sign_artifacts(self, artifacts, basepath):\n for artifact in artifacts:\n- if self.is_invalid_file_type(artifact):\n+ if not self.is_valid_file_type(artifact):\n print(f\"Skipping signing of file ${artifact}\")\n continue\n location = os.path.join(basepath, artifact)\n self.sign(location)\n self.verify(location + \".asc\")\n \n- def is_invalid_file_type(self, file_name):\n- return (\n- \"\".join(pathlib.Path(file_name).suffixes) not in Signer.ACCEPTED_FILE_TYPES\n- )\n+ def is_valid_file_type(self, file_name):\n+ return any(x in [pathlib.Path(file_name).suffix, \"\".join(pathlib.Path(file_name).suffixes)] for x in Signer.ACCEPTED_FILE_TYPES)\n \n def get_repo_url(self):\n if \"GITHUB_TOKEN\" in os.environ:\n", "issue": "Signer client does not support files with '.' in filenames\nMany of our artifacts contain '.' 
in the artifact names.\r\n\r\n`client-benchmarks-1.0.0-javadoc.jar`\r\n\r\nThese files need to be signed properly.\n", "before_files": [{"content": "#!/usr/bin/env python\n\n# SPDX-License-Identifier: Apache-2.0\n#\n# The OpenSearch Contributors require contributions made to\n# this file be licensed under the Apache-2.0 license or a\n# compatible open source license.\n\nimport argparse\n\nfrom manifests.build_manifest import BuildManifest\nfrom signing_workflow.signer import Signer\n\nparser = argparse.ArgumentParser(description=\"Sign artifacts\")\nparser.add_argument(\n \"manifest\", type=argparse.FileType(\"r\"), help=\"Path to local manifest file.\"\n)\nparser.add_argument(\"--component\", nargs=\"?\", help=\"Component name\")\nparser.add_argument(\"--type\", nargs=\"?\", help=\"Artifact type\")\nargs = parser.parse_args()\n\nmanifest = BuildManifest.from_file(args.manifest)\nsigner = Signer()\n\nfor component in manifest.components:\n\n if args.component and args.component != component.name:\n print(f\"\\nSkipping {component.name}\")\n continue\n\n print(f\"\\nSigning {component.name}\")\n for artifact_type in component.artifacts:\n\n if args.type and args.type != artifact_type:\n continue\n\n signer.sign(component.artifacts[artifact_type])\n\nprint(\"Done.\")\n", "path": "bundle-workflow/src/sign.py"}, {"content": "#!/usr/bin/env python\n\n# SPDX-License-Identifier: Apache-2.0\n#\n# The OpenSearch Contributors require contributions made to\n# this file be licensed under the Apache-2.0 license or a\n# compatible open source license.\n\nimport os\nimport pathlib\n\nfrom git.git_repository import GitRepository\n\n\"\"\"\nThis class is responsible for signing an artifact using the OpenSearch-signer-client and verifying its signature.\nThe signed artifacts will be found in the same location as the original artifacts.\n\"\"\"\n\n\nclass Signer:\n\n ACCEPTED_FILE_TYPES = [\".zip\", \".jar\", \".war\", \".pom\", \".module\", \".tar.gz\"]\n\n def __init__(self):\n self.git_repo = GitRepository(self.get_repo_url(), \"HEAD\")\n self.git_repo.execute(\"./bootstrap\", subdirname=\"src\")\n self.git_repo.execute(\"rm config.cfg\", subdirname=\"src\")\n\n def sign_artifacts(self, artifacts, basepath):\n for artifact in artifacts:\n if self.is_invalid_file_type(artifact):\n print(f\"Skipping signing of file ${artifact}\")\n continue\n location = os.path.join(basepath, artifact)\n self.sign(location)\n self.verify(location + \".asc\")\n\n def is_invalid_file_type(self, file_name):\n return (\n \"\".join(pathlib.Path(file_name).suffixes) not in Signer.ACCEPTED_FILE_TYPES\n )\n\n def get_repo_url(self):\n if \"GITHUB_TOKEN\" in os.environ:\n return \"https://${GITHUB_TOKEN}@github.com/opensearch-project/opensearch-signer-client.git\"\n return \"https://github.com/opensearch-project/opensearch-signer-client.git\"\n\n def sign(self, filename):\n signature_file = filename + \".asc\"\n signing_cmd = [\n \"./opensearch-signer-client\",\n \"-i\",\n filename,\n \"-o\",\n signature_file,\n \"-p\",\n \"pgp\",\n ]\n self.git_repo.execute(\" \".join(signing_cmd), subdirname=\"src\")\n\n def verify(self, filename):\n verify_cmd = [\"gpg\", \"--verify-files\", filename]\n self.git_repo.execute(\" \".join(verify_cmd))\n", "path": "bundle-workflow/src/signing_workflow/signer.py"}], "after_files": [{"content": "#!/usr/bin/env python\n\n# SPDX-License-Identifier: Apache-2.0\n#\n# The OpenSearch Contributors require contributions made to\n# this file be licensed under the Apache-2.0 license or a\n# compatible open 
source license.\n\nimport argparse\nimport os\n\nfrom manifests.build_manifest import BuildManifest\nfrom signing_workflow.signer import Signer\n\nparser = argparse.ArgumentParser(description=\"Sign artifacts\")\nparser.add_argument(\n \"manifest\", type=argparse.FileType(\"r\"), help=\"Path to local manifest file.\"\n)\nparser.add_argument(\"--component\", nargs=\"?\", help=\"Component name\")\nparser.add_argument(\"--type\", nargs=\"?\", help=\"Artifact type\")\nargs = parser.parse_args()\n\nmanifest = BuildManifest.from_file(args.manifest)\nbasepath = os.path.dirname(os.path.abspath(manifest.name))\nsigner = Signer()\n\nfor component in manifest.components:\n\n if args.component and args.component != component.name:\n print(f\"\\nSkipping {component.name}\")\n continue\n\n print(f\"\\nSigning {component.name}\")\n for artifact_type in component.artifacts:\n\n if args.type and args.type != artifact_type:\n continue\n\n signer.sign_artifacts(component.artifacts[artifact_type], basepath)\n\nprint(\"Done.\")\n", "path": "bundle-workflow/src/sign.py"}, {"content": "#!/usr/bin/env python\n\n# SPDX-License-Identifier: Apache-2.0\n#\n# The OpenSearch Contributors require contributions made to\n# this file be licensed under the Apache-2.0 license or a\n# compatible open source license.\n\nimport os\nimport pathlib\n\nfrom git.git_repository import GitRepository\n\n\"\"\"\nThis class is responsible for signing an artifact using the OpenSearch-signer-client and verifying its signature.\nThe signed artifacts will be found in the same location as the original artifacts.\n\"\"\"\n\n\nclass Signer:\n\n ACCEPTED_FILE_TYPES = [\".zip\", \".jar\", \".war\", \".pom\", \".module\", \".tar.gz\"]\n\n def __init__(self):\n self.git_repo = GitRepository(self.get_repo_url(), \"HEAD\")\n self.git_repo.execute(\"./bootstrap\", subdirname=\"src\")\n self.git_repo.execute(\"rm config.cfg\", subdirname=\"src\")\n\n def sign_artifacts(self, artifacts, basepath):\n for artifact in artifacts:\n if not self.is_valid_file_type(artifact):\n print(f\"Skipping signing of file ${artifact}\")\n continue\n location = os.path.join(basepath, artifact)\n self.sign(location)\n self.verify(location + \".asc\")\n\n def is_valid_file_type(self, file_name):\n return any(x in [pathlib.Path(file_name).suffix, \"\".join(pathlib.Path(file_name).suffixes)] for x in Signer.ACCEPTED_FILE_TYPES)\n\n def get_repo_url(self):\n if \"GITHUB_TOKEN\" in os.environ:\n return \"https://${GITHUB_TOKEN}@github.com/opensearch-project/opensearch-signer-client.git\"\n return \"https://github.com/opensearch-project/opensearch-signer-client.git\"\n\n def sign(self, filename):\n signature_file = filename + \".asc\"\n signing_cmd = [\n \"./opensearch-signer-client\",\n \"-i\",\n filename,\n \"-o\",\n signature_file,\n \"-p\",\n \"pgp\",\n ]\n self.git_repo.execute(\" \".join(signing_cmd), subdirname=\"src\")\n\n def verify(self, filename):\n verify_cmd = [\"gpg\", \"--verify-files\", filename]\n self.git_repo.execute(\" \".join(verify_cmd))\n", "path": "bundle-workflow/src/signing_workflow/signer.py"}]} | 1,227 | 455 |
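A short interpreter check makes the failure mode in the signer entry above concrete: for an artifact name with extra dots, `Path.suffixes` picks up the version fragments, so the joined value no longer equals any accepted extension even though the plain `Path.suffix` does. The filename is the one quoted in the issue; the rest is a minimal repro sketch:

```python
import pathlib

name = "client-benchmarks-1.0.0-javadoc.jar"
print(pathlib.Path(name).suffix)             # '.jar'
print("".join(pathlib.Path(name).suffixes))  # '.0.0-javadoc.jar', what the old check compared

# The joined form is still needed for double extensions such as tarballs:
print("".join(pathlib.Path("artifacts.tar.gz").suffixes))  # '.tar.gz'
```

That is why the patched `is_valid_file_type()` accepts a file when either the last suffix or the joined suffixes match an accepted type.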
gh_patches_debug_5178 | rasdani/github-patches | git_diff | pytorch__pytorch-1087 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
ProgressMonitor (Trainer plugin) prints incorrect progress value and percentage
Code to replicate behaviour, extract of [jupyter notebook](https://github.com/recastrodiaz/courses/blob/master/deeplearning1/nbs/lesson1-pytorch.ipynb) In [9]:
```PYTHON
t = trainer.Trainer(model, criterion, optimizer, train_loader)
t.register_plugin(trainer.plugins.ProgressMonitor())
t.register_plugin(trainer.plugins.AccuracyMonitor())
t.register_plugin(trainer.plugins.LossMonitor())
t.register_plugin(trainer.plugins.TimeMonitor())
t.register_plugin(trainer.plugins.Logger(['progress', 'accuracy', 'loss', 'time']))
epochs = 1
t.run(epochs)
```
Prints:
```LOG
progress: 23000/360 (6388.89%) accuracy: 100.00% (99.31%) loss: 0.0058 (0.0368) time: 705ms (1249ms)
```
Should print instead:
```
progress: 360/360 (100.00%) accuracy: 100.00% (99.31%) loss: 0.0058 (0.0368) time: 705ms (1249ms)
```
I'll send over a PR for this.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `torch/utils/trainer/plugins/progress.py`
Content:
```
1 from .plugin import Plugin
2
3
4 class ProgressMonitor(Plugin):
5 stat_name = 'progress'
6
7 def __init__(self):
8 super(ProgressMonitor, self).__init__([(1, 'iteration'), (1, 'epoch')])
9
10 def register(self, trainer):
11 self.trainer = trainer
12 stats = self.trainer.stats.setdefault(self.stat_name, {})
13 stats['samples_used'] = 0
14 stats['epoch_size'] = len(trainer.dataset)
15 stats['log_iter_fields'] = [
16 '{samples_used}/{epoch_size}',
17 '({percent:.2f}%)'
18 ]
19
20 def iteration(self, iteration, input, *args):
21 stats = self.trainer.stats.setdefault(self.stat_name, {})
22 stats['samples_used'] += input.size(0)
23 stats['percent'] = 100. * stats['samples_used'] / stats['epoch_size']
24
25 def epoch(self, *args):
26 stats = self.trainer.stats.setdefault(self.stat_name, {})
27 stats['samples_used'] = 0
28 stats['percent'] = 0
29
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/torch/utils/trainer/plugins/progress.py b/torch/utils/trainer/plugins/progress.py
--- a/torch/utils/trainer/plugins/progress.py
+++ b/torch/utils/trainer/plugins/progress.py
@@ -19,7 +19,7 @@
def iteration(self, iteration, input, *args):
stats = self.trainer.stats.setdefault(self.stat_name, {})
- stats['samples_used'] += input.size(0)
+ stats['samples_used'] += 1
stats['percent'] = 100. * stats['samples_used'] / stats['epoch_size']
def epoch(self, *args):
| {"golden_diff": "diff --git a/torch/utils/trainer/plugins/progress.py b/torch/utils/trainer/plugins/progress.py\n--- a/torch/utils/trainer/plugins/progress.py\n+++ b/torch/utils/trainer/plugins/progress.py\n@@ -19,7 +19,7 @@\n \n def iteration(self, iteration, input, *args):\n stats = self.trainer.stats.setdefault(self.stat_name, {})\n- stats['samples_used'] += input.size(0)\n+ stats['samples_used'] += 1\n stats['percent'] = 100. * stats['samples_used'] / stats['epoch_size']\n \n def epoch(self, *args):\n", "issue": "ProgressMonitor (Trainer plugin) prints incorrect progress value and percentage\nCode to replicate behaviour, extract of [jupyter notebook](https://github.com/recastrodiaz/courses/blob/master/deeplearning1/nbs/lesson1-pytorch.ipynb) In [9]:\r\n\r\n```PYTHON\r\nt = trainer.Trainer(model, criterion, optimizer, train_loader)\r\nt.register_plugin(trainer.plugins.ProgressMonitor())\r\nt.register_plugin(trainer.plugins.AccuracyMonitor())\r\nt.register_plugin(trainer.plugins.LossMonitor())\r\nt.register_plugin(trainer.plugins.TimeMonitor())\r\nt.register_plugin(trainer.plugins.Logger(['progress', 'accuracy', 'loss', 'time']))\r\n\r\nepochs = 1\r\nt.run(epochs)\r\n```\r\n\r\nPrints:\r\n```LOG\r\nprogress: 23000/360 (6388.89%)\taccuracy: 100.00% (99.31%)\tloss: 0.0058 (0.0368)\ttime: 705ms (1249ms)\r\n```\r\n\r\nShould print instead:\r\n```\r\nprogress: 360/360 (100.00%)\taccuracy: 100.00% (99.31%)\tloss: 0.0058 (0.0368)\ttime: 705ms (1249ms)\r\n```\r\n\r\nI'll send over a PR for this.\n", "before_files": [{"content": "from .plugin import Plugin\n\n\nclass ProgressMonitor(Plugin):\n stat_name = 'progress'\n\n def __init__(self):\n super(ProgressMonitor, self).__init__([(1, 'iteration'), (1, 'epoch')])\n\n def register(self, trainer):\n self.trainer = trainer\n stats = self.trainer.stats.setdefault(self.stat_name, {})\n stats['samples_used'] = 0\n stats['epoch_size'] = len(trainer.dataset)\n stats['log_iter_fields'] = [\n '{samples_used}/{epoch_size}',\n '({percent:.2f}%)'\n ]\n\n def iteration(self, iteration, input, *args):\n stats = self.trainer.stats.setdefault(self.stat_name, {})\n stats['samples_used'] += input.size(0)\n stats['percent'] = 100. * stats['samples_used'] / stats['epoch_size']\n\n def epoch(self, *args):\n stats = self.trainer.stats.setdefault(self.stat_name, {})\n stats['samples_used'] = 0\n stats['percent'] = 0\n", "path": "torch/utils/trainer/plugins/progress.py"}], "after_files": [{"content": "from .plugin import Plugin\n\n\nclass ProgressMonitor(Plugin):\n stat_name = 'progress'\n\n def __init__(self):\n super(ProgressMonitor, self).__init__([(1, 'iteration'), (1, 'epoch')])\n\n def register(self, trainer):\n self.trainer = trainer\n stats = self.trainer.stats.setdefault(self.stat_name, {})\n stats['samples_used'] = 0\n stats['epoch_size'] = len(trainer.dataset)\n stats['log_iter_fields'] = [\n '{samples_used}/{epoch_size}',\n '({percent:.2f}%)'\n ]\n\n def iteration(self, iteration, input, *args):\n stats = self.trainer.stats.setdefault(self.stat_name, {})\n stats['samples_used'] += 1\n stats['percent'] = 100. * stats['samples_used'] / stats['epoch_size']\n\n def epoch(self, *args):\n stats = self.trainer.stats.setdefault(self.stat_name, {})\n stats['samples_used'] = 0\n stats['percent'] = 0\n", "path": "torch/utils/trainer/plugins/progress.py"}]} | 860 | 142 |
gh_patches_debug_6362 | rasdani/github-patches | git_diff | CTFd__CTFd-826 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Can we condense the url_for calls for themes further?
url_for for themes files currently doesn't know what theme is being used and so we pass it in. We should probably default it inside the view. But we would need to default the parameters to None and then set it in the function itself.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `CTFd/utils/initialization/__init__.py`
Content:
```
1 from flask import Flask, current_app as app, request, session, redirect, url_for, abort, render_template
2 from werkzeug.wsgi import DispatcherMiddleware
3 from CTFd.models import db, Tracking
4
5 from CTFd.utils import markdown, get_config
6 from CTFd.utils.dates import unix_time_millis, unix_time, isoformat
7
8 from CTFd.utils import config
9 from CTFd.utils.config import can_send_mail, ctf_logo, ctf_name, ctf_theme
10 from CTFd.utils.config.pages import get_pages
11
12 from CTFd.utils.plugins import (
13 get_registered_stylesheets,
14 get_registered_scripts,
15 get_configurable_plugins,
16 get_registered_admin_scripts,
17 get_registered_admin_stylesheets
18 )
19
20 from CTFd.utils.countries import get_countries, lookup_country_code
21 from CTFd.utils.user import authed, get_ip, get_current_user, get_current_team
22 from CTFd.utils.modes import generate_account_url
23 from CTFd.utils.config import is_setup
24 from CTFd.utils.security.csrf import generate_nonce
25
26 from CTFd.utils.config.visibility import (
27 accounts_visible,
28 challenges_visible,
29 registration_visible,
30 scores_visible
31 )
32
33 from sqlalchemy.exc import InvalidRequestError, IntegrityError
34
35 import datetime
36
37
38 def init_template_filters(app):
39 app.jinja_env.filters['markdown'] = markdown
40 app.jinja_env.filters['unix_time'] = unix_time
41 app.jinja_env.filters['unix_time_millis'] = unix_time_millis
42 app.jinja_env.filters['isoformat'] = isoformat
43
44
45 def init_template_globals(app):
46 app.jinja_env.globals.update(config=config)
47 app.jinja_env.globals.update(get_pages=get_pages)
48 app.jinja_env.globals.update(can_send_mail=can_send_mail)
49 app.jinja_env.globals.update(get_ctf_name=ctf_name)
50 app.jinja_env.globals.update(get_ctf_logo=ctf_logo)
51 app.jinja_env.globals.update(get_ctf_theme=ctf_theme)
52 app.jinja_env.globals.update(get_configurable_plugins=get_configurable_plugins)
53 app.jinja_env.globals.update(get_registered_scripts=get_registered_scripts)
54 app.jinja_env.globals.update(get_registered_stylesheets=get_registered_stylesheets)
55 app.jinja_env.globals.update(get_registered_admin_scripts=get_registered_admin_scripts)
56 app.jinja_env.globals.update(get_registered_admin_stylesheets=get_registered_admin_stylesheets)
57 app.jinja_env.globals.update(get_config=get_config)
58 app.jinja_env.globals.update(generate_account_url=generate_account_url)
59 app.jinja_env.globals.update(get_countries=get_countries)
60 app.jinja_env.globals.update(lookup_country_code=lookup_country_code)
61 app.jinja_env.globals.update(accounts_visible=accounts_visible)
62 app.jinja_env.globals.update(challenges_visible=challenges_visible)
63 app.jinja_env.globals.update(registration_visible=registration_visible)
64 app.jinja_env.globals.update(scores_visible=scores_visible)
65
66
67 def init_request_processors(app):
68 @app.context_processor
69 def inject_user():
70 if session:
71 return dict(session)
72 return dict()
73
74 @app.before_request
75 def needs_setup():
76 if request.path == url_for('views.setup') or request.path.startswith('/themes'):
77 return
78 if not is_setup():
79 return redirect(url_for('views.setup'))
80
81 @app.before_request
82 def tracker():
83 # TODO: This function shouldn't cause a DB hit for lookups if possible
84 if authed():
85 track = Tracking.query.filter_by(ip=get_ip(), user_id=session['id']).first()
86 if not track:
87 visit = Tracking(ip=get_ip(), user_id=session['id'])
88 db.session.add(visit)
89 else:
90 track.date = datetime.datetime.utcnow()
91
92 try:
93 db.session.commit()
94 except (InvalidRequestError, IntegrityError) as e:
95 print(e.message)
96 db.session.rollback()
97 session.clear()
98
99 if authed():
100 user = get_current_user()
101 team = get_current_team()
102
103 if request.path.startswith('/themes') is False:
104 if user and user.banned:
105 return render_template('errors/403.html', error='You have been banned from this CTF'), 403
106
107 if team and team.banned:
108 return render_template('errors/403.html', error='Your team has been banned from this CTF'), 403
109
110 db.session.close()
111
112 @app.before_request
113 def csrf():
114 try:
115 func = app.view_functions[request.endpoint]
116 except KeyError:
117 abort(404)
118 if hasattr(func, '_bypass_csrf'):
119 return
120 if not session.get('nonce'):
121 session['nonce'] = generate_nonce()
122 if request.method == "POST":
123 if request.content_type != 'application/json':
124 if session['nonce'] != request.form.get('nonce'):
125 abort(403)
126
127 application_root = app.config.get('APPLICATION_ROOT')
128 if application_root != '/':
129 @app.before_request
130 def force_subdirectory_redirect():
131 if request.path.startswith(application_root) is False:
132 return redirect(application_root + request.script_root + request.full_path)
133
134 app.wsgi_app = DispatcherMiddleware(app.wsgi_app, {
135 application_root: app,
136 })
137
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/CTFd/utils/initialization/__init__.py b/CTFd/utils/initialization/__init__.py
--- a/CTFd/utils/initialization/__init__.py
+++ b/CTFd/utils/initialization/__init__.py
@@ -71,6 +71,11 @@
return dict(session)
return dict()
+ @app.url_defaults
+ def inject_theme(endpoint, values):
+ if 'theme' not in values and app.url_map.is_endpoint_expecting(endpoint, 'theme'):
+ values['theme'] = ctf_theme()
+
@app.before_request
def needs_setup():
if request.path == url_for('views.setup') or request.path.startswith('/themes'):
| {"golden_diff": "diff --git a/CTFd/utils/initialization/__init__.py b/CTFd/utils/initialization/__init__.py\n--- a/CTFd/utils/initialization/__init__.py\n+++ b/CTFd/utils/initialization/__init__.py\n@@ -71,6 +71,11 @@\n return dict(session)\n return dict()\n \n+ @app.url_defaults\n+ def inject_theme(endpoint, values):\n+ if 'theme' not in values and app.url_map.is_endpoint_expecting(endpoint, 'theme'):\n+ values['theme'] = ctf_theme()\n+\n @app.before_request\n def needs_setup():\n if request.path == url_for('views.setup') or request.path.startswith('/themes'):\n", "issue": "Can we condense the url_for calls for themes further? \nurl_for for themes files currently doesn't know what theme is being used and so we pass it in. We should probably default it inside the view. But we would need to default the parameters to None and then set it in the function itself. \n", "before_files": [{"content": "from flask import Flask, current_app as app, request, session, redirect, url_for, abort, render_template\nfrom werkzeug.wsgi import DispatcherMiddleware\nfrom CTFd.models import db, Tracking\n\nfrom CTFd.utils import markdown, get_config\nfrom CTFd.utils.dates import unix_time_millis, unix_time, isoformat\n\nfrom CTFd.utils import config\nfrom CTFd.utils.config import can_send_mail, ctf_logo, ctf_name, ctf_theme\nfrom CTFd.utils.config.pages import get_pages\n\nfrom CTFd.utils.plugins import (\n get_registered_stylesheets,\n get_registered_scripts,\n get_configurable_plugins,\n get_registered_admin_scripts,\n get_registered_admin_stylesheets\n)\n\nfrom CTFd.utils.countries import get_countries, lookup_country_code\nfrom CTFd.utils.user import authed, get_ip, get_current_user, get_current_team\nfrom CTFd.utils.modes import generate_account_url\nfrom CTFd.utils.config import is_setup\nfrom CTFd.utils.security.csrf import generate_nonce\n\nfrom CTFd.utils.config.visibility import (\n accounts_visible,\n challenges_visible,\n registration_visible,\n scores_visible\n)\n\nfrom sqlalchemy.exc import InvalidRequestError, IntegrityError\n\nimport datetime\n\n\ndef init_template_filters(app):\n app.jinja_env.filters['markdown'] = markdown\n app.jinja_env.filters['unix_time'] = unix_time\n app.jinja_env.filters['unix_time_millis'] = unix_time_millis\n app.jinja_env.filters['isoformat'] = isoformat\n\n\ndef init_template_globals(app):\n app.jinja_env.globals.update(config=config)\n app.jinja_env.globals.update(get_pages=get_pages)\n app.jinja_env.globals.update(can_send_mail=can_send_mail)\n app.jinja_env.globals.update(get_ctf_name=ctf_name)\n app.jinja_env.globals.update(get_ctf_logo=ctf_logo)\n app.jinja_env.globals.update(get_ctf_theme=ctf_theme)\n app.jinja_env.globals.update(get_configurable_plugins=get_configurable_plugins)\n app.jinja_env.globals.update(get_registered_scripts=get_registered_scripts)\n app.jinja_env.globals.update(get_registered_stylesheets=get_registered_stylesheets)\n app.jinja_env.globals.update(get_registered_admin_scripts=get_registered_admin_scripts)\n app.jinja_env.globals.update(get_registered_admin_stylesheets=get_registered_admin_stylesheets)\n app.jinja_env.globals.update(get_config=get_config)\n app.jinja_env.globals.update(generate_account_url=generate_account_url)\n app.jinja_env.globals.update(get_countries=get_countries)\n app.jinja_env.globals.update(lookup_country_code=lookup_country_code)\n app.jinja_env.globals.update(accounts_visible=accounts_visible)\n app.jinja_env.globals.update(challenges_visible=challenges_visible)\n 
app.jinja_env.globals.update(registration_visible=registration_visible)\n app.jinja_env.globals.update(scores_visible=scores_visible)\n\n\ndef init_request_processors(app):\n @app.context_processor\n def inject_user():\n if session:\n return dict(session)\n return dict()\n\n @app.before_request\n def needs_setup():\n if request.path == url_for('views.setup') or request.path.startswith('/themes'):\n return\n if not is_setup():\n return redirect(url_for('views.setup'))\n\n @app.before_request\n def tracker():\n # TODO: This function shouldn't cause a DB hit for lookups if possible\n if authed():\n track = Tracking.query.filter_by(ip=get_ip(), user_id=session['id']).first()\n if not track:\n visit = Tracking(ip=get_ip(), user_id=session['id'])\n db.session.add(visit)\n else:\n track.date = datetime.datetime.utcnow()\n\n try:\n db.session.commit()\n except (InvalidRequestError, IntegrityError) as e:\n print(e.message)\n db.session.rollback()\n session.clear()\n\n if authed():\n user = get_current_user()\n team = get_current_team()\n\n if request.path.startswith('/themes') is False:\n if user and user.banned:\n return render_template('errors/403.html', error='You have been banned from this CTF'), 403\n\n if team and team.banned:\n return render_template('errors/403.html', error='Your team has been banned from this CTF'), 403\n\n db.session.close()\n\n @app.before_request\n def csrf():\n try:\n func = app.view_functions[request.endpoint]\n except KeyError:\n abort(404)\n if hasattr(func, '_bypass_csrf'):\n return\n if not session.get('nonce'):\n session['nonce'] = generate_nonce()\n if request.method == \"POST\":\n if request.content_type != 'application/json':\n if session['nonce'] != request.form.get('nonce'):\n abort(403)\n\n application_root = app.config.get('APPLICATION_ROOT')\n if application_root != '/':\n @app.before_request\n def force_subdirectory_redirect():\n if request.path.startswith(application_root) is False:\n return redirect(application_root + request.script_root + request.full_path)\n\n app.wsgi_app = DispatcherMiddleware(app.wsgi_app, {\n application_root: app,\n })\n", "path": "CTFd/utils/initialization/__init__.py"}], "after_files": [{"content": "from flask import Flask, current_app as app, request, session, redirect, url_for, abort, render_template\nfrom werkzeug.wsgi import DispatcherMiddleware\nfrom CTFd.models import db, Tracking\n\nfrom CTFd.utils import markdown, get_config\nfrom CTFd.utils.dates import unix_time_millis, unix_time, isoformat\n\nfrom CTFd.utils import config\nfrom CTFd.utils.config import can_send_mail, ctf_logo, ctf_name, ctf_theme\nfrom CTFd.utils.config.pages import get_pages\n\nfrom CTFd.utils.plugins import (\n get_registered_stylesheets,\n get_registered_scripts,\n get_configurable_plugins,\n get_registered_admin_scripts,\n get_registered_admin_stylesheets\n)\n\nfrom CTFd.utils.countries import get_countries, lookup_country_code\nfrom CTFd.utils.user import authed, get_ip, get_current_user, get_current_team\nfrom CTFd.utils.modes import generate_account_url\nfrom CTFd.utils.config import is_setup\nfrom CTFd.utils.security.csrf import generate_nonce\n\nfrom CTFd.utils.config.visibility import (\n accounts_visible,\n challenges_visible,\n registration_visible,\n scores_visible\n)\n\nfrom sqlalchemy.exc import InvalidRequestError, IntegrityError\n\nimport datetime\n\n\ndef init_template_filters(app):\n app.jinja_env.filters['markdown'] = markdown\n app.jinja_env.filters['unix_time'] = unix_time\n app.jinja_env.filters['unix_time_millis'] = 
unix_time_millis\n app.jinja_env.filters['isoformat'] = isoformat\n\n\ndef init_template_globals(app):\n app.jinja_env.globals.update(config=config)\n app.jinja_env.globals.update(get_pages=get_pages)\n app.jinja_env.globals.update(can_send_mail=can_send_mail)\n app.jinja_env.globals.update(get_ctf_name=ctf_name)\n app.jinja_env.globals.update(get_ctf_logo=ctf_logo)\n app.jinja_env.globals.update(get_ctf_theme=ctf_theme)\n app.jinja_env.globals.update(get_configurable_plugins=get_configurable_plugins)\n app.jinja_env.globals.update(get_registered_scripts=get_registered_scripts)\n app.jinja_env.globals.update(get_registered_stylesheets=get_registered_stylesheets)\n app.jinja_env.globals.update(get_registered_admin_scripts=get_registered_admin_scripts)\n app.jinja_env.globals.update(get_registered_admin_stylesheets=get_registered_admin_stylesheets)\n app.jinja_env.globals.update(get_config=get_config)\n app.jinja_env.globals.update(generate_account_url=generate_account_url)\n app.jinja_env.globals.update(get_countries=get_countries)\n app.jinja_env.globals.update(lookup_country_code=lookup_country_code)\n app.jinja_env.globals.update(accounts_visible=accounts_visible)\n app.jinja_env.globals.update(challenges_visible=challenges_visible)\n app.jinja_env.globals.update(registration_visible=registration_visible)\n app.jinja_env.globals.update(scores_visible=scores_visible)\n\n\ndef init_request_processors(app):\n @app.context_processor\n def inject_user():\n if session:\n return dict(session)\n return dict()\n\n @app.url_defaults\n def inject_theme(endpoint, values):\n if 'theme' not in values and app.url_map.is_endpoint_expecting(endpoint, 'theme'):\n values['theme'] = ctf_theme()\n\n @app.before_request\n def needs_setup():\n if request.path == url_for('views.setup') or request.path.startswith('/themes'):\n return\n if not is_setup():\n return redirect(url_for('views.setup'))\n\n @app.before_request\n def tracker():\n # TODO: This function shouldn't cause a DB hit for lookups if possible\n if authed():\n track = Tracking.query.filter_by(ip=get_ip(), user_id=session['id']).first()\n if not track:\n visit = Tracking(ip=get_ip(), user_id=session['id'])\n db.session.add(visit)\n else:\n track.date = datetime.datetime.utcnow()\n\n try:\n db.session.commit()\n except (InvalidRequestError, IntegrityError) as e:\n print(e.message)\n db.session.rollback()\n session.clear()\n\n if authed():\n user = get_current_user()\n team = get_current_team()\n\n if request.path.startswith('/themes') is False:\n if user and user.banned:\n return render_template('errors/403.html', error='You have been banned from this CTF'), 403\n\n if team and team.banned:\n return render_template('errors/403.html', error='Your team has been banned from this CTF'), 403\n\n db.session.close()\n\n @app.before_request\n def csrf():\n try:\n func = app.view_functions[request.endpoint]\n except KeyError:\n abort(404)\n if hasattr(func, '_bypass_csrf'):\n return\n if not session.get('nonce'):\n session['nonce'] = generate_nonce()\n if request.method == \"POST\":\n if request.content_type != 'application/json':\n if session['nonce'] != request.form.get('nonce'):\n abort(403)\n\n application_root = app.config.get('APPLICATION_ROOT')\n if application_root != '/':\n @app.before_request\n def force_subdirectory_redirect():\n if request.path.startswith(application_root) is False:\n return redirect(application_root + request.script_root + request.full_path)\n\n app.wsgi_app = DispatcherMiddleware(app.wsgi_app, {\n application_root: app,\n })\n", 
"path": "CTFd/utils/initialization/__init__.py"}]} | 1,771 | 158 |
gh_patches_debug_17900 | rasdani/github-patches | git_diff | DataBiosphere__toil-4728 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Toil is marking its inputs executable
I think that Toil is marking input files from the local disk executable. It is resulting in weird changes to the file permissions in my Git tree when I run Toil tests.
Issue is synchronized with this [Jira Story](https://ucsc-cgl.atlassian.net/browse/TOIL-1462)
Issue Number: TOIL-1462
--- END ISSUE ---
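One detail worth noting before the `FileID` listing below: `unpack()` restores the executable flag with `bool(vals[1])`, and in Python any non-empty string, including `"0"`, is truthy. A two-line check of that behaviour (an observation about the excerpt that follows, not a summary of the eventual patch):

```python
>>> bool("0")   # non-empty string, so truthy even when it encodes "not executable"
True
>>> int("0") == 0
True
```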
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/toil/fileStores/__init__.py`
Content:
```
1 # Copyright (C) 2015-2021 Regents of the University of California
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 import os
15 import stat
16 from typing import Any
17
18
19 class FileID(str):
20 """
21 A small wrapper around Python's builtin string class.
22
23 It is used to represent a file's ID in the file store, and has a size attribute
24 that is the file's size in bytes. This object is returned by importFile and
25 writeGlobalFile.
26
27 Calls into the file store can use bare strings; size will be queried from
28 the job store if unavailable in the ID.
29 """
30
31 def __new__(cls, fileStoreID: str, *args: Any) -> 'FileID':
32 return super().__new__(cls, fileStoreID)
33
34 def __init__(self, fileStoreID: str, size: int, executable: bool = False) -> None:
35 # Don't pass an argument to parent class's __init__.
36 # In Python 3 we can have super(FileID, self) hand us object's __init__ which chokes on any arguments.
37 super().__init__()
38 self.size = size
39 self.executable = executable
40
41 def pack(self) -> str:
42 """Pack the FileID into a string so it can be passed through external code."""
43 return f'{self.size}:{int(self.executable)}:{self}'
44
45 @classmethod
46 def forPath(cls, fileStoreID: str, filePath: str) -> 'FileID':
47 executable = os.stat(filePath).st_mode & stat.S_IXUSR != 0
48 return cls(fileStoreID, os.stat(filePath).st_size, executable)
49
50 @classmethod
51 def unpack(cls, packedFileStoreID: str) -> 'FileID':
52 """Unpack the result of pack() into a FileID object."""
53 # Only separate twice in case the FileID itself has colons in it
54 vals = packedFileStoreID.split(':', 2)
55 # Break up the packed value
56 size = int(vals[0])
57 executable = bool(vals[1])
58 value = vals[2]
59 # Create the FileID
60 return cls(value, size, executable)
61
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/toil/fileStores/__init__.py b/src/toil/fileStores/__init__.py
--- a/src/toil/fileStores/__init__.py
+++ b/src/toil/fileStores/__init__.py
@@ -40,7 +40,7 @@
def pack(self) -> str:
"""Pack the FileID into a string so it can be passed through external code."""
- return f'{self.size}:{int(self.executable)}:{self}'
+ return f'{self.size}:{"1" if self.executable else "0"}:{self}'
@classmethod
def forPath(cls, fileStoreID: str, filePath: str) -> 'FileID':
@@ -54,7 +54,7 @@
vals = packedFileStoreID.split(':', 2)
# Break up the packed value
size = int(vals[0])
- executable = bool(vals[1])
+ executable = (vals[1] == "1")
value = vals[2]
# Create the FileID
return cls(value, size, executable)
| {"golden_diff": "diff --git a/src/toil/fileStores/__init__.py b/src/toil/fileStores/__init__.py\n--- a/src/toil/fileStores/__init__.py\n+++ b/src/toil/fileStores/__init__.py\n@@ -40,7 +40,7 @@\n \n def pack(self) -> str:\n \"\"\"Pack the FileID into a string so it can be passed through external code.\"\"\"\n- return f'{self.size}:{int(self.executable)}:{self}'\n+ return f'{self.size}:{\"1\" if self.executable else \"0\"}:{self}'\n \n @classmethod\n def forPath(cls, fileStoreID: str, filePath: str) -> 'FileID':\n@@ -54,7 +54,7 @@\n vals = packedFileStoreID.split(':', 2)\n # Break up the packed value\n size = int(vals[0])\n- executable = bool(vals[1])\n+ executable = (vals[1] == \"1\")\n value = vals[2]\n # Create the FileID\n return cls(value, size, executable)\n", "issue": "Toil is marking its inputs executable\nI think that Toil is marking input files from the local disk executable. It is resulting in weird changes to the file permissions in my Git tree when I run Toil tests.\n\n\n\n\u2506Issue is synchronized with this [Jira Story](https://ucsc-cgl.atlassian.net/browse/TOIL-1462)\n\u2506Issue Number: TOIL-1462\n\n", "before_files": [{"content": "# Copyright (C) 2015-2021 Regents of the University of California\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nimport os\nimport stat\nfrom typing import Any\n\n\nclass FileID(str):\n \"\"\"\n A small wrapper around Python's builtin string class.\n\n It is used to represent a file's ID in the file store, and has a size attribute\n that is the file's size in bytes. 
This object is returned by importFile and\n writeGlobalFile.\n\n Calls into the file store can use bare strings; size will be queried from\n the job store if unavailable in the ID.\n \"\"\"\n\n def __new__(cls, fileStoreID: str, *args: Any) -> 'FileID':\n return super().__new__(cls, fileStoreID)\n\n def __init__(self, fileStoreID: str, size: int, executable: bool = False) -> None:\n # Don't pass an argument to parent class's __init__.\n # In Python 3 we can have super(FileID, self) hand us object's __init__ which chokes on any arguments.\n super().__init__()\n self.size = size\n self.executable = executable\n\n def pack(self) -> str:\n \"\"\"Pack the FileID into a string so it can be passed through external code.\"\"\"\n return f'{self.size}:{int(self.executable)}:{self}'\n\n @classmethod\n def forPath(cls, fileStoreID: str, filePath: str) -> 'FileID':\n executable = os.stat(filePath).st_mode & stat.S_IXUSR != 0\n return cls(fileStoreID, os.stat(filePath).st_size, executable)\n\n @classmethod\n def unpack(cls, packedFileStoreID: str) -> 'FileID':\n \"\"\"Unpack the result of pack() into a FileID object.\"\"\"\n # Only separate twice in case the FileID itself has colons in it\n vals = packedFileStoreID.split(':', 2)\n # Break up the packed value\n size = int(vals[0])\n executable = bool(vals[1])\n value = vals[2]\n # Create the FileID\n return cls(value, size, executable)\n", "path": "src/toil/fileStores/__init__.py"}], "after_files": [{"content": "# Copyright (C) 2015-2021 Regents of the University of California\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nimport os\nimport stat\nfrom typing import Any\n\n\nclass FileID(str):\n \"\"\"\n A small wrapper around Python's builtin string class.\n\n It is used to represent a file's ID in the file store, and has a size attribute\n that is the file's size in bytes. 
This object is returned by importFile and\n writeGlobalFile.\n\n Calls into the file store can use bare strings; size will be queried from\n the job store if unavailable in the ID.\n \"\"\"\n\n def __new__(cls, fileStoreID: str, *args: Any) -> 'FileID':\n return super().__new__(cls, fileStoreID)\n\n def __init__(self, fileStoreID: str, size: int, executable: bool = False) -> None:\n # Don't pass an argument to parent class's __init__.\n # In Python 3 we can have super(FileID, self) hand us object's __init__ which chokes on any arguments.\n super().__init__()\n self.size = size\n self.executable = executable\n\n def pack(self) -> str:\n \"\"\"Pack the FileID into a string so it can be passed through external code.\"\"\"\n return f'{self.size}:{\"1\" if self.executable else \"0\"}:{self}'\n\n @classmethod\n def forPath(cls, fileStoreID: str, filePath: str) -> 'FileID':\n executable = os.stat(filePath).st_mode & stat.S_IXUSR != 0\n return cls(fileStoreID, os.stat(filePath).st_size, executable)\n\n @classmethod\n def unpack(cls, packedFileStoreID: str) -> 'FileID':\n \"\"\"Unpack the result of pack() into a FileID object.\"\"\"\n # Only separate twice in case the FileID itself has colons in it\n vals = packedFileStoreID.split(':', 2)\n # Break up the packed value\n size = int(vals[0])\n executable = (vals[1] == \"1\")\n value = vals[2]\n # Create the FileID\n return cls(value, size, executable)\n", "path": "src/toil/fileStores/__init__.py"}]} | 1,054 | 236 |
gh_patches_debug_42968 | rasdani/github-patches | git_diff | MycroftAI__mycroft-core-2588 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
self.remove_fallback(handler) it does not work
i need the remove.fallback function after recording in my wakeword skill. I acted according to instructions https://mycroft-ai.gitbook.io/docs/skill-development/skill-types/fallback-skill. however, I always get the message. "mycroft.skills.fallback_skill:remove_fallback:139 | Could not remove fallback!". the fallback function is retained and is not deactivated. During the skill shutdown, the function is executed, the error message remains and fallback is deactivated.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mycroft/skills/fallback_skill.py`
Content:
```
1 # Copyright 2019 Mycroft AI Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 #
15 """The fallback skill implements a special type of skill handling
16 utterances not handled by the intent system.
17 """
18 import operator
19 from mycroft.metrics import report_timing, Stopwatch
20 from mycroft.util.log import LOG
21
22
23 from .mycroft_skill import MycroftSkill, get_handler_name
24
25
26 class FallbackSkill(MycroftSkill):
27 """Fallbacks come into play when no skill matches an Adapt or closely with
28 a Padatious intent. All Fallback skills work together to give them a
29 view of the user's utterance. Fallback handlers are called in an order
30 determined the priority provided when the the handler is registered.
31
32 ======== ======== ================================================
33 Priority Who? Purpose
34 ======== ======== ================================================
35 1-4 RESERVED Unused for now, slot for pre-Padatious if needed
36 5 MYCROFT Padatious near match (conf > 0.8)
37 6-88 USER General
38 89 MYCROFT Padatious loose match (conf > 0.5)
39 90-99 USER Uncaught intents
40 100+ MYCROFT Fallback Unknown or other future use
41 ======== ======== ================================================
42
43 Handlers with the numerically lowest priority are invoked first.
44 Multiple fallbacks can exist at the same priority, but no order is
45 guaranteed.
46
47 A Fallback can either observe or consume an utterance. A consumed
48 utterance will not be see by any other Fallback handlers.
49 """
50 fallback_handlers = {}
51
52 def __init__(self, name=None, bus=None, use_settings=True):
53 super().__init__(name, bus, use_settings)
54
55 # list of fallback handlers registered by this instance
56 self.instance_fallback_handlers = []
57
58 @classmethod
59 def make_intent_failure_handler(cls, bus):
60 """Goes through all fallback handlers until one returns True"""
61
62 def handler(message):
63 # indicate fallback handling start
64 bus.emit(message.forward("mycroft.skill.handler.start",
65 data={'handler': "fallback"}))
66
67 stopwatch = Stopwatch()
68 handler_name = None
69 with stopwatch:
70 for _, handler in sorted(cls.fallback_handlers.items(),
71 key=operator.itemgetter(0)):
72 try:
73 if handler(message):
74 # indicate completion
75 handler_name = get_handler_name(handler)
76 bus.emit(message.forward(
77 'mycroft.skill.handler.complete',
78 data={'handler': "fallback",
79 "fallback_handler": handler_name}))
80 break
81 except Exception:
82 LOG.exception('Exception in fallback.')
83 else: # No fallback could handle the utterance
84 bus.emit(message.forward('complete_intent_failure'))
85 warning = "No fallback could handle intent."
86 LOG.warning(warning)
87 # indicate completion with exception
88 bus.emit(message.forward('mycroft.skill.handler.complete',
89 data={'handler': "fallback",
90 'exception': warning}))
91
92 # Send timing metric
93 if message.context.get('ident'):
94 ident = message.context['ident']
95 report_timing(ident, 'fallback_handler', stopwatch,
96 {'handler': handler_name})
97
98 return handler
99
100 @classmethod
101 def _register_fallback(cls, handler, priority):
102 """Register a function to be called as a general info fallback
103 Fallback should receive message and return
104 a boolean (True if succeeded or False if failed)
105
106 Lower priority gets run first
107 0 for high priority 100 for low priority
108 """
109 while priority in cls.fallback_handlers:
110 priority += 1
111
112 cls.fallback_handlers[priority] = handler
113
114 def register_fallback(self, handler, priority):
115 """Register a fallback with the list of fallback handlers and with the
116 list of handlers registered by this instance
117 """
118
119 def wrapper(*args, **kwargs):
120 if handler(*args, **kwargs):
121 self.make_active()
122 return True
123 return False
124
125 self.instance_fallback_handlers.append(wrapper)
126 self._register_fallback(wrapper, priority)
127
128 @classmethod
129 def remove_fallback(cls, handler_to_del):
130 """Remove a fallback handler.
131
132 Arguments:
133 handler_to_del: reference to handler
134 """
135 for priority, handler in cls.fallback_handlers.items():
136 if handler == handler_to_del:
137 del cls.fallback_handlers[priority]
138 return
139 LOG.warning('Could not remove fallback!')
140
141 def remove_instance_handlers(self):
142 """Remove all fallback handlers registered by the fallback skill."""
143 while len(self.instance_fallback_handlers):
144 handler = self.instance_fallback_handlers.pop()
145 self.remove_fallback(handler)
146
147 def default_shutdown(self):
148 """Remove all registered handlers and perform skill shutdown."""
149 self.remove_instance_handlers()
150 super(FallbackSkill, self).default_shutdown()
151
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mycroft/skills/fallback_skill.py b/mycroft/skills/fallback_skill.py
--- a/mycroft/skills/fallback_skill.py
+++ b/mycroft/skills/fallback_skill.py
@@ -48,6 +48,7 @@
utterance will not be see by any other Fallback handlers.
"""
fallback_handlers = {}
+ wrapper_map = [] # Map containing (handler, wrapper) tuples
def __init__(self, name=None, bus=None, use_settings=True):
super().__init__(name, bus, use_settings)
@@ -98,18 +99,25 @@
return handler
@classmethod
- def _register_fallback(cls, handler, priority):
+ def _register_fallback(cls, handler, wrapper, priority):
"""Register a function to be called as a general info fallback
Fallback should receive message and return
a boolean (True if succeeded or False if failed)
Lower priority gets run first
0 for high priority 100 for low priority
+
+ Arguments:
+ handler (callable): original handler, used as a reference when
+ removing
+ wrapper (callable): wrapped version of handler
+ priority (int): fallback priority
"""
while priority in cls.fallback_handlers:
priority += 1
- cls.fallback_handlers[priority] = handler
+ cls.fallback_handlers[priority] = wrapper
+ cls.wrapper_map.append((handler, wrapper))
def register_fallback(self, handler, priority):
"""Register a fallback with the list of fallback handlers and with the
@@ -122,8 +130,28 @@
return True
return False
- self.instance_fallback_handlers.append(wrapper)
- self._register_fallback(wrapper, priority)
+ self.instance_fallback_handlers.append(handler)
+ self._register_fallback(handler, wrapper, priority)
+
+ @classmethod
+ def _remove_registered_handler(cls, wrapper_to_del):
+ """Remove a registered wrapper.
+
+ Arguments:
+ wrapper_to_del (callable): wrapped handler to be removed
+
+ Returns:
+ (bool) True if one or more handlers were removed, otherwise False.
+ """
+ found_handler = False
+ for priority, handler in list(cls.fallback_handlers.items()):
+ if handler == wrapper_to_del:
+ found_handler = True
+ del cls.fallback_handlers[priority]
+
+ if not found_handler:
+ LOG.warning('No fallback matching {}'.format(wrapper_to_del))
+ return found_handler
@classmethod
def remove_fallback(cls, handler_to_del):
@@ -131,15 +159,27 @@
Arguments:
handler_to_del: reference to handler
+ Returns:
+ (bool) True if at least one handler was removed, otherwise False
"""
- for priority, handler in cls.fallback_handlers.items():
- if handler == handler_to_del:
- del cls.fallback_handlers[priority]
- return
- LOG.warning('Could not remove fallback!')
+ # Find wrapper from handler or wrapper
+ wrapper_to_del = None
+ for h, w in cls.wrapper_map:
+ if handler_to_del in (h, w):
+ wrapper_to_del = w
+ break
+
+ if wrapper_to_del:
+ cls.wrapper_map.remove((h, w))
+ remove_ok = cls._remove_registered_handler(wrapper_to_del)
+ else:
+ LOG.warning('Could not find matching fallback handler')
+ remove_ok = False
+ return remove_ok
def remove_instance_handlers(self):
"""Remove all fallback handlers registered by the fallback skill."""
+ self.log.info('Removing all handlers...')
while len(self.instance_fallback_handlers):
handler = self.instance_fallback_handlers.pop()
self.remove_fallback(handler)
| {"golden_diff": "diff --git a/mycroft/skills/fallback_skill.py b/mycroft/skills/fallback_skill.py\n--- a/mycroft/skills/fallback_skill.py\n+++ b/mycroft/skills/fallback_skill.py\n@@ -48,6 +48,7 @@\n utterance will not be see by any other Fallback handlers.\n \"\"\"\n fallback_handlers = {}\n+ wrapper_map = [] # Map containing (handler, wrapper) tuples\n \n def __init__(self, name=None, bus=None, use_settings=True):\n super().__init__(name, bus, use_settings)\n@@ -98,18 +99,25 @@\n return handler\n \n @classmethod\n- def _register_fallback(cls, handler, priority):\n+ def _register_fallback(cls, handler, wrapper, priority):\n \"\"\"Register a function to be called as a general info fallback\n Fallback should receive message and return\n a boolean (True if succeeded or False if failed)\n \n Lower priority gets run first\n 0 for high priority 100 for low priority\n+\n+ Arguments:\n+ handler (callable): original handler, used as a reference when\n+ removing\n+ wrapper (callable): wrapped version of handler\n+ priority (int): fallback priority\n \"\"\"\n while priority in cls.fallback_handlers:\n priority += 1\n \n- cls.fallback_handlers[priority] = handler\n+ cls.fallback_handlers[priority] = wrapper\n+ cls.wrapper_map.append((handler, wrapper))\n \n def register_fallback(self, handler, priority):\n \"\"\"Register a fallback with the list of fallback handlers and with the\n@@ -122,8 +130,28 @@\n return True\n return False\n \n- self.instance_fallback_handlers.append(wrapper)\n- self._register_fallback(wrapper, priority)\n+ self.instance_fallback_handlers.append(handler)\n+ self._register_fallback(handler, wrapper, priority)\n+\n+ @classmethod\n+ def _remove_registered_handler(cls, wrapper_to_del):\n+ \"\"\"Remove a registered wrapper.\n+\n+ Arguments:\n+ wrapper_to_del (callable): wrapped handler to be removed\n+\n+ Returns:\n+ (bool) True if one or more handlers were removed, otherwise False.\n+ \"\"\"\n+ found_handler = False\n+ for priority, handler in list(cls.fallback_handlers.items()):\n+ if handler == wrapper_to_del:\n+ found_handler = True\n+ del cls.fallback_handlers[priority]\n+\n+ if not found_handler:\n+ LOG.warning('No fallback matching {}'.format(wrapper_to_del))\n+ return found_handler\n \n @classmethod\n def remove_fallback(cls, handler_to_del):\n@@ -131,15 +159,27 @@\n \n Arguments:\n handler_to_del: reference to handler\n+ Returns:\n+ (bool) True if at least one handler was removed, otherwise False\n \"\"\"\n- for priority, handler in cls.fallback_handlers.items():\n- if handler == handler_to_del:\n- del cls.fallback_handlers[priority]\n- return\n- LOG.warning('Could not remove fallback!')\n+ # Find wrapper from handler or wrapper\n+ wrapper_to_del = None\n+ for h, w in cls.wrapper_map:\n+ if handler_to_del in (h, w):\n+ wrapper_to_del = w\n+ break\n+\n+ if wrapper_to_del:\n+ cls.wrapper_map.remove((h, w))\n+ remove_ok = cls._remove_registered_handler(wrapper_to_del)\n+ else:\n+ LOG.warning('Could not find matching fallback handler')\n+ remove_ok = False\n+ return remove_ok\n \n def remove_instance_handlers(self):\n \"\"\"Remove all fallback handlers registered by the fallback skill.\"\"\"\n+ self.log.info('Removing all handlers...')\n while len(self.instance_fallback_handlers):\n handler = self.instance_fallback_handlers.pop()\n self.remove_fallback(handler)\n", "issue": "self.remove_fallback(handler) it does not work\ni need the remove.fallback function after recording in my wakeword skill. 
I acted according to instructions https://mycroft-ai.gitbook.io/docs/skill-development/skill-types/fallback-skill. however, I always get the message. \"mycroft.skills.fallback_skill:remove_fallback:139 | Could not remove fallback!\". the fallback function is retained and is not deactivated. During the skill shutdown, the function is executed, the error message remains and fallback is deactivated.\r\n\n", "before_files": [{"content": "# Copyright 2019 Mycroft AI Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n\"\"\"The fallback skill implements a special type of skill handling\nutterances not handled by the intent system.\n\"\"\"\nimport operator\nfrom mycroft.metrics import report_timing, Stopwatch\nfrom mycroft.util.log import LOG\n\n\nfrom .mycroft_skill import MycroftSkill, get_handler_name\n\n\nclass FallbackSkill(MycroftSkill):\n \"\"\"Fallbacks come into play when no skill matches an Adapt or closely with\n a Padatious intent. All Fallback skills work together to give them a\n view of the user's utterance. Fallback handlers are called in an order\n determined the priority provided when the the handler is registered.\n\n ======== ======== ================================================\n Priority Who? Purpose\n ======== ======== ================================================\n 1-4 RESERVED Unused for now, slot for pre-Padatious if needed\n 5 MYCROFT Padatious near match (conf > 0.8)\n 6-88 USER General\n 89 MYCROFT Padatious loose match (conf > 0.5)\n 90-99 USER Uncaught intents\n 100+ MYCROFT Fallback Unknown or other future use\n ======== ======== ================================================\n\n Handlers with the numerically lowest priority are invoked first.\n Multiple fallbacks can exist at the same priority, but no order is\n guaranteed.\n\n A Fallback can either observe or consume an utterance. 
A consumed\n utterance will not be see by any other Fallback handlers.\n \"\"\"\n fallback_handlers = {}\n\n def __init__(self, name=None, bus=None, use_settings=True):\n super().__init__(name, bus, use_settings)\n\n # list of fallback handlers registered by this instance\n self.instance_fallback_handlers = []\n\n @classmethod\n def make_intent_failure_handler(cls, bus):\n \"\"\"Goes through all fallback handlers until one returns True\"\"\"\n\n def handler(message):\n # indicate fallback handling start\n bus.emit(message.forward(\"mycroft.skill.handler.start\",\n data={'handler': \"fallback\"}))\n\n stopwatch = Stopwatch()\n handler_name = None\n with stopwatch:\n for _, handler in sorted(cls.fallback_handlers.items(),\n key=operator.itemgetter(0)):\n try:\n if handler(message):\n # indicate completion\n handler_name = get_handler_name(handler)\n bus.emit(message.forward(\n 'mycroft.skill.handler.complete',\n data={'handler': \"fallback\",\n \"fallback_handler\": handler_name}))\n break\n except Exception:\n LOG.exception('Exception in fallback.')\n else: # No fallback could handle the utterance\n bus.emit(message.forward('complete_intent_failure'))\n warning = \"No fallback could handle intent.\"\n LOG.warning(warning)\n # indicate completion with exception\n bus.emit(message.forward('mycroft.skill.handler.complete',\n data={'handler': \"fallback\",\n 'exception': warning}))\n\n # Send timing metric\n if message.context.get('ident'):\n ident = message.context['ident']\n report_timing(ident, 'fallback_handler', stopwatch,\n {'handler': handler_name})\n\n return handler\n\n @classmethod\n def _register_fallback(cls, handler, priority):\n \"\"\"Register a function to be called as a general info fallback\n Fallback should receive message and return\n a boolean (True if succeeded or False if failed)\n\n Lower priority gets run first\n 0 for high priority 100 for low priority\n \"\"\"\n while priority in cls.fallback_handlers:\n priority += 1\n\n cls.fallback_handlers[priority] = handler\n\n def register_fallback(self, handler, priority):\n \"\"\"Register a fallback with the list of fallback handlers and with the\n list of handlers registered by this instance\n \"\"\"\n\n def wrapper(*args, **kwargs):\n if handler(*args, **kwargs):\n self.make_active()\n return True\n return False\n\n self.instance_fallback_handlers.append(wrapper)\n self._register_fallback(wrapper, priority)\n\n @classmethod\n def remove_fallback(cls, handler_to_del):\n \"\"\"Remove a fallback handler.\n\n Arguments:\n handler_to_del: reference to handler\n \"\"\"\n for priority, handler in cls.fallback_handlers.items():\n if handler == handler_to_del:\n del cls.fallback_handlers[priority]\n return\n LOG.warning('Could not remove fallback!')\n\n def remove_instance_handlers(self):\n \"\"\"Remove all fallback handlers registered by the fallback skill.\"\"\"\n while len(self.instance_fallback_handlers):\n handler = self.instance_fallback_handlers.pop()\n self.remove_fallback(handler)\n\n def default_shutdown(self):\n \"\"\"Remove all registered handlers and perform skill shutdown.\"\"\"\n self.remove_instance_handlers()\n super(FallbackSkill, self).default_shutdown()\n", "path": "mycroft/skills/fallback_skill.py"}], "after_files": [{"content": "# Copyright 2019 Mycroft AI Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required 
by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n\"\"\"The fallback skill implements a special type of skill handling\nutterances not handled by the intent system.\n\"\"\"\nimport operator\nfrom mycroft.metrics import report_timing, Stopwatch\nfrom mycroft.util.log import LOG\n\n\nfrom .mycroft_skill import MycroftSkill, get_handler_name\n\n\nclass FallbackSkill(MycroftSkill):\n \"\"\"Fallbacks come into play when no skill matches an Adapt or closely with\n a Padatious intent. All Fallback skills work together to give them a\n view of the user's utterance. Fallback handlers are called in an order\n determined the priority provided when the the handler is registered.\n\n ======== ======== ================================================\n Priority Who? Purpose\n ======== ======== ================================================\n 1-4 RESERVED Unused for now, slot for pre-Padatious if needed\n 5 MYCROFT Padatious near match (conf > 0.8)\n 6-88 USER General\n 89 MYCROFT Padatious loose match (conf > 0.5)\n 90-99 USER Uncaught intents\n 100+ MYCROFT Fallback Unknown or other future use\n ======== ======== ================================================\n\n Handlers with the numerically lowest priority are invoked first.\n Multiple fallbacks can exist at the same priority, but no order is\n guaranteed.\n\n A Fallback can either observe or consume an utterance. A consumed\n utterance will not be see by any other Fallback handlers.\n \"\"\"\n fallback_handlers = {}\n wrapper_map = [] # Map containing (handler, wrapper) tuples\n\n def __init__(self, name=None, bus=None, use_settings=True):\n super().__init__(name, bus, use_settings)\n\n # list of fallback handlers registered by this instance\n self.instance_fallback_handlers = []\n\n @classmethod\n def make_intent_failure_handler(cls, bus):\n \"\"\"Goes through all fallback handlers until one returns True\"\"\"\n\n def handler(message):\n # indicate fallback handling start\n bus.emit(message.forward(\"mycroft.skill.handler.start\",\n data={'handler': \"fallback\"}))\n\n stopwatch = Stopwatch()\n handler_name = None\n with stopwatch:\n for _, handler in sorted(cls.fallback_handlers.items(),\n key=operator.itemgetter(0)):\n try:\n if handler(message):\n # indicate completion\n handler_name = get_handler_name(handler)\n bus.emit(message.forward(\n 'mycroft.skill.handler.complete',\n data={'handler': \"fallback\",\n \"fallback_handler\": handler_name}))\n break\n except Exception:\n LOG.exception('Exception in fallback.')\n else: # No fallback could handle the utterance\n bus.emit(message.forward('complete_intent_failure'))\n warning = \"No fallback could handle intent.\"\n LOG.warning(warning)\n # indicate completion with exception\n bus.emit(message.forward('mycroft.skill.handler.complete',\n data={'handler': \"fallback\",\n 'exception': warning}))\n\n # Send timing metric\n if message.context.get('ident'):\n ident = message.context['ident']\n report_timing(ident, 'fallback_handler', stopwatch,\n {'handler': handler_name})\n\n return handler\n\n @classmethod\n def _register_fallback(cls, handler, wrapper, priority):\n \"\"\"Register a function to be called as a general info fallback\n Fallback should receive message and return\n a boolean (True if succeeded or False if failed)\n\n Lower 
priority gets run first\n 0 for high priority 100 for low priority\n\n Arguments:\n handler (callable): original handler, used as a reference when\n removing\n wrapper (callable): wrapped version of handler\n priority (int): fallback priority\n \"\"\"\n while priority in cls.fallback_handlers:\n priority += 1\n\n cls.fallback_handlers[priority] = wrapper\n cls.wrapper_map.append((handler, wrapper))\n\n def register_fallback(self, handler, priority):\n \"\"\"Register a fallback with the list of fallback handlers and with the\n list of handlers registered by this instance\n \"\"\"\n\n def wrapper(*args, **kwargs):\n if handler(*args, **kwargs):\n self.make_active()\n return True\n return False\n\n self.instance_fallback_handlers.append(handler)\n self._register_fallback(handler, wrapper, priority)\n\n @classmethod\n def _remove_registered_handler(cls, wrapper_to_del):\n \"\"\"Remove a registered wrapper.\n\n Arguments:\n wrapper_to_del (callable): wrapped handler to be removed\n\n Returns:\n (bool) True if one or more handlers were removed, otherwise False.\n \"\"\"\n found_handler = False\n for priority, handler in list(cls.fallback_handlers.items()):\n if handler == wrapper_to_del:\n found_handler = True\n del cls.fallback_handlers[priority]\n\n if not found_handler:\n LOG.warning('No fallback matching {}'.format(wrapper_to_del))\n return found_handler\n\n @classmethod\n def remove_fallback(cls, handler_to_del):\n \"\"\"Remove a fallback handler.\n\n Arguments:\n handler_to_del: reference to handler\n Returns:\n (bool) True if at least one handler was removed, otherwise False\n \"\"\"\n # Find wrapper from handler or wrapper\n wrapper_to_del = None\n for h, w in cls.wrapper_map:\n if handler_to_del in (h, w):\n wrapper_to_del = w\n break\n\n if wrapper_to_del:\n cls.wrapper_map.remove((h, w))\n remove_ok = cls._remove_registered_handler(wrapper_to_del)\n else:\n LOG.warning('Could not find matching fallback handler')\n remove_ok = False\n return remove_ok\n\n def remove_instance_handlers(self):\n \"\"\"Remove all fallback handlers registered by the fallback skill.\"\"\"\n self.log.info('Removing all handlers...')\n while len(self.instance_fallback_handlers):\n handler = self.instance_fallback_handlers.pop()\n self.remove_fallback(handler)\n\n def default_shutdown(self):\n \"\"\"Remove all registered handlers and perform skill shutdown.\"\"\"\n self.remove_instance_handlers()\n super(FallbackSkill, self).default_shutdown()\n", "path": "mycroft/skills/fallback_skill.py"}]} | 1,900 | 849 |
gh_patches_debug_54 | rasdani/github-patches | git_diff | strawberry-graphql__strawberry-585 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
v0.43.0 requires pydantic, but is marked optional
Attempting to start a very simple server results in:
```
$ strawberry server app
Traceback (most recent call last):
File "/home/ossareh/.local/share/virtualenvs/vega-IIh8dIOy/bin/strawberry", line 5, in <module>
from strawberry.cli import run
File "/home/ossareh/.local/share/virtualenvs/vega-IIh8dIOy/lib/python3.9/site-packages/strawberry/__init__.py", line 1, in <module>
from . import experimental, federation
File "/home/ossareh/.local/share/virtualenvs/vega-IIh8dIOy/lib/python3.9/site-packages/strawberry/experimental/__init__.py", line 1, in <module>
from . import pydantic
File "/home/ossareh/.local/share/virtualenvs/vega-IIh8dIOy/lib/python3.9/site-packages/strawberry/experimental/pydantic/__init__.py", line 1, in <module>
from .error_type import error_type
File "/home/ossareh/.local/share/virtualenvs/vega-IIh8dIOy/lib/python3.9/site-packages/strawberry/experimental/pydantic/error_type.py", line 4, in <module>
from pydantic import BaseModel
ModuleNotFoundError: No module named 'pydantic'
```
Even though pyproject.toml has pydantic marked as optional: https://github.com/strawberry-graphql/strawberry/blob/master/pyproject.toml#L37
app.py is:
```
import asyncio
import strawberry
async def resolve_hello(root, info, name: str) -> str:
await asyncio.sleep(1)
return f"Hello {name}"
@strawberry.type
class Query:
hello: str = strawberry.field(resolver=resolve_hello)
schema = strawberry.Schema(query=Query)
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `strawberry/experimental/__init__.py`
Content:
```
1 from . import pydantic
2
3
4 __all__ = ["pydantic"]
5
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/strawberry/experimental/__init__.py b/strawberry/experimental/__init__.py
--- a/strawberry/experimental/__init__.py
+++ b/strawberry/experimental/__init__.py
@@ -1,4 +1,6 @@
-from . import pydantic
-
-
-__all__ = ["pydantic"]
+try:
+ from . import pydantic
+except ImportError:
+ pass
+else:
+ __all__ = ["pydantic"]
| {"golden_diff": "diff --git a/strawberry/experimental/__init__.py b/strawberry/experimental/__init__.py\n--- a/strawberry/experimental/__init__.py\n+++ b/strawberry/experimental/__init__.py\n@@ -1,4 +1,6 @@\n-from . import pydantic\n-\n-\n-__all__ = [\"pydantic\"]\n+try:\n+ from . import pydantic\n+except ImportError:\n+ pass\n+else:\n+ __all__ = [\"pydantic\"]\n", "issue": "v0.43.0 requires pydantic, but is marked optional\nAttempting to start a very simple server results in:\r\n\r\n```\r\n$ strawberry server app\r\nTraceback (most recent call last):\r\n File \"/home/ossareh/.local/share/virtualenvs/vega-IIh8dIOy/bin/strawberry\", line 5, in <module>\r\n from strawberry.cli import run\r\n File \"/home/ossareh/.local/share/virtualenvs/vega-IIh8dIOy/lib/python3.9/site-packages/strawberry/__init__.py\", line 1, in <module>\r\n from . import experimental, federation\r\n File \"/home/ossareh/.local/share/virtualenvs/vega-IIh8dIOy/lib/python3.9/site-packages/strawberry/experimental/__init__.py\", line 1, in <module>\r\n from . import pydantic\r\n File \"/home/ossareh/.local/share/virtualenvs/vega-IIh8dIOy/lib/python3.9/site-packages/strawberry/experimental/pydantic/__init__.py\", line 1, in <module>\r\n from .error_type import error_type\r\n File \"/home/ossareh/.local/share/virtualenvs/vega-IIh8dIOy/lib/python3.9/site-packages/strawberry/experimental/pydantic/error_type.py\", line 4, in <module>\r\n from pydantic import BaseModel\r\nModuleNotFoundError: No module named 'pydantic'\r\n```\r\n\r\nEven though pyproject.toml has pydantic marked as optional: https://github.com/strawberry-graphql/strawberry/blob/master/pyproject.toml#L37\r\n\r\napp.py is:\r\n\r\n```\r\nimport asyncio\r\n\r\nimport strawberry\r\n\r\n\r\nasync def resolve_hello(root, info, name: str) -> str:\r\n await asyncio.sleep(1)\r\n return f\"Hello {name}\"\r\n\r\n\r\[email protected]\r\nclass Query:\r\n hello: str = strawberry.field(resolver=resolve_hello)\r\n\r\n\r\nschema = strawberry.Schema(query=Query)\r\n```\n", "before_files": [{"content": "from . import pydantic\n\n\n__all__ = [\"pydantic\"]\n", "path": "strawberry/experimental/__init__.py"}], "after_files": [{"content": "try:\n from . import pydantic\nexcept ImportError:\n pass\nelse:\n __all__ = [\"pydantic\"]\n", "path": "strawberry/experimental/__init__.py"}]} | 736 | 115 |
gh_patches_debug_11265 | rasdani/github-patches | git_diff | kubeflow__pipelines-1936 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Handling of boolean input types
I have a pipeline that has a boolean parameter:
```
def test(
param: dsl.PipelineParam = dsl.PipelineParam(name='param', value=True, param_type=TypeMeta('bool')),
) -> None:
pass
```
When starting a new pipeline via the UI, the input field is a textbox, prefilled with the input string 'True'. Given that the input value is evaluated using `bool(value)`, the only way to start the pipeline with a `False` value is to leave the input textbox empty.
It seems that currently the pipeline does not gracefully handle boolean parameters?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sdk/python/kfp/components/_data_passing.py`
Content:
```
1 # Copyright 2019 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 __all__ = [
16 'type_to_type_name',
17 'type_name_to_type',
18 'type_to_deserializer',
19 'type_name_to_deserializer',
20 'type_name_to_serializer',
21 ]
22
23
24 import inspect
25 from typing import Any, Callable, NamedTuple, Sequence
26 import warnings
27
28
29 Converter = NamedTuple('Converter', [
30 ('types', Sequence[str]),
31 ('type_names', Sequence[str]),
32 ('serializer', Callable[[Any], str]),
33 ('deserializer_code', str),
34 ('definitions', str),
35 ])
36
37
38 _converters = [
39 Converter([str], ['String', 'str'], str, 'str', None),
40 Converter([int], ['Integer', 'int'], str, 'int', None),
41 Converter([float], ['Float', 'float'], str, 'float', None),
42 ]
43
44
45 type_to_type_name = {typ: converter.type_names[0] for converter in _converters for typ in converter.types}
46 type_name_to_type = {type_name: converter.types[0] for converter in _converters for type_name in converter.type_names if converter.types}
47 type_to_deserializer = {typ: (converter.deserializer_code, converter.definitions) for converter in _converters for typ in converter.types}
48 type_name_to_deserializer = {type_name: (converter.deserializer_code, converter.definitions) for converter in _converters for type_name in converter.type_names}
49 type_name_to_serializer = {type_name: converter.serializer for converter in _converters for type_name in converter.type_names}
50
51
52 def serialize_value(value, type_name: str) -> str:
53 '''serialize_value converts the passed value to string based on the serializer associated with the passed type_name'''
54 if isinstance(value, str):
55 return value # The value is supposedly already serialized
56
57 if type_name is None:
58 type_name = type_to_type_name.get(type(value), type(value).__name__)
59 warnings.warn('Missing type name was inferred as "{}" based on the value "{}".'.format(type_name, str(value)))
60
61 serializer = type_name_to_serializer.get(type_name, None)
62 if serializer:
63 try:
64 return serializer(value)
65 except Exception as e:
66 raise ValueError('Failed to serialize the value "{}" of type "{}" to type "{}". Exception: {}'.format(
67 str(value),
68 str(type(value).__name__),
69 str(type_name),
70 str(e),
71 ))
72
73 serialized_value = str(value)
74 warnings.warn('There are no registered serializers from type "{}" to type "{}", so the value will be serializers as string "{}".'.format(
75 str(type(value).__name__),
76 str(type_name),
77 serialized_value),
78 )
79 return serialized_value
80
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/sdk/python/kfp/components/_data_passing.py b/sdk/python/kfp/components/_data_passing.py
--- a/sdk/python/kfp/components/_data_passing.py
+++ b/sdk/python/kfp/components/_data_passing.py
@@ -35,10 +35,20 @@
])
+def _deserialize_bool(s) -> bool:
+ from distutils.util import strtobool
+ return strtobool(s) == 1
+
+
+_bool_deserializer_definitions = inspect.getsource(_deserialize_bool)
+_bool_deserializer_code = _deserialize_bool.__name__
+
+
_converters = [
Converter([str], ['String', 'str'], str, 'str', None),
Converter([int], ['Integer', 'int'], str, 'int', None),
Converter([float], ['Float', 'float'], str, 'float', None),
+ Converter([bool], ['Boolean', 'bool'], str, _bool_deserializer_code, _bool_deserializer_definitions),
]
| {"golden_diff": "diff --git a/sdk/python/kfp/components/_data_passing.py b/sdk/python/kfp/components/_data_passing.py\n--- a/sdk/python/kfp/components/_data_passing.py\n+++ b/sdk/python/kfp/components/_data_passing.py\n@@ -35,10 +35,20 @@\n ])\n \n \n+def _deserialize_bool(s) -> bool:\n+ from distutils.util import strtobool\n+ return strtobool(s) == 1\n+\n+\n+_bool_deserializer_definitions = inspect.getsource(_deserialize_bool)\n+_bool_deserializer_code = _deserialize_bool.__name__\n+\n+\n _converters = [\n Converter([str], ['String', 'str'], str, 'str', None),\n Converter([int], ['Integer', 'int'], str, 'int', None),\n Converter([float], ['Float', 'float'], str, 'float', None),\n+ Converter([bool], ['Boolean', 'bool'], str, _bool_deserializer_code, _bool_deserializer_definitions),\n ]\n", "issue": "Handling of boolean input types\nI have a pipeline that has a boolean parameter:\r\n\r\n```\r\ndef test(\r\n param: dsl.PipelineParam = dsl.PipelineParam(name='param', value=True, param_type=TypeMeta('bool')),\r\n) -> None:\r\n pass\r\n```\r\n\r\nWhen starting a new pipeline via the UI, the input field is a textbox, prefilled with the input string 'True'. Given that the input value is evaluated using `bool(value)`, the only way to start the pipeline with a `False` value is to leave the input textbox empty.\r\n\r\nIt seems that currently the pipeline does not gracefully handle boolean parameters?\n", "before_files": [{"content": "# Copyright 2019 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n__all__ = [\n 'type_to_type_name',\n 'type_name_to_type',\n 'type_to_deserializer',\n 'type_name_to_deserializer',\n 'type_name_to_serializer',\n]\n\n\nimport inspect\nfrom typing import Any, Callable, NamedTuple, Sequence\nimport warnings\n\n\nConverter = NamedTuple('Converter', [\n ('types', Sequence[str]),\n ('type_names', Sequence[str]),\n ('serializer', Callable[[Any], str]),\n ('deserializer_code', str),\n ('definitions', str),\n])\n\n\n_converters = [\n Converter([str], ['String', 'str'], str, 'str', None),\n Converter([int], ['Integer', 'int'], str, 'int', None),\n Converter([float], ['Float', 'float'], str, 'float', None),\n]\n\n\ntype_to_type_name = {typ: converter.type_names[0] for converter in _converters for typ in converter.types}\ntype_name_to_type = {type_name: converter.types[0] for converter in _converters for type_name in converter.type_names if converter.types}\ntype_to_deserializer = {typ: (converter.deserializer_code, converter.definitions) for converter in _converters for typ in converter.types}\ntype_name_to_deserializer = {type_name: (converter.deserializer_code, converter.definitions) for converter in _converters for type_name in converter.type_names}\ntype_name_to_serializer = {type_name: converter.serializer for converter in _converters for type_name in converter.type_names}\n\n\ndef serialize_value(value, type_name: str) -> str:\n '''serialize_value converts the passed value to string based on the serializer associated with the passed type_name'''\n if 
isinstance(value, str):\n return value # The value is supposedly already serialized\n\n if type_name is None:\n type_name = type_to_type_name.get(type(value), type(value).__name__)\n warnings.warn('Missing type name was inferred as \"{}\" based on the value \"{}\".'.format(type_name, str(value)))\n\n serializer = type_name_to_serializer.get(type_name, None)\n if serializer:\n try:\n return serializer(value)\n except Exception as e:\n raise ValueError('Failed to serialize the value \"{}\" of type \"{}\" to type \"{}\". Exception: {}'.format(\n str(value),\n str(type(value).__name__),\n str(type_name),\n str(e),\n ))\n\n serialized_value = str(value)\n warnings.warn('There are no registered serializers from type \"{}\" to type \"{}\", so the value will be serializers as string \"{}\".'.format(\n str(type(value).__name__),\n str(type_name),\n serialized_value),\n )\n return serialized_value\n", "path": "sdk/python/kfp/components/_data_passing.py"}], "after_files": [{"content": "# Copyright 2019 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n__all__ = [\n 'type_to_type_name',\n 'type_name_to_type',\n 'type_to_deserializer',\n 'type_name_to_deserializer',\n 'type_name_to_serializer',\n]\n\n\nimport inspect\nfrom typing import Any, Callable, NamedTuple, Sequence\nimport warnings\n\n\nConverter = NamedTuple('Converter', [\n ('types', Sequence[str]),\n ('type_names', Sequence[str]),\n ('serializer', Callable[[Any], str]),\n ('deserializer_code', str),\n ('definitions', str),\n])\n\n\ndef _deserialize_bool(s) -> bool:\n from distutils.util import strtobool\n return strtobool(s) == 1\n\n\n_bool_deserializer_definitions = inspect.getsource(_deserialize_bool)\n_bool_deserializer_code = _deserialize_bool.__name__\n\n\n_converters = [\n Converter([str], ['String', 'str'], str, 'str', None),\n Converter([int], ['Integer', 'int'], str, 'int', None),\n Converter([float], ['Float', 'float'], str, 'float', None),\n Converter([bool], ['Boolean', 'bool'], str, _bool_deserializer_code, _bool_deserializer_definitions),\n]\n\n\ntype_to_type_name = {typ: converter.type_names[0] for converter in _converters for typ in converter.types}\ntype_name_to_type = {type_name: converter.types[0] for converter in _converters for type_name in converter.type_names if converter.types}\ntype_to_deserializer = {typ: (converter.deserializer_code, converter.definitions) for converter in _converters for typ in converter.types}\ntype_name_to_deserializer = {type_name: (converter.deserializer_code, converter.definitions) for converter in _converters for type_name in converter.type_names}\ntype_name_to_serializer = {type_name: converter.serializer for converter in _converters for type_name in converter.type_names}\n\n\ndef serialize_value(value, type_name: str) -> str:\n '''serialize_value converts the passed value to string based on the serializer associated with the passed type_name'''\n if isinstance(value, str):\n return value # The value is supposedly already serialized\n\n if type_name is None:\n type_name = 
type_to_type_name.get(type(value), type(value).__name__)\n warnings.warn('Missing type name was inferred as \"{}\" based on the value \"{}\".'.format(type_name, str(value)))\n\n serializer = type_name_to_serializer.get(type_name, None)\n if serializer:\n try:\n return serializer(value)\n except Exception as e:\n raise ValueError('Failed to serialize the value \"{}\" of type \"{}\" to type \"{}\". Exception: {}'.format(\n str(value),\n str(type(value).__name__),\n str(type_name),\n str(e),\n ))\n\n serialized_value = str(value)\n warnings.warn('There are no registered serializers from type \"{}\" to type \"{}\", so the value will be serializers as string \"{}\".'.format(\n str(type(value).__name__),\n str(type_name),\n serialized_value),\n )\n return serialized_value\n", "path": "sdk/python/kfp/components/_data_passing.py"}]} | 1,246 | 216 |
gh_patches_debug_13531 | rasdani/github-patches | git_diff | mkdocs__mkdocs-417 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Publish personal GitHub pages
I've switched over to using mkdocs for [my personal page](http://jiahao.github.io) and am really liking the output.
I'd like to request that `gh-deploy` be extended to support personal GitHub pages also. Unlike project pages, personal pages must have repository names `<username>.github.io`, and the project is published to the `master` branch instead of `gh-pages`. Everything works fine if I do it manually so I presume supporting personal pages automatically should not be too difficult.
Edit: I have a hacky Makefile that emulates `gh-deploy` (assumes that the markdown source is on a branch called `mkdocs` and deploys to `master`):
``` make
site:
git checkout mkdocs
mkdocs build
git checkout master
cp -R site/* .
rm -rf site
echo Ready to commit and push
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mkdocs/gh_deploy.py`
Content:
```
1 from __future__ import print_function
2 import subprocess
3 import os
4
5
6 def gh_deploy(config):
7 if not os.path.exists('.git'):
8 print('Cannot deploy - this directory does not appear to be a git repository')
9 return
10
11 print("Copying '%s' to `gh-pages` branch and pushing to GitHub." % config['site_dir'])
12 try:
13 subprocess.check_call(['ghp-import', '-p', config['site_dir']])
14 except:
15 return
16
17 # Does this repository have a CNAME set for GitHub pages?
18 if os.path.isfile('CNAME'):
19 # This GitHub pages repository has a CNAME configured.
20 with(open('CNAME', 'r')) as f:
21 cname_host = f.read().strip()
22 print('Based on your CNAME file, your documentation should be available shortly at: http://%s' % cname_host)
23 print('NOTE: Your DNS records must be configured appropriately for your CNAME URL to work.')
24 return
25
26 # No CNAME found. We will use the origin URL to determine the GitHub
27 # pages location.
28 url = subprocess.check_output(["git", "config", "--get", "remote.origin.url"])
29 url = url.decode('utf-8').strip()
30
31 host = None
32 path = None
33 if 'github.com/' in url:
34 host, path = url.split('github.com/', 1)
35 elif 'github.com:' in url:
36 host, path = url.split('github.com:', 1)
37
38 if host is None:
39 # This could be a GitHub Enterprise deployment.
40 print('Your documentation should be available shortly.')
41 else:
42 username, repo = path.split('/', 1)
43 if repo.endswith('.git'):
44 repo = repo[:-len('.git')]
45 url = 'http://%s.github.io/%s' % (username, repo)
46 print('Your documentation should shortly be available at: ' + url)
47
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mkdocs/gh_deploy.py b/mkdocs/gh_deploy.py
--- a/mkdocs/gh_deploy.py
+++ b/mkdocs/gh_deploy.py
@@ -4,13 +4,17 @@
def gh_deploy(config):
+
if not os.path.exists('.git'):
print('Cannot deploy - this directory does not appear to be a git repository')
return
print("Copying '%s' to `gh-pages` branch and pushing to GitHub." % config['site_dir'])
try:
- subprocess.check_call(['ghp-import', '-p', config['site_dir']])
+ command = ['ghp-import', '-p', config['site_dir']]
+ if 'remote_branch' in config:
+ command.extend(['-b', config['remote_branch']])
+ subprocess.check_call(command)
except:
return
| {"golden_diff": "diff --git a/mkdocs/gh_deploy.py b/mkdocs/gh_deploy.py\n--- a/mkdocs/gh_deploy.py\n+++ b/mkdocs/gh_deploy.py\n@@ -4,13 +4,17 @@\n \n \n def gh_deploy(config):\n+\n if not os.path.exists('.git'):\n print('Cannot deploy - this directory does not appear to be a git repository')\n return\n \n print(\"Copying '%s' to `gh-pages` branch and pushing to GitHub.\" % config['site_dir'])\n try:\n- subprocess.check_call(['ghp-import', '-p', config['site_dir']])\n+ command = ['ghp-import', '-p', config['site_dir']]\n+ if 'remote_branch' in config:\n+ command.extend(['-b', config['remote_branch']])\n+ subprocess.check_call(command)\n except:\n return\n", "issue": "Publish personal GitHub pages\nI've switched over to using mkdocs for [my personal page](http://jiahao.github.io) and am really liking the output.\n\nI'd like to request that `gh-deploy` be extended to support personal GitHub pages also. Unlike project pages, personal pages must have repository names `<username>.github.io`, and the project is published to the `master` branch instead of `gh-pages`. Everything works fine if I do it manually so I presume supporting personal pages automatically should not be too difficult.\n\nEdit: I have a hacky Makefile that emulates `gh-deploy` (assumes that the markdown source is on a branch called `mkdocs` and deploys to `master`):\n\n``` make\nsite:\n git checkout mkdocs\n mkdocs build\n git checkout master\n cp -R site/* .\n rm -rf site\n echo Ready to commit and push\n```\n\n", "before_files": [{"content": "from __future__ import print_function\nimport subprocess\nimport os\n\n\ndef gh_deploy(config):\n if not os.path.exists('.git'):\n print('Cannot deploy - this directory does not appear to be a git repository')\n return\n\n print(\"Copying '%s' to `gh-pages` branch and pushing to GitHub.\" % config['site_dir'])\n try:\n subprocess.check_call(['ghp-import', '-p', config['site_dir']])\n except:\n return\n\n # Does this repository have a CNAME set for GitHub pages?\n if os.path.isfile('CNAME'):\n # This GitHub pages repository has a CNAME configured.\n with(open('CNAME', 'r')) as f:\n cname_host = f.read().strip()\n print('Based on your CNAME file, your documentation should be available shortly at: http://%s' % cname_host)\n print('NOTE: Your DNS records must be configured appropriately for your CNAME URL to work.')\n return\n\n # No CNAME found. 
We will use the origin URL to determine the GitHub\n # pages location.\n url = subprocess.check_output([\"git\", \"config\", \"--get\", \"remote.origin.url\"])\n url = url.decode('utf-8').strip()\n\n host = None\n path = None\n if 'github.com/' in url:\n host, path = url.split('github.com/', 1)\n elif 'github.com:' in url:\n host, path = url.split('github.com:', 1)\n\n if host is None:\n # This could be a GitHub Enterprise deployment.\n print('Your documentation should be available shortly.')\n else:\n username, repo = path.split('/', 1)\n if repo.endswith('.git'):\n repo = repo[:-len('.git')]\n url = 'http://%s.github.io/%s' % (username, repo)\n print('Your documentation should shortly be available at: ' + url)\n", "path": "mkdocs/gh_deploy.py"}], "after_files": [{"content": "from __future__ import print_function\nimport subprocess\nimport os\n\n\ndef gh_deploy(config):\n\n if not os.path.exists('.git'):\n print('Cannot deploy - this directory does not appear to be a git repository')\n return\n\n print(\"Copying '%s' to `gh-pages` branch and pushing to GitHub.\" % config['site_dir'])\n try:\n command = ['ghp-import', '-p', config['site_dir']]\n if 'remote_branch' in config:\n command.extend(['-b', config['remote_branch']])\n subprocess.check_call(command)\n except:\n return\n\n # Does this repository have a CNAME set for GitHub pages?\n if os.path.isfile('CNAME'):\n # This GitHub pages repository has a CNAME configured.\n with(open('CNAME', 'r')) as f:\n cname_host = f.read().strip()\n print('Based on your CNAME file, your documentation should be available shortly at: http://%s' % cname_host)\n print('NOTE: Your DNS records must be configured appropriately for your CNAME URL to work.')\n return\n\n # No CNAME found. We will use the origin URL to determine the GitHub\n # pages location.\n url = subprocess.check_output([\"git\", \"config\", \"--get\", \"remote.origin.url\"])\n url = url.decode('utf-8').strip()\n\n host = None\n path = None\n if 'github.com/' in url:\n host, path = url.split('github.com/', 1)\n elif 'github.com:' in url:\n host, path = url.split('github.com:', 1)\n\n if host is None:\n # This could be a GitHub Enterprise deployment.\n print('Your documentation should be available shortly.')\n else:\n username, repo = path.split('/', 1)\n if repo.endswith('.git'):\n repo = repo[:-len('.git')]\n url = 'http://%s.github.io/%s' % (username, repo)\n print('Your documentation should shortly be available at: ' + url)\n", "path": "mkdocs/gh_deploy.py"}]} | 950 | 186 |
gh_patches_debug_37248 | rasdani/github-patches | git_diff | ipython__ipython-14256 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
deprecate IPython.utils.tz
It is not used in IPython itself, and uses functions that are marked for removal in Python
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `IPython/utils/tz.py`
Content:
```
1 # encoding: utf-8
2 """
3 Timezone utilities
4
5 Just UTC-awareness right now
6 """
7
8 #-----------------------------------------------------------------------------
9 # Copyright (C) 2013 The IPython Development Team
10 #
11 # Distributed under the terms of the BSD License. The full license is in
12 # the file COPYING, distributed as part of this software.
13 #-----------------------------------------------------------------------------
14
15 #-----------------------------------------------------------------------------
16 # Imports
17 #-----------------------------------------------------------------------------
18
19 from datetime import tzinfo, timedelta, datetime
20
21 #-----------------------------------------------------------------------------
22 # Code
23 #-----------------------------------------------------------------------------
24 # constant for zero offset
25 ZERO = timedelta(0)
26
27 class tzUTC(tzinfo):
28 """tzinfo object for UTC (zero offset)"""
29
30 def utcoffset(self, d):
31 return ZERO
32
33 def dst(self, d):
34 return ZERO
35
36
37 UTC = tzUTC() # type: ignore[abstract]
38
39
40 def utc_aware(unaware):
41 """decorator for adding UTC tzinfo to datetime's utcfoo methods"""
42 def utc_method(*args, **kwargs):
43 dt = unaware(*args, **kwargs)
44 return dt.replace(tzinfo=UTC)
45 return utc_method
46
47 utcfromtimestamp = utc_aware(datetime.utcfromtimestamp)
48 utcnow = utc_aware(datetime.utcnow)
49
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/IPython/utils/tz.py b/IPython/utils/tz.py
--- a/IPython/utils/tz.py
+++ b/IPython/utils/tz.py
@@ -3,29 +3,56 @@
Timezone utilities
Just UTC-awareness right now
+
+Deprecated since IPython 8.19.0.
"""
-#-----------------------------------------------------------------------------
+# -----------------------------------------------------------------------------
# Copyright (C) 2013 The IPython Development Team
#
# Distributed under the terms of the BSD License. The full license is in
# the file COPYING, distributed as part of this software.
-#-----------------------------------------------------------------------------
+# -----------------------------------------------------------------------------
-#-----------------------------------------------------------------------------
+# -----------------------------------------------------------------------------
# Imports
-#-----------------------------------------------------------------------------
+# -----------------------------------------------------------------------------
+import warnings
from datetime import tzinfo, timedelta, datetime
-#-----------------------------------------------------------------------------
+# -----------------------------------------------------------------------------
# Code
-#-----------------------------------------------------------------------------
+# -----------------------------------------------------------------------------
+__all__ = ["tzUTC", "utc_aware", "utcfromtimestamp", "utcnow"]
+
+
# constant for zero offset
ZERO = timedelta(0)
+
+def __getattr__(name):
+ if name not in __all__:
+ err = f"IPython.utils.tz is deprecated and has no attribute {name}"
+ raise AttributeError(err)
+
+ _warn_deprecated()
+
+ return getattr(name)
+
+
+def _warn_deprecated():
+ msg = "The module `IPython.utils.tz` is deprecated and will be completely removed."
+ warnings.warn(msg, category=DeprecationWarning, stacklevel=2)
+
+
class tzUTC(tzinfo):
- """tzinfo object for UTC (zero offset)"""
+ """tzinfo object for UTC (zero offset)
+
+ Deprecated since IPython 8.19.0.
+ """
+
+ _warn_deprecated()
def utcoffset(self, d):
return ZERO
@@ -38,11 +65,18 @@
def utc_aware(unaware):
- """decorator for adding UTC tzinfo to datetime's utcfoo methods"""
+ """decorator for adding UTC tzinfo to datetime's utcfoo methods
+
+ Deprecated since IPython 8.19.0.
+ """
+
def utc_method(*args, **kwargs):
+ _warn_deprecated()
dt = unaware(*args, **kwargs)
return dt.replace(tzinfo=UTC)
+
return utc_method
+
utcfromtimestamp = utc_aware(datetime.utcfromtimestamp)
utcnow = utc_aware(datetime.utcnow)
| {"golden_diff": "diff --git a/IPython/utils/tz.py b/IPython/utils/tz.py\n--- a/IPython/utils/tz.py\n+++ b/IPython/utils/tz.py\n@@ -3,29 +3,56 @@\n Timezone utilities\n \n Just UTC-awareness right now\n+\n+Deprecated since IPython 8.19.0.\n \"\"\"\n \n-#-----------------------------------------------------------------------------\n+# -----------------------------------------------------------------------------\n # Copyright (C) 2013 The IPython Development Team\n #\n # Distributed under the terms of the BSD License. The full license is in\n # the file COPYING, distributed as part of this software.\n-#-----------------------------------------------------------------------------\n+# -----------------------------------------------------------------------------\n \n-#-----------------------------------------------------------------------------\n+# -----------------------------------------------------------------------------\n # Imports\n-#-----------------------------------------------------------------------------\n+# -----------------------------------------------------------------------------\n \n+import warnings\n from datetime import tzinfo, timedelta, datetime\n \n-#-----------------------------------------------------------------------------\n+# -----------------------------------------------------------------------------\n # Code\n-#-----------------------------------------------------------------------------\n+# -----------------------------------------------------------------------------\n+__all__ = [\"tzUTC\", \"utc_aware\", \"utcfromtimestamp\", \"utcnow\"]\n+\n+\n # constant for zero offset\n ZERO = timedelta(0)\n \n+\n+def __getattr__(name):\n+ if name not in __all__:\n+ err = f\"IPython.utils.tz is deprecated and has no attribute {name}\"\n+ raise AttributeError(err)\n+\n+ _warn_deprecated()\n+\n+ return getattr(name)\n+\n+\n+def _warn_deprecated():\n+ msg = \"The module `IPython.utils.tz` is deprecated and will be completely removed.\"\n+ warnings.warn(msg, category=DeprecationWarning, stacklevel=2)\n+\n+\n class tzUTC(tzinfo):\n- \"\"\"tzinfo object for UTC (zero offset)\"\"\"\n+ \"\"\"tzinfo object for UTC (zero offset)\n+\n+ Deprecated since IPython 8.19.0.\n+ \"\"\"\n+\n+ _warn_deprecated()\n \n def utcoffset(self, d):\n return ZERO\n@@ -38,11 +65,18 @@\n \n \n def utc_aware(unaware):\n- \"\"\"decorator for adding UTC tzinfo to datetime's utcfoo methods\"\"\"\n+ \"\"\"decorator for adding UTC tzinfo to datetime's utcfoo methods\n+\n+ Deprecated since IPython 8.19.0.\n+ \"\"\"\n+\n def utc_method(*args, **kwargs):\n+ _warn_deprecated()\n dt = unaware(*args, **kwargs)\n return dt.replace(tzinfo=UTC)\n+\n return utc_method\n \n+\n utcfromtimestamp = utc_aware(datetime.utcfromtimestamp)\n utcnow = utc_aware(datetime.utcnow)\n", "issue": "deprecate IPython.utils.tz\nIt is not used in IPython itself, and uses functions that are marked for removal in Python\n", "before_files": [{"content": "# encoding: utf-8\n\"\"\"\nTimezone utilities\n\nJust UTC-awareness right now\n\"\"\"\n\n#-----------------------------------------------------------------------------\n# Copyright (C) 2013 The IPython Development Team\n#\n# Distributed under the terms of the BSD License. 
The full license is in\n# the file COPYING, distributed as part of this software.\n#-----------------------------------------------------------------------------\n\n#-----------------------------------------------------------------------------\n# Imports\n#-----------------------------------------------------------------------------\n\nfrom datetime import tzinfo, timedelta, datetime\n\n#-----------------------------------------------------------------------------\n# Code\n#-----------------------------------------------------------------------------\n# constant for zero offset\nZERO = timedelta(0)\n\nclass tzUTC(tzinfo):\n \"\"\"tzinfo object for UTC (zero offset)\"\"\"\n\n def utcoffset(self, d):\n return ZERO\n\n def dst(self, d):\n return ZERO\n\n\nUTC = tzUTC() # type: ignore[abstract]\n\n\ndef utc_aware(unaware):\n \"\"\"decorator for adding UTC tzinfo to datetime's utcfoo methods\"\"\"\n def utc_method(*args, **kwargs):\n dt = unaware(*args, **kwargs)\n return dt.replace(tzinfo=UTC)\n return utc_method\n\nutcfromtimestamp = utc_aware(datetime.utcfromtimestamp)\nutcnow = utc_aware(datetime.utcnow)\n", "path": "IPython/utils/tz.py"}], "after_files": [{"content": "# encoding: utf-8\n\"\"\"\nTimezone utilities\n\nJust UTC-awareness right now\n\nDeprecated since IPython 8.19.0.\n\"\"\"\n\n# -----------------------------------------------------------------------------\n# Copyright (C) 2013 The IPython Development Team\n#\n# Distributed under the terms of the BSD License. The full license is in\n# the file COPYING, distributed as part of this software.\n# -----------------------------------------------------------------------------\n\n# -----------------------------------------------------------------------------\n# Imports\n# -----------------------------------------------------------------------------\n\nimport warnings\nfrom datetime import tzinfo, timedelta, datetime\n\n# -----------------------------------------------------------------------------\n# Code\n# -----------------------------------------------------------------------------\n__all__ = [\"tzUTC\", \"utc_aware\", \"utcfromtimestamp\", \"utcnow\"]\n\n\n# constant for zero offset\nZERO = timedelta(0)\n\n\ndef __getattr__(name):\n if name not in __all__:\n err = f\"IPython.utils.tz is deprecated and has no attribute {name}\"\n raise AttributeError(err)\n\n _warn_deprecated()\n\n return getattr(name)\n\n\ndef _warn_deprecated():\n msg = \"The module `IPython.utils.tz` is deprecated and will be completely removed.\"\n warnings.warn(msg, category=DeprecationWarning, stacklevel=2)\n\n\nclass tzUTC(tzinfo):\n \"\"\"tzinfo object for UTC (zero offset)\n\n Deprecated since IPython 8.19.0.\n \"\"\"\n\n _warn_deprecated()\n\n def utcoffset(self, d):\n return ZERO\n\n def dst(self, d):\n return ZERO\n\n\nUTC = tzUTC() # type: ignore[abstract]\n\n\ndef utc_aware(unaware):\n \"\"\"decorator for adding UTC tzinfo to datetime's utcfoo methods\n\n Deprecated since IPython 8.19.0.\n \"\"\"\n\n def utc_method(*args, **kwargs):\n _warn_deprecated()\n dt = unaware(*args, **kwargs)\n return dt.replace(tzinfo=UTC)\n\n return utc_method\n\n\nutcfromtimestamp = utc_aware(datetime.utcfromtimestamp)\nutcnow = utc_aware(datetime.utcnow)\n", "path": "IPython/utils/tz.py"}]} | 641 | 536 |
gh_patches_debug_8314 | rasdani/github-patches | git_diff | bridgecrewio__checkov-5425 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[CKV_AWS_358] - false alarm triggered even when GH Organisation is specified
**Describe the issue**
I guess there is false alarm triggered by this check.
The GitHub Organisation is already included in the values of condition for the `token.actions.githubusercontent.com:sub` variable.
**Examples**
```terraform
data "aws_iam_policy_document" "assumerole" {
statement {
sid = "DefaultAssumeRole"
effect = "Allow"
actions = [
"sts:AssumeRoleWithWebIdentity",
]
condition {
test = "StringEquals"
variable = "token.actions.githubusercontent.com:aud"
values = [
"sts.amazonaws.com"
]
}
condition {
test = "StringLike"
variable = "token.actions.githubusercontent.com:sub"
values = [
"repo:GitHub-Organization/${var.github_repo}:*"
]
}
principals {
type = "Federated"
identifiers = [
"arn:aws:iam::${var.aws_account_id}:oidc-provider/token.actions.githubusercontent.com"
]
}
}
}
```
Checkov output:
```sh
Check: CKV_AWS_358: "Ensure GitHub Actions OIDC trust policies only allows actions from a specific known organization"
FAILED for resource: module.ecr_repo.aws_iam_policy_document.assumerole
File: /../../modules/ecr-repository/data_aws_iam_policy_document_assumerole.tf:1-35
Calling File: /module_ecr_repository.tf:2-24
```
**Version:**
- Checkov Version [2.3.359]
**Additional context**
See https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/configuring-openid-connect-in-amazon-web-services
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `checkov/terraform/checks/data/aws/GithubActionsOIDCTrustPolicy.py`
Content:
```
1 from typing import Dict, List, Any
2 import re
3 from checkov.common.models.enums import CheckResult, CheckCategories
4 from checkov.common.util.type_forcers import force_list
5 from checkov.terraform.checks.data.base_check import BaseDataCheck
6
7 gh_repo_regex = re.compile(r'repo:[^/]+/[^/]+')
8
9
10 class GithubActionsOIDCTrustPolicy(BaseDataCheck):
11 def __init__(self):
12 name = 'Ensure GitHub Actions OIDC trust policies only allows actions from a specific known organization'
13 id = "CKV_AWS_358"
14 supported_data = ("aws_iam_policy_document",)
15 categories = [CheckCategories.IAM]
16 super().__init__(name=name, id=id, categories=categories, supported_data=supported_data)
17
18 def scan_data_conf(self, conf: Dict[str, List[Any]], entity_type: str) -> CheckResult:
19 statements = force_list(conf.get('statement'))
20 for statement in statements:
21 found_federated_gh_oidc = False
22 if isinstance(statement, dict):
23 if statement.get('principals'):
24 principals = statement['principals']
25 for principal in force_list(principals):
26 if 'type' not in principal and 'identifiers' not in principal:
27 continue
28 principal_type = principal['type']
29 principal_identifiers = principal['identifiers']
30 if isinstance(principal_type, list) and len(
31 principal_type) and 'Federated' in principal_type and isinstance(principal_identifiers,
32 list):
33 for identifier in principal_identifiers:
34 if isinstance(identifier,
35 list) and 'oidc-provider/token.actions.githubusercontent.com' in \
36 identifier[0]:
37 found_federated_gh_oidc = True
38 break
39 if not found_federated_gh_oidc:
40 return CheckResult.PASSED
41 if found_federated_gh_oidc and not statement.get('condition'):
42 return CheckResult.FAILED
43 found_sub_condition_variable = False
44 found_sub_condition_value = False
45 for condition in statement.get('condition'):
46 condition_variables = condition.get('variable')
47 condition_values = condition.get('values')
48 if isinstance(condition_variables, list):
49 for condition_variable in condition_variables:
50 if condition_variable == 'token.actions.githubusercontent.com:sub':
51 found_sub_condition_variable = True
52 break
53 for condition_value in condition_values:
54 if isinstance(condition_value, list) and gh_repo_regex.search(condition_value[0]):
55 found_sub_condition_value = True
56 break
57 if found_sub_condition_value and found_sub_condition_variable:
58 return CheckResult.PASSED
59 else:
60 return CheckResult.FAILED
61
62 return CheckResult.PASSED
63
64
65 check = GithubActionsOIDCTrustPolicy()
66
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/checkov/terraform/checks/data/aws/GithubActionsOIDCTrustPolicy.py b/checkov/terraform/checks/data/aws/GithubActionsOIDCTrustPolicy.py
--- a/checkov/terraform/checks/data/aws/GithubActionsOIDCTrustPolicy.py
+++ b/checkov/terraform/checks/data/aws/GithubActionsOIDCTrustPolicy.py
@@ -56,8 +56,9 @@
break
if found_sub_condition_value and found_sub_condition_variable:
return CheckResult.PASSED
- else:
- return CheckResult.FAILED
+
+ # Found a federated GitHub user, but no restirctions
+ return CheckResult.FAILED
return CheckResult.PASSED
| {"golden_diff": "diff --git a/checkov/terraform/checks/data/aws/GithubActionsOIDCTrustPolicy.py b/checkov/terraform/checks/data/aws/GithubActionsOIDCTrustPolicy.py\n--- a/checkov/terraform/checks/data/aws/GithubActionsOIDCTrustPolicy.py\n+++ b/checkov/terraform/checks/data/aws/GithubActionsOIDCTrustPolicy.py\n@@ -56,8 +56,9 @@\n break\n if found_sub_condition_value and found_sub_condition_variable:\n return CheckResult.PASSED\n- else:\n- return CheckResult.FAILED\n+\n+ # Found a federated GitHub user, but no restirctions\n+ return CheckResult.FAILED\n \n return CheckResult.PASSED\n", "issue": "[CKV_AWS_358] - false alarm triggered even when GH Organisation is specified\n**Describe the issue**\r\n\r\nI guess there is false alarm triggered by this check.\r\nThe GitHub Organisation is already included in the values of condition for the `token.actions.githubusercontent.com:sub` variable.\r\n\r\n**Examples**\r\n```terraform\r\ndata \"aws_iam_policy_document\" \"assumerole\" {\r\n statement {\r\n sid = \"DefaultAssumeRole\"\r\n effect = \"Allow\"\r\n\r\n actions = [\r\n \"sts:AssumeRoleWithWebIdentity\",\r\n ]\r\n\r\n condition {\r\n test = \"StringEquals\"\r\n variable = \"token.actions.githubusercontent.com:aud\"\r\n\r\n values = [\r\n \"sts.amazonaws.com\"\r\n ]\r\n }\r\n\r\n condition {\r\n test = \"StringLike\"\r\n variable = \"token.actions.githubusercontent.com:sub\"\r\n\r\n values = [\r\n \"repo:GitHub-Organization/${var.github_repo}:*\"\r\n ]\r\n }\r\n\r\n principals {\r\n type = \"Federated\"\r\n identifiers = [\r\n \"arn:aws:iam::${var.aws_account_id}:oidc-provider/token.actions.githubusercontent.com\"\r\n ]\r\n }\r\n }\r\n}\r\n```\r\n\r\nCheckov output:\r\n```sh\r\nCheck: CKV_AWS_358: \"Ensure GitHub Actions OIDC trust policies only allows actions from a specific known organization\"\r\n\tFAILED for resource: module.ecr_repo.aws_iam_policy_document.assumerole\r\n\tFile: /../../modules/ecr-repository/data_aws_iam_policy_document_assumerole.tf:1-35\r\n\tCalling File: /module_ecr_repository.tf:2-24\r\n```\r\n\r\n**Version:**\r\n - Checkov Version [2.3.359]\r\n\r\n**Additional context**\r\nSee https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/configuring-openid-connect-in-amazon-web-services\r\n\r\n\n", "before_files": [{"content": "from typing import Dict, List, Any\nimport re\nfrom checkov.common.models.enums import CheckResult, CheckCategories\nfrom checkov.common.util.type_forcers import force_list\nfrom checkov.terraform.checks.data.base_check import BaseDataCheck\n\ngh_repo_regex = re.compile(r'repo:[^/]+/[^/]+')\n\n\nclass GithubActionsOIDCTrustPolicy(BaseDataCheck):\n def __init__(self):\n name = 'Ensure GitHub Actions OIDC trust policies only allows actions from a specific known organization'\n id = \"CKV_AWS_358\"\n supported_data = (\"aws_iam_policy_document\",)\n categories = [CheckCategories.IAM]\n super().__init__(name=name, id=id, categories=categories, supported_data=supported_data)\n\n def scan_data_conf(self, conf: Dict[str, List[Any]], entity_type: str) -> CheckResult:\n statements = force_list(conf.get('statement'))\n for statement in statements:\n found_federated_gh_oidc = False\n if isinstance(statement, dict):\n if statement.get('principals'):\n principals = statement['principals']\n for principal in force_list(principals):\n if 'type' not in principal and 'identifiers' not in principal:\n continue\n principal_type = principal['type']\n principal_identifiers = principal['identifiers']\n if isinstance(principal_type, list) and 
len(\n principal_type) and 'Federated' in principal_type and isinstance(principal_identifiers,\n list):\n for identifier in principal_identifiers:\n if isinstance(identifier,\n list) and 'oidc-provider/token.actions.githubusercontent.com' in \\\n identifier[0]:\n found_federated_gh_oidc = True\n break\n if not found_federated_gh_oidc:\n return CheckResult.PASSED\n if found_federated_gh_oidc and not statement.get('condition'):\n return CheckResult.FAILED\n found_sub_condition_variable = False\n found_sub_condition_value = False\n for condition in statement.get('condition'):\n condition_variables = condition.get('variable')\n condition_values = condition.get('values')\n if isinstance(condition_variables, list):\n for condition_variable in condition_variables:\n if condition_variable == 'token.actions.githubusercontent.com:sub':\n found_sub_condition_variable = True\n break\n for condition_value in condition_values:\n if isinstance(condition_value, list) and gh_repo_regex.search(condition_value[0]):\n found_sub_condition_value = True\n break\n if found_sub_condition_value and found_sub_condition_variable:\n return CheckResult.PASSED\n else:\n return CheckResult.FAILED\n\n return CheckResult.PASSED\n\n\ncheck = GithubActionsOIDCTrustPolicy()\n", "path": "checkov/terraform/checks/data/aws/GithubActionsOIDCTrustPolicy.py"}], "after_files": [{"content": "from typing import Dict, List, Any\nimport re\nfrom checkov.common.models.enums import CheckResult, CheckCategories\nfrom checkov.common.util.type_forcers import force_list\nfrom checkov.terraform.checks.data.base_check import BaseDataCheck\n\ngh_repo_regex = re.compile(r'repo:[^/]+/[^/]+')\n\n\nclass GithubActionsOIDCTrustPolicy(BaseDataCheck):\n def __init__(self):\n name = 'Ensure GitHub Actions OIDC trust policies only allows actions from a specific known organization'\n id = \"CKV_AWS_358\"\n supported_data = (\"aws_iam_policy_document\",)\n categories = [CheckCategories.IAM]\n super().__init__(name=name, id=id, categories=categories, supported_data=supported_data)\n\n def scan_data_conf(self, conf: Dict[str, List[Any]], entity_type: str) -> CheckResult:\n statements = force_list(conf.get('statement'))\n for statement in statements:\n found_federated_gh_oidc = False\n if isinstance(statement, dict):\n if statement.get('principals'):\n principals = statement['principals']\n for principal in force_list(principals):\n if 'type' not in principal and 'identifiers' not in principal:\n continue\n principal_type = principal['type']\n principal_identifiers = principal['identifiers']\n if isinstance(principal_type, list) and len(\n principal_type) and 'Federated' in principal_type and isinstance(principal_identifiers,\n list):\n for identifier in principal_identifiers:\n if isinstance(identifier,\n list) and 'oidc-provider/token.actions.githubusercontent.com' in \\\n identifier[0]:\n found_federated_gh_oidc = True\n break\n if not found_federated_gh_oidc:\n return CheckResult.PASSED\n if found_federated_gh_oidc and not statement.get('condition'):\n return CheckResult.FAILED\n found_sub_condition_variable = False\n found_sub_condition_value = False\n for condition in statement.get('condition'):\n condition_variables = condition.get('variable')\n condition_values = condition.get('values')\n if isinstance(condition_variables, list):\n for condition_variable in condition_variables:\n if condition_variable == 'token.actions.githubusercontent.com:sub':\n found_sub_condition_variable = True\n break\n for condition_value in condition_values:\n if 
isinstance(condition_value, list) and gh_repo_regex.search(condition_value[0]):\n found_sub_condition_value = True\n break\n if found_sub_condition_value and found_sub_condition_variable:\n return CheckResult.PASSED\n\n # Found a federated GitHub user, but no restirctions\n return CheckResult.FAILED\n\n return CheckResult.PASSED\n\n\ncheck = GithubActionsOIDCTrustPolicy()\n", "path": "checkov/terraform/checks/data/aws/GithubActionsOIDCTrustPolicy.py"}]} | 1,378 | 158 |
gh_patches_debug_32747 | rasdani/github-patches | git_diff | elastic__apm-agent-python-1967 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[gRPC] Does not work with stream-to-stream gRPC requests
**Describe the bug**: I'm trying to add elastic apm for my gRPC service with unary-to-unary requests and stream-to-stream. Everything is fine with unary-to-unary requests. But stream-to-stream requests stop working.
The service stops accepting streaming requests altogether.
I think I saw somewhere that you wrote elastic does not support stream requests for gRPC. But I didn't think it would be a service-blocking problem.
Is there any way to get around it, to make custom processing for stream specifically?
**To Reproduce**
1. Create a gRPC service with stream-to-stream and unary-to-unary endpoints.
```python
class TestService(test_pb2_grpc.TestService):
async def Streaming(self, request_iterator, context):
count_of_request = 0
async for request in request_iterator:
count_of_request += 1
yield test_pb2.Response(message=f"#{count_of_request } - ok")
async def Unary(self, request, context):
return test_pb2.Response(message="ok")
```
3. Connect elastic apm.
```python
elasticapm.instrument()
async def run_serve():
apm_client = GRPCApmClient(
{
"SERVICE_NAME": "grpc-test",
"SERVER_URL": "http://localhost:8200",
"ENVIRONMENT": "test",
"TRANSACTION_SAMPLE_RATE": 1.0,
"SECRET_TOKEN": "qwerty",
}
)
server = grpc.aio.server(
futures.ThreadPoolExecutor(max_workers=10),
)
test_pb2_grpc.add_TestServicer_to_server(
TestService(), server
)
server.add_insecure_port("[::]:50051")
await server.start()
await server.wait_for_termination()
if __name__ == "__main__":
asyncio.run(run_serve())
```
5. Make test requests
6. Result: unary - works, stream - returns nothing, no logs in the service
**Environment (please complete the following information)**
- OS: Linux
- Python version: 3.10
- Framework and version: gRPC 1.43.0
- APM Server version: 6.20.0
**Additional context**
Add any other context about the problem here.
- `requirements.txt`:
<details>
<summary>Click to expand</summary>
```
gcloud==0.18.3
google-api-python-client==2.39.0
grpcio-tools==1.43.0
grpcio-health-checking==1.43.0
setuptools==59.5.0
elastic-apm
sentry-sdk[grpcio]==1.31.0
```
</details>
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `elasticapm/contrib/grpc/async_server_interceptor.py`
Content:
```
1 # BSD 3-Clause License
2 #
3 # Copyright (c) 2022, Elasticsearch BV
4 # All rights reserved.
5 #
6 # Redistribution and use in source and binary forms, with or without
7 # modification, are permitted provided that the following conditions are met:
8 #
9 # * Redistributions of source code must retain the above copyright notice, this
10 # list of conditions and the following disclaimer.
11 #
12 # * Redistributions in binary form must reproduce the above copyright notice,
13 # this list of conditions and the following disclaimer in the documentation
14 # and/or other materials provided with the distribution.
15 #
16 # * Neither the name of the copyright holder nor the names of its
17 # contributors may be used to endorse or promote products derived from
18 # this software without specific prior written permission.
19 #
20 # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
21 # AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
22 # IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
23 # DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
24 # FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
25 # DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
26 # SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
27 # CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
28 # OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
29 # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
30
31 import inspect
32
33 import grpc
34
35 import elasticapm
36 from elasticapm.contrib.grpc.server_interceptor import _ServicerContextWrapper, _wrap_rpc_behavior, get_trace_parent
37
38
39 class _AsyncServerInterceptor(grpc.aio.ServerInterceptor):
40 async def intercept_service(self, continuation, handler_call_details):
41 def transaction_wrapper(behavior, request_streaming, response_streaming):
42 async def _interceptor(request_or_iterator, context):
43 if request_streaming or response_streaming: # only unary-unary is supported
44 return behavior(request_or_iterator, context)
45 tp = get_trace_parent(handler_call_details)
46 client = elasticapm.get_client()
47 transaction = client.begin_transaction("request", trace_parent=tp)
48 try:
49 result = behavior(request_or_iterator, _ServicerContextWrapper(context, transaction))
50
51 # This is so we can support both sync and async rpc functions
52 if inspect.isawaitable(result):
53 result = await result
54
55 if transaction and not transaction.outcome:
56 transaction.set_success()
57 return result
58 except Exception:
59 if transaction:
60 transaction.set_failure()
61 client.capture_exception(handled=False)
62 raise
63 finally:
64 client.end_transaction(name=handler_call_details.method)
65
66 return _interceptor
67
68 return _wrap_rpc_behavior(await continuation(handler_call_details), transaction_wrapper)
69
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/elasticapm/contrib/grpc/async_server_interceptor.py b/elasticapm/contrib/grpc/async_server_interceptor.py
--- a/elasticapm/contrib/grpc/async_server_interceptor.py
+++ b/elasticapm/contrib/grpc/async_server_interceptor.py
@@ -33,20 +33,18 @@
import grpc
import elasticapm
-from elasticapm.contrib.grpc.server_interceptor import _ServicerContextWrapper, _wrap_rpc_behavior, get_trace_parent
+from elasticapm.contrib.grpc.server_interceptor import _ServicerContextWrapper, get_trace_parent
class _AsyncServerInterceptor(grpc.aio.ServerInterceptor):
async def intercept_service(self, continuation, handler_call_details):
- def transaction_wrapper(behavior, request_streaming, response_streaming):
- async def _interceptor(request_or_iterator, context):
- if request_streaming or response_streaming: # only unary-unary is supported
- return behavior(request_or_iterator, context)
+ def wrap_unary_unary(behavior):
+ async def _interceptor(request, context):
tp = get_trace_parent(handler_call_details)
client = elasticapm.get_client()
transaction = client.begin_transaction("request", trace_parent=tp)
try:
- result = behavior(request_or_iterator, _ServicerContextWrapper(context, transaction))
+ result = behavior(request, _ServicerContextWrapper(context, transaction))
# This is so we can support both sync and async rpc functions
if inspect.isawaitable(result):
@@ -65,4 +63,12 @@
return _interceptor
- return _wrap_rpc_behavior(await continuation(handler_call_details), transaction_wrapper)
+ handler = await continuation(handler_call_details)
+ if handler.request_streaming or handler.response_streaming:
+ return handler
+
+ return grpc.unary_unary_rpc_method_handler(
+ wrap_unary_unary(handler.unary_unary),
+ request_deserializer=handler.request_deserializer,
+ response_serializer=handler.response_serializer,
+ )
| {"golden_diff": "diff --git a/elasticapm/contrib/grpc/async_server_interceptor.py b/elasticapm/contrib/grpc/async_server_interceptor.py\n--- a/elasticapm/contrib/grpc/async_server_interceptor.py\n+++ b/elasticapm/contrib/grpc/async_server_interceptor.py\n@@ -33,20 +33,18 @@\n import grpc\n \n import elasticapm\n-from elasticapm.contrib.grpc.server_interceptor import _ServicerContextWrapper, _wrap_rpc_behavior, get_trace_parent\n+from elasticapm.contrib.grpc.server_interceptor import _ServicerContextWrapper, get_trace_parent\n \n \n class _AsyncServerInterceptor(grpc.aio.ServerInterceptor):\n async def intercept_service(self, continuation, handler_call_details):\n- def transaction_wrapper(behavior, request_streaming, response_streaming):\n- async def _interceptor(request_or_iterator, context):\n- if request_streaming or response_streaming: # only unary-unary is supported\n- return behavior(request_or_iterator, context)\n+ def wrap_unary_unary(behavior):\n+ async def _interceptor(request, context):\n tp = get_trace_parent(handler_call_details)\n client = elasticapm.get_client()\n transaction = client.begin_transaction(\"request\", trace_parent=tp)\n try:\n- result = behavior(request_or_iterator, _ServicerContextWrapper(context, transaction))\n+ result = behavior(request, _ServicerContextWrapper(context, transaction))\n \n # This is so we can support both sync and async rpc functions\n if inspect.isawaitable(result):\n@@ -65,4 +63,12 @@\n \n return _interceptor\n \n- return _wrap_rpc_behavior(await continuation(handler_call_details), transaction_wrapper)\n+ handler = await continuation(handler_call_details)\n+ if handler.request_streaming or handler.response_streaming:\n+ return handler\n+\n+ return grpc.unary_unary_rpc_method_handler(\n+ wrap_unary_unary(handler.unary_unary),\n+ request_deserializer=handler.request_deserializer,\n+ response_serializer=handler.response_serializer,\n+ )\n", "issue": "[gRPC] Does not work with stream-to-stream gRPC requests\n**Describe the bug**: I'm trying to add elastic apm for my gRPC service with unary-to-unary requests and stream-to-stream. Everything is fine with unary-to-unary requests. But stream-to-stream requests stop working.\r\n\r\nThe service stops accepting streaming requests altogether.\r\n\r\nI think I saw somewhere that you wrote elastic does not support stream requests for gRPC. But I didn't think it would be a service-blocking problem. \r\n\r\nIs there any way to get around it, to make custom processing for stream specifically?\r\n\r\n**To Reproduce**\r\n1. Create a gRPC service with stream-to-stream and unary-to-unary endpoints.\r\n```python\r\nclass TestService(test_pb2_grpc.TestService):\r\n async def Streaming(self, request_iterator, context):\r\n count_of_request = 0\r\n async for request in request_iterator:\r\n count_of_request += 1\r\n yield test_pb2.Response(message=f\"#{count_of_request } - ok\")\r\n\r\n async def Unary(self, request, context):\r\n return test_pb2.Response(message=\"ok\")\r\n```\r\n3. 
Connect elastic apm.\r\n```python\r\nelasticapm.instrument()\r\n\r\nasync def run_serve():\r\n apm_client = GRPCApmClient(\r\n {\r\n \"SERVICE_NAME\": \"grpc-test\",\r\n \"SERVER_URL\": \"http://localhost:8200\",\r\n \"ENVIRONMENT\": \"test\",\r\n \"TRANSACTION_SAMPLE_RATE\": 1.0,\r\n \"SECRET_TOKEN\": \"qwerty\",\r\n }\r\n )\r\n server = grpc.aio.server(\r\n futures.ThreadPoolExecutor(max_workers=10),\r\n )\r\n test_pb2_grpc.add_TestServicer_to_server(\r\n TestService(), server\r\n )\r\n server.add_insecure_port(\"[::]:50051\")\r\n await server.start()\r\n await server.wait_for_termination()\r\n\r\n\r\nif __name__ == \"__main__\":\r\n asyncio.run(run_serve())\r\n```\r\n5. Make test requests\r\n6. Result: unary - works, stream - returns nothing, no logs in the service\r\n\r\n**Environment (please complete the following information)**\r\n- OS: Linux\r\n- Python version: 3.10\r\n- Framework and version: gRPC 1.43.0\r\n- APM Server version: 6.20.0\r\n\r\n**Additional context**\r\n\r\nAdd any other context about the problem here.\r\n\r\n- `requirements.txt`:\r\n <details>\r\n <summary>Click to expand</summary>\r\n\r\n ```\r\n gcloud==0.18.3\r\n google-api-python-client==2.39.0\r\n grpcio-tools==1.43.0\r\n grpcio-health-checking==1.43.0\r\n setuptools==59.5.0\r\n elastic-apm\r\n sentry-sdk[grpcio]==1.31.0\r\n ```\r\n </details>\r\n\n", "before_files": [{"content": "# BSD 3-Clause License\n#\n# Copyright (c) 2022, Elasticsearch BV\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions are met:\n#\n# * Redistributions of source code must retain the above copyright notice, this\n# list of conditions and the following disclaimer.\n#\n# * Redistributions in binary form must reproduce the above copyright notice,\n# this list of conditions and the following disclaimer in the documentation\n# and/or other materials provided with the distribution.\n#\n# * Neither the name of the copyright holder nor the names of its\n# contributors may be used to endorse or promote products derived from\n# this software without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\n# DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\n# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\n# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\n# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\n# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\n# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\nimport inspect\n\nimport grpc\n\nimport elasticapm\nfrom elasticapm.contrib.grpc.server_interceptor import _ServicerContextWrapper, _wrap_rpc_behavior, get_trace_parent\n\n\nclass _AsyncServerInterceptor(grpc.aio.ServerInterceptor):\n async def intercept_service(self, continuation, handler_call_details):\n def transaction_wrapper(behavior, request_streaming, response_streaming):\n async def _interceptor(request_or_iterator, context):\n if request_streaming or response_streaming: # only unary-unary is supported\n return behavior(request_or_iterator, context)\n tp = get_trace_parent(handler_call_details)\n client = elasticapm.get_client()\n transaction = client.begin_transaction(\"request\", trace_parent=tp)\n try:\n result = behavior(request_or_iterator, _ServicerContextWrapper(context, transaction))\n\n # This is so we can support both sync and async rpc functions\n if inspect.isawaitable(result):\n result = await result\n\n if transaction and not transaction.outcome:\n transaction.set_success()\n return result\n except Exception:\n if transaction:\n transaction.set_failure()\n client.capture_exception(handled=False)\n raise\n finally:\n client.end_transaction(name=handler_call_details.method)\n\n return _interceptor\n\n return _wrap_rpc_behavior(await continuation(handler_call_details), transaction_wrapper)\n", "path": "elasticapm/contrib/grpc/async_server_interceptor.py"}], "after_files": [{"content": "# BSD 3-Clause License\n#\n# Copyright (c) 2022, Elasticsearch BV\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions are met:\n#\n# * Redistributions of source code must retain the above copyright notice, this\n# list of conditions and the following disclaimer.\n#\n# * Redistributions in binary form must reproduce the above copyright notice,\n# this list of conditions and the following disclaimer in the documentation\n# and/or other materials provided with the distribution.\n#\n# * Neither the name of the copyright holder nor the names of its\n# contributors may be used to endorse or promote products derived from\n# this software without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\n# DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\n# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\n# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\n# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\n# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\n# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\nimport inspect\n\nimport grpc\n\nimport elasticapm\nfrom elasticapm.contrib.grpc.server_interceptor import _ServicerContextWrapper, get_trace_parent\n\n\nclass _AsyncServerInterceptor(grpc.aio.ServerInterceptor):\n async def intercept_service(self, continuation, handler_call_details):\n def wrap_unary_unary(behavior):\n async def _interceptor(request, context):\n tp = get_trace_parent(handler_call_details)\n client = elasticapm.get_client()\n transaction = client.begin_transaction(\"request\", trace_parent=tp)\n try:\n result = behavior(request, _ServicerContextWrapper(context, transaction))\n\n # This is so we can support both sync and async rpc functions\n if inspect.isawaitable(result):\n result = await result\n\n if transaction and not transaction.outcome:\n transaction.set_success()\n return result\n except Exception:\n if transaction:\n transaction.set_failure()\n client.capture_exception(handled=False)\n raise\n finally:\n client.end_transaction(name=handler_call_details.method)\n\n return _interceptor\n\n handler = await continuation(handler_call_details)\n if handler.request_streaming or handler.response_streaming:\n return handler\n\n return grpc.unary_unary_rpc_method_handler(\n wrap_unary_unary(handler.unary_unary),\n request_deserializer=handler.request_deserializer,\n response_serializer=handler.response_serializer,\n )\n", "path": "elasticapm/contrib/grpc/async_server_interceptor.py"}]} | 1,630 | 446 |
gh_patches_debug_26968 | rasdani/github-patches | git_diff | Pyomo__pyomo-2385 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Pyomo.environ doesn't initialize when called by cython
See full details on stack overflow:
https://stackoverflow.com/questions/71902579/pyomo-doesnt-load-when-called-via-cython
Also see
https://github.com/Pyomo/pyomo/issues/2374#issuecomment-1115424111_ :
"...(The linked question is probably either a problem with how the pyomo driver script interacts with argparse [it really shouldn't be doing that much when pyomo.environ is imported - that is almost certainly a bug], or with how cython and argparse interact)."
_Originally posted by @jsiirola in https://github.com/Pyomo/pyomo/issues/2374#issuecomment-1115424111_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pyomo/scripting/pyomo_parser.py`
Content:
```
1 # ___________________________________________________________________________
2 #
3 # Pyomo: Python Optimization Modeling Objects
4 # Copyright (c) 2008-2022
5 # National Technology and Engineering Solutions of Sandia, LLC
6 # Under the terms of Contract DE-NA0003525 with National Technology and
7 # Engineering Solutions of Sandia, LLC, the U.S. Government retains certain
8 # rights in this software.
9 # This software is distributed under the 3-clause BSD License.
10 # ___________________________________________________________________________
11
12 __all__ = ['add_subparser', 'get_parser', 'subparsers']
13
14 import argparse
15 import sys
16
17 #
18 # Sort sub_parser names, since these are inserted throughout Pyomo
19 #
20 # NOTE: This may not be robust to different versions of argparse. We're
21 # mucking with a non-public API here ...
22 #
23 class CustomHelpFormatter(argparse.RawDescriptionHelpFormatter):
24
25 def _metavar_formatter(self, action, default_metavar):
26 if action.metavar is not None:
27 result = action.metavar
28 elif action.choices is not None:
29 choice_strs = sorted(str(choice) for choice in action.choices)
30 result = '{%s}' % ','.join(choice_strs)
31 else:
32 result = default_metavar
33
34 def format(tuple_size):
35 if isinstance(result, tuple):
36 return result
37 else:
38 return (result, ) * tuple_size
39 return format
40
41 def _iter_indented_subactions(self, action):
42 try:
43 get_subactions = action._get_subactions
44 except AttributeError:
45 pass
46 else:
47 self._indent()
48 if isinstance(action, argparse._SubParsersAction):
49 for subaction in sorted(get_subactions(), key=lambda x: x.dest):
50 yield subaction
51 else:
52 for subaction in get_subactions():
53 yield subaction
54 self._dedent()
55
56
57 def get_version():
58 from pyomo.version import version
59 import platform
60 return "Pyomo %s (%s %s on %s %s)" % (
61 version,
62 platform.python_implementation(),
63 '.'.join( str(x) for x in sys.version_info[:3] ),
64 platform.system(),
65 platform.release() )
66
67 #
68 # Create the argparse parser for Pyomo
69 #
70 doc="This is the main driver for the Pyomo optimization software."
71 epilog="""
72 -------------------------------------------------------------------------
73 Pyomo supports a variety of modeling and optimization capabilities,
74 which are executed either as subcommands of 'pyomo' or as separate
75 commands. Use the 'help' subcommand to get information about the
76 capabilities installed with Pyomo. Additionally, each subcommand
77 supports independent command-line options. Use the -h option to
78 print details for a subcommand. For example, type
79
80 pyomo solve -h
81
82 to print information about the `solve` subcommand.
83 """
84 _pyomo_parser = argparse.ArgumentParser(
85 description=doc, epilog=epilog, formatter_class=CustomHelpFormatter )
86 _pyomo_parser.add_argument("--version", action="version", version=get_version())
87 _pyomo_subparsers = _pyomo_parser.add_subparsers(
88 dest='subparser_name', title='subcommands' )
89
90 subparsers = []
91
92 def add_subparser(name, **args):
93 """
94 Add a subparser to the 'pyomo' command.
95 """
96 global subparsers
97 func = args.pop('func', None)
98 parser = _pyomo_subparsers.add_parser(name, **args)
99 subparsers.append(name)
100 if func is not None:
101 parser.set_defaults(func=func)
102 return parser
103
104 def get_parser():
105 """
106 Return the parser used by the 'pyomo' commmand.
107 """
108 return _pyomo_parser
109
110
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pyomo/scripting/pyomo_parser.py b/pyomo/scripting/pyomo_parser.py
--- a/pyomo/scripting/pyomo_parser.py
+++ b/pyomo/scripting/pyomo_parser.py
@@ -81,11 +81,8 @@
to print information about the `solve` subcommand.
"""
-_pyomo_parser = argparse.ArgumentParser(
- description=doc, epilog=epilog, formatter_class=CustomHelpFormatter )
-_pyomo_parser.add_argument("--version", action="version", version=get_version())
-_pyomo_subparsers = _pyomo_parser.add_subparsers(
- dest='subparser_name', title='subcommands' )
+_pyomo_parser = None
+_pyomo_subparsers = None
subparsers = []
@@ -93,7 +90,9 @@
"""
Add a subparser to the 'pyomo' command.
"""
- global subparsers
+ if _pyomo_subparsers is None:
+ get_parser()
+
func = args.pop('func', None)
parser = _pyomo_subparsers.add_parser(name, **args)
subparsers.append(name)
@@ -105,5 +104,16 @@
"""
Return the parser used by the 'pyomo' commmand.
"""
+ global _pyomo_parser
+ if _pyomo_parser is None:
+ _pyomo_parser = argparse.ArgumentParser(
+ description=doc,
+ epilog=epilog,
+ formatter_class=CustomHelpFormatter
+ )
+ _pyomo_parser.add_argument(
+ "--version", action="version", version=get_version())
+ global _pyomo_subparsers
+ _pyomo_subparsers = _pyomo_parser.add_subparsers(
+ dest='subparser_name', title='subcommands' )
return _pyomo_parser
-
| {"golden_diff": "diff --git a/pyomo/scripting/pyomo_parser.py b/pyomo/scripting/pyomo_parser.py\n--- a/pyomo/scripting/pyomo_parser.py\n+++ b/pyomo/scripting/pyomo_parser.py\n@@ -81,11 +81,8 @@\n \n to print information about the `solve` subcommand.\n \"\"\"\n-_pyomo_parser = argparse.ArgumentParser(\n- description=doc, epilog=epilog, formatter_class=CustomHelpFormatter )\n-_pyomo_parser.add_argument(\"--version\", action=\"version\", version=get_version())\n-_pyomo_subparsers = _pyomo_parser.add_subparsers(\n- dest='subparser_name', title='subcommands' )\n+_pyomo_parser = None\n+_pyomo_subparsers = None\n \n subparsers = []\n \n@@ -93,7 +90,9 @@\n \"\"\"\n Add a subparser to the 'pyomo' command.\n \"\"\"\n- global subparsers\n+ if _pyomo_subparsers is None:\n+ get_parser()\n+\n func = args.pop('func', None)\n parser = _pyomo_subparsers.add_parser(name, **args)\n subparsers.append(name)\n@@ -105,5 +104,16 @@\n \"\"\"\n Return the parser used by the 'pyomo' commmand.\n \"\"\"\n+ global _pyomo_parser\n+ if _pyomo_parser is None:\n+ _pyomo_parser = argparse.ArgumentParser(\n+ description=doc,\n+ epilog=epilog,\n+ formatter_class=CustomHelpFormatter\n+ )\n+ _pyomo_parser.add_argument(\n+ \"--version\", action=\"version\", version=get_version())\n+ global _pyomo_subparsers\n+ _pyomo_subparsers = _pyomo_parser.add_subparsers(\n+ dest='subparser_name', title='subcommands' )\n return _pyomo_parser\n-\n", "issue": "Pyomo.environ doesn't initialize when called by cython\nSee full details on stack overflow:\r\nhttps://stackoverflow.com/questions/71902579/pyomo-doesnt-load-when-called-via-cython\r\n\r\nAlso see\r\n https://github.com/Pyomo/pyomo/issues/2374#issuecomment-1115424111_ :\r\n\r\n\"...(The linked question is probably either a problem with how the pyomo driver script interacts with argparse [it really shouldn't be doing that much when pyomo.environ is imported - that is almost certainly a bug], or with how cython and argparse interact).\"\r\n\r\n_Originally posted by @jsiirola in https://github.com/Pyomo/pyomo/issues/2374#issuecomment-1115424111_\n", "before_files": [{"content": "# ___________________________________________________________________________\n#\n# Pyomo: Python Optimization Modeling Objects\n# Copyright (c) 2008-2022\n# National Technology and Engineering Solutions of Sandia, LLC\n# Under the terms of Contract DE-NA0003525 with National Technology and\n# Engineering Solutions of Sandia, LLC, the U.S. Government retains certain\n# rights in this software.\n# This software is distributed under the 3-clause BSD License.\n# ___________________________________________________________________________\n\n__all__ = ['add_subparser', 'get_parser', 'subparsers']\n\nimport argparse\nimport sys\n\n#\n# Sort sub_parser names, since these are inserted throughout Pyomo\n#\n# NOTE: This may not be robust to different versions of argparse. 
We're\n# mucking with a non-public API here ...\n#\nclass CustomHelpFormatter(argparse.RawDescriptionHelpFormatter):\n\n def _metavar_formatter(self, action, default_metavar):\n if action.metavar is not None:\n result = action.metavar\n elif action.choices is not None:\n choice_strs = sorted(str(choice) for choice in action.choices)\n result = '{%s}' % ','.join(choice_strs)\n else:\n result = default_metavar\n\n def format(tuple_size):\n if isinstance(result, tuple):\n return result\n else:\n return (result, ) * tuple_size\n return format\n\n def _iter_indented_subactions(self, action):\n try:\n get_subactions = action._get_subactions\n except AttributeError:\n pass\n else:\n self._indent()\n if isinstance(action, argparse._SubParsersAction):\n for subaction in sorted(get_subactions(), key=lambda x: x.dest):\n yield subaction\n else:\n for subaction in get_subactions():\n yield subaction\n self._dedent()\n\n\ndef get_version():\n from pyomo.version import version\n import platform\n return \"Pyomo %s (%s %s on %s %s)\" % (\n version,\n platform.python_implementation(),\n '.'.join( str(x) for x in sys.version_info[:3] ),\n platform.system(),\n platform.release() )\n\n#\n# Create the argparse parser for Pyomo\n#\ndoc=\"This is the main driver for the Pyomo optimization software.\"\nepilog=\"\"\"\n-------------------------------------------------------------------------\nPyomo supports a variety of modeling and optimization capabilities,\nwhich are executed either as subcommands of 'pyomo' or as separate\ncommands. Use the 'help' subcommand to get information about the\ncapabilities installed with Pyomo. Additionally, each subcommand\nsupports independent command-line options. Use the -h option to\nprint details for a subcommand. For example, type\n\n pyomo solve -h\n\nto print information about the `solve` subcommand.\n\"\"\"\n_pyomo_parser = argparse.ArgumentParser(\n description=doc, epilog=epilog, formatter_class=CustomHelpFormatter )\n_pyomo_parser.add_argument(\"--version\", action=\"version\", version=get_version())\n_pyomo_subparsers = _pyomo_parser.add_subparsers(\n dest='subparser_name', title='subcommands' )\n\nsubparsers = []\n\ndef add_subparser(name, **args):\n \"\"\"\n Add a subparser to the 'pyomo' command.\n \"\"\"\n global subparsers\n func = args.pop('func', None)\n parser = _pyomo_subparsers.add_parser(name, **args)\n subparsers.append(name)\n if func is not None:\n parser.set_defaults(func=func)\n return parser\n\ndef get_parser():\n \"\"\"\n Return the parser used by the 'pyomo' commmand.\n \"\"\"\n return _pyomo_parser\n\n", "path": "pyomo/scripting/pyomo_parser.py"}], "after_files": [{"content": "# ___________________________________________________________________________\n#\n# Pyomo: Python Optimization Modeling Objects\n# Copyright (c) 2008-2022\n# National Technology and Engineering Solutions of Sandia, LLC\n# Under the terms of Contract DE-NA0003525 with National Technology and\n# Engineering Solutions of Sandia, LLC, the U.S. Government retains certain\n# rights in this software.\n# This software is distributed under the 3-clause BSD License.\n# ___________________________________________________________________________\n\n__all__ = ['add_subparser', 'get_parser', 'subparsers']\n\nimport argparse\nimport sys\n\n#\n# Sort sub_parser names, since these are inserted throughout Pyomo\n#\n# NOTE: This may not be robust to different versions of argparse. 
We're\n# mucking with a non-public API here ...\n#\nclass CustomHelpFormatter(argparse.RawDescriptionHelpFormatter):\n\n def _metavar_formatter(self, action, default_metavar):\n if action.metavar is not None:\n result = action.metavar\n elif action.choices is not None:\n choice_strs = sorted(str(choice) for choice in action.choices)\n result = '{%s}' % ','.join(choice_strs)\n else:\n result = default_metavar\n\n def format(tuple_size):\n if isinstance(result, tuple):\n return result\n else:\n return (result, ) * tuple_size\n return format\n\n def _iter_indented_subactions(self, action):\n try:\n get_subactions = action._get_subactions\n except AttributeError:\n pass\n else:\n self._indent()\n if isinstance(action, argparse._SubParsersAction):\n for subaction in sorted(get_subactions(), key=lambda x: x.dest):\n yield subaction\n else:\n for subaction in get_subactions():\n yield subaction\n self._dedent()\n\n\ndef get_version():\n from pyomo.version import version\n import platform\n return \"Pyomo %s (%s %s on %s %s)\" % (\n version,\n platform.python_implementation(),\n '.'.join( str(x) for x in sys.version_info[:3] ),\n platform.system(),\n platform.release() )\n\n#\n# Create the argparse parser for Pyomo\n#\ndoc=\"This is the main driver for the Pyomo optimization software.\"\nepilog=\"\"\"\n-------------------------------------------------------------------------\nPyomo supports a variety of modeling and optimization capabilities,\nwhich are executed either as subcommands of 'pyomo' or as separate\ncommands. Use the 'help' subcommand to get information about the\ncapabilities installed with Pyomo. Additionally, each subcommand\nsupports independent command-line options. Use the -h option to\nprint details for a subcommand. For example, type\n\n pyomo solve -h\n\nto print information about the `solve` subcommand.\n\"\"\"\n_pyomo_parser = None\n_pyomo_subparsers = None\n\nsubparsers = []\n\ndef add_subparser(name, **args):\n \"\"\"\n Add a subparser to the 'pyomo' command.\n \"\"\"\n if _pyomo_subparsers is None:\n get_parser()\n\n func = args.pop('func', None)\n parser = _pyomo_subparsers.add_parser(name, **args)\n subparsers.append(name)\n if func is not None:\n parser.set_defaults(func=func)\n return parser\n\ndef get_parser():\n \"\"\"\n Return the parser used by the 'pyomo' commmand.\n \"\"\"\n global _pyomo_parser\n if _pyomo_parser is None:\n _pyomo_parser = argparse.ArgumentParser(\n description=doc,\n epilog=epilog,\n formatter_class=CustomHelpFormatter\n )\n _pyomo_parser.add_argument(\n \"--version\", action=\"version\", version=get_version())\n global _pyomo_subparsers\n _pyomo_subparsers = _pyomo_parser.add_subparsers(\n dest='subparser_name', title='subcommands' )\n return _pyomo_parser\n", "path": "pyomo/scripting/pyomo_parser.py"}]} | 1,464 | 401 |
gh_patches_debug_7895 | rasdani/github-patches | git_diff | chainer__chainer-819 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
link.LSTM.to_cpu (resp. to_gpu) does not move state variables to CPU (resp. GPU).
When `link.LSTM` is moved to CPU (resp. GPU), its state variables should also be moved to CPU (resp. GPU). But they are not.
```
In [39]: l = chainer.links.LSTM(10, 10)
In [40]: l.xp
Out[40]: <module 'numpy' from '/home/delta/.pyenv/versions/pyenv-2.7.9/lib/python2.7/site-packages/numpy/__init__.pyc'>
In [41]: x = chainer.Variable(numpy.random.uniform(-1, 1, (5, 10)).astype(numpy.float32))
In [42]: l(x)
Out[42]: <variable at 0x7fe900a88d90>
In [43]: type(l.h.data)
Out[43]: numpy.ndarray
In [44]: l.to_gpu()
Out[44]: <chainer.links.connection.lstm.LSTM at 0x7fe900a42a10>
In [45]: l.xp
Out[45]: <module 'cupy' from 'cupy/__init__.pyc'>
In [46]: type(l.h.data)
Out[46]: numpy.ndarray
In [47]: type(l.c.data)
Out[47]: numpy.ndarray
```
The output of [46] and [47] should be `cupy.ndarray`
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `chainer/links/connection/lstm.py`
Content:
```
1 from chainer.functions.activation import lstm
2 from chainer import link
3 from chainer.links.connection import linear
4 from chainer import variable
5
6
7 class LSTM(link.Chain):
8
9 """Fully-connected LSTM layer.
10
11 This is a fully-connected LSTM layer as a chain. Unlike the
12 :func:`~chainer.functions.lstm` function, which is defined as a stateless
13 activation function, this chain holds upward and lateral connections as
14 child links.
15
16 It also maintains *states*, including the cell state and the output
17 at the previous time step. Therefore, it can be used as a *stateful LSTM*.
18
19 Args:
20 in_size (int): Dimensionality of input vectors.
21 out_size (int): Dimensionality of output vectors.
22
23 Attributes:
24 upward (chainer.links.Linear): Linear layer of upward connections.
25 lateral (chainer.links.Linear): Linear layer of lateral connections.
26 c (chainer.Variable): Cell states of LSTM units.
27 h (chainer.Variable): Output at the previous timestep.
28
29 """
30 def __init__(self, in_size, out_size):
31 super(LSTM, self).__init__(
32 upward=linear.Linear(in_size, 4 * out_size),
33 lateral=linear.Linear(out_size, 4 * out_size, nobias=True),
34 )
35 self.state_size = out_size
36 self.reset_state()
37
38 def reset_state(self):
39 """Resets the internal state.
40
41 It sets None to the :attr:`c` and :attr:`h` attributes.
42
43 """
44 self.c = self.h = None
45
46 def __call__(self, x):
47 """Updates the internal state and returns the LSTM outputs.
48
49 Args:
50 x (~chainer.Variable): A new batch from the input sequence.
51
52 Returns:
53 ~chainer.Variable: Outputs of updated LSTM units.
54
55 """
56 lstm_in = self.upward(x)
57 if self.h is not None:
58 lstm_in += self.lateral(self.h)
59 if self.c is None:
60 xp = self.xp
61 self.c = variable.Variable(
62 xp.zeros((len(x.data), self.state_size), dtype=x.data.dtype),
63 volatile='auto')
64 self.c, self.h = lstm.lstm(self.c, lstm_in)
65 return self.h
66
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/chainer/links/connection/lstm.py b/chainer/links/connection/lstm.py
--- a/chainer/links/connection/lstm.py
+++ b/chainer/links/connection/lstm.py
@@ -35,6 +35,20 @@
self.state_size = out_size
self.reset_state()
+ def to_cpu(self):
+ super(LSTM, self).to_cpu()
+ if self.c is not None:
+ self.c.to_cpu()
+ if self.h is not None:
+ self.h.to_cpu()
+
+ def to_gpu(self, device=None):
+ super(LSTM, self).to_gpu(device)
+ if self.c is not None:
+ self.c.to_gpu(device)
+ if self.h is not None:
+ self.h.to_gpu(device)
+
def reset_state(self):
"""Resets the internal state.
| {"golden_diff": "diff --git a/chainer/links/connection/lstm.py b/chainer/links/connection/lstm.py\n--- a/chainer/links/connection/lstm.py\n+++ b/chainer/links/connection/lstm.py\n@@ -35,6 +35,20 @@\n self.state_size = out_size\n self.reset_state()\n \n+ def to_cpu(self):\n+ super(LSTM, self).to_cpu()\n+ if self.c is not None:\n+ self.c.to_cpu()\n+ if self.h is not None:\n+ self.h.to_cpu()\n+\n+ def to_gpu(self, device=None):\n+ super(LSTM, self).to_gpu(device)\n+ if self.c is not None:\n+ self.c.to_gpu(device)\n+ if self.h is not None:\n+ self.h.to_gpu(device)\n+\n def reset_state(self):\n \"\"\"Resets the internal state.\n", "issue": "link.LSTM.to_cpu (resp. to_gpu) does not move state variables to CPU (resp. GPU).\nWhen `link.LSTM` is move to CPU (resp. GPU), its state variables should also be moved to CPU(resp. GPU). But it does not.\n\n```\nIn [39]: l = chainer.links.LSTM(10, 10)\n\nIn [40]: l.xp\nOut[40]: <module 'numpy' from '/home/delta/.pyenv/versions/pyenv-2.7.9/lib/python2.7/site-packages/numpy/__init__.pyc'>\n\nIn [41]: x = chainer.Variable(numpy.random.uniform(-1, 1, (5, 10)).astype(numpy.float32))\n\nIn [42]: l(x)\nOut[42]: <variable at 0x7fe900a88d90>\n\nIn [43]: type(l.h.data)\nOut[43]: numpy.ndarray\n\nIn [44]: l.to_gpu()\nOut[44]: <chainer.links.connection.lstm.LSTM at 0x7fe900a42a10>\n\nIn [45]: l.xp\nOut[45]: <module 'cupy' from 'cupy/__init__.pyc'>\n\nIn [46]: type(l.h.data)\nOut[46]: numpy.ndarray\n\nIn [47]: type(l.c.data)\nOut[47]: numpy.ndarray\n```\n\nThe output of [46] and [47] should be `cupy.ndarray`\n\n", "before_files": [{"content": "from chainer.functions.activation import lstm\nfrom chainer import link\nfrom chainer.links.connection import linear\nfrom chainer import variable\n\n\nclass LSTM(link.Chain):\n\n \"\"\"Fully-connected LSTM layer.\n\n This is a fully-connected LSTM layer as a chain. Unlike the\n :func:`~chainer.functions.lstm` function, which is defined as a stateless\n activation function, this chain holds upward and lateral connections as\n child links.\n\n It also maintains *states*, including the cell state and the output\n at the previous time step. 
Therefore, it can be used as a *stateful LSTM*.\n\n Args:\n in_size (int): Dimensionality of input vectors.\n out_size (int): Dimensionality of output vectors.\n\n Attributes:\n upward (chainer.links.Linear): Linear layer of upward connections.\n lateral (chainer.links.Linear): Linear layer of lateral connections.\n c (chainer.Variable): Cell states of LSTM units.\n h (chainer.Variable): Output at the previous timestep.\n\n \"\"\"\n def __init__(self, in_size, out_size):\n super(LSTM, self).__init__(\n upward=linear.Linear(in_size, 4 * out_size),\n lateral=linear.Linear(out_size, 4 * out_size, nobias=True),\n )\n self.state_size = out_size\n self.reset_state()\n\n def reset_state(self):\n \"\"\"Resets the internal state.\n\n It sets None to the :attr:`c` and :attr:`h` attributes.\n\n \"\"\"\n self.c = self.h = None\n\n def __call__(self, x):\n \"\"\"Updates the internal state and returns the LSTM outputs.\n\n Args:\n x (~chainer.Variable): A new batch from the input sequence.\n\n Returns:\n ~chainer.Variable: Outputs of updated LSTM units.\n\n \"\"\"\n lstm_in = self.upward(x)\n if self.h is not None:\n lstm_in += self.lateral(self.h)\n if self.c is None:\n xp = self.xp\n self.c = variable.Variable(\n xp.zeros((len(x.data), self.state_size), dtype=x.data.dtype),\n volatile='auto')\n self.c, self.h = lstm.lstm(self.c, lstm_in)\n return self.h\n", "path": "chainer/links/connection/lstm.py"}], "after_files": [{"content": "from chainer.functions.activation import lstm\nfrom chainer import link\nfrom chainer.links.connection import linear\nfrom chainer import variable\n\n\nclass LSTM(link.Chain):\n\n \"\"\"Fully-connected LSTM layer.\n\n This is a fully-connected LSTM layer as a chain. Unlike the\n :func:`~chainer.functions.lstm` function, which is defined as a stateless\n activation function, this chain holds upward and lateral connections as\n child links.\n\n It also maintains *states*, including the cell state and the output\n at the previous time step. 
Therefore, it can be used as a *stateful LSTM*.\n\n Args:\n in_size (int): Dimensionality of input vectors.\n out_size (int): Dimensionality of output vectors.\n\n Attributes:\n upward (chainer.links.Linear): Linear layer of upward connections.\n lateral (chainer.links.Linear): Linear layer of lateral connections.\n c (chainer.Variable): Cell states of LSTM units.\n h (chainer.Variable): Output at the previous timestep.\n\n \"\"\"\n def __init__(self, in_size, out_size):\n super(LSTM, self).__init__(\n upward=linear.Linear(in_size, 4 * out_size),\n lateral=linear.Linear(out_size, 4 * out_size, nobias=True),\n )\n self.state_size = out_size\n self.reset_state()\n\n def to_cpu(self):\n super(LSTM, self).to_cpu()\n if self.c is not None:\n self.c.to_cpu()\n if self.h is not None:\n self.h.to_cpu()\n\n def to_gpu(self, device=None):\n super(LSTM, self).to_gpu(device)\n if self.c is not None:\n self.c.to_gpu(device)\n if self.h is not None:\n self.h.to_gpu(device)\n\n def reset_state(self):\n \"\"\"Resets the internal state.\n\n It sets None to the :attr:`c` and :attr:`h` attributes.\n\n \"\"\"\n self.c = self.h = None\n\n def __call__(self, x):\n \"\"\"Updates the internal state and returns the LSTM outputs.\n\n Args:\n x (~chainer.Variable): A new batch from the input sequence.\n\n Returns:\n ~chainer.Variable: Outputs of updated LSTM units.\n\n \"\"\"\n lstm_in = self.upward(x)\n if self.h is not None:\n lstm_in += self.lateral(self.h)\n if self.c is None:\n xp = self.xp\n self.c = variable.Variable(\n xp.zeros((len(x.data), self.state_size), dtype=x.data.dtype),\n volatile='auto')\n self.c, self.h = lstm.lstm(self.c, lstm_in)\n return self.h\n", "path": "chainer/links/connection/lstm.py"}]} | 1,212 | 193 |
gh_patches_debug_21024 | rasdani/github-patches | git_diff | dbt-labs__dbt-core-1146 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
create config aliases for pre-hook and post-hook
## Feature
### Feature description
Pre-hooks and post-hooks can be specified in a `config()` block, but they require the use of a dictionary instead of a list of keyword arguments. Instead, dbt should support the specification of these hooks using `pre_hook` and `post_hook` aliases.
Before:
```
{{
config({
"post-hook" : "grant select on {{ this }} to db_reader"
})
}}
```
After:
```
{{
config(
post_hook="grant select on {{ this }} to db_reader"
)
}}
```
### Who will this benefit?
This will be a quality of life improvement for many dbt users :)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `dbt/context/parser.py`
Content:
```
1 import dbt.exceptions
2
3 import dbt.context.common
4
5
6 execute = False
7
8
9 def ref(db_wrapper, model, config, manifest):
10
11 def ref(*args):
12 if len(args) == 1 or len(args) == 2:
13 model.refs.append(list(args))
14
15 else:
16 dbt.exceptions.ref_invalid_args(model, args)
17
18 return db_wrapper.adapter.Relation.create_from_node(config, model)
19
20 return ref
21
22
23 def docs(unparsed, docrefs, column_name=None):
24
25 def do_docs(*args):
26 if len(args) != 1 and len(args) != 2:
27 dbt.exceptions.doc_invalid_args(unparsed, args)
28 doc_package_name = ''
29 doc_name = args[0]
30 if len(args) == 2:
31 doc_package_name = args[1]
32
33 docref = {
34 'documentation_package': doc_package_name,
35 'documentation_name': doc_name,
36 }
37 if column_name is not None:
38 docref['column_name'] = column_name
39
40 docrefs.append(docref)
41
42 # IDK
43 return True
44
45 return do_docs
46
47
48 class Config:
49 def __init__(self, model, source_config):
50 self.model = model
51 self.source_config = source_config
52
53 def __call__(self, *args, **kwargs):
54 if len(args) == 1 and len(kwargs) == 0:
55 opts = args[0]
56 elif len(args) == 0 and len(kwargs) > 0:
57 opts = kwargs
58 else:
59 dbt.exceptions.raise_compiler_error(
60 "Invalid inline model config",
61 self.model)
62
63 self.source_config.update_in_model_config(opts)
64 return ''
65
66 def set(self, name, value):
67 return self.__call__({name: value})
68
69 def require(self, name, validator=None):
70 return ''
71
72 def get(self, name, validator=None, default=None):
73 return ''
74
75
76 def generate(model, runtime_config, manifest, source_config):
77 return dbt.context.common.generate(
78 model, runtime_config, manifest, source_config, dbt.context.parser)
79
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/dbt/context/parser.py b/dbt/context/parser.py
--- a/dbt/context/parser.py
+++ b/dbt/context/parser.py
@@ -50,6 +50,19 @@
self.model = model
self.source_config = source_config
+ def _transform_config(self, config):
+ for oldkey in ('pre_hook', 'post_hook'):
+ if oldkey in config:
+ newkey = oldkey.replace('_', '-')
+ if newkey in config:
+ dbt.exceptions.raise_compiler_error(
+ 'Invalid config, has conflicting keys "{}" and "{}"'
+ .format(oldkey, newkey),
+ self.model
+ )
+ config[newkey] = config.pop(oldkey)
+ return config
+
def __call__(self, *args, **kwargs):
if len(args) == 1 and len(kwargs) == 0:
opts = args[0]
@@ -60,6 +73,8 @@
"Invalid inline model config",
self.model)
+ opts = self._transform_config(opts)
+
self.source_config.update_in_model_config(opts)
return ''
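The effect of `_transform_config` is plain key aliasing: the keyword-friendly `pre_hook`/`post_hook` names are folded into the hyphenated keys dbt stores internally, with a conflict check. A simplified, standalone version of that step (not the full `Config` class) looks like this:

```python
def transform_config(config):
    """Fold pre_hook/post_hook aliases into dbt's hyphenated config keys."""
    for oldkey in ('pre_hook', 'post_hook'):
        if oldkey in config:
            newkey = oldkey.replace('_', '-')
            if newkey in config:
                raise ValueError(
                    'Invalid config, has conflicting keys "{}" and "{}"'.format(oldkey, newkey))
            config[newkey] = config.pop(oldkey)
    return config

print(transform_config({'post_hook': 'grant select on {{ this }} to db_reader'}))
# {'post-hook': 'grant select on {{ this }} to db_reader'}
```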
| {"golden_diff": "diff --git a/dbt/context/parser.py b/dbt/context/parser.py\n--- a/dbt/context/parser.py\n+++ b/dbt/context/parser.py\n@@ -50,6 +50,19 @@\n self.model = model\n self.source_config = source_config\n \n+ def _transform_config(self, config):\n+ for oldkey in ('pre_hook', 'post_hook'):\n+ if oldkey in config:\n+ newkey = oldkey.replace('_', '-')\n+ if newkey in config:\n+ dbt.exceptions.raise_compiler_error(\n+ 'Invalid config, has conflicting keys \"{}\" and \"{}\"'\n+ .format(oldkey, newkey),\n+ self.model\n+ )\n+ config[newkey] = config.pop(oldkey)\n+ return config\n+\n def __call__(self, *args, **kwargs):\n if len(args) == 1 and len(kwargs) == 0:\n opts = args[0]\n@@ -60,6 +73,8 @@\n \"Invalid inline model config\",\n self.model)\n \n+ opts = self._transform_config(opts)\n+\n self.source_config.update_in_model_config(opts)\n return ''\n", "issue": "create config aliases for pre-hook and post-hook\n## Feature\r\n\r\n### Feature description\r\nPre-hooks and post-hooks can be specified in a `config()` block, but they require the use of a dictionary instead of a list of keyword arguments. Instead, dbt should support the specification of these hooks using `pre_hook` and `post_hook` aliases.\r\n\r\nBefore:\r\n```\r\n{{\r\n config({\r\n \"post-hook\" : \"grant select on {{ this }} to db_reader\"\r\n })\r\n}}\r\n```\r\n\r\nAfter:\r\n```\r\n{{\r\n config(\r\n post_hook=\"grant select on {{ this }} to db_reader\"\r\n )\r\n}}\r\n```\r\n\r\n\r\n### Who will this benefit?\r\nThis will be a quality of life improvement for many dbt users :)\n", "before_files": [{"content": "import dbt.exceptions\n\nimport dbt.context.common\n\n\nexecute = False\n\n\ndef ref(db_wrapper, model, config, manifest):\n\n def ref(*args):\n if len(args) == 1 or len(args) == 2:\n model.refs.append(list(args))\n\n else:\n dbt.exceptions.ref_invalid_args(model, args)\n\n return db_wrapper.adapter.Relation.create_from_node(config, model)\n\n return ref\n\n\ndef docs(unparsed, docrefs, column_name=None):\n\n def do_docs(*args):\n if len(args) != 1 and len(args) != 2:\n dbt.exceptions.doc_invalid_args(unparsed, args)\n doc_package_name = ''\n doc_name = args[0]\n if len(args) == 2:\n doc_package_name = args[1]\n\n docref = {\n 'documentation_package': doc_package_name,\n 'documentation_name': doc_name,\n }\n if column_name is not None:\n docref['column_name'] = column_name\n\n docrefs.append(docref)\n\n # IDK\n return True\n\n return do_docs\n\n\nclass Config:\n def __init__(self, model, source_config):\n self.model = model\n self.source_config = source_config\n\n def __call__(self, *args, **kwargs):\n if len(args) == 1 and len(kwargs) == 0:\n opts = args[0]\n elif len(args) == 0 and len(kwargs) > 0:\n opts = kwargs\n else:\n dbt.exceptions.raise_compiler_error(\n \"Invalid inline model config\",\n self.model)\n\n self.source_config.update_in_model_config(opts)\n return ''\n\n def set(self, name, value):\n return self.__call__({name: value})\n\n def require(self, name, validator=None):\n return ''\n\n def get(self, name, validator=None, default=None):\n return ''\n\n\ndef generate(model, runtime_config, manifest, source_config):\n return dbt.context.common.generate(\n model, runtime_config, manifest, source_config, dbt.context.parser)\n", "path": "dbt/context/parser.py"}], "after_files": [{"content": "import dbt.exceptions\n\nimport dbt.context.common\n\n\nexecute = False\n\n\ndef ref(db_wrapper, model, config, manifest):\n\n def ref(*args):\n if len(args) == 1 or len(args) == 2:\n model.refs.append(list(args))\n\n 
else:\n dbt.exceptions.ref_invalid_args(model, args)\n\n return db_wrapper.adapter.Relation.create_from_node(config, model)\n\n return ref\n\n\ndef docs(unparsed, docrefs, column_name=None):\n\n def do_docs(*args):\n if len(args) != 1 and len(args) != 2:\n dbt.exceptions.doc_invalid_args(unparsed, args)\n doc_package_name = ''\n doc_name = args[0]\n if len(args) == 2:\n doc_package_name = args[1]\n\n docref = {\n 'documentation_package': doc_package_name,\n 'documentation_name': doc_name,\n }\n if column_name is not None:\n docref['column_name'] = column_name\n\n docrefs.append(docref)\n\n # IDK\n return True\n\n return do_docs\n\n\nclass Config:\n def __init__(self, model, source_config):\n self.model = model\n self.source_config = source_config\n\n def _transform_config(self, config):\n for oldkey in ('pre_hook', 'post_hook'):\n if oldkey in config:\n newkey = oldkey.replace('_', '-')\n if newkey in config:\n dbt.exceptions.raise_compiler_error(\n 'Invalid config, has conflicting keys \"{}\" and \"{}\"'\n .format(oldkey, newkey),\n self.model\n )\n config[newkey] = config.pop(oldkey)\n return config\n\n def __call__(self, *args, **kwargs):\n if len(args) == 1 and len(kwargs) == 0:\n opts = args[0]\n elif len(args) == 0 and len(kwargs) > 0:\n opts = kwargs\n else:\n dbt.exceptions.raise_compiler_error(\n \"Invalid inline model config\",\n self.model)\n\n opts = self._transform_config(opts)\n\n self.source_config.update_in_model_config(opts)\n return ''\n\n def set(self, name, value):\n return self.__call__({name: value})\n\n def require(self, name, validator=None):\n return ''\n\n def get(self, name, validator=None, default=None):\n return ''\n\n\ndef generate(model, runtime_config, manifest, source_config):\n return dbt.context.common.generate(\n model, runtime_config, manifest, source_config, dbt.context.parser)\n", "path": "dbt/context/parser.py"}]} | 1,021 | 252 |
gh_patches_debug_5548 | rasdani/github-patches | git_diff | sosreport__sos-3458 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[transport] saltstack causes a final error trying to parse the package list from down nodes
With the saltstack transport, nodes that are down cause one final error per down host, even though the transport already failed to connect to them early on.
Similar to the control_persist transport error:
```
[store-2:__init__] Unable to open remote session: Timeout expires while trying to connect
```
The saltstack transport errors slightly differently for nodes that are down:
```
[store-2:read_file] Exception while reading /etc/os-release: Timeout expired executing salt --out json --static --no-color store-2 cmd.shell 'cat /etc/os-release'
```
Then one additional error via `sys.stderr.write()` in `redhat.py` occurs for each down node:
```
Could not obtain installed package list[root@EL8_supervisor ~]#
```
_Originally posted by @TrevorBenson in https://github.com/sosreport/sos/issues/3456#issuecomment-1870554310_
----
I'll try to diagnose why it differs from control_persist transport.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sos/collector/transports/saltstack.py`
Content:
```
1 # Copyright Red Hat 2022, Trevor Benson <[email protected]>
2
3 # This file is part of the sos project: https://github.com/sosreport/sos
4 #
5 # This copyrighted material is made available to anyone wishing to use,
6 # modify, copy, or redistribute it subject to the terms and conditions of
7 # version 2 of the GNU General Public License.
8 #
9 # See the LICENSE file in the source distribution for further information.
10
11 import contextlib
12 import json
13 import os
14 import shutil
15 from sos.collector.transports import RemoteTransport
16 from sos.collector.exceptions import (ConnectionException,
17 SaltStackMasterUnsupportedException)
18 from sos.utilities import (is_executable,
19 sos_get_command_output)
20
21
22 class SaltStackMaster(RemoteTransport):
23 """
24 A transport for collect that leverages SaltStack's Master Pub/Sub
25 functionality to send commands to minions.
26
27 This transport will by default assume the use cmd.shell module to
28 execute commands on the minions.
29 """
30
31 name = 'saltstack'
32
33 def _convert_output_json(self, json_output):
34 return list(json.loads(json_output).values())[0]
35
36 def run_command(self, cmd, timeout=180, need_root=False, env=None,
37 use_shell=False):
38 """
39 Run a command on the remote host using SaltStack Master.
40 If the output is json, convert it to a string.
41 """
42 ret = super(SaltStackMaster, self).run_command(
43 cmd, timeout, need_root, env, use_shell)
44 with contextlib.suppress(Exception):
45 ret['output'] = self._convert_output_json(ret['output'])
46 return ret
47
48 def _salt_retrieve_file(self, node, fname, dest):
49 """
50 Execute cp.push on the remote host using SaltStack Master
51 """
52 cmd = f"salt {node} cp.push {fname}"
53 res = sos_get_command_output(cmd)
54 if res['status'] == 0:
55 cachedir = f"/var/cache/salt/master/minions/{self.address}/files"
56 cachedir_file = os.path.join(cachedir, fname.lstrip('/'))
57 shutil.move(cachedir_file, dest)
58 return True
59 return False
60
61 @property
62 def connected(self):
63 """Check if the remote host is responding using SaltStack Master."""
64 up = self.run_command("echo Connected", timeout=10)
65 return up['status'] == 0
66
67 def _check_for_saltstack(self, password=None):
68 """Checks to see if the local system supported SaltStack Master.
69
70 This check relies on feedback from the salt binary. The command being
71 run should always generate stderr output, but depending on what that
72 output reads we can determine if SaltStack Master is supported or not.
73
74 For our purposes, a host that does not support SaltStack Master is not
75 able to run sos-collector.
76
77 Returns
78 True if SaltStack Master is supported, else raise Exception
79 """
80
81 cmd = 'salt-run manage.status'
82 res = sos_get_command_output(cmd)
83 if res['status'] == 0:
84 return res['status'] == 0
85 else:
86 raise SaltStackMasterUnsupportedException
87
88 def _connect(self, password=None):
89 """Connect to the remote host using SaltStack Master.
90
91 This method will attempt to connect to the remote host using SaltStack
92 Master. If the connection fails, an exception will be raised.
93
94 If the connection is successful, the connection will be stored in the
95 self._connection attribute.
96 """
97 if not is_executable('salt'):
98 self.log_error("salt command is not executable. ")
99 return False
100
101 try:
102 self._check_for_saltstack()
103 except ConnectionException:
104 self.log_error("Transport is not locally supported. ")
105 raise
106 self.log_info("Transport is locally supported and service running. ")
107 cmd = "echo Connected"
108 result = self.run_command(cmd, timeout=180)
109 return result['status'] == 0
110
111 def _disconnect(self):
112 return True
113
114 @property
115 def remote_exec(self):
116 """The remote execution command to use for this transport."""
117 salt_args = "--out json --static --no-color"
118 return f"salt {salt_args} {self.address} cmd.shell "
119
120 def _retrieve_file(self, fname, dest):
121 """Retrieve a file from the remote host using saltstack
122
123 Parameters
124 fname The path to the file on the remote host
125 dest The path to the destination directory on the master
126
127 Returns
128 True if the file was retrieved, else False
129 """
130 return (
131 self._salt_retrieve_file(self.address, fname, dest)
132 if self.connected
133 else False
134 )
135
136 # vim: set et ts=4 sw=4 :
137
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/sos/collector/transports/saltstack.py b/sos/collector/transports/saltstack.py
--- a/sos/collector/transports/saltstack.py
+++ b/sos/collector/transports/saltstack.py
@@ -106,6 +106,8 @@
self.log_info("Transport is locally supported and service running. ")
cmd = "echo Connected"
result = self.run_command(cmd, timeout=180)
+ if result['status'] == 1:
+ raise ConnectionException(self.address)
return result['status'] == 0
def _disconnect(self):
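Raising `ConnectionException` as soon as the `echo Connected` probe times out makes a down minion fail during connection (as the control_persist transport already does) instead of carrying on and producing the later package-list error. A stripped-down sketch of that control flow, with a stub standing in for the salt call:

```python
class ConnectionException(Exception):
    pass

def connect(address, run_command):
    result = run_command("echo Connected", timeout=180)
    if result['status'] == 1:        # salt timed out, so treat the node as unreachable now
        raise ConnectionException(address)
    return result['status'] == 0

# Stub pretending the minion never answered the probe.
def down_minion(cmd, timeout):
    return {'status': 1, 'output': ''}

try:
    connect("store-2", down_minion)
except ConnectionException as exc:
    print("Unable to open remote session:", exc)
```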
| {"golden_diff": "diff --git a/sos/collector/transports/saltstack.py b/sos/collector/transports/saltstack.py\n--- a/sos/collector/transports/saltstack.py\n+++ b/sos/collector/transports/saltstack.py\n@@ -106,6 +106,8 @@\n self.log_info(\"Transport is locally supported and service running. \")\n cmd = \"echo Connected\"\n result = self.run_command(cmd, timeout=180)\n+ if result['status'] == 1:\n+ raise ConnectionException(self.address)\n return result['status'] == 0\n \n def _disconnect(self):\n", "issue": "[transport] saltstack causes a final error trying to parse the package list from down nodes\nDown nodes for saltstack transport cause a final error for each down host after not being able to connect early on.\r\n\r\nSimilar to the control_persist transport error:\r\n```\r\n[store-2:__init__] Unable to open remote session: Timeout expires while trying to connect\r\n```\r\nThe saltstack transport errors slightly differently for nodes that are down:\r\n```\r\n[store-2:read_file] Exception while reading /etc/os-release: Timeout expired executing salt --out json --static --no-color store-2 cmd.shell 'cat /etc/os-release'\r\n```\r\n\r\nThen one additional error via `sys.stderr.write()` in `redhat.py` occurs for each down node:\r\n```\r\nCould not obtain installed package list[root@EL8_supervisor ~]#\r\n```\r\n\r\n_Originally posted by @TrevorBenson in https://github.com/sosreport/sos/issues/3456#issuecomment-1870554310_\r\n\r\n----\r\n\r\nI'll try to diagnose why it differs from control_persist transport.\n", "before_files": [{"content": "# Copyright Red Hat 2022, Trevor Benson <[email protected]>\n\n# This file is part of the sos project: https://github.com/sosreport/sos\n#\n# This copyrighted material is made available to anyone wishing to use,\n# modify, copy, or redistribute it subject to the terms and conditions of\n# version 2 of the GNU General Public License.\n#\n# See the LICENSE file in the source distribution for further information.\n\nimport contextlib\nimport json\nimport os\nimport shutil\nfrom sos.collector.transports import RemoteTransport\nfrom sos.collector.exceptions import (ConnectionException,\n SaltStackMasterUnsupportedException)\nfrom sos.utilities import (is_executable,\n sos_get_command_output)\n\n\nclass SaltStackMaster(RemoteTransport):\n \"\"\"\n A transport for collect that leverages SaltStack's Master Pub/Sub\n functionality to send commands to minions.\n\n This transport will by default assume the use cmd.shell module to\n execute commands on the minions.\n \"\"\"\n\n name = 'saltstack'\n\n def _convert_output_json(self, json_output):\n return list(json.loads(json_output).values())[0]\n\n def run_command(self, cmd, timeout=180, need_root=False, env=None,\n use_shell=False):\n \"\"\"\n Run a command on the remote host using SaltStack Master.\n If the output is json, convert it to a string.\n \"\"\"\n ret = super(SaltStackMaster, self).run_command(\n cmd, timeout, need_root, env, use_shell)\n with contextlib.suppress(Exception):\n ret['output'] = self._convert_output_json(ret['output'])\n return ret\n\n def _salt_retrieve_file(self, node, fname, dest):\n \"\"\"\n Execute cp.push on the remote host using SaltStack Master\n \"\"\"\n cmd = f\"salt {node} cp.push {fname}\"\n res = sos_get_command_output(cmd)\n if res['status'] == 0:\n cachedir = f\"/var/cache/salt/master/minions/{self.address}/files\"\n cachedir_file = os.path.join(cachedir, fname.lstrip('/'))\n shutil.move(cachedir_file, dest)\n return True\n return False\n\n @property\n def connected(self):\n 
\"\"\"Check if the remote host is responding using SaltStack Master.\"\"\"\n up = self.run_command(\"echo Connected\", timeout=10)\n return up['status'] == 0\n\n def _check_for_saltstack(self, password=None):\n \"\"\"Checks to see if the local system supported SaltStack Master.\n\n This check relies on feedback from the salt binary. The command being\n run should always generate stderr output, but depending on what that\n output reads we can determine if SaltStack Master is supported or not.\n\n For our purposes, a host that does not support SaltStack Master is not\n able to run sos-collector.\n\n Returns\n True if SaltStack Master is supported, else raise Exception\n \"\"\"\n\n cmd = 'salt-run manage.status'\n res = sos_get_command_output(cmd)\n if res['status'] == 0:\n return res['status'] == 0\n else:\n raise SaltStackMasterUnsupportedException\n\n def _connect(self, password=None):\n \"\"\"Connect to the remote host using SaltStack Master.\n\n This method will attempt to connect to the remote host using SaltStack\n Master. If the connection fails, an exception will be raised.\n\n If the connection is successful, the connection will be stored in the\n self._connection attribute.\n \"\"\"\n if not is_executable('salt'):\n self.log_error(\"salt command is not executable. \")\n return False\n\n try:\n self._check_for_saltstack()\n except ConnectionException:\n self.log_error(\"Transport is not locally supported. \")\n raise\n self.log_info(\"Transport is locally supported and service running. \")\n cmd = \"echo Connected\"\n result = self.run_command(cmd, timeout=180)\n return result['status'] == 0\n\n def _disconnect(self):\n return True\n\n @property\n def remote_exec(self):\n \"\"\"The remote execution command to use for this transport.\"\"\"\n salt_args = \"--out json --static --no-color\"\n return f\"salt {salt_args} {self.address} cmd.shell \"\n\n def _retrieve_file(self, fname, dest):\n \"\"\"Retrieve a file from the remote host using saltstack\n\n Parameters\n fname The path to the file on the remote host\n dest The path to the destination directory on the master\n\n Returns\n True if the file was retrieved, else False\n \"\"\"\n return (\n self._salt_retrieve_file(self.address, fname, dest)\n if self.connected\n else False\n )\n\n# vim: set et ts=4 sw=4 :\n", "path": "sos/collector/transports/saltstack.py"}], "after_files": [{"content": "# Copyright Red Hat 2022, Trevor Benson <[email protected]>\n\n# This file is part of the sos project: https://github.com/sosreport/sos\n#\n# This copyrighted material is made available to anyone wishing to use,\n# modify, copy, or redistribute it subject to the terms and conditions of\n# version 2 of the GNU General Public License.\n#\n# See the LICENSE file in the source distribution for further information.\n\nimport contextlib\nimport json\nimport os\nimport shutil\nfrom sos.collector.transports import RemoteTransport\nfrom sos.collector.exceptions import (ConnectionException,\n SaltStackMasterUnsupportedException)\nfrom sos.utilities import (is_executable,\n sos_get_command_output)\n\n\nclass SaltStackMaster(RemoteTransport):\n \"\"\"\n A transport for collect that leverages SaltStack's Master Pub/Sub\n functionality to send commands to minions.\n\n This transport will by default assume the use cmd.shell module to\n execute commands on the minions.\n \"\"\"\n\n name = 'saltstack'\n\n def _convert_output_json(self, json_output):\n return list(json.loads(json_output).values())[0]\n\n def run_command(self, cmd, timeout=180, need_root=False, 
env=None,\n use_shell=False):\n \"\"\"\n Run a command on the remote host using SaltStack Master.\n If the output is json, convert it to a string.\n \"\"\"\n ret = super(SaltStackMaster, self).run_command(\n cmd, timeout, need_root, env, use_shell)\n with contextlib.suppress(Exception):\n ret['output'] = self._convert_output_json(ret['output'])\n return ret\n\n def _salt_retrieve_file(self, node, fname, dest):\n \"\"\"\n Execute cp.push on the remote host using SaltStack Master\n \"\"\"\n cmd = f\"salt {node} cp.push {fname}\"\n res = sos_get_command_output(cmd)\n if res['status'] == 0:\n cachedir = f\"/var/cache/salt/master/minions/{self.address}/files\"\n cachedir_file = os.path.join(cachedir, fname.lstrip('/'))\n shutil.move(cachedir_file, dest)\n return True\n return False\n\n @property\n def connected(self):\n \"\"\"Check if the remote host is responding using SaltStack Master.\"\"\"\n up = self.run_command(\"echo Connected\", timeout=10)\n return up['status'] == 0\n\n def _check_for_saltstack(self, password=None):\n \"\"\"Checks to see if the local system supported SaltStack Master.\n\n This check relies on feedback from the salt binary. The command being\n run should always generate stderr output, but depending on what that\n output reads we can determine if SaltStack Master is supported or not.\n\n For our purposes, a host that does not support SaltStack Master is not\n able to run sos-collector.\n\n Returns\n True if SaltStack Master is supported, else raise Exception\n \"\"\"\n\n cmd = 'salt-run manage.status'\n res = sos_get_command_output(cmd)\n if res['status'] == 0:\n return res['status'] == 0\n else:\n raise SaltStackMasterUnsupportedException\n\n def _connect(self, password=None):\n \"\"\"Connect to the remote host using SaltStack Master.\n\n This method will attempt to connect to the remote host using SaltStack\n Master. If the connection fails, an exception will be raised.\n\n If the connection is successful, the connection will be stored in the\n self._connection attribute.\n \"\"\"\n if not is_executable('salt'):\n self.log_error(\"salt command is not executable. \")\n return False\n\n try:\n self._check_for_saltstack()\n except ConnectionException:\n self.log_error(\"Transport is not locally supported. \")\n raise\n self.log_info(\"Transport is locally supported and service running. \")\n cmd = \"echo Connected\"\n result = self.run_command(cmd, timeout=180)\n if result['status'] == 1:\n raise ConnectionException(self.address)\n return result['status'] == 0\n\n def _disconnect(self):\n return True\n\n @property\n def remote_exec(self):\n \"\"\"The remote execution command to use for this transport.\"\"\"\n salt_args = \"--out json --static --no-color\"\n return f\"salt {salt_args} {self.address} cmd.shell \"\n\n def _retrieve_file(self, fname, dest):\n \"\"\"Retrieve a file from the remote host using saltstack\n\n Parameters\n fname The path to the file on the remote host\n dest The path to the destination directory on the master\n\n Returns\n True if the file was retrieved, else False\n \"\"\"\n return (\n self._salt_retrieve_file(self.address, fname, dest)\n if self.connected\n else False\n )\n\n# vim: set et ts=4 sw=4 :\n", "path": "sos/collector/transports/saltstack.py"}]} | 1,846 | 136 |
gh_patches_debug_13231 | rasdani/github-patches | git_diff | zestedesavoir__zds-site-6028 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Incorrect notification title when reporting a billet
**Bug description**
When a member reports a billet, the notification title is `(<bound method PublishedContent.title of <PublishedContent: Version publique de "La Belgique a un nouveau gouvernement, ça se fête !">>,)` instead of the content's title.
**How to reproduce?**
Steps to reproduce the bug:
1. Go to the site
2. Log in with the admin account
3. Open a billet
4. Click on « Signaler le contenu » (report the content)
5. Note that the title of the new report notification is incorrect
**Expected behavior**
The title of the new report notification should be the title of the content.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `zds/utils/header_notifications.py`
Content:
```
1 from django.contrib.contenttypes.models import ContentType
2 from django.utils.translation import gettext_lazy as _
3
4 from zds.forum.models import Post
5 from zds.mp.models import PrivateTopic
6 from zds.notification.models import Notification
7 from zds.tutorialv2.models.database import ContentReaction, PublishableContent
8 from zds.utils.models import Alert
9
10
11 def _notifications_to_list(notifications_query):
12 query = notifications_query.select_related("sender__profile").order_by("-pubdate")[:10]
13
14 return [{"pubdate": n.pubdate, "author": n.sender, "title": n.title, "url": n.url} for n in query]
15
16
17 def _get_alert_info(alert):
18 if alert.scope == "FORUM":
19 post = Post.objects.select_related("topic").get(pk=alert.comment.pk)
20 return post.topic.title, post.get_absolute_url()
21 elif alert.scope == "CONTENT":
22 published = PublishableContent.objects.select_related("public_version").get(pk=alert.content.pk)
23 title = (published.public_version.title if published.public_version else published.title,)
24 url = published.get_absolute_url_online() if published.public_version else ""
25 return title, url
26 elif alert.scope == "PROFILE":
27 return _("Profil de {}").format(alert.profile.user.username), alert.profile.get_absolute_url() + "#alerts"
28 else:
29 comment = ContentReaction.objects.select_related("related_content").get(pk=alert.comment.pk)
30 return (
31 comment.related_content.title,
32 comment.get_absolute_url(),
33 )
34
35
36 def _alert_to_dict(alert):
37 title, url = _get_alert_info(alert)
38 return {"title": title, "url": url, "pubdate": alert.pubdate, "author": alert.author, "text": alert.text}
39
40
41 def _alerts_to_list(alerts_query):
42 query = alerts_query.select_related("author", "comment", "content").order_by("-pubdate")[:10]
43
44 return [_alert_to_dict(a) for a in query]
45
46
47 def get_header_notifications(user):
48 if not user.is_authenticated:
49 return None
50
51 private_topic = ContentType.objects.get_for_model(PrivateTopic)
52
53 notifications = Notification.objects.filter(subscription__user=user, is_read=False)
54
55 general_notifications = notifications.exclude(subscription__content_type=private_topic)
56
57 private_notifications = notifications.filter(subscription__content_type=private_topic)
58
59 alerts = Alert.objects.filter(solved=False)
60
61 return {
62 "general_notifications": {
63 "total": general_notifications.count(),
64 "list": _notifications_to_list(general_notifications),
65 },
66 "private_topic_notifications": {
67 "total": private_notifications.count(),
68 "list": _notifications_to_list(private_notifications),
69 },
70 "alerts": user.has_perm("forum.change_post")
71 and {
72 "total": alerts.count(),
73 "list": _alerts_to_list(alerts),
74 },
75 }
76
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/zds/utils/header_notifications.py b/zds/utils/header_notifications.py
--- a/zds/utils/header_notifications.py
+++ b/zds/utils/header_notifications.py
@@ -20,7 +20,7 @@
return post.topic.title, post.get_absolute_url()
elif alert.scope == "CONTENT":
published = PublishableContent.objects.select_related("public_version").get(pk=alert.content.pk)
- title = (published.public_version.title if published.public_version else published.title,)
+ title = published.public_version.title if published.public_version else published.title
url = published.get_absolute_url_online() if published.public_version else ""
return title, url
elif alert.scope == "PROFILE":
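The underlying mistake is only the trailing comma, which turns the right-hand side into a one-element tuple. A tiny illustration of the pitfall, independent of the zds models:

```python
public_title = "La Belgique a un nouveau gouvernement"

title_fixed = public_title if public_title else "fallback"      # str
title_buggy = (public_title if public_title else "fallback",)   # 1-tuple, note the comma

print(title_fixed)   # La Belgique a un nouveau gouvernement
print(title_buggy)   # ('La Belgique a un nouveau gouvernement',)
```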
| {"golden_diff": "diff --git a/zds/utils/header_notifications.py b/zds/utils/header_notifications.py\n--- a/zds/utils/header_notifications.py\n+++ b/zds/utils/header_notifications.py\n@@ -20,7 +20,7 @@\n return post.topic.title, post.get_absolute_url()\n elif alert.scope == \"CONTENT\":\n published = PublishableContent.objects.select_related(\"public_version\").get(pk=alert.content.pk)\n- title = (published.public_version.title if published.public_version else published.title,)\n+ title = published.public_version.title if published.public_version else published.title\n url = published.get_absolute_url_online() if published.public_version else \"\"\n return title, url\n elif alert.scope == \"PROFILE\":\n", "issue": "Mauvais titre de notification pour le signalement de billet\n**Description du bug**\r\n\r\nLorsqu'un membre signale un billet, le titre de la notification est `(<bound method PublishedContent.title of <PublishedContent: Version publique de \"La Belgique a un nouveau gouvernement, \u00e7a se f\u00eate !\">>,)` au lieu d'\u00eatre le nom du contenu.\r\n\r\n**Comment reproduire ?**\r\n\r\nLa liste des \u00e9tapes qui permet de reproduire le bug :\r\n\r\n1. Aller sur le site\r\n2. Connectez vous avec le compte admin\r\n3. Aller sur un billet\r\n4. Cliquez sur \u00ab Signaler le contenu \u00bb\r\n5. Voir que le titre de la nouvelle notification de signalement est incorrect\r\n\r\n**Comportement attendu**\r\n\r\nLe titre de la nouvelle notification de signalement est le titre du contenu.\n", "before_files": [{"content": "from django.contrib.contenttypes.models import ContentType\nfrom django.utils.translation import gettext_lazy as _\n\nfrom zds.forum.models import Post\nfrom zds.mp.models import PrivateTopic\nfrom zds.notification.models import Notification\nfrom zds.tutorialv2.models.database import ContentReaction, PublishableContent\nfrom zds.utils.models import Alert\n\n\ndef _notifications_to_list(notifications_query):\n query = notifications_query.select_related(\"sender__profile\").order_by(\"-pubdate\")[:10]\n\n return [{\"pubdate\": n.pubdate, \"author\": n.sender, \"title\": n.title, \"url\": n.url} for n in query]\n\n\ndef _get_alert_info(alert):\n if alert.scope == \"FORUM\":\n post = Post.objects.select_related(\"topic\").get(pk=alert.comment.pk)\n return post.topic.title, post.get_absolute_url()\n elif alert.scope == \"CONTENT\":\n published = PublishableContent.objects.select_related(\"public_version\").get(pk=alert.content.pk)\n title = (published.public_version.title if published.public_version else published.title,)\n url = published.get_absolute_url_online() if published.public_version else \"\"\n return title, url\n elif alert.scope == \"PROFILE\":\n return _(\"Profil de {}\").format(alert.profile.user.username), alert.profile.get_absolute_url() + \"#alerts\"\n else:\n comment = ContentReaction.objects.select_related(\"related_content\").get(pk=alert.comment.pk)\n return (\n comment.related_content.title,\n comment.get_absolute_url(),\n )\n\n\ndef _alert_to_dict(alert):\n title, url = _get_alert_info(alert)\n return {\"title\": title, \"url\": url, \"pubdate\": alert.pubdate, \"author\": alert.author, \"text\": alert.text}\n\n\ndef _alerts_to_list(alerts_query):\n query = alerts_query.select_related(\"author\", \"comment\", \"content\").order_by(\"-pubdate\")[:10]\n\n return [_alert_to_dict(a) for a in query]\n\n\ndef get_header_notifications(user):\n if not user.is_authenticated:\n return None\n\n private_topic = 
ContentType.objects.get_for_model(PrivateTopic)\n\n notifications = Notification.objects.filter(subscription__user=user, is_read=False)\n\n general_notifications = notifications.exclude(subscription__content_type=private_topic)\n\n private_notifications = notifications.filter(subscription__content_type=private_topic)\n\n alerts = Alert.objects.filter(solved=False)\n\n return {\n \"general_notifications\": {\n \"total\": general_notifications.count(),\n \"list\": _notifications_to_list(general_notifications),\n },\n \"private_topic_notifications\": {\n \"total\": private_notifications.count(),\n \"list\": _notifications_to_list(private_notifications),\n },\n \"alerts\": user.has_perm(\"forum.change_post\")\n and {\n \"total\": alerts.count(),\n \"list\": _alerts_to_list(alerts),\n },\n }\n", "path": "zds/utils/header_notifications.py"}], "after_files": [{"content": "from django.contrib.contenttypes.models import ContentType\nfrom django.utils.translation import gettext_lazy as _\n\nfrom zds.forum.models import Post\nfrom zds.mp.models import PrivateTopic\nfrom zds.notification.models import Notification\nfrom zds.tutorialv2.models.database import ContentReaction, PublishableContent\nfrom zds.utils.models import Alert\n\n\ndef _notifications_to_list(notifications_query):\n query = notifications_query.select_related(\"sender__profile\").order_by(\"-pubdate\")[:10]\n\n return [{\"pubdate\": n.pubdate, \"author\": n.sender, \"title\": n.title, \"url\": n.url} for n in query]\n\n\ndef _get_alert_info(alert):\n if alert.scope == \"FORUM\":\n post = Post.objects.select_related(\"topic\").get(pk=alert.comment.pk)\n return post.topic.title, post.get_absolute_url()\n elif alert.scope == \"CONTENT\":\n published = PublishableContent.objects.select_related(\"public_version\").get(pk=alert.content.pk)\n title = published.public_version.title if published.public_version else published.title\n url = published.get_absolute_url_online() if published.public_version else \"\"\n return title, url\n elif alert.scope == \"PROFILE\":\n return _(\"Profil de {}\").format(alert.profile.user.username), alert.profile.get_absolute_url() + \"#alerts\"\n else:\n comment = ContentReaction.objects.select_related(\"related_content\").get(pk=alert.comment.pk)\n return (\n comment.related_content.title,\n comment.get_absolute_url(),\n )\n\n\ndef _alert_to_dict(alert):\n title, url = _get_alert_info(alert)\n return {\"title\": title, \"url\": url, \"pubdate\": alert.pubdate, \"author\": alert.author, \"text\": alert.text}\n\n\ndef _alerts_to_list(alerts_query):\n query = alerts_query.select_related(\"author\", \"comment\", \"content\").order_by(\"-pubdate\")[:10]\n\n return [_alert_to_dict(a) for a in query]\n\n\ndef get_header_notifications(user):\n if not user.is_authenticated:\n return None\n\n private_topic = ContentType.objects.get_for_model(PrivateTopic)\n\n notifications = Notification.objects.filter(subscription__user=user, is_read=False)\n\n general_notifications = notifications.exclude(subscription__content_type=private_topic)\n\n private_notifications = notifications.filter(subscription__content_type=private_topic)\n\n alerts = Alert.objects.filter(solved=False)\n\n return {\n \"general_notifications\": {\n \"total\": general_notifications.count(),\n \"list\": _notifications_to_list(general_notifications),\n },\n \"private_topic_notifications\": {\n \"total\": private_notifications.count(),\n \"list\": _notifications_to_list(private_notifications),\n },\n \"alerts\": user.has_perm(\"forum.change_post\")\n and {\n 
\"total\": alerts.count(),\n \"list\": _alerts_to_list(alerts),\n },\n }\n", "path": "zds/utils/header_notifications.py"}]} | 1,196 | 149 |
gh_patches_debug_18405 | rasdani/github-patches | git_diff | kubeflow__pipelines-4583 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Python 3 support in Dataflow component
Apache beam has had python 3 support since version 2.14.0, and python 2 will reach EOL in January 2020. Right now it looks like using `python2` is hard coded into the [Dataflow component](https://github.com/kubeflow/pipelines/blob/master/components/gcp/container/component_sdk/python/kfp_component/google/dataflow/_launch_python.py):
```python
def _prepare_cmd(project_id, python_file_path, args, staging_location):
dataflow_args = [
'--runner', 'dataflow',
'--project', project_id]
if staging_location:
dataflow_args += ['--staging_location', staging_location, '--temp_location', staging_location]
return (['python2', '-u', python_file_path] +
dataflow_args + args)
```
Can an option be added to run Dataflow jobs using `python3`?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `components/gcp/container/component_sdk/python/kfp_component/google/dataflow/_launch_python.py`
Content:
```
1 # Copyright 2018 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 import subprocess
15 import re
16 import logging
17 import os
18
19 from google.cloud import storage
20 from kfp_component.core import KfpExecutionContext
21 from ._client import DataflowClient
22 from ._common_ops import (wait_and_dump_job, stage_file, get_staging_location,
23 read_job_id_and_location, upload_job_id_and_location)
24 from ._process import Process
25 from ..storage import parse_blob_path
26
27 def launch_python(python_file_path, project_id, staging_dir=None, requirements_file_path=None,
28 args=[], wait_interval=30,
29 job_id_output_path='/tmp/kfp/output/dataflow/job_id.txt',
30 job_object_output_path='/tmp/kfp/output/dataflow/job.json',
31 ):
32 """Launch a self-executing beam python file.
33
34 Args:
35 python_file_path (str): The gcs or local path to the python file to run.
36 project_id (str): The ID of the parent project.
37 staging_dir (str): Optional. The GCS directory for keeping staging files.
38 A random subdirectory will be created under the directory to keep job info
39 for resuming the job in case of failure and it will be passed as
40 `staging_location` and `temp_location` command line args of the beam code.
41 requirements_file_path (str): Optional, the gcs or local path to the pip
42 requirements file.
43 args (list): The list of args to pass to the python file.
44 wait_interval (int): The wait seconds between polling.
45 Returns:
46 The completed job.
47 """
48 storage_client = storage.Client()
49 df_client = DataflowClient()
50 job_id = None
51 location = None
52 def cancel():
53 if job_id:
54 df_client.cancel_job(
55 project_id,
56 job_id,
57 location
58 )
59 with KfpExecutionContext(on_cancel=cancel) as ctx:
60 staging_location = get_staging_location(staging_dir, ctx.context_id())
61 job_id, location = read_job_id_and_location(storage_client, staging_location)
62 # Continue waiting for the job if it's has been uploaded to staging location.
63 if job_id:
64 job = df_client.get_job(project_id, job_id, location)
65 return wait_and_dump_job(df_client, project_id, location, job,
66 wait_interval,
67 job_id_output_path=job_id_output_path,
68 job_object_output_path=job_object_output_path,
69 )
70
71 _install_requirements(requirements_file_path)
72 python_file_path = stage_file(python_file_path)
73 cmd = _prepare_cmd(project_id, python_file_path, args, staging_location)
74 sub_process = Process(cmd)
75 for line in sub_process.read_lines():
76 job_id, location = _extract_job_id_and_location(line)
77 if job_id:
78 logging.info('Found job id {} and location {}.'.format(job_id, location))
79 upload_job_id_and_location(storage_client, staging_location, job_id, location)
80 break
81 sub_process.wait_and_check()
82 if not job_id:
83 logging.warning('No dataflow job was found when '
84 'running the python file.')
85 return None
86 job = df_client.get_job(project_id, job_id,
87 location=location)
88 return wait_and_dump_job(df_client, project_id, location, job,
89 wait_interval,
90 job_id_output_path=job_id_output_path,
91 job_object_output_path=job_object_output_path,
92 )
93
94 def _prepare_cmd(project_id, python_file_path, args, staging_location):
95 dataflow_args = [
96 '--runner', 'dataflow',
97 '--project', project_id]
98 if staging_location:
99 dataflow_args += ['--staging_location', staging_location, '--temp_location', staging_location]
100 return (['python2', '-u', python_file_path] +
101 dataflow_args + args)
102
103 def _extract_job_id_and_location(line):
104 """Returns (job_id, location) from matched log.
105 """
106 job_id_pattern = re.compile(
107 br'.*console.cloud.google.com/dataflow.*/locations/([a-z|0-9|A-Z|\-|\_]+)/jobs/([a-z|0-9|A-Z|\-|\_]+).*')
108 matched_job_id = job_id_pattern.search(line or '')
109 if matched_job_id:
110 return (matched_job_id.group(2).decode(), matched_job_id.group(1).decode())
111 return (None, None)
112
113 def _install_requirements(requirements_file_path):
114 if not requirements_file_path:
115 return
116 requirements_file_path = stage_file(requirements_file_path)
117 subprocess.check_call(['pip2', 'install', '-r', requirements_file_path])
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/components/gcp/container/component_sdk/python/kfp_component/google/dataflow/_launch_python.py b/components/gcp/container/component_sdk/python/kfp_component/google/dataflow/_launch_python.py
--- a/components/gcp/container/component_sdk/python/kfp_component/google/dataflow/_launch_python.py
+++ b/components/gcp/container/component_sdk/python/kfp_component/google/dataflow/_launch_python.py
@@ -97,7 +97,7 @@
'--project', project_id]
if staging_location:
dataflow_args += ['--staging_location', staging_location, '--temp_location', staging_location]
- return (['python2', '-u', python_file_path] +
+ return (['python', '-u', python_file_path] +
dataflow_args + args)
def _extract_job_id_and_location(line):
@@ -114,4 +114,4 @@
if not requirements_file_path:
return
requirements_file_path = stage_file(requirements_file_path)
- subprocess.check_call(['pip2', 'install', '-r', requirements_file_path])
\ No newline at end of file
+ subprocess.check_call(['pip', 'install', '-r', requirements_file_path])
\ No newline at end of file
| {"golden_diff": "diff --git a/components/gcp/container/component_sdk/python/kfp_component/google/dataflow/_launch_python.py b/components/gcp/container/component_sdk/python/kfp_component/google/dataflow/_launch_python.py\n--- a/components/gcp/container/component_sdk/python/kfp_component/google/dataflow/_launch_python.py\n+++ b/components/gcp/container/component_sdk/python/kfp_component/google/dataflow/_launch_python.py\n@@ -97,7 +97,7 @@\n '--project', project_id]\n if staging_location:\n dataflow_args += ['--staging_location', staging_location, '--temp_location', staging_location]\n- return (['python2', '-u', python_file_path] + \n+ return (['python', '-u', python_file_path] + \n dataflow_args + args)\n \n def _extract_job_id_and_location(line):\n@@ -114,4 +114,4 @@\n if not requirements_file_path:\n return\n requirements_file_path = stage_file(requirements_file_path)\n- subprocess.check_call(['pip2', 'install', '-r', requirements_file_path])\n\\ No newline at end of file\n+ subprocess.check_call(['pip', 'install', '-r', requirements_file_path])\n\\ No newline at end of file\n", "issue": "Python 3 support in Dataflow component\nApache beam has had python 3 support since version 2.14.0, and python 2 will reach EOL in January 2020. Right now it looks like using `python2` is hard coded into the [Dataflow component](https://github.com/kubeflow/pipelines/blob/master/components/gcp/container/component_sdk/python/kfp_component/google/dataflow/_launch_python.py):\r\n\r\n```python\r\ndef _prepare_cmd(project_id, python_file_path, args, staging_location):\r\n dataflow_args = [\r\n '--runner', 'dataflow', \r\n '--project', project_id]\r\n if staging_location:\r\n dataflow_args += ['--staging_location', staging_location, '--temp_location', staging_location]\r\n return (['python2', '-u', python_file_path] + \r\n dataflow_args + args)\r\n```\r\n\r\nCan an option be added to run Dataflow jobs using `python3`?\nPython 3 support in Dataflow component\nApache beam has had python 3 support since version 2.14.0, and python 2 will reach EOL in January 2020. 
Right now it looks like using `python2` is hard coded into the [Dataflow component](https://github.com/kubeflow/pipelines/blob/master/components/gcp/container/component_sdk/python/kfp_component/google/dataflow/_launch_python.py):\r\n\r\n```python\r\ndef _prepare_cmd(project_id, python_file_path, args, staging_location):\r\n dataflow_args = [\r\n '--runner', 'dataflow', \r\n '--project', project_id]\r\n if staging_location:\r\n dataflow_args += ['--staging_location', staging_location, '--temp_location', staging_location]\r\n return (['python2', '-u', python_file_path] + \r\n dataflow_args + args)\r\n```\r\n\r\nCan an option be added to run Dataflow jobs using `python3`?\n", "before_files": [{"content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nimport subprocess\nimport re\nimport logging\nimport os\n\nfrom google.cloud import storage\nfrom kfp_component.core import KfpExecutionContext\nfrom ._client import DataflowClient\nfrom ._common_ops import (wait_and_dump_job, stage_file, get_staging_location, \n read_job_id_and_location, upload_job_id_and_location)\nfrom ._process import Process\nfrom ..storage import parse_blob_path\n\ndef launch_python(python_file_path, project_id, staging_dir=None, requirements_file_path=None, \n args=[], wait_interval=30,\n job_id_output_path='/tmp/kfp/output/dataflow/job_id.txt',\n job_object_output_path='/tmp/kfp/output/dataflow/job.json',\n):\n \"\"\"Launch a self-executing beam python file.\n\n Args:\n python_file_path (str): The gcs or local path to the python file to run.\n project_id (str): The ID of the parent project.\n staging_dir (str): Optional. The GCS directory for keeping staging files. 
\n A random subdirectory will be created under the directory to keep job info\n for resuming the job in case of failure and it will be passed as \n `staging_location` and `temp_location` command line args of the beam code.\n requirements_file_path (str): Optional, the gcs or local path to the pip \n requirements file.\n args (list): The list of args to pass to the python file.\n wait_interval (int): The wait seconds between polling.\n Returns:\n The completed job.\n \"\"\"\n storage_client = storage.Client()\n df_client = DataflowClient()\n job_id = None\n location = None\n def cancel():\n if job_id:\n df_client.cancel_job(\n project_id,\n job_id,\n location\n )\n with KfpExecutionContext(on_cancel=cancel) as ctx:\n staging_location = get_staging_location(staging_dir, ctx.context_id())\n job_id, location = read_job_id_and_location(storage_client, staging_location)\n # Continue waiting for the job if it's has been uploaded to staging location.\n if job_id:\n job = df_client.get_job(project_id, job_id, location)\n return wait_and_dump_job(df_client, project_id, location, job,\n wait_interval,\n job_id_output_path=job_id_output_path,\n job_object_output_path=job_object_output_path,\n )\n\n _install_requirements(requirements_file_path)\n python_file_path = stage_file(python_file_path)\n cmd = _prepare_cmd(project_id, python_file_path, args, staging_location)\n sub_process = Process(cmd)\n for line in sub_process.read_lines():\n job_id, location = _extract_job_id_and_location(line)\n if job_id:\n logging.info('Found job id {} and location {}.'.format(job_id, location))\n upload_job_id_and_location(storage_client, staging_location, job_id, location)\n break\n sub_process.wait_and_check()\n if not job_id:\n logging.warning('No dataflow job was found when '\n 'running the python file.')\n return None\n job = df_client.get_job(project_id, job_id, \n location=location)\n return wait_and_dump_job(df_client, project_id, location, job,\n wait_interval,\n job_id_output_path=job_id_output_path,\n job_object_output_path=job_object_output_path,\n )\n\ndef _prepare_cmd(project_id, python_file_path, args, staging_location):\n dataflow_args = [\n '--runner', 'dataflow', \n '--project', project_id]\n if staging_location:\n dataflow_args += ['--staging_location', staging_location, '--temp_location', staging_location]\n return (['python2', '-u', python_file_path] + \n dataflow_args + args)\n\ndef _extract_job_id_and_location(line):\n \"\"\"Returns (job_id, location) from matched log.\n \"\"\"\n job_id_pattern = re.compile(\n br'.*console.cloud.google.com/dataflow.*/locations/([a-z|0-9|A-Z|\\-|\\_]+)/jobs/([a-z|0-9|A-Z|\\-|\\_]+).*')\n matched_job_id = job_id_pattern.search(line or '')\n if matched_job_id:\n return (matched_job_id.group(2).decode(), matched_job_id.group(1).decode())\n return (None, None)\n\ndef _install_requirements(requirements_file_path):\n if not requirements_file_path:\n return\n requirements_file_path = stage_file(requirements_file_path)\n subprocess.check_call(['pip2', 'install', '-r', requirements_file_path])", "path": "components/gcp/container/component_sdk/python/kfp_component/google/dataflow/_launch_python.py"}], "after_files": [{"content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed 
under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nimport subprocess\nimport re\nimport logging\nimport os\n\nfrom google.cloud import storage\nfrom kfp_component.core import KfpExecutionContext\nfrom ._client import DataflowClient\nfrom ._common_ops import (wait_and_dump_job, stage_file, get_staging_location, \n read_job_id_and_location, upload_job_id_and_location)\nfrom ._process import Process\nfrom ..storage import parse_blob_path\n\ndef launch_python(python_file_path, project_id, staging_dir=None, requirements_file_path=None, \n args=[], wait_interval=30,\n job_id_output_path='/tmp/kfp/output/dataflow/job_id.txt',\n job_object_output_path='/tmp/kfp/output/dataflow/job.json',\n):\n \"\"\"Launch a self-executing beam python file.\n\n Args:\n python_file_path (str): The gcs or local path to the python file to run.\n project_id (str): The ID of the parent project.\n staging_dir (str): Optional. The GCS directory for keeping staging files. \n A random subdirectory will be created under the directory to keep job info\n for resuming the job in case of failure and it will be passed as \n `staging_location` and `temp_location` command line args of the beam code.\n requirements_file_path (str): Optional, the gcs or local path to the pip \n requirements file.\n args (list): The list of args to pass to the python file.\n wait_interval (int): The wait seconds between polling.\n Returns:\n The completed job.\n \"\"\"\n storage_client = storage.Client()\n df_client = DataflowClient()\n job_id = None\n location = None\n def cancel():\n if job_id:\n df_client.cancel_job(\n project_id,\n job_id,\n location\n )\n with KfpExecutionContext(on_cancel=cancel) as ctx:\n staging_location = get_staging_location(staging_dir, ctx.context_id())\n job_id, location = read_job_id_and_location(storage_client, staging_location)\n # Continue waiting for the job if it's has been uploaded to staging location.\n if job_id:\n job = df_client.get_job(project_id, job_id, location)\n return wait_and_dump_job(df_client, project_id, location, job,\n wait_interval,\n job_id_output_path=job_id_output_path,\n job_object_output_path=job_object_output_path,\n )\n\n _install_requirements(requirements_file_path)\n python_file_path = stage_file(python_file_path)\n cmd = _prepare_cmd(project_id, python_file_path, args, staging_location)\n sub_process = Process(cmd)\n for line in sub_process.read_lines():\n job_id, location = _extract_job_id_and_location(line)\n if job_id:\n logging.info('Found job id {} and location {}.'.format(job_id, location))\n upload_job_id_and_location(storage_client, staging_location, job_id, location)\n break\n sub_process.wait_and_check()\n if not job_id:\n logging.warning('No dataflow job was found when '\n 'running the python file.')\n return None\n job = df_client.get_job(project_id, job_id, \n location=location)\n return wait_and_dump_job(df_client, project_id, location, job,\n wait_interval,\n job_id_output_path=job_id_output_path,\n job_object_output_path=job_object_output_path,\n )\n\ndef _prepare_cmd(project_id, python_file_path, args, staging_location):\n dataflow_args = [\n '--runner', 'dataflow', \n '--project', project_id]\n if staging_location:\n dataflow_args += ['--staging_location', staging_location, '--temp_location', staging_location]\n return (['python', '-u', python_file_path] + \n dataflow_args + 
args)\n\ndef _extract_job_id_and_location(line):\n \"\"\"Returns (job_id, location) from matched log.\n \"\"\"\n job_id_pattern = re.compile(\n br'.*console.cloud.google.com/dataflow.*/locations/([a-z|0-9|A-Z|\\-|\\_]+)/jobs/([a-z|0-9|A-Z|\\-|\\_]+).*')\n matched_job_id = job_id_pattern.search(line or '')\n if matched_job_id:\n return (matched_job_id.group(2).decode(), matched_job_id.group(1).decode())\n return (None, None)\n\ndef _install_requirements(requirements_file_path):\n if not requirements_file_path:\n return\n requirements_file_path = stage_file(requirements_file_path)\n subprocess.check_call(['pip', 'install', '-r', requirements_file_path])", "path": "components/gcp/container/component_sdk/python/kfp_component/google/dataflow/_launch_python.py"}]} | 2,029 | 264 |
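The golden diff in the record above resolves the hard-coded interpreter simply by calling the unversioned `python`/`pip` binaries, deferring to whatever interpreter the component image ships. A more general variant hinted at by the issue — making the interpreter an explicit parameter — could look roughly like the sketch below; the stand-alone function and its `python_executable` argument are illustrative assumptions, not part of the actual kfp component API.

```python
import sys


def prepare_cmd(project_id, python_file_path, args, staging_location,
                python_executable=sys.executable):
    """Build the launch command for a self-executing Beam python file.

    `python_executable` is an illustrative knob: callers could pass 'python3',
    'python2', or keep the default (the interpreter running this script).
    """
    dataflow_args = ['--runner', 'dataflow', '--project', project_id]
    if staging_location:
        dataflow_args += ['--staging_location', staging_location,
                          '--temp_location', staging_location]
    return [python_executable, '-u', python_file_path] + dataflow_args + list(args)


if __name__ == '__main__':
    print(prepare_cmd('my-project', 'wordcount.py',
                      ['--output', 'gs://bucket/out'], 'gs://bucket/staging'))
```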
gh_patches_debug_26355 | rasdani/github-patches | git_diff | catalyst-team__catalyst-119 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Multiple dataset source for tag2label
It would be nice if I could pass more than one input dir here https://github.com/catalyst-team/catalyst/blob/master/catalyst/contrib/scripts/tag2label.py
For example it could look like:
`catalyst-contrib tag2label --in-dir=dataset1,dataset2`
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `catalyst/contrib/scripts/tag2label.py`
Content:
```
1 import argparse
2 import json
3 import pandas as pd
4
5 from catalyst.utils.data import create_dataset, create_dataframe, \
6 prepare_dataset_labeling, separate_tags
7
8
9 def prepare_df_from_dirs(in_dir, tag_column_name):
10 if not in_dir.endswith("/"):
11 in_dir = f"{in_dir}/"
12
13 dataset = create_dataset(
14 f"{in_dir}/**", process_fn=lambda x: x.replace(f"{in_dir}", "")
15 )
16 df = create_dataframe(dataset, columns=[tag_column_name, "filepath"])
17
18 return df
19
20
21 def build_args(parser):
22 parser.add_argument(
23 "--in-csv",
24 type=str,
25 default=None,
26 help="Path to data in `.csv`."
27 )
28 parser.add_argument(
29 "--in-dir",
30 type=str,
31 default=None,
32 help="Path to directory with dataset."
33 )
34
35 parser.add_argument(
36 "--out-dataset",
37 type=str,
38 default=None,
39 required=True,
40 help="Path to output dataframe"
41 )
42 parser.add_argument(
43 "--out-labeling",
44 type=str,
45 default=None,
46 required=True,
47 help="Path to output JSON"
48 )
49
50 parser.add_argument(
51 "--tag-column",
52 type=str,
53 default="tag",
54 help="Target column name"
55 )
56 parser.add_argument(
57 "--tag-delim",
58 type=str,
59 default=None,
60 help="Separator if you want to use several target columns"
61 )
62
63 return parser
64
65
66 def parse_args():
67 parser = argparse.ArgumentParser()
68 build_args(parser)
69 args = parser.parse_args()
70 return args
71
72
73 def main(args, _=None):
74 if args.in_csv is not None:
75 df = pd.read_csv(args.in_csv)
76 elif args.in_dir is not None:
77 df = prepare_df_from_dirs(args.in_dir, args.tag_column)
78 else:
79 raise Exception
80
81 if args.tag_delim is not None:
82 df = separate_tags(
83 df, tag_column=args.tag_column, tag_delim=args.tag_delim
84 )
85
86 tag2lbl = prepare_dataset_labeling(df, args.tag_column)
87 print("Num classes: ", len(tag2lbl))
88
89 with open(args.out_labeling, "w") as fout:
90 json.dump(tag2lbl, fout, indent=4)
91
92 if args.out_dataset is not None:
93 df.to_csv(args.out_dataset, index=False)
94
95
96 if __name__ == "__main__":
97 args = parse_args()
98 main(args)
99
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/catalyst/contrib/scripts/tag2label.py b/catalyst/contrib/scripts/tag2label.py
--- a/catalyst/contrib/scripts/tag2label.py
+++ b/catalyst/contrib/scripts/tag2label.py
@@ -6,15 +6,35 @@
prepare_dataset_labeling, separate_tags
-def prepare_df_from_dirs(in_dir, tag_column_name):
- if not in_dir.endswith("/"):
- in_dir = f"{in_dir}/"
-
- dataset = create_dataset(
- f"{in_dir}/**", process_fn=lambda x: x.replace(f"{in_dir}", "")
- )
- df = create_dataframe(dataset, columns=[tag_column_name, "filepath"])
-
+def prepare_df_from_dirs(in_dirs, tag_column_name):
+ dfs = []
+ splitted_dirs = in_dirs.strip(',').split(',')
+
+ def process_fn(x):
+ if len(splitted_dirs) == 1:
+ # remove all in_dir part from path
+ return x.replace(f"{in_dir}", "")
+ else:
+ # leaves last part of in_dir path,
+ # which identifies separate in_dir
+ return x.replace(
+ f"{in_dir}",
+ f"{in_dir.split('/')[-2]}/")
+
+ for in_dir in splitted_dirs:
+ if not in_dir.endswith("/"):
+ in_dir = f"{in_dir}/"
+
+ dataset = create_dataset(
+ f"{in_dir}/**",
+ process_fn=process_fn)
+
+ dfs.append(
+ create_dataframe(
+ dataset,
+ columns=[tag_column_name, "filepath"]))
+
+ df = pd.concat(dfs).reset_index(drop=True)
return df
@@ -29,7 +49,8 @@
"--in-dir",
type=str,
default=None,
- help="Path to directory with dataset."
+ help="Path to directory with dataset"
+ "or paths separated by commas for several datasets"
)
parser.add_argument(
| {"golden_diff": "diff --git a/catalyst/contrib/scripts/tag2label.py b/catalyst/contrib/scripts/tag2label.py\n--- a/catalyst/contrib/scripts/tag2label.py\n+++ b/catalyst/contrib/scripts/tag2label.py\n@@ -6,15 +6,35 @@\n prepare_dataset_labeling, separate_tags\n \n \n-def prepare_df_from_dirs(in_dir, tag_column_name):\n- if not in_dir.endswith(\"/\"):\n- in_dir = f\"{in_dir}/\"\n-\n- dataset = create_dataset(\n- f\"{in_dir}/**\", process_fn=lambda x: x.replace(f\"{in_dir}\", \"\")\n- )\n- df = create_dataframe(dataset, columns=[tag_column_name, \"filepath\"])\n-\n+def prepare_df_from_dirs(in_dirs, tag_column_name):\n+ dfs = []\n+ splitted_dirs = in_dirs.strip(',').split(',')\n+\n+ def process_fn(x):\n+ if len(splitted_dirs) == 1:\n+ # remove all in_dir part from path\n+ return x.replace(f\"{in_dir}\", \"\")\n+ else:\n+ # leaves last part of in_dir path,\n+ # which identifies separate in_dir\n+ return x.replace(\n+ f\"{in_dir}\",\n+ f\"{in_dir.split('/')[-2]}/\")\n+\n+ for in_dir in splitted_dirs:\n+ if not in_dir.endswith(\"/\"):\n+ in_dir = f\"{in_dir}/\"\n+\n+ dataset = create_dataset(\n+ f\"{in_dir}/**\",\n+ process_fn=process_fn)\n+\n+ dfs.append(\n+ create_dataframe(\n+ dataset,\n+ columns=[tag_column_name, \"filepath\"]))\n+\n+ df = pd.concat(dfs).reset_index(drop=True)\n return df\n \n \n@@ -29,7 +49,8 @@\n \"--in-dir\",\n type=str,\n default=None,\n- help=\"Path to directory with dataset.\"\n+ help=\"Path to directory with dataset\"\n+ \"or paths separated by commas for several datasets\"\n )\n \n parser.add_argument(\n", "issue": "Multiple dataset source for tag2label\nIt will be nice if I can choose not one input dir here https://github.com/catalyst-team/catalyst/blob/master/catalyst/contrib/scripts/tag2label.py\r\n\r\nFor example it can looks like:\r\n`catalyst-contrib tag2label --in-dir=dataset1,dataset2`\n", "before_files": [{"content": "import argparse\nimport json\nimport pandas as pd\n\nfrom catalyst.utils.data import create_dataset, create_dataframe, \\\n prepare_dataset_labeling, separate_tags\n\n\ndef prepare_df_from_dirs(in_dir, tag_column_name):\n if not in_dir.endswith(\"/\"):\n in_dir = f\"{in_dir}/\"\n\n dataset = create_dataset(\n f\"{in_dir}/**\", process_fn=lambda x: x.replace(f\"{in_dir}\", \"\")\n )\n df = create_dataframe(dataset, columns=[tag_column_name, \"filepath\"])\n\n return df\n\n\ndef build_args(parser):\n parser.add_argument(\n \"--in-csv\",\n type=str,\n default=None,\n help=\"Path to data in `.csv`.\"\n )\n parser.add_argument(\n \"--in-dir\",\n type=str,\n default=None,\n help=\"Path to directory with dataset.\"\n )\n\n parser.add_argument(\n \"--out-dataset\",\n type=str,\n default=None,\n required=True,\n help=\"Path to output dataframe\"\n )\n parser.add_argument(\n \"--out-labeling\",\n type=str,\n default=None,\n required=True,\n help=\"Path to output JSON\"\n )\n\n parser.add_argument(\n \"--tag-column\",\n type=str,\n default=\"tag\",\n help=\"Target column name\"\n )\n parser.add_argument(\n \"--tag-delim\",\n type=str,\n default=None,\n help=\"Separator if you want to use several target columns\"\n )\n\n return parser\n\n\ndef parse_args():\n parser = argparse.ArgumentParser()\n build_args(parser)\n args = parser.parse_args()\n return args\n\n\ndef main(args, _=None):\n if args.in_csv is not None:\n df = pd.read_csv(args.in_csv)\n elif args.in_dir is not None:\n df = prepare_df_from_dirs(args.in_dir, args.tag_column)\n else:\n raise Exception\n\n if args.tag_delim is not None:\n df = separate_tags(\n df, tag_column=args.tag_column, 
tag_delim=args.tag_delim\n )\n\n tag2lbl = prepare_dataset_labeling(df, args.tag_column)\n print(\"Num classes: \", len(tag2lbl))\n\n with open(args.out_labeling, \"w\") as fout:\n json.dump(tag2lbl, fout, indent=4)\n\n if args.out_dataset is not None:\n df.to_csv(args.out_dataset, index=False)\n\n\nif __name__ == \"__main__\":\n args = parse_args()\n main(args)\n", "path": "catalyst/contrib/scripts/tag2label.py"}], "after_files": [{"content": "import argparse\nimport json\nimport pandas as pd\n\nfrom catalyst.utils.data import create_dataset, create_dataframe, \\\n prepare_dataset_labeling, separate_tags\n\n\ndef prepare_df_from_dirs(in_dirs, tag_column_name):\n dfs = []\n splitted_dirs = in_dirs.strip(',').split(',')\n\n def process_fn(x):\n if len(splitted_dirs) == 1:\n # remove all in_dir part from path\n return x.replace(f\"{in_dir}\", \"\")\n else:\n # leaves last part of in_dir path,\n # which identifies separate in_dir\n return x.replace(\n f\"{in_dir}\",\n f\"{in_dir.split('/')[-2]}/\")\n\n for in_dir in splitted_dirs:\n if not in_dir.endswith(\"/\"):\n in_dir = f\"{in_dir}/\"\n\n dataset = create_dataset(\n f\"{in_dir}/**\",\n process_fn=process_fn)\n\n dfs.append(\n create_dataframe(\n dataset,\n columns=[tag_column_name, \"filepath\"]))\n\n df = pd.concat(dfs).reset_index(drop=True)\n return df\n\n\ndef build_args(parser):\n parser.add_argument(\n \"--in-csv\",\n type=str,\n default=None,\n help=\"Path to data in `.csv`.\"\n )\n parser.add_argument(\n \"--in-dir\",\n type=str,\n default=None,\n help=\"Path to directory with dataset\"\n \"or paths separated by commas for several datasets\"\n )\n\n parser.add_argument(\n \"--out-dataset\",\n type=str,\n default=None,\n required=True,\n help=\"Path to output dataframe\"\n )\n parser.add_argument(\n \"--out-labeling\",\n type=str,\n default=None,\n required=True,\n help=\"Path to output JSON\"\n )\n\n parser.add_argument(\n \"--tag-column\",\n type=str,\n default=\"tag\",\n help=\"Target column name\"\n )\n parser.add_argument(\n \"--tag-delim\",\n type=str,\n default=None,\n help=\"Separator if you want to use several target columns\"\n )\n\n return parser\n\n\ndef parse_args():\n parser = argparse.ArgumentParser()\n build_args(parser)\n args = parser.parse_args()\n return args\n\n\ndef main(args, _=None):\n if args.in_csv is not None:\n df = pd.read_csv(args.in_csv)\n elif args.in_dir is not None:\n df = prepare_df_from_dirs(args.in_dir, args.tag_column)\n else:\n raise Exception\n\n if args.tag_delim is not None:\n df = separate_tags(\n df, tag_column=args.tag_column, tag_delim=args.tag_delim\n )\n\n tag2lbl = prepare_dataset_labeling(df, args.tag_column)\n print(\"Num classes: \", len(tag2lbl))\n\n with open(args.out_labeling, \"w\") as fout:\n json.dump(tag2lbl, fout, indent=4)\n\n if args.out_dataset is not None:\n df.to_csv(args.out_dataset, index=False)\n\n\nif __name__ == \"__main__\":\n args = parse_args()\n main(args)\n", "path": "catalyst/contrib/scripts/tag2label.py"}]} | 1,063 | 446 |
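To see the comma-separated `--in-dir` behaviour from this record's golden diff in isolation, the sketch below re-creates the idea with only `os.walk` and pandas instead of catalyst's `create_dataset`/`create_dataframe` helpers, so it is an approximation of the logic rather than the library's code; it also assumes files live in `<dataset>/<tag>/<file>` folders.

```python
import os

import pandas as pd


def prepare_df_from_dirs(in_dirs: str, tag_column_name: str = "tag") -> pd.DataFrame:
    """Build a (tag, filepath) dataframe from one or several comma-separated dirs."""
    rows = []
    splitted_dirs = in_dirs.strip(",").split(",")
    for in_dir in splitted_dirs:
        in_dir = in_dir.rstrip("/") + "/"
        dataset_name = in_dir.rstrip("/").split("/")[-1]
        for root, _, files in os.walk(in_dir):
            for filename in files:
                relative = os.path.join(root, filename).replace(in_dir, "")
                parts = relative.split(os.sep)
                if len(parts) < 2:
                    continue  # expect <tag>/<file> under each dataset dir
                tag, filepath = parts[0], relative
                if len(splitted_dirs) > 1:
                    # keep the dataset folder in the path so files stay distinguishable
                    filepath = f"{dataset_name}/{relative}"
                rows.append({tag_column_name: tag, "filepath": filepath})
    return pd.DataFrame(rows, columns=[tag_column_name, "filepath"])


if __name__ == "__main__":
    print(prepare_df_from_dirs("dataset1,dataset2").head())
```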
gh_patches_debug_2691 | rasdani/github-patches | git_diff | hydroshare__hydroshare-2769 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add back Active, Date joined, and last login in mezzanine listing of users
In the 3/19/18 version of HydroShare when an admin listed users the fields listed were
[screenshot of the admin user list columns, not preserved]
At present when an admin lists users the fields are
[screenshot of the current admin user list columns, not preserved]
The fields Active, Date joined and last login are needed so that when there are problems with users creating and activating accounts (as occurred this week) an admin can list recent account creations and account creation attempts to assess the extent of the problem, and contact users that may have been impacted.
This regression was noted in https://github.com/hydroshare/hydroshare/pull/2677#issuecomment-374183106
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `hs_core/admin.py`
Content:
```
1 from django import forms
2 from django.contrib.auth.admin import UserAdmin
3 from django.contrib.auth.forms import UserCreationForm
4 from django.contrib.auth.models import User
5 from django.contrib.gis import admin
6 from django.contrib.contenttypes.admin import GenericTabularInline
7 from django.utils.translation import ugettext_lazy as _
8
9 from mezzanine.pages.admin import PageAdmin
10
11 from .models import *
12
13
14 class UserCreationFormExtended(UserCreationForm):
15 def __init__(self, *args, **kwargs):
16 super(UserCreationFormExtended, self).__init__(*args, **kwargs)
17 self.fields['email'] = forms.EmailField(label=_("E-mail"), max_length=75)
18
19 UserAdmin.add_form = UserCreationFormExtended
20 UserAdmin.add_fieldsets = (
21 (None, {
22 'classes': ('wide',),
23 'fields': ('email', 'username', 'password1', 'password2',)
24 }),
25 )
26
27 class InlineResourceFiles(GenericTabularInline):
28 model = ResourceFile
29
30 class GenericResourceAdmin(PageAdmin):
31 inlines = PageAdmin.inlines + [InlineResourceFiles]
32
33 admin.site.unregister(User)
34 admin.site.register(User, UserAdmin)
35 admin.site.register(GenericResource, GenericResourceAdmin)
36
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/hs_core/admin.py b/hs_core/admin.py
--- a/hs_core/admin.py
+++ b/hs_core/admin.py
@@ -23,6 +23,10 @@
'fields': ('email', 'username', 'password1', 'password2',)
}),
)
+UserAdmin.list_display = [
+ 'username', 'email', 'first_name', 'last_name', 'is_staff',
+ 'is_active', 'date_joined', 'last_login'
+]
class InlineResourceFiles(GenericTabularInline):
model = ResourceFile
| {"golden_diff": "diff --git a/hs_core/admin.py b/hs_core/admin.py\n--- a/hs_core/admin.py\n+++ b/hs_core/admin.py\n@@ -23,6 +23,10 @@\n 'fields': ('email', 'username', 'password1', 'password2',)\n }),\n )\n+UserAdmin.list_display = [\n+ 'username', 'email', 'first_name', 'last_name', 'is_staff',\n+ 'is_active', 'date_joined', 'last_login'\n+]\n \n class InlineResourceFiles(GenericTabularInline):\n model = ResourceFile\n", "issue": "Add back Active, Date joined, and last login in mezzanine listing of users\nIn the 3/19/18 version of HydroShare when an admin listed users the fields listed were\r\n\r\n\r\nAt present when an admin lists users the fields are\r\n\r\n\r\n\r\nThe fields Active, Date joined and last login are needed so that when there are problems with users creating and activating accounts (as occurred this week) an admin can list recent account creations and account creation attempts to assess the extent of the problem, and contact users that may have been impacted.\r\n\r\nThis regression was noted in https://github.com/hydroshare/hydroshare/pull/2677#issuecomment-374183106\r\n\n", "before_files": [{"content": "from django import forms\nfrom django.contrib.auth.admin import UserAdmin\nfrom django.contrib.auth.forms import UserCreationForm\nfrom django.contrib.auth.models import User\nfrom django.contrib.gis import admin\nfrom django.contrib.contenttypes.admin import GenericTabularInline\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom mezzanine.pages.admin import PageAdmin\n\nfrom .models import *\n\n\nclass UserCreationFormExtended(UserCreationForm):\n def __init__(self, *args, **kwargs):\n super(UserCreationFormExtended, self).__init__(*args, **kwargs)\n self.fields['email'] = forms.EmailField(label=_(\"E-mail\"), max_length=75)\n\nUserAdmin.add_form = UserCreationFormExtended\nUserAdmin.add_fieldsets = (\n (None, {\n 'classes': ('wide',),\n 'fields': ('email', 'username', 'password1', 'password2',)\n }),\n)\n\nclass InlineResourceFiles(GenericTabularInline):\n model = ResourceFile\n\nclass GenericResourceAdmin(PageAdmin):\n inlines = PageAdmin.inlines + [InlineResourceFiles]\n\nadmin.site.unregister(User)\nadmin.site.register(User, UserAdmin)\nadmin.site.register(GenericResource, GenericResourceAdmin)\n", "path": "hs_core/admin.py"}], "after_files": [{"content": "from django import forms\nfrom django.contrib.auth.admin import UserAdmin\nfrom django.contrib.auth.forms import UserCreationForm\nfrom django.contrib.auth.models import User\nfrom django.contrib.gis import admin\nfrom django.contrib.contenttypes.admin import GenericTabularInline\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom mezzanine.pages.admin import PageAdmin\n\nfrom .models import *\n\n\nclass UserCreationFormExtended(UserCreationForm):\n def __init__(self, *args, **kwargs):\n super(UserCreationFormExtended, self).__init__(*args, **kwargs)\n self.fields['email'] = forms.EmailField(label=_(\"E-mail\"), max_length=75)\n\nUserAdmin.add_form = UserCreationFormExtended\nUserAdmin.add_fieldsets = (\n (None, {\n 'classes': ('wide',),\n 'fields': ('email', 'username', 'password1', 'password2',)\n }),\n)\nUserAdmin.list_display = [\n 'username', 'email', 'first_name', 'last_name', 'is_staff',\n 'is_active', 'date_joined', 'last_login'\n]\n\nclass InlineResourceFiles(GenericTabularInline):\n model = ResourceFile\n\nclass GenericResourceAdmin(PageAdmin):\n inlines = PageAdmin.inlines + [InlineResourceFiles]\n\nadmin.site.unregister(User)\nadmin.site.register(User, 
UserAdmin)\nadmin.site.register(GenericResource, GenericResourceAdmin)\n", "path": "hs_core/admin.py"}]} | 844 | 127 |
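The one-line mechanism behind this record's fix is Django's `ModelAdmin.list_display`, which picks the columns shown on the admin change-list page. A minimal sketch of the same idea, written as a subclass inside an app's `admin.py` (the actual fix instead assigns to the stock `UserAdmin` class in place, and hydroshare imports its admin from `django.contrib.gis`), might look like:

```python
from django.contrib import admin
from django.contrib.auth.admin import UserAdmin
from django.contrib.auth.models import User


class ExtendedUserAdmin(UserAdmin):
    # Columns shown when listing users, including the three that went missing:
    # is_active, date_joined and last_login.
    list_display = (
        'username', 'email', 'first_name', 'last_name', 'is_staff',
        'is_active', 'date_joined', 'last_login',
    )


# Swap the stock registration for the extended one.
admin.site.unregister(User)
admin.site.register(User, ExtendedUserAdmin)
```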
gh_patches_debug_17207 | rasdani/github-patches | git_diff | pytorch__pytorch-4461 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
JIT leaks memory in nets with dropout layer
If I add a dropout layer in the FC layers, a JIT-compiled forward pass of a net starts to seriously leak memory in the backward pass. I made a simple test case below. When it runs, you will see rapidly increasing GPU/CPU memory usage in nvidia-smi/top and it will crash with an out-of-memory exception soon. Without the dropout layer or the backward pass, it works fine.
```python
import torch
from torch import jit
import torch.nn as nn
from torch.autograd import Variable
class TestNet(nn.Module):
def __init__(self):
super(TestNet, self).__init__()
self.net1 = nn.Linear(100, 200)
self.net2 = nn.Linear(200, 1)
self.sigmoid = nn.Sigmoid()
self.ReLU = nn.ReLU(inplace=False)
self.drop = nn.Dropout(0.5)
def forward(self, V):
return self.sigmoid(self.net2(self.drop(self.ReLU(self.net1(V))))).squeeze()
use_cuda = True
net = TestNet()
criterion = nn.BCELoss()
if use_cuda:
net.cuda()
criterion.cuda()
V = Variable(torch.randn(100, 100)).cuda()
label = Variable(torch.randn(100)).cuda()
else:
V = Variable(torch.randn(100, 100))
label = Variable(torch.randn(100))
net.train()
fwd = jit.compile(net.forward)
for i in range(0,1000000):
r = fwd(V)
err = criterion(r, label)
err.backward()
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `torch/nn/_functions/dropout.py`
Content:
```
1 import torch
2 from torch.autograd.function import InplaceFunction
3 from torch.autograd import Variable
4 from itertools import repeat
5
6
7 class Dropout(InplaceFunction):
8
9 @staticmethod
10 def _make_noise(input):
11 return input.new().resize_as_(input)
12
13 @staticmethod
14 def symbolic(g, input, p=0.5, train=False, inplace=False):
15 # See Note [Export inplace]
16 r, _ = g.op("Dropout", input, ratio_f=p, is_test_i=not train, outputs=2)
17 return r
18
19 @classmethod
20 def forward(cls, ctx, input, p=0.5, train=False, inplace=False):
21 if p < 0 or p > 1:
22 raise ValueError("dropout probability has to be between 0 and 1, "
23 "but got {}".format(p))
24 ctx.p = p
25 ctx.train = train
26 ctx.inplace = inplace
27
28 if ctx.inplace:
29 ctx.mark_dirty(input)
30 output = input
31 else:
32 output = input.clone()
33
34 if ctx.p > 0 and ctx.train:
35 ctx.noise = cls._make_noise(input)
36 if ctx.p == 1:
37 ctx.noise.fill_(0)
38 else:
39 ctx.noise.bernoulli_(1 - ctx.p).div_(1 - ctx.p)
40 ctx.noise = ctx.noise.expand_as(input)
41 output.mul_(ctx.noise)
42
43 return output
44
45 @staticmethod
46 def backward(ctx, grad_output):
47 if ctx.p > 0 and ctx.train:
48 return grad_output.mul(Variable(ctx.noise)), None, None, None
49 else:
50 return grad_output, None, None, None
51
52
53 class FeatureDropout(Dropout):
54
55 @staticmethod
56 def symbolic(g, input, p=0.5, train=False, inplace=False):
57 # See Note [Export inplace]
58 # NB: In inference mode, FeatureDropout is exported as an identity op.
59 from torch.onnx.symbolic import _unimplemented
60 if train:
61 return _unimplemented("FeatureDropout", "training mode")
62 return input
63
64 @staticmethod
65 def _make_noise(input):
66 return input.new().resize_(input.size(0), input.size(1),
67 *repeat(1, input.dim() - 2))
68
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/torch/nn/_functions/dropout.py b/torch/nn/_functions/dropout.py
--- a/torch/nn/_functions/dropout.py
+++ b/torch/nn/_functions/dropout.py
@@ -25,20 +25,22 @@
ctx.train = train
ctx.inplace = inplace
+ if ctx.p == 0 or not ctx.train:
+ return input
+
if ctx.inplace:
ctx.mark_dirty(input)
output = input
else:
output = input.clone()
- if ctx.p > 0 and ctx.train:
- ctx.noise = cls._make_noise(input)
- if ctx.p == 1:
- ctx.noise.fill_(0)
- else:
- ctx.noise.bernoulli_(1 - ctx.p).div_(1 - ctx.p)
- ctx.noise = ctx.noise.expand_as(input)
- output.mul_(ctx.noise)
+ ctx.noise = cls._make_noise(input)
+ if ctx.p == 1:
+ ctx.noise.fill_(0)
+ else:
+ ctx.noise.bernoulli_(1 - ctx.p).div_(1 - ctx.p)
+ ctx.noise = ctx.noise.expand_as(input)
+ output.mul_(ctx.noise)
return output
| {"golden_diff": "diff --git a/torch/nn/_functions/dropout.py b/torch/nn/_functions/dropout.py\n--- a/torch/nn/_functions/dropout.py\n+++ b/torch/nn/_functions/dropout.py\n@@ -25,20 +25,22 @@\n ctx.train = train\n ctx.inplace = inplace\n \n+ if ctx.p == 0 or not ctx.train:\n+ return input\n+\n if ctx.inplace:\n ctx.mark_dirty(input)\n output = input\n else:\n output = input.clone()\n \n- if ctx.p > 0 and ctx.train:\n- ctx.noise = cls._make_noise(input)\n- if ctx.p == 1:\n- ctx.noise.fill_(0)\n- else:\n- ctx.noise.bernoulli_(1 - ctx.p).div_(1 - ctx.p)\n- ctx.noise = ctx.noise.expand_as(input)\n- output.mul_(ctx.noise)\n+ ctx.noise = cls._make_noise(input)\n+ if ctx.p == 1:\n+ ctx.noise.fill_(0)\n+ else:\n+ ctx.noise.bernoulli_(1 - ctx.p).div_(1 - ctx.p)\n+ ctx.noise = ctx.noise.expand_as(input)\n+ output.mul_(ctx.noise)\n \n return output\n", "issue": "JIT leaks memory in nets with dropout layer \nif I add a dropout layer in FC layers, a JIT compiled forward pass of a net starts to seriously leak memory in the backward pass. I made a simple test case below. When it runs, you will see a rapidly increasing GPU/CPU memory usage in nvidia-smi/top and it will crash with an out of memory exception soon. Without the dropout layer or the backward pass, it works fine. \r\n\r\n```python\r\nimport torch\r\nfrom torch import jit\r\nimport torch.nn as nn\r\nfrom torch.autograd import Variable\r\n\r\nclass TestNet(nn.Module):\r\n def __init__(self):\r\n super(TestNet, self).__init__()\r\n self.net1 = nn.Linear(100, 200)\r\n self.net2 = nn.Linear(200, 1)\r\n self.sigmoid = nn.Sigmoid()\r\n self.ReLU = nn.ReLU(inplace=False)\r\n self.drop = nn.Dropout(0.5)\r\n \r\n def forward(self, V):\r\n return self.sigmoid(self.net2(self.drop(self.ReLU(self.net1(V))))).squeeze() \r\n\r\n\r\nuse_cuda = True\r\nnet = TestNet()\r\ncriterion = nn.BCELoss()\r\nif use_cuda:\r\n net.cuda()\r\n criterion.cuda()\r\n V = Variable(torch.randn(100, 100)).cuda()\r\n label = Variable(torch.randn(100)).cuda()\r\nelse:\r\n V = Variable(torch.randn(100, 100))\r\n label = Variable(torch.randn(100))\r\n\r\nnet.train()\r\nfwd = jit.compile(net.forward)\r\nfor i in range(0,1000000):\r\n r = fwd(V)\r\n err = criterion(r, label)\r\n err.backward() \r\n\r\n```\r\n\r\n\n", "before_files": [{"content": "import torch\nfrom torch.autograd.function import InplaceFunction\nfrom torch.autograd import Variable\nfrom itertools import repeat\n\n\nclass Dropout(InplaceFunction):\n\n @staticmethod\n def _make_noise(input):\n return input.new().resize_as_(input)\n\n @staticmethod\n def symbolic(g, input, p=0.5, train=False, inplace=False):\n # See Note [Export inplace]\n r, _ = g.op(\"Dropout\", input, ratio_f=p, is_test_i=not train, outputs=2)\n return r\n\n @classmethod\n def forward(cls, ctx, input, p=0.5, train=False, inplace=False):\n if p < 0 or p > 1:\n raise ValueError(\"dropout probability has to be between 0 and 1, \"\n \"but got {}\".format(p))\n ctx.p = p\n ctx.train = train\n ctx.inplace = inplace\n\n if ctx.inplace:\n ctx.mark_dirty(input)\n output = input\n else:\n output = input.clone()\n\n if ctx.p > 0 and ctx.train:\n ctx.noise = cls._make_noise(input)\n if ctx.p == 1:\n ctx.noise.fill_(0)\n else:\n ctx.noise.bernoulli_(1 - ctx.p).div_(1 - ctx.p)\n ctx.noise = ctx.noise.expand_as(input)\n output.mul_(ctx.noise)\n\n return output\n\n @staticmethod\n def backward(ctx, grad_output):\n if ctx.p > 0 and ctx.train:\n return grad_output.mul(Variable(ctx.noise)), None, None, None\n else:\n return grad_output, None, None, None\n\n\nclass 
FeatureDropout(Dropout):\n\n @staticmethod\n def symbolic(g, input, p=0.5, train=False, inplace=False):\n # See Note [Export inplace]\n # NB: In inference mode, FeatureDropout is exported as an identity op.\n from torch.onnx.symbolic import _unimplemented\n if train:\n return _unimplemented(\"FeatureDropout\", \"training mode\")\n return input\n\n @staticmethod\n def _make_noise(input):\n return input.new().resize_(input.size(0), input.size(1),\n *repeat(1, input.dim() - 2))\n", "path": "torch/nn/_functions/dropout.py"}], "after_files": [{"content": "import torch\nfrom torch.autograd.function import InplaceFunction\nfrom torch.autograd import Variable\nfrom itertools import repeat\n\n\nclass Dropout(InplaceFunction):\n\n @staticmethod\n def _make_noise(input):\n return input.new().resize_as_(input)\n\n @staticmethod\n def symbolic(g, input, p=0.5, train=False, inplace=False):\n # See Note [Export inplace]\n r, _ = g.op(\"Dropout\", input, ratio_f=p, is_test_i=not train, outputs=2)\n return r\n\n @classmethod\n def forward(cls, ctx, input, p=0.5, train=False, inplace=False):\n if p < 0 or p > 1:\n raise ValueError(\"dropout probability has to be between 0 and 1, \"\n \"but got {}\".format(p))\n ctx.p = p\n ctx.train = train\n ctx.inplace = inplace\n\n if ctx.p == 0 or not ctx.train:\n return input\n\n if ctx.inplace:\n ctx.mark_dirty(input)\n output = input\n else:\n output = input.clone()\n\n ctx.noise = cls._make_noise(input)\n if ctx.p == 1:\n ctx.noise.fill_(0)\n else:\n ctx.noise.bernoulli_(1 - ctx.p).div_(1 - ctx.p)\n ctx.noise = ctx.noise.expand_as(input)\n output.mul_(ctx.noise)\n\n return output\n\n @staticmethod\n def backward(ctx, grad_output):\n if ctx.p > 0 and ctx.train:\n return grad_output.mul(Variable(ctx.noise)), None, None, None\n else:\n return grad_output, None, None, None\n\n\nclass FeatureDropout(Dropout):\n\n @staticmethod\n def symbolic(g, input, p=0.5, train=False, inplace=False):\n # See Note [Export inplace]\n # NB: In inference mode, FeatureDropout is exported as an identity op.\n from torch.onnx.symbolic import _unimplemented\n if train:\n return _unimplemented(\"FeatureDropout\", \"training mode\")\n return input\n\n @staticmethod\n def _make_noise(input):\n return input.new().resize_(input.size(0), input.size(1),\n *repeat(1, input.dim() - 2))\n", "path": "torch/nn/_functions/dropout.py"}]} | 1,266 | 298 |
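The essence of the fix in this record is an early return: when dropout is a no-op (`p == 0` or eval mode) the forward pass exits before any noise tensor is created, so nothing extra is kept alive on `ctx` across iterations. The stand-alone autograd `Function` below demonstrates that pattern; it is a simplified illustration (it skips the `p == 1` special case and the in-place path), not the real `torch.nn._functions.dropout` code.

```python
import torch
from torch.autograd import Function


class SimpleDropout(Function):

    @staticmethod
    def forward(ctx, input, p=0.5, train=False):
        ctx.p = p
        ctx.train = train
        if p == 0 or not train:
            # Early return: no noise tensor is allocated or stored on ctx.
            return input
        noise = input.new_empty(input.size()).bernoulli_(1 - p).div_(1 - p)
        ctx.noise = noise
        return input * noise

    @staticmethod
    def backward(ctx, grad_output):
        if ctx.p == 0 or not ctx.train:
            return grad_output, None, None
        return grad_output * ctx.noise, None, None


if __name__ == "__main__":
    x = torch.randn(4, 3, requires_grad=True)
    SimpleDropout.apply(x, 0.5, True).sum().backward()
    print(x.grad)
```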
gh_patches_debug_23964 | rasdani/github-patches | git_diff | fidals__shopelectro-582 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
ΠΡΠ°Π²Ρ ΡΠ°Π±Π»ΠΎΠ½ ΠΏΠΈΡΠ΅ΠΌ
Π ΡΠ°Π±Π»ΠΎΠ½Π΅ ΠΏΠΈΡΡΠΌΠ°, ΠΊΠΎΡΠΎΡΡΠΉ ΠΏΡΠΈΡ
ΠΎΠ΄ΠΈΡ Π΅ΡΠ»ΠΈ ΠΎΠΏΠ»Π°ΡΠΈΠ²Π°ΡΡ ΠΎΠ½Π»Π°ΠΉΠ½, Π½Π΅Ρ ΡΡΡΠΎΠΊΠΈ ΠΊΠΎΠΌΠΌΠ΅Π½ΡΠ°ΡΠΈΡΠΌΠΈ. ΠΠ°ΠΊ Π² ΠΎΠ±ΡΡΠ½ΠΎΠΌ ΡΠ°Π±Π»ΠΎΠ½Π΅. ΠΠΎΠ±Π°Π²Ρ.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `shopelectro/views/ecommerce.py`
Content:
```
1 from django.conf import settings
2 from django.core import serializers
3 from django.http import HttpResponse, JsonResponse
4 from django.shortcuts import get_object_or_404, render
5 from django.views.decorators.http import require_POST
6
7 from ecommerce import mailer, views as ec_views
8 from pages.models import CustomPage
9
10 from shopelectro.cart import SECart
11 from shopelectro.forms import OrderForm
12 from shopelectro.models import Product, Order
13
14
15 # ECOMMERCE VIEWS
16 class OrderPage(ec_views.OrderPage):
17 order_form = OrderForm
18 cart = SECart
19
20 def get_context_data(self, request, **kwargs):
21 return {
22 **super(OrderPage, self).get_context_data(request, **kwargs),
23 'page': CustomPage.objects.get(slug='order'),
24 }
25
26
27 class AddToCart(ec_views.AddToCart):
28 cart = SECart
29 product_model = Product
30 order_form = OrderForm
31
32
33 class RemoveFromCart(ec_views.RemoveFromCart):
34 cart = SECart
35 product_model = Product
36 order_form = OrderForm
37
38
39 class ChangeCount(ec_views.ChangeCount):
40 cart = SECart
41 product_model = Product
42 order_form = OrderForm
43
44
45 class FlushCart(ec_views.FlushCart):
46 product_model = Product
47 order_form = OrderForm
48
49
50 class OrderSuccess(ec_views.OrderSuccess):
51 order = Order.objects.all().prefetch_related('positions')
52
53 def get_context_data(self, **kwargs):
54 context = super().get_context_data(**kwargs)
55 positions_json = serializers.serialize(
56 'json',
57 context['order'].positions.all(),
58 fields=['name', 'quantity', 'price'],
59 )
60
61 return {
62 **context,
63 'positions_json': positions_json,
64 }
65
66
67 @require_POST
68 def one_click_buy(request):
69 """
70 Handle one-click-buy.
71
72 Accept XHR, save Order to DB, send mail about it
73 and return 200 OK.
74 """
75 SECart(request.session).clear()
76
77 cart = SECart(request.session)
78 product = get_object_or_404(Product, id=request.POST['product'])
79 cart.add(product, int(request.POST['quantity']))
80 order = Order(phone=request.POST['phone'])
81 order.set_positions(cart)
82 ec_views.save_order_to_session(request.session, order)
83 mailer.send_order(
84 subject=settings.EMAIL_SUBJECTS['one_click'],
85 order=order,
86 to_customer=False,
87 )
88 return HttpResponse('ok')
89
90
91 @require_POST
92 def order_call(request):
93 """Send email about ordered call."""
94 phone, time, url = ec_views.get_keys_from_post(
95 request, 'phone', 'time', 'url')
96
97 mailer.send_backcall(
98 subject=settings.EMAIL_SUBJECTS['call'],
99 phone=phone,
100 time=time,
101 url=url,
102 )
103
104 return HttpResponse('ok')
105
106
107 class YandexOrder(OrderPage):
108
109 def post(self, request):
110 cart = self.cart(request.session)
111 form = self.order_form(request.POST.dict())
112
113 if not form.is_valid():
114 return render(request, self.template, {'cart': cart, 'form': form})
115
116 order = form.save()
117 order.set_positions(cart)
118 ec_views.save_order_to_session(request.session, order)
119
120 # Took form fields from Yandex docs https://goo.gl/afKfsz
121 response_data = {
122 'yandex_kassa_link': settings.YANDEX_KASSA_LINK, # Required
123 'shopId': settings.SHOP['id'], # Required
124 'scid': settings.SHOP['scid'], # Required
125 'shopSuccessURL': settings.SHOP['success_url'],
126 'shopFailURL': settings.SHOP['fail_url'],
127 'customerNumber': order.id, # Required
128 'sum': order.total_price, # Required
129 'orderNumber': order.fake_order_number,
130 'cps_phone': order.phone,
131 'cps_email': order.email,
132 'paymentType': request.POST.get('payment_type'),
133 }
134
135 return JsonResponse(response_data)
136
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/shopelectro/views/ecommerce.py b/shopelectro/views/ecommerce.py
--- a/shopelectro/views/ecommerce.py
+++ b/shopelectro/views/ecommerce.py
@@ -1,3 +1,5 @@
+import json
+
from django.conf import settings
from django.core import serializers
from django.http import HttpResponse, JsonResponse
@@ -18,9 +20,13 @@
cart = SECart
def get_context_data(self, request, **kwargs):
+ data = super().get_context_data(request, **kwargs)
return {
- **super(OrderPage, self).get_context_data(request, **kwargs),
+ **data,
'page': CustomPage.objects.get(slug='order'),
+ 'raw_order_fields': json.dumps({
+ field.html_name: f'#{field.id_for_label}' for field in data['form']
+ }),
}
@@ -108,8 +114,7 @@
def post(self, request):
cart = self.cart(request.session)
- form = self.order_form(request.POST.dict())
-
+ form = self.order_form(request.POST)
if not form.is_valid():
return render(request, self.template, {'cart': cart, 'form': form})
| {"golden_diff": "diff --git a/shopelectro/views/ecommerce.py b/shopelectro/views/ecommerce.py\n--- a/shopelectro/views/ecommerce.py\n+++ b/shopelectro/views/ecommerce.py\n@@ -1,3 +1,5 @@\n+import json\n+\n from django.conf import settings\n from django.core import serializers\n from django.http import HttpResponse, JsonResponse\n@@ -18,9 +20,13 @@\n cart = SECart\n \n def get_context_data(self, request, **kwargs):\n+ data = super().get_context_data(request, **kwargs)\n return {\n- **super(OrderPage, self).get_context_data(request, **kwargs),\n+ **data,\n 'page': CustomPage.objects.get(slug='order'),\n+ 'raw_order_fields': json.dumps({\n+ field.html_name: f'#{field.id_for_label}' for field in data['form']\n+ }),\n }\n \n \n@@ -108,8 +114,7 @@\n \n def post(self, request):\n cart = self.cart(request.session)\n- form = self.order_form(request.POST.dict())\n-\n+ form = self.order_form(request.POST)\n if not form.is_valid():\n return render(request, self.template, {'cart': cart, 'form': form})\n", "issue": "\u041f\u0440\u0430\u0432\u044c \u0448\u0430\u0431\u043b\u043e\u043d \u043f\u0438\u0441\u0435\u043c\n\u0412 \u0448\u0430\u0431\u043b\u043e\u043d\u0435 \u043f\u0438\u0441\u044c\u043c\u0430, \u043a\u043e\u0442\u043e\u0440\u044b\u0439 \u043f\u0440\u0438\u0445\u043e\u0434\u0438\u0442 \u0435\u0441\u043b\u0438 \u043e\u043f\u043b\u0430\u0447\u0438\u0432\u0430\u0442\u044c \u043e\u043d\u043b\u0430\u0439\u043d, \u043d\u0435\u0442 \u0441\u0442\u0440\u043e\u043a\u0438 \u043a\u043e\u043c\u043c\u0435\u043d\u0442\u0430\u0440\u0438\u044f\u043c\u0438. \u041a\u0430\u043a \u0432 \u043e\u0431\u044b\u0447\u043d\u043e\u043c \u0448\u0430\u0431\u043b\u043e\u043d\u0435. \u0414\u043e\u0431\u0430\u0432\u044c.\n", "before_files": [{"content": "from django.conf import settings\nfrom django.core import serializers\nfrom django.http import HttpResponse, JsonResponse\nfrom django.shortcuts import get_object_or_404, render\nfrom django.views.decorators.http import require_POST\n\nfrom ecommerce import mailer, views as ec_views\nfrom pages.models import CustomPage\n\nfrom shopelectro.cart import SECart\nfrom shopelectro.forms import OrderForm\nfrom shopelectro.models import Product, Order\n\n\n# ECOMMERCE VIEWS\nclass OrderPage(ec_views.OrderPage):\n order_form = OrderForm\n cart = SECart\n\n def get_context_data(self, request, **kwargs):\n return {\n **super(OrderPage, self).get_context_data(request, **kwargs),\n 'page': CustomPage.objects.get(slug='order'),\n }\n\n\nclass AddToCart(ec_views.AddToCart):\n cart = SECart\n product_model = Product\n order_form = OrderForm\n\n\nclass RemoveFromCart(ec_views.RemoveFromCart):\n cart = SECart\n product_model = Product\n order_form = OrderForm\n\n\nclass ChangeCount(ec_views.ChangeCount):\n cart = SECart\n product_model = Product\n order_form = OrderForm\n\n\nclass FlushCart(ec_views.FlushCart):\n product_model = Product\n order_form = OrderForm\n\n\nclass OrderSuccess(ec_views.OrderSuccess):\n order = Order.objects.all().prefetch_related('positions')\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n positions_json = serializers.serialize(\n 'json',\n context['order'].positions.all(),\n fields=['name', 'quantity', 'price'],\n )\n\n return {\n **context,\n 'positions_json': positions_json,\n }\n\n\n@require_POST\ndef one_click_buy(request):\n \"\"\"\n Handle one-click-buy.\n\n Accept XHR, save Order to DB, send mail about it\n and return 200 OK.\n \"\"\"\n SECart(request.session).clear()\n\n cart = SECart(request.session)\n product = 
get_object_or_404(Product, id=request.POST['product'])\n cart.add(product, int(request.POST['quantity']))\n order = Order(phone=request.POST['phone'])\n order.set_positions(cart)\n ec_views.save_order_to_session(request.session, order)\n mailer.send_order(\n subject=settings.EMAIL_SUBJECTS['one_click'],\n order=order,\n to_customer=False,\n )\n return HttpResponse('ok')\n\n\n@require_POST\ndef order_call(request):\n \"\"\"Send email about ordered call.\"\"\"\n phone, time, url = ec_views.get_keys_from_post(\n request, 'phone', 'time', 'url')\n\n mailer.send_backcall(\n subject=settings.EMAIL_SUBJECTS['call'],\n phone=phone,\n time=time,\n url=url,\n )\n\n return HttpResponse('ok')\n\n\nclass YandexOrder(OrderPage):\n\n def post(self, request):\n cart = self.cart(request.session)\n form = self.order_form(request.POST.dict())\n\n if not form.is_valid():\n return render(request, self.template, {'cart': cart, 'form': form})\n\n order = form.save()\n order.set_positions(cart)\n ec_views.save_order_to_session(request.session, order)\n\n # Took form fields from Yandex docs https://goo.gl/afKfsz\n response_data = {\n 'yandex_kassa_link': settings.YANDEX_KASSA_LINK, # Required\n 'shopId': settings.SHOP['id'], # Required\n 'scid': settings.SHOP['scid'], # Required\n 'shopSuccessURL': settings.SHOP['success_url'],\n 'shopFailURL': settings.SHOP['fail_url'],\n 'customerNumber': order.id, # Required\n 'sum': order.total_price, # Required\n 'orderNumber': order.fake_order_number,\n 'cps_phone': order.phone,\n 'cps_email': order.email,\n 'paymentType': request.POST.get('payment_type'),\n }\n\n return JsonResponse(response_data)\n", "path": "shopelectro/views/ecommerce.py"}], "after_files": [{"content": "import json\n\nfrom django.conf import settings\nfrom django.core import serializers\nfrom django.http import HttpResponse, JsonResponse\nfrom django.shortcuts import get_object_or_404, render\nfrom django.views.decorators.http import require_POST\n\nfrom ecommerce import mailer, views as ec_views\nfrom pages.models import CustomPage\n\nfrom shopelectro.cart import SECart\nfrom shopelectro.forms import OrderForm\nfrom shopelectro.models import Product, Order\n\n\n# ECOMMERCE VIEWS\nclass OrderPage(ec_views.OrderPage):\n order_form = OrderForm\n cart = SECart\n\n def get_context_data(self, request, **kwargs):\n data = super().get_context_data(request, **kwargs)\n return {\n **data,\n 'page': CustomPage.objects.get(slug='order'),\n 'raw_order_fields': json.dumps({\n field.html_name: f'#{field.id_for_label}' for field in data['form']\n }),\n }\n\n\nclass AddToCart(ec_views.AddToCart):\n cart = SECart\n product_model = Product\n order_form = OrderForm\n\n\nclass RemoveFromCart(ec_views.RemoveFromCart):\n cart = SECart\n product_model = Product\n order_form = OrderForm\n\n\nclass ChangeCount(ec_views.ChangeCount):\n cart = SECart\n product_model = Product\n order_form = OrderForm\n\n\nclass FlushCart(ec_views.FlushCart):\n product_model = Product\n order_form = OrderForm\n\n\nclass OrderSuccess(ec_views.OrderSuccess):\n order = Order.objects.all().prefetch_related('positions')\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n positions_json = serializers.serialize(\n 'json',\n context['order'].positions.all(),\n fields=['name', 'quantity', 'price'],\n )\n\n return {\n **context,\n 'positions_json': positions_json,\n }\n\n\n@require_POST\ndef one_click_buy(request):\n \"\"\"\n Handle one-click-buy.\n\n Accept XHR, save Order to DB, send mail about it\n and return 200 
OK.\n \"\"\"\n SECart(request.session).clear()\n\n cart = SECart(request.session)\n product = get_object_or_404(Product, id=request.POST['product'])\n cart.add(product, int(request.POST['quantity']))\n order = Order(phone=request.POST['phone'])\n order.set_positions(cart)\n ec_views.save_order_to_session(request.session, order)\n mailer.send_order(\n subject=settings.EMAIL_SUBJECTS['one_click'],\n order=order,\n to_customer=False,\n )\n return HttpResponse('ok')\n\n\n@require_POST\ndef order_call(request):\n \"\"\"Send email about ordered call.\"\"\"\n phone, time, url = ec_views.get_keys_from_post(\n request, 'phone', 'time', 'url')\n\n mailer.send_backcall(\n subject=settings.EMAIL_SUBJECTS['call'],\n phone=phone,\n time=time,\n url=url,\n )\n\n return HttpResponse('ok')\n\n\nclass YandexOrder(OrderPage):\n\n def post(self, request):\n cart = self.cart(request.session)\n form = self.order_form(request.POST)\n if not form.is_valid():\n return render(request, self.template, {'cart': cart, 'form': form})\n\n order = form.save()\n order.set_positions(cart)\n ec_views.save_order_to_session(request.session, order)\n\n # Took form fields from Yandex docs https://goo.gl/afKfsz\n response_data = {\n 'yandex_kassa_link': settings.YANDEX_KASSA_LINK, # Required\n 'shopId': settings.SHOP['id'], # Required\n 'scid': settings.SHOP['scid'], # Required\n 'shopSuccessURL': settings.SHOP['success_url'],\n 'shopFailURL': settings.SHOP['fail_url'],\n 'customerNumber': order.id, # Required\n 'sum': order.total_price, # Required\n 'orderNumber': order.fake_order_number,\n 'cps_phone': order.phone,\n 'cps_email': order.email,\n 'paymentType': request.POST.get('payment_type'),\n }\n\n return JsonResponse(response_data)\n", "path": "shopelectro/views/ecommerce.py"}]} | 1,492 | 279 |
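The golden diff in the row above swaps `self.order_form(request.POST.dict())` for `self.order_form(request.POST)`. A minimal sketch of the Django behavior that motivates this change (assuming a standard Django install; the field names and values are illustrative only):

```python
# QueryDict.dict() keeps only the last value for each key, so binding a form to
# request.POST.dict() silently drops repeated fields; the QueryDict itself does not.
from django.conf import settings

settings.configure()  # minimal setup so QueryDict can be used outside a project
from django.http import QueryDict

qd = QueryDict("tag=a&tag=b&phone=123")
print(qd.getlist("tag"))  # ['a', 'b'] -- all submitted values preserved
print(qd.dict())          # {'tag': 'b', 'phone': '123'} -- only the last 'tag' survives
```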
gh_patches_debug_1234 | rasdani/github-patches | git_diff | feast-dev__feast-244 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Feast cli config file should be settable by an env var
**Is your feature request related to a problem? Please describe.**
If I have multiple feast instances, I want to be able to set different .feast files to configure the CLI.
**Describe the solution you'd like**
export FEAST_CONFIG=path/to/feast/configfile
it should default to ~/.feast
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sdk/python/feast/config.py`
Content:
```
1 #
2 # Copyright 2019 The Feast Authors
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # https://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15 #
16
17 from os.path import expanduser, join
18 import logging
19 import os
20 import sys
21 from typing import Dict
22 from urllib.parse import urlparse
23 from urllib.parse import ParseResult
24
25 import toml
26
27 _logger = logging.getLogger(__name__)
28
29 feast_configuration_properties = {"core_url": "URL", "serving_url": "URL"}
30
31 CONFIGURATION_FILE_DIR = ".feast"
32 CONFIGURATION_FILE_NAME = "config.toml"
33
34
35 def get_or_create_config() -> Dict:
36 """
37 Creates or gets the Feast users active configuration
38 :return: dictionary of Feast properties
39 """
40
41 user_config_file_dir, user_config_file_path = _get_config_file_locations()
42
43 if not os.path.exists(os.path.dirname(user_config_file_dir)):
44 os.makedirs(os.path.dirname(user_config_file_dir))
45
46 if not os.path.isfile(user_config_file_path):
47 _save_config(user_config_file_path, _fproperties_to_dict())
48
49 try:
50 return toml.load(user_config_file_path)
51 except FileNotFoundError:
52 _logger.error(
53 "Could not find Feast configuration file " + user_config_file_path
54 )
55 sys.exit(1)
56 except toml.decoder.TomlDecodeError:
57 _logger.error(
58 "Could not decode Feast configuration file " + user_config_file_path
59 )
60 sys.exit(1)
61 except Exception as e:
62 _logger.error(e)
63 sys.exit(1)
64
65
66 def set_property(fproperty: str, value: str):
67 """
68 Sets a single property in the Feast users local configuration file
69 :param fproperty: Feast property name
70 :param value: Feast property value
71 """
72
73 if _is_valid_property(fproperty, value):
74 active_feast_config = get_or_create_config()
75 active_feast_config[fproperty] = value
76 _, user_config_file_path = _get_config_file_locations()
77 _save_config(user_config_file_path, active_feast_config)
78 print("Updated property [%s]" % fproperty)
79 else:
80 _logger.error("Invalid property selected")
81 sys.exit(1)
82
83
84 def get_config_property_or_fail(fproperty):
85 active_feast_config = get_or_create_config()
86 if _is_valid_property(fproperty, active_feast_config[fproperty]):
87 return active_feast_config[fproperty]
88 _logger.error("Could not load Feast property from configuration: %s" % fproperty)
89 sys.exit(1)
90
91
92 def _fproperties_to_dict() -> Dict[str, str]:
93 prop_dict = {}
94 for fproperty in feast_configuration_properties:
95 prop_dict[fproperty] = ""
96 return prop_dict
97
98
99 def _is_valid_property(fproperty: str, value: str) -> bool:
100 """
101 Validates both a Feast property as well as value
102 :param fproperty: Feast property name
103 :param value: Feast property value
104 :return: Returns True if property and value are valid
105 """
106
107 if fproperty not in feast_configuration_properties:
108 _logger.error("You are trying to set an invalid property")
109 sys.exit(1)
110
111 fprop_type = feast_configuration_properties[fproperty]
112
113 if fprop_type == "URL":
114 if "//" not in value:
115 value = "%s%s" % ("grpc://", value)
116 parsed_value = urlparse(value) # type: ParseResult
117 if parsed_value.netloc:
118 return True
119
120 _logger.error("The property you are trying to set could not be identified")
121 sys.exit(1)
122
123
124 def _save_config(user_config_file_path: str, config_string: Dict[str, str]):
125 """
126 Saves Feast configuration
127 :param user_config_file_path: Local file system path to save configuration
128 :param config_string: Contents in dictionary format to save to path
129 """
130
131 try:
132 with open(user_config_file_path, "w+") as f:
133 toml.dump(config_string, f)
134 except Exception as e:
135 _logger.error("Could not update configuration file for Feast")
136 print(e)
137 sys.exit(1)
138
139
140 def _get_config_file_locations() -> (str, str):
141 user_config_file_dir = join(expanduser("~"), CONFIGURATION_FILE_DIR)
142 user_config_file_path = join(user_config_file_dir, CONFIGURATION_FILE_NAME)
143 return user_config_file_dir, user_config_file_path
144
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/sdk/python/feast/config.py b/sdk/python/feast/config.py
--- a/sdk/python/feast/config.py
+++ b/sdk/python/feast/config.py
@@ -28,7 +28,7 @@
feast_configuration_properties = {"core_url": "URL", "serving_url": "URL"}
-CONFIGURATION_FILE_DIR = ".feast"
+CONFIGURATION_FILE_DIR = os.environ.get("FEAST_CONFIG", ".feast")
CONFIGURATION_FILE_NAME = "config.toml"
| {"golden_diff": "diff --git a/sdk/python/feast/config.py b/sdk/python/feast/config.py\n--- a/sdk/python/feast/config.py\n+++ b/sdk/python/feast/config.py\n@@ -28,7 +28,7 @@\n \n feast_configuration_properties = {\"core_url\": \"URL\", \"serving_url\": \"URL\"}\n \n-CONFIGURATION_FILE_DIR = \".feast\"\n+CONFIGURATION_FILE_DIR = os.environ.get(\"FEAST_CONFIG\", \".feast\")\n CONFIGURATION_FILE_NAME = \"config.toml\"\n", "issue": "Feast cli config file should be settable by an env var\n**Is your feature request related to a problem? Please describe.**\r\n\r\nIf I have multiple feast instances, I want to be able to set different .feast files to configure the CLI.\r\n\r\n**Describe the solution you'd like**\r\n\r\nexport FEAST_CONFIG=path/to/feast/configfile\r\n\r\nit should default to ~/.feast\nFeast cli config file should be settable by an env var\n**Is your feature request related to a problem? Please describe.**\r\n\r\nIf I have multiple feast instances, I want to be able to set different .feast files to configure the CLI.\r\n\r\n**Describe the solution you'd like**\r\n\r\nexport FEAST_CONFIG=path/to/feast/configfile\r\n\r\nit should default to ~/.feast\n", "before_files": [{"content": "#\n# Copyright 2019 The Feast Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n\nfrom os.path import expanduser, join\nimport logging\nimport os\nimport sys\nfrom typing import Dict\nfrom urllib.parse import urlparse\nfrom urllib.parse import ParseResult\n\nimport toml\n\n_logger = logging.getLogger(__name__)\n\nfeast_configuration_properties = {\"core_url\": \"URL\", \"serving_url\": \"URL\"}\n\nCONFIGURATION_FILE_DIR = \".feast\"\nCONFIGURATION_FILE_NAME = \"config.toml\"\n\n\ndef get_or_create_config() -> Dict:\n \"\"\"\n Creates or gets the Feast users active configuration\n :return: dictionary of Feast properties\n \"\"\"\n\n user_config_file_dir, user_config_file_path = _get_config_file_locations()\n\n if not os.path.exists(os.path.dirname(user_config_file_dir)):\n os.makedirs(os.path.dirname(user_config_file_dir))\n\n if not os.path.isfile(user_config_file_path):\n _save_config(user_config_file_path, _fproperties_to_dict())\n\n try:\n return toml.load(user_config_file_path)\n except FileNotFoundError:\n _logger.error(\n \"Could not find Feast configuration file \" + user_config_file_path\n )\n sys.exit(1)\n except toml.decoder.TomlDecodeError:\n _logger.error(\n \"Could not decode Feast configuration file \" + user_config_file_path\n )\n sys.exit(1)\n except Exception as e:\n _logger.error(e)\n sys.exit(1)\n\n\ndef set_property(fproperty: str, value: str):\n \"\"\"\n Sets a single property in the Feast users local configuration file\n :param fproperty: Feast property name\n :param value: Feast property value\n \"\"\"\n\n if _is_valid_property(fproperty, value):\n active_feast_config = get_or_create_config()\n active_feast_config[fproperty] = value\n _, user_config_file_path = _get_config_file_locations()\n _save_config(user_config_file_path, active_feast_config)\n print(\"Updated 
property [%s]\" % fproperty)\n else:\n _logger.error(\"Invalid property selected\")\n sys.exit(1)\n\n\ndef get_config_property_or_fail(fproperty):\n active_feast_config = get_or_create_config()\n if _is_valid_property(fproperty, active_feast_config[fproperty]):\n return active_feast_config[fproperty]\n _logger.error(\"Could not load Feast property from configuration: %s\" % fproperty)\n sys.exit(1)\n\n\ndef _fproperties_to_dict() -> Dict[str, str]:\n prop_dict = {}\n for fproperty in feast_configuration_properties:\n prop_dict[fproperty] = \"\"\n return prop_dict\n\n\ndef _is_valid_property(fproperty: str, value: str) -> bool:\n \"\"\"\n Validates both a Feast property as well as value\n :param fproperty: Feast property name\n :param value: Feast property value\n :return: Returns True if property and value are valid\n \"\"\"\n\n if fproperty not in feast_configuration_properties:\n _logger.error(\"You are trying to set an invalid property\")\n sys.exit(1)\n\n fprop_type = feast_configuration_properties[fproperty]\n\n if fprop_type == \"URL\":\n if \"//\" not in value:\n value = \"%s%s\" % (\"grpc://\", value)\n parsed_value = urlparse(value) # type: ParseResult\n if parsed_value.netloc:\n return True\n\n _logger.error(\"The property you are trying to set could not be identified\")\n sys.exit(1)\n\n\ndef _save_config(user_config_file_path: str, config_string: Dict[str, str]):\n \"\"\"\n Saves Feast configuration\n :param user_config_file_path: Local file system path to save configuration\n :param config_string: Contents in dictionary format to save to path\n \"\"\"\n\n try:\n with open(user_config_file_path, \"w+\") as f:\n toml.dump(config_string, f)\n except Exception as e:\n _logger.error(\"Could not update configuration file for Feast\")\n print(e)\n sys.exit(1)\n\n\ndef _get_config_file_locations() -> (str, str):\n user_config_file_dir = join(expanduser(\"~\"), CONFIGURATION_FILE_DIR)\n user_config_file_path = join(user_config_file_dir, CONFIGURATION_FILE_NAME)\n return user_config_file_dir, user_config_file_path\n", "path": "sdk/python/feast/config.py"}], "after_files": [{"content": "#\n# Copyright 2019 The Feast Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n\nfrom os.path import expanduser, join\nimport logging\nimport os\nimport sys\nfrom typing import Dict\nfrom urllib.parse import urlparse\nfrom urllib.parse import ParseResult\n\nimport toml\n\n_logger = logging.getLogger(__name__)\n\nfeast_configuration_properties = {\"core_url\": \"URL\", \"serving_url\": \"URL\"}\n\nCONFIGURATION_FILE_DIR = os.environ.get(\"FEAST_CONFIG\", \".feast\")\nCONFIGURATION_FILE_NAME = \"config.toml\"\n\n\ndef get_or_create_config() -> Dict:\n \"\"\"\n Creates or gets the Feast users active configuration\n :return: dictionary of Feast properties\n \"\"\"\n\n user_config_file_dir, user_config_file_path = _get_config_file_locations()\n\n if not os.path.exists(os.path.dirname(user_config_file_dir)):\n os.makedirs(os.path.dirname(user_config_file_dir))\n\n if not 
os.path.isfile(user_config_file_path):\n _save_config(user_config_file_path, _fproperties_to_dict())\n\n try:\n return toml.load(user_config_file_path)\n except FileNotFoundError:\n _logger.error(\n \"Could not find Feast configuration file \" + user_config_file_path\n )\n sys.exit(1)\n except toml.decoder.TomlDecodeError:\n _logger.error(\n \"Could not decode Feast configuration file \" + user_config_file_path\n )\n sys.exit(1)\n except Exception as e:\n _logger.error(e)\n sys.exit(1)\n\n\ndef set_property(fproperty: str, value: str):\n \"\"\"\n Sets a single property in the Feast users local configuration file\n :param fproperty: Feast property name\n :param value: Feast property value\n \"\"\"\n\n if _is_valid_property(fproperty, value):\n active_feast_config = get_or_create_config()\n active_feast_config[fproperty] = value\n _, user_config_file_path = _get_config_file_locations()\n _save_config(user_config_file_path, active_feast_config)\n print(\"Updated property [%s]\" % fproperty)\n else:\n _logger.error(\"Invalid property selected\")\n sys.exit(1)\n\n\ndef get_config_property_or_fail(fproperty):\n active_feast_config = get_or_create_config()\n if _is_valid_property(fproperty, active_feast_config[fproperty]):\n return active_feast_config[fproperty]\n _logger.error(\"Could not load Feast property from configuration: %s\" % fproperty)\n sys.exit(1)\n\n\ndef _fproperties_to_dict() -> Dict[str, str]:\n prop_dict = {}\n for fproperty in feast_configuration_properties:\n prop_dict[fproperty] = \"\"\n return prop_dict\n\n\ndef _is_valid_property(fproperty: str, value: str) -> bool:\n \"\"\"\n Validates both a Feast property as well as value\n :param fproperty: Feast property name\n :param value: Feast property value\n :return: Returns True if property and value are valid\n \"\"\"\n\n if fproperty not in feast_configuration_properties:\n _logger.error(\"You are trying to set an invalid property\")\n sys.exit(1)\n\n fprop_type = feast_configuration_properties[fproperty]\n\n if fprop_type == \"URL\":\n if \"//\" not in value:\n value = \"%s%s\" % (\"grpc://\", value)\n parsed_value = urlparse(value) # type: ParseResult\n if parsed_value.netloc:\n return True\n\n _logger.error(\"The property you are trying to set could not be identified\")\n sys.exit(1)\n\n\ndef _save_config(user_config_file_path: str, config_string: Dict[str, str]):\n \"\"\"\n Saves Feast configuration\n :param user_config_file_path: Local file system path to save configuration\n :param config_string: Contents in dictionary format to save to path\n \"\"\"\n\n try:\n with open(user_config_file_path, \"w+\") as f:\n toml.dump(config_string, f)\n except Exception as e:\n _logger.error(\"Could not update configuration file for Feast\")\n print(e)\n sys.exit(1)\n\n\ndef _get_config_file_locations() -> (str, str):\n user_config_file_dir = join(expanduser(\"~\"), CONFIGURATION_FILE_DIR)\n user_config_file_path = join(user_config_file_dir, CONFIGURATION_FILE_NAME)\n return user_config_file_dir, user_config_file_path\n", "path": "sdk/python/feast/config.py"}]} | 1,814 | 111 |
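For reference, a minimal standalone sketch of the environment-variable fallback that the golden diff above introduces; the path layout mirrors `_get_config_file_locations`, and the printed value is illustrative:

```python
# FEAST_CONFIG overrides the directory name used for the CLI config; the default
# remains ".feast" under the user's home directory, as in the patched module.
import os
from os.path import expanduser, join

CONFIGURATION_FILE_DIR = os.environ.get("FEAST_CONFIG", ".feast")
CONFIGURATION_FILE_NAME = "config.toml"

config_dir = join(expanduser("~"), CONFIGURATION_FILE_DIR)
config_path = join(config_dir, CONFIGURATION_FILE_NAME)
print(config_path)  # e.g. /home/user/.feast/config.toml when FEAST_CONFIG is unset
```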
gh_patches_debug_13315 | rasdani/github-patches | git_diff | pypi__warehouse-3525 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Remove X-Forwarded-Proto/X-Forwarded-Port h2o workaround
[In `warehouse/utils/wsgi.py`](https://github.com/pypa/warehouse/blob/master/warehouse/utils/wsgi.py#L51-L58) we note that once https://github.com/h2o/h2o/issues/883 is solved, we can remove a few lines about `X-Forwarded-Port` and `X-Forwarded-Proto`. They resolved that issue in May 2016 and [have released several new versions since then](https://h2o.examp1e.net/). OK to remove workaround?
Followup to b8b9f385382cd659750c694cf8b1b3db6f1f6d35 .
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `warehouse/utils/wsgi.py`
Content:
```
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
10 # See the License for the specific language governing permissions and
11 # limitations under the License.
12
13 import hmac
14
15
16 def _forwarded_value(values, num_proxies):
17 values = [v.strip() for v in values.split(",")]
18 if len(values) >= num_proxies:
19 return values[-num_proxies]
20
21
22 class ProxyFixer:
23
24 def __init__(self, app, token, num_proxies=1):
25 self.app = app
26 self.token = token
27 self.num_proxies = num_proxies
28
29 def __call__(self, environ, start_response):
30 # Determine if the request comes from a trusted proxy or not by looking
31 # for a token in the request.
32 request_token = environ.get("HTTP_WAREHOUSE_TOKEN")
33 if (request_token is not None and
34 hmac.compare_digest(self.token, request_token)):
35 # Compute our values from the environment.
36 proto = environ.get("HTTP_WAREHOUSE_PROTO", "")
37 remote_addr = environ.get("HTTP_WAREHOUSE_IP", "")
38 host = environ.get("HTTP_WAREHOUSE_HOST", "")
39 # If we're not getting headers from a trusted third party via the
40 # specialized Warehouse-* headers, then we'll fall back to looking at
41 # X-Fowarded-* headers, assuming that whatever we have in front of us
42 # will strip invalid ones.
43 else:
44 proto = environ.get("HTTP_X_FORWARDED_PROTO", "")
45 remote_addr = _forwarded_value(
46 environ.get("HTTP_X_FORWARDED_FOR", ""),
47 self.num_proxies,
48 )
49 host = environ.get("HTTP_X_FORWARDED_HOST", "")
50
51 # If we have a X-Forwarded-Port and it disagreed with
52 # X-Forwarded-Proto then we're going to listen to X-Forwarded-Port
53 # instead. This is because h2o overwrites X-Forwarded-Proto but not
54 # X-Forwarded-Port
55 # TODO: Note, this can go away if/once h2o/h2o#883 is solved.
56 port = environ.get("HTTP_X_FORWARDED_PORT", "")
57 if port == "443":
58 proto = "https"
59
60 # Put the new header values into our environment.
61 if remote_addr:
62 environ["REMOTE_ADDR"] = remote_addr
63 if host:
64 environ["HTTP_HOST"] = host
65 if proto:
66 environ["wsgi.url_scheme"] = proto
67
68 # Remove any of the forwarded or warehouse headers from the environment
69 for header in {
70 "HTTP_X_FORWARDED_PROTO", "HTTP_X_FORWARDED_FOR",
71 "HTTP_X_FORWARDED_HOST", "HTTP_X_FORWARDED_PORT",
72 "HTTP_WAREHOUSE_TOKEN", "HTTP_WAREHOUSE_PROTO",
73 "HTTP_WAREHOUSE_IP", "HTTP_WAREHOUSE_HOST"}:
74 if header in environ:
75 del environ[header]
76
77 # Dispatch to the real underlying application.
78 return self.app(environ, start_response)
79
80
81 class VhmRootRemover:
82
83 def __init__(self, app):
84 self.app = app
85
86 def __call__(self, environ, start_response):
87 # Delete the X-Vhm-Root header if it exists.
88 if "HTTP_X_VHM_ROOT" in environ:
89 del environ["HTTP_X_VHM_ROOT"]
90
91 return self.app(environ, start_response)
92
93
94 class HostRewrite:
95
96 # TODO: This entire class should not be required.
97
98 def __init__(self, app):
99 self.app = app
100
101 def __call__(self, environ, start_response):
102 # If the host header matches upload.pypi.io, then we want to rewrite it
103 # so that it is instead upload.pypi.org.
104 if environ.get("HTTP_HOST", "").lower() == "upload.pypi.io":
105 environ["HTTP_HOST"] = "upload.pypi.org"
106
107 return self.app(environ, start_response)
108
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/warehouse/utils/wsgi.py b/warehouse/utils/wsgi.py
--- a/warehouse/utils/wsgi.py
+++ b/warehouse/utils/wsgi.py
@@ -48,15 +48,6 @@
)
host = environ.get("HTTP_X_FORWARDED_HOST", "")
- # If we have a X-Forwarded-Port and it disagreed with
- # X-Forwarded-Proto then we're going to listen to X-Forwarded-Port
- # instead. This is because h2o overwrites X-Forwarded-Proto but not
- # X-Forwarded-Port
- # TODO: Note, this can go away if/once h2o/h2o#883 is solved.
- port = environ.get("HTTP_X_FORWARDED_PORT", "")
- if port == "443":
- proto = "https"
-
# Put the new header values into our environment.
if remote_addr:
environ["REMOTE_ADDR"] = remote_addr
| {"golden_diff": "diff --git a/warehouse/utils/wsgi.py b/warehouse/utils/wsgi.py\n--- a/warehouse/utils/wsgi.py\n+++ b/warehouse/utils/wsgi.py\n@@ -48,15 +48,6 @@\n )\n host = environ.get(\"HTTP_X_FORWARDED_HOST\", \"\")\n \n- # If we have a X-Forwarded-Port and it disagreed with\n- # X-Forwarded-Proto then we're going to listen to X-Forwarded-Port\n- # instead. This is because h2o overwrites X-Forwarded-Proto but not\n- # X-Forwarded-Port\n- # TODO: Note, this can go away if/once h2o/h2o#883 is solved.\n- port = environ.get(\"HTTP_X_FORWARDED_PORT\", \"\")\n- if port == \"443\":\n- proto = \"https\"\n-\n # Put the new header values into our environment.\n if remote_addr:\n environ[\"REMOTE_ADDR\"] = remote_addr\n", "issue": "Remove X-Forwarded-Proto/X-Forwarded-Port h2o workaround\n[In `warehouse/utils/wsgi.py`](https://github.com/pypa/warehouse/blob/master/warehouse/utils/wsgi.py#L51-L58) we note that once https://github.com/h2o/h2o/issues/883 is solved, we can remove a few lines about `X-Forwarded-Port` and `X-Forwarded-Proto`. They resolved that issue in May 2016 and [have released several new versions since then](https://h2o.examp1e.net/). OK to remove workaround?\r\n\r\nFollowup to b8b9f385382cd659750c694cf8b1b3db6f1f6d35 .\n", "before_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport hmac\n\n\ndef _forwarded_value(values, num_proxies):\n values = [v.strip() for v in values.split(\",\")]\n if len(values) >= num_proxies:\n return values[-num_proxies]\n\n\nclass ProxyFixer:\n\n def __init__(self, app, token, num_proxies=1):\n self.app = app\n self.token = token\n self.num_proxies = num_proxies\n\n def __call__(self, environ, start_response):\n # Determine if the request comes from a trusted proxy or not by looking\n # for a token in the request.\n request_token = environ.get(\"HTTP_WAREHOUSE_TOKEN\")\n if (request_token is not None and\n hmac.compare_digest(self.token, request_token)):\n # Compute our values from the environment.\n proto = environ.get(\"HTTP_WAREHOUSE_PROTO\", \"\")\n remote_addr = environ.get(\"HTTP_WAREHOUSE_IP\", \"\")\n host = environ.get(\"HTTP_WAREHOUSE_HOST\", \"\")\n # If we're not getting headers from a trusted third party via the\n # specialized Warehouse-* headers, then we'll fall back to looking at\n # X-Fowarded-* headers, assuming that whatever we have in front of us\n # will strip invalid ones.\n else:\n proto = environ.get(\"HTTP_X_FORWARDED_PROTO\", \"\")\n remote_addr = _forwarded_value(\n environ.get(\"HTTP_X_FORWARDED_FOR\", \"\"),\n self.num_proxies,\n )\n host = environ.get(\"HTTP_X_FORWARDED_HOST\", \"\")\n\n # If we have a X-Forwarded-Port and it disagreed with\n # X-Forwarded-Proto then we're going to listen to X-Forwarded-Port\n # instead. 
This is because h2o overwrites X-Forwarded-Proto but not\n # X-Forwarded-Port\n # TODO: Note, this can go away if/once h2o/h2o#883 is solved.\n port = environ.get(\"HTTP_X_FORWARDED_PORT\", \"\")\n if port == \"443\":\n proto = \"https\"\n\n # Put the new header values into our environment.\n if remote_addr:\n environ[\"REMOTE_ADDR\"] = remote_addr\n if host:\n environ[\"HTTP_HOST\"] = host\n if proto:\n environ[\"wsgi.url_scheme\"] = proto\n\n # Remove any of the forwarded or warehouse headers from the environment\n for header in {\n \"HTTP_X_FORWARDED_PROTO\", \"HTTP_X_FORWARDED_FOR\",\n \"HTTP_X_FORWARDED_HOST\", \"HTTP_X_FORWARDED_PORT\",\n \"HTTP_WAREHOUSE_TOKEN\", \"HTTP_WAREHOUSE_PROTO\",\n \"HTTP_WAREHOUSE_IP\", \"HTTP_WAREHOUSE_HOST\"}:\n if header in environ:\n del environ[header]\n\n # Dispatch to the real underlying application.\n return self.app(environ, start_response)\n\n\nclass VhmRootRemover:\n\n def __init__(self, app):\n self.app = app\n\n def __call__(self, environ, start_response):\n # Delete the X-Vhm-Root header if it exists.\n if \"HTTP_X_VHM_ROOT\" in environ:\n del environ[\"HTTP_X_VHM_ROOT\"]\n\n return self.app(environ, start_response)\n\n\nclass HostRewrite:\n\n # TODO: This entire class should not be required.\n\n def __init__(self, app):\n self.app = app\n\n def __call__(self, environ, start_response):\n # If the host header matches upload.pypi.io, then we want to rewrite it\n # so that it is instead upload.pypi.org.\n if environ.get(\"HTTP_HOST\", \"\").lower() == \"upload.pypi.io\":\n environ[\"HTTP_HOST\"] = \"upload.pypi.org\"\n\n return self.app(environ, start_response)\n", "path": "warehouse/utils/wsgi.py"}], "after_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport hmac\n\n\ndef _forwarded_value(values, num_proxies):\n values = [v.strip() for v in values.split(\",\")]\n if len(values) >= num_proxies:\n return values[-num_proxies]\n\n\nclass ProxyFixer:\n\n def __init__(self, app, token, num_proxies=1):\n self.app = app\n self.token = token\n self.num_proxies = num_proxies\n\n def __call__(self, environ, start_response):\n # Determine if the request comes from a trusted proxy or not by looking\n # for a token in the request.\n request_token = environ.get(\"HTTP_WAREHOUSE_TOKEN\")\n if (request_token is not None and\n hmac.compare_digest(self.token, request_token)):\n # Compute our values from the environment.\n proto = environ.get(\"HTTP_WAREHOUSE_PROTO\", \"\")\n remote_addr = environ.get(\"HTTP_WAREHOUSE_IP\", \"\")\n host = environ.get(\"HTTP_WAREHOUSE_HOST\", \"\")\n # If we're not getting headers from a trusted third party via the\n # specialized Warehouse-* headers, then we'll fall back to looking at\n # X-Fowarded-* headers, assuming that whatever we have in front of us\n # will strip invalid ones.\n else:\n proto = environ.get(\"HTTP_X_FORWARDED_PROTO\", \"\")\n remote_addr = _forwarded_value(\n environ.get(\"HTTP_X_FORWARDED_FOR\", \"\"),\n self.num_proxies,\n )\n host = environ.get(\"HTTP_X_FORWARDED_HOST\", \"\")\n\n 
# Put the new header values into our environment.\n if remote_addr:\n environ[\"REMOTE_ADDR\"] = remote_addr\n if host:\n environ[\"HTTP_HOST\"] = host\n if proto:\n environ[\"wsgi.url_scheme\"] = proto\n\n # Remove any of the forwarded or warehouse headers from the environment\n for header in {\n \"HTTP_X_FORWARDED_PROTO\", \"HTTP_X_FORWARDED_FOR\",\n \"HTTP_X_FORWARDED_HOST\", \"HTTP_X_FORWARDED_PORT\",\n \"HTTP_WAREHOUSE_TOKEN\", \"HTTP_WAREHOUSE_PROTO\",\n \"HTTP_WAREHOUSE_IP\", \"HTTP_WAREHOUSE_HOST\"}:\n if header in environ:\n del environ[header]\n\n # Dispatch to the real underlying application.\n return self.app(environ, start_response)\n\n\nclass VhmRootRemover:\n\n def __init__(self, app):\n self.app = app\n\n def __call__(self, environ, start_response):\n # Delete the X-Vhm-Root header if it exists.\n if \"HTTP_X_VHM_ROOT\" in environ:\n del environ[\"HTTP_X_VHM_ROOT\"]\n\n return self.app(environ, start_response)\n\n\nclass HostRewrite:\n\n # TODO: This entire class should not be required.\n\n def __init__(self, app):\n self.app = app\n\n def __call__(self, environ, start_response):\n # If the host header matches upload.pypi.io, then we want to rewrite it\n # so that it is instead upload.pypi.org.\n if environ.get(\"HTTP_HOST\", \"\").lower() == \"upload.pypi.io\":\n environ[\"HTTP_HOST\"] = \"upload.pypi.org\"\n\n return self.app(environ, start_response)\n", "path": "warehouse/utils/wsgi.py"}]} | 1,601 | 223 |
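The removed workaround only affected how the URL scheme was derived; the proxy-counting logic for `X-Forwarded-For` is unchanged by the patch. A small sketch of that remaining behavior, using made-up addresses:

```python
# With num_proxies=2, the client address is taken two hops from the right of the
# X-Forwarded-For chain, so values appended by untrusted clients are ignored.
def _forwarded_value(values, num_proxies):
    values = [v.strip() for v in values.split(",")]
    if len(values) >= num_proxies:
        return values[-num_proxies]

print(_forwarded_value("203.0.113.7, 198.51.100.4, 10.0.0.3", 2))  # 198.51.100.4
```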
gh_patches_debug_20909 | rasdani/github-patches | git_diff | pypi__warehouse-821 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Display how long ago the release was made
We'll want to display how long ago the release was made on the project/release detail page. Probably we'll want to implement this in javascript?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `warehouse/i18n/filters.py`
Content:
```
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
10 # See the License for the specific language governing permissions and
11 # limitations under the License.
12
13 import babel.dates
14 import jinja2
15
16 from pyramid.threadlocal import get_current_request
17
18
19 @jinja2.contextfilter
20 def format_date(ctx, *args, **kwargs):
21 request = ctx.get("request") or get_current_request()
22 kwargs.setdefault("locale", request.locale)
23 return babel.dates.format_date(*args, **kwargs)
24
```
Path: `warehouse/i18n/__init__.py`
Content:
```
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
10 # See the License for the specific language governing permissions and
11 # limitations under the License.
12
13 import os.path
14
15 from babel.core import Locale
16 from babel.support import Translations
17
18 from warehouse.i18n.translations import (
19 JinjaRequestTranslation, translate_value, gettext, ngettext,
20 )
21
22
23 __all__ = ["gettext", "ngettext", "includeme"]
24
25
26 GETTEXT_DOMAIN = "warehouse"
27
28 LOCALE_DIR = os.path.abspath(
29 os.path.join(os.path.dirname(__file__), "..", "translations")
30 )
31
32
33 def _locale(request):
34 """
35 Computes a babel.core:Locale() object for this request.
36 """
37 return Locale.parse(request.locale_name)
38
39
40 def _translation(request):
41 """
42 Loads a translation object for this request.
43 """
44 # TODO: Should we cache these in memory?
45 return Translations.load(LOCALE_DIR, request.locale, domain=GETTEXT_DOMAIN)
46
47
48 def includeme(config):
49 # Add the request attributes
50 config.add_request_method(_locale, name="locale", reify=True)
51 config.add_request_method(_translation, name="translation", reify=True)
52
53 # Register our i18n/l10n filters for Jinja2
54 filters = config.get_settings().setdefault("jinja2.filters", {})
55 filters.setdefault("format_date", "warehouse.i18n.filters:format_date")
56
57 # Register our finalize function for Jinja2
58 config.get_settings()["jinja2.finalize"] = translate_value
59
60 # Configure Jinja2 for translation
61 config.get_settings()["jinja2.i18n.domain"] = GETTEXT_DOMAIN
62 config.get_settings()["jinja2.i18n.gettext"] = JinjaRequestTranslation
63
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/warehouse/i18n/__init__.py b/warehouse/i18n/__init__.py
--- a/warehouse/i18n/__init__.py
+++ b/warehouse/i18n/__init__.py
@@ -53,6 +53,10 @@
# Register our i18n/l10n filters for Jinja2
filters = config.get_settings().setdefault("jinja2.filters", {})
filters.setdefault("format_date", "warehouse.i18n.filters:format_date")
+ filters.setdefault(
+ "format_datetime",
+ "warehouse.i18n.filters:format_datetime",
+ )
# Register our finalize function for Jinja2
config.get_settings()["jinja2.finalize"] = translate_value
diff --git a/warehouse/i18n/filters.py b/warehouse/i18n/filters.py
--- a/warehouse/i18n/filters.py
+++ b/warehouse/i18n/filters.py
@@ -21,3 +21,10 @@
request = ctx.get("request") or get_current_request()
kwargs.setdefault("locale", request.locale)
return babel.dates.format_date(*args, **kwargs)
+
+
[email protected]
+def format_datetime(ctx, *args, **kwargs):
+ request = ctx.get("request") or get_current_request()
+ kwargs.setdefault("locale", request.locale)
+ return babel.dates.format_datetime(*args, **kwargs)
| {"golden_diff": "diff --git a/warehouse/i18n/__init__.py b/warehouse/i18n/__init__.py\n--- a/warehouse/i18n/__init__.py\n+++ b/warehouse/i18n/__init__.py\n@@ -53,6 +53,10 @@\n # Register our i18n/l10n filters for Jinja2\n filters = config.get_settings().setdefault(\"jinja2.filters\", {})\n filters.setdefault(\"format_date\", \"warehouse.i18n.filters:format_date\")\n+ filters.setdefault(\n+ \"format_datetime\",\n+ \"warehouse.i18n.filters:format_datetime\",\n+ )\n \n # Register our finalize function for Jinja2\n config.get_settings()[\"jinja2.finalize\"] = translate_value\ndiff --git a/warehouse/i18n/filters.py b/warehouse/i18n/filters.py\n--- a/warehouse/i18n/filters.py\n+++ b/warehouse/i18n/filters.py\n@@ -21,3 +21,10 @@\n request = ctx.get(\"request\") or get_current_request()\n kwargs.setdefault(\"locale\", request.locale)\n return babel.dates.format_date(*args, **kwargs)\n+\n+\[email protected]\n+def format_datetime(ctx, *args, **kwargs):\n+ request = ctx.get(\"request\") or get_current_request()\n+ kwargs.setdefault(\"locale\", request.locale)\n+ return babel.dates.format_datetime(*args, **kwargs)\n", "issue": "Display how long ago the release was made\nWe'll want to display how long ago the release was made on the project/release detail page. Probably we'll want to implement this in javascript?\n\n", "before_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport babel.dates\nimport jinja2\n\nfrom pyramid.threadlocal import get_current_request\n\n\[email protected]\ndef format_date(ctx, *args, **kwargs):\n request = ctx.get(\"request\") or get_current_request()\n kwargs.setdefault(\"locale\", request.locale)\n return babel.dates.format_date(*args, **kwargs)\n", "path": "warehouse/i18n/filters.py"}, {"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os.path\n\nfrom babel.core import Locale\nfrom babel.support import Translations\n\nfrom warehouse.i18n.translations import (\n JinjaRequestTranslation, translate_value, gettext, ngettext,\n)\n\n\n__all__ = [\"gettext\", \"ngettext\", \"includeme\"]\n\n\nGETTEXT_DOMAIN = \"warehouse\"\n\nLOCALE_DIR = os.path.abspath(\n os.path.join(os.path.dirname(__file__), \"..\", \"translations\")\n)\n\n\ndef _locale(request):\n \"\"\"\n Computes a babel.core:Locale() object for this request.\n \"\"\"\n return Locale.parse(request.locale_name)\n\n\ndef _translation(request):\n \"\"\"\n Loads a translation object for this request.\n \"\"\"\n # TODO: Should we cache these in memory?\n return 
Translations.load(LOCALE_DIR, request.locale, domain=GETTEXT_DOMAIN)\n\n\ndef includeme(config):\n # Add the request attributes\n config.add_request_method(_locale, name=\"locale\", reify=True)\n config.add_request_method(_translation, name=\"translation\", reify=True)\n\n # Register our i18n/l10n filters for Jinja2\n filters = config.get_settings().setdefault(\"jinja2.filters\", {})\n filters.setdefault(\"format_date\", \"warehouse.i18n.filters:format_date\")\n\n # Register our finalize function for Jinja2\n config.get_settings()[\"jinja2.finalize\"] = translate_value\n\n # Configure Jinja2 for translation\n config.get_settings()[\"jinja2.i18n.domain\"] = GETTEXT_DOMAIN\n config.get_settings()[\"jinja2.i18n.gettext\"] = JinjaRequestTranslation\n", "path": "warehouse/i18n/__init__.py"}], "after_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport babel.dates\nimport jinja2\n\nfrom pyramid.threadlocal import get_current_request\n\n\[email protected]\ndef format_date(ctx, *args, **kwargs):\n request = ctx.get(\"request\") or get_current_request()\n kwargs.setdefault(\"locale\", request.locale)\n return babel.dates.format_date(*args, **kwargs)\n\n\[email protected]\ndef format_datetime(ctx, *args, **kwargs):\n request = ctx.get(\"request\") or get_current_request()\n kwargs.setdefault(\"locale\", request.locale)\n return babel.dates.format_datetime(*args, **kwargs)\n", "path": "warehouse/i18n/filters.py"}, {"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os.path\n\nfrom babel.core import Locale\nfrom babel.support import Translations\n\nfrom warehouse.i18n.translations import (\n JinjaRequestTranslation, translate_value, gettext, ngettext,\n)\n\n\n__all__ = [\"gettext\", \"ngettext\", \"includeme\"]\n\n\nGETTEXT_DOMAIN = \"warehouse\"\n\nLOCALE_DIR = os.path.abspath(\n os.path.join(os.path.dirname(__file__), \"..\", \"translations\")\n)\n\n\ndef _locale(request):\n \"\"\"\n Computes a babel.core:Locale() object for this request.\n \"\"\"\n return Locale.parse(request.locale_name)\n\n\ndef _translation(request):\n \"\"\"\n Loads a translation object for this request.\n \"\"\"\n # TODO: Should we cache these in memory?\n return Translations.load(LOCALE_DIR, request.locale, domain=GETTEXT_DOMAIN)\n\n\ndef includeme(config):\n # Add the request attributes\n config.add_request_method(_locale, name=\"locale\", reify=True)\n config.add_request_method(_translation, name=\"translation\", reify=True)\n\n # Register our i18n/l10n filters for Jinja2\n filters = 
config.get_settings().setdefault(\"jinja2.filters\", {})\n filters.setdefault(\"format_date\", \"warehouse.i18n.filters:format_date\")\n filters.setdefault(\n \"format_datetime\",\n \"warehouse.i18n.filters:format_datetime\",\n )\n\n # Register our finalize function for Jinja2\n config.get_settings()[\"jinja2.finalize\"] = translate_value\n\n # Configure Jinja2 for translation\n config.get_settings()[\"jinja2.i18n.domain\"] = GETTEXT_DOMAIN\n config.get_settings()[\"jinja2.i18n.gettext\"] = JinjaRequestTranslation\n", "path": "warehouse/i18n/__init__.py"}]} | 1,143 | 327 |
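The new `format_datetime` filter added by the golden diff above is a thin wrapper over Babel; a minimal sketch of the underlying call it delegates to (assuming the `babel` package is installed):

```python
# babel.dates.format_datetime renders a datetime for a given locale; the Jinja2
# filter simply injects the locale from the Pyramid request before calling it.
import datetime
import babel.dates

released = datetime.datetime(2015, 10, 1, 12, 30)
print(babel.dates.format_datetime(released, locale="en"))  # e.g. "Oct 1, 2015, 12:30:00 PM"
```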
gh_patches_debug_44453 | rasdani/github-patches | git_diff | aws-cloudformation__cfn-lint-2317 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
I3013 - Incorrectly flagged for aurora-<engine> database instances
**cfn-lint version**: 0.61.3
cfn-lint is falsely flagging the missing `BackupRetentionPeriod` property and it's subsequent value from `AWS::RDS:DBInstance` for an Aurora based database instance.
This property and a set value **is not required** if it is part of an Aurora cluster. Assumption is the `Engine` property value would need to be checked for the presence of `aurora-<engine>` to not flag this linting issue.
I suspect if the value is set via a CloudFormation function cfn-lint would still incorrectly flag this.
**AWS Documentation reference** - https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-rds-database-instance.html#cfn-rds-dbinstance-backupretentionperiod
> Amazon Aurora
> Not applicable. The retention period for automated backups is managed by the DB cluster.
**CloudFormation example**
```yaml
AuroraInstance:
Type: AWS::RDS::DBInstance
DeletionPolicy: Retain
UpdateReplacePolicy: Retain
Properties:
AllowMajorVersionUpgrade: false
AutoMinorVersionUpgrade: !Ref AutoMinorVersionUpgrade
DBClusterIdentifier: !Ref AuroraCluster
DBInstanceClass: !Ref InstanceClass
DBInstanceIdentifier: "MyAuroraInstance"
DBParameterGroupName: !Ref ParamGroup
DBSubnetGroupName: !Ref SubnetGroup
DeleteAutomatedBackups: !Ref DeleteAutomatedBackups
EnablePerformanceInsights: !Ref EnablePerformanceInsights
Engine: aurora-postgresql
EngineVersion: !Ref EngineVersion
PerformanceInsightsKMSKeyId: !Ref KmsKey
PerformanceInsightsRetentionPeriod: 7
PubliclyAccessible: false
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/cfnlint/rules/resources/RetentionPeriodOnResourceTypesWithAutoExpiringContent.py`
Content:
```
1 """
2 Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
3 SPDX-License-Identifier: MIT-0
4 """
5 from cfnlint.rules import CloudFormationLintRule
6 from cfnlint.rules import RuleMatch
7
8
9 class RetentionPeriodOnResourceTypesWithAutoExpiringContent(CloudFormationLintRule):
10 """Check for RetentionPeriod """
11 id = 'I3013'
12 shortdesc = 'Check resources with auto expiring content have explicit retention period'
13 description = 'The behaviour for data retention is different across AWS Services.'\
14 'If no retention period is specified the default for some services is to delete the data after a period of time.' \
15 'This check requires you to explicitly set the retention period for those resources to avoid unexpected data losses'
16 source_url = 'https://github.com/aws-cloudformation/cfn-python-lint'
17 tags = ['resources', 'retentionperiod']
18
19 def _check_ref(self, value, parameters, resources, path): # pylint: disable=W0613
20 print('called')
21 print(value)
22
23 def match(self, cfn):
24 """Check for RetentionPeriod"""
25 matches = []
26
27 retention_attributes_by_resource_type = {
28 'AWS::Kinesis::Stream': [
29 {
30 'Attribute': 'RetentionPeriodHours',
31 'SourceUrl': 'http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-kinesis-stream.html#cfn-kinesis-stream-retentionperiodhours'
32 }
33 ],
34 'AWS::SQS::Queue': [
35 {
36 'Attribute': 'MessageRetentionPeriod',
37 'SourceUrl': 'http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sqs-queues.html#aws-sqs-queue-msgretentionperiod'
38 }
39 ],
40 'AWS::DocDB::DBCluster': [
41 {
42 'Attribute': 'BackupRetentionPeriod',
43 'SourceUrl': 'http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-docdb-dbcluster.html#cfn-docdb-dbcluster-backupretentionperiod'
44 }
45 ],
46 'AWS::Synthetics::Canary': [
47 {
48 'Attribute': 'SuccessRetentionPeriod',
49 'SourceUrl': 'http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-synthetics-canary.html#cfn-synthetics-canary-successretentionperiod'
50 },
51 {
52 'Attribute': 'FailureRetentionPeriod',
53 'SourceUrl': 'http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-synthetics-canary.html#cfn-synthetics-canary-failureretentionperiod'
54 }
55 ],
56 'AWS::Redshift::Cluster': [
57 {
58 'Attribute': 'AutomatedSnapshotRetentionPeriod',
59 'SourceUrl': 'http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-redshift-cluster.html#cfn-redshift-cluster-automatedsnapshotretentionperiod'
60 }
61 ],
62 'AWS::RDS::DBInstance': [
63 {
64 'Attribute': 'BackupRetentionPeriod',
65 'SourceUrl': 'http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-rds-database-instance.html#cfn-rds-dbinstance-backupretentionperiod'
66 }
67 ],
68 'AWS::RDS::DBCluster': [
69 {
70 'Attribute': 'BackupRetentionPeriod',
71 'SourceUrl': 'http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-rds-dbcluster.html#cfn-rds-dbcluster-backuprententionperiod'
72 }
73 ]
74 }
75
76 resources = cfn.get_resources()
77 for r_name, r_values in resources.items():
78 if r_values.get('Type') in retention_attributes_by_resource_type:
79 for attr_def in retention_attributes_by_resource_type[r_values.get('Type')]:
80 property_sets = r_values.get_safe('Properties')
81 for property_set, path in property_sets:
82 error_path = ['Resources', r_name] + path
83 if not property_set:
84 message = 'The default retention period will delete the data after a pre-defined time. Set an explicit values to avoid data loss on resource : %s' % '/'.join(
85 str(x) for x in error_path)
86 matches.append(RuleMatch(error_path, message))
87 else:
88 value = property_set.get(attr_def.get('Attribute'))
89 if not value:
90 message = 'The default retention period will delete the data after a pre-defined time. Set an explicit values to avoid data loss on resource : %s' % '/'.join(
91 str(x) for x in error_path)
92 matches.append(RuleMatch(error_path, message))
93 if isinstance(value, dict):
94 # pylint: disable=protected-access
95 refs = cfn._search_deep_keys(
96 'Ref', value, error_path + [attr_def.get('Attribute')])
97 for ref in refs:
98 if ref[-1] == 'AWS::NoValue':
99 message = 'The default retention period will delete the data after a pre-defined time. Set an explicit values to avoid data loss on resource : %s' % '/'.join(
100 str(x) for x in ref[0:-1])
101 matches.append(RuleMatch(ref[0:-1], message))
102
103 return matches
104
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/cfnlint/rules/resources/RetentionPeriodOnResourceTypesWithAutoExpiringContent.py b/src/cfnlint/rules/resources/RetentionPeriodOnResourceTypesWithAutoExpiringContent.py
--- a/src/cfnlint/rules/resources/RetentionPeriodOnResourceTypesWithAutoExpiringContent.py
+++ b/src/cfnlint/rules/resources/RetentionPeriodOnResourceTypesWithAutoExpiringContent.py
@@ -2,6 +2,7 @@
Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
SPDX-License-Identifier: MIT-0
"""
+import re
from cfnlint.rules import CloudFormationLintRule
from cfnlint.rules import RuleMatch
@@ -17,7 +18,6 @@
tags = ['resources', 'retentionperiod']
def _check_ref(self, value, parameters, resources, path): # pylint: disable=W0613
- print('called')
print(value)
def match(self, cfn):
@@ -62,7 +62,9 @@
'AWS::RDS::DBInstance': [
{
'Attribute': 'BackupRetentionPeriod',
- 'SourceUrl': 'http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-rds-database-instance.html#cfn-rds-dbinstance-backupretentionperiod'
+ 'SourceUrl': 'http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-rds-database-instance.html#cfn-rds-dbinstance-backupretentionperiod',
+ 'CheckAttribute': 'Engine',
+ 'CheckAttributeRegex': re.compile('^((?!aurora).)*$'),
}
],
'AWS::RDS::DBCluster': [
@@ -81,23 +83,29 @@
for property_set, path in property_sets:
error_path = ['Resources', r_name] + path
if not property_set:
- message = 'The default retention period will delete the data after a pre-defined time. Set an explicit values to avoid data loss on resource : %s' % '/'.join(
- str(x) for x in error_path)
+ message = f'The default retention period will delete the data after a pre-defined time. Set an explicit values to avoid data loss on resource : {"/".join(str(x) for x in error_path)}'
matches.append(RuleMatch(error_path, message))
else:
value = property_set.get(attr_def.get('Attribute'))
if not value:
- message = 'The default retention period will delete the data after a pre-defined time. Set an explicit values to avoid data loss on resource : %s' % '/'.join(
- str(x) for x in error_path)
- matches.append(RuleMatch(error_path, message))
+ message = f'The default retention period will delete the data after a pre-defined time. Set an explicit values to avoid data loss on resource : {"/".join(str(x) for x in error_path)}'
+ if attr_def.get('CheckAttribute'):
+ if self._validate_property(property_set.get(attr_def.get('CheckAttribute')), attr_def.get('CheckAttributeRegex')):
+ matches.append(RuleMatch(error_path, message))
+ else:
+ matches.append(RuleMatch(error_path, message))
if isinstance(value, dict):
# pylint: disable=protected-access
refs = cfn._search_deep_keys(
'Ref', value, error_path + [attr_def.get('Attribute')])
for ref in refs:
if ref[-1] == 'AWS::NoValue':
- message = 'The default retention period will delete the data after a pre-defined time. Set an explicit values to avoid data loss on resource : %s' % '/'.join(
- str(x) for x in ref[0:-1])
+ message = f'The default retention period will delete the data after a pre-defined time. Set an explicit values to avoid data loss on resource : {"/".join(str(x) for x in ref[0:-1])}'
matches.append(RuleMatch(ref[0:-1], message))
return matches
+
+ def _validate_property(self, value, regex) -> bool:
+ if regex.match(value):
+ return True
+ return False
| {"golden_diff": "diff --git a/src/cfnlint/rules/resources/RetentionPeriodOnResourceTypesWithAutoExpiringContent.py b/src/cfnlint/rules/resources/RetentionPeriodOnResourceTypesWithAutoExpiringContent.py\n--- a/src/cfnlint/rules/resources/RetentionPeriodOnResourceTypesWithAutoExpiringContent.py\n+++ b/src/cfnlint/rules/resources/RetentionPeriodOnResourceTypesWithAutoExpiringContent.py\n@@ -2,6 +2,7 @@\n Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.\n SPDX-License-Identifier: MIT-0\n \"\"\"\n+import re\n from cfnlint.rules import CloudFormationLintRule\n from cfnlint.rules import RuleMatch\n \n@@ -17,7 +18,6 @@\n tags = ['resources', 'retentionperiod']\n \n def _check_ref(self, value, parameters, resources, path): # pylint: disable=W0613\n- print('called')\n print(value)\n \n def match(self, cfn):\n@@ -62,7 +62,9 @@\n 'AWS::RDS::DBInstance': [\n {\n 'Attribute': 'BackupRetentionPeriod',\n- 'SourceUrl': 'http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-rds-database-instance.html#cfn-rds-dbinstance-backupretentionperiod'\n+ 'SourceUrl': 'http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-rds-database-instance.html#cfn-rds-dbinstance-backupretentionperiod',\n+ 'CheckAttribute': 'Engine',\n+ 'CheckAttributeRegex': re.compile('^((?!aurora).)*$'),\n }\n ],\n 'AWS::RDS::DBCluster': [\n@@ -81,23 +83,29 @@\n for property_set, path in property_sets:\n error_path = ['Resources', r_name] + path\n if not property_set:\n- message = 'The default retention period will delete the data after a pre-defined time. Set an explicit values to avoid data loss on resource : %s' % '/'.join(\n- str(x) for x in error_path)\n+ message = f'The default retention period will delete the data after a pre-defined time. Set an explicit values to avoid data loss on resource : {\"/\".join(str(x) for x in error_path)}'\n matches.append(RuleMatch(error_path, message))\n else:\n value = property_set.get(attr_def.get('Attribute'))\n if not value:\n- message = 'The default retention period will delete the data after a pre-defined time. Set an explicit values to avoid data loss on resource : %s' % '/'.join(\n- str(x) for x in error_path)\n- matches.append(RuleMatch(error_path, message))\n+ message = f'The default retention period will delete the data after a pre-defined time. Set an explicit values to avoid data loss on resource : {\"/\".join(str(x) for x in error_path)}'\n+ if attr_def.get('CheckAttribute'):\n+ if self._validate_property(property_set.get(attr_def.get('CheckAttribute')), attr_def.get('CheckAttributeRegex')):\n+ matches.append(RuleMatch(error_path, message))\n+ else:\n+ matches.append(RuleMatch(error_path, message))\n if isinstance(value, dict):\n # pylint: disable=protected-access\n refs = cfn._search_deep_keys(\n 'Ref', value, error_path + [attr_def.get('Attribute')])\n for ref in refs:\n if ref[-1] == 'AWS::NoValue':\n- message = 'The default retention period will delete the data after a pre-defined time. Set an explicit values to avoid data loss on resource : %s' % '/'.join(\n- str(x) for x in ref[0:-1])\n+ message = f'The default retention period will delete the data after a pre-defined time. 
Set an explicit values to avoid data loss on resource : {\"/\".join(str(x) for x in ref[0:-1])}'\n matches.append(RuleMatch(ref[0:-1], message))\n \n return matches\n+\n+ def _validate_property(self, value, regex) -> bool:\n+ if regex.match(value):\n+ return True\n+ return False\n", "issue": "I3013 - Incorrectly flagged for aurora-<engine> database instances\n**cfn-lint version**: 0.61.3\r\n\r\ncfn-lint is falsely flagging the missing `BackupRetentionPeriod` property and it's subsequent value from `AWS::RDS:DBInstance` for an Aurora based database instance.\r\n\r\nThis property and a set value **is not required** if it is part of an Aurora cluster. Assumption is the `Engine` property value would need to be checked for the presence of `aurora-<engine>` to not flag this linting issue.\r\n\r\nI suspect if the value is set via a CloudFormation function cfn-lint would still incorrectly flag this.\r\n\r\n**AWS Documentation reference** - https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-rds-database-instance.html#cfn-rds-dbinstance-backupretentionperiod\r\n\r\n> Amazon Aurora\r\n> Not applicable. The retention period for automated backups is managed by the DB cluster.\r\n\r\n**CloudFormation example**\r\n\r\n```yaml\r\nAuroraInstance:\r\n Type: AWS::RDS::DBInstance\r\n DeletionPolicy: Retain\r\n UpdateReplacePolicy: Retain\r\n Properties:\r\n AllowMajorVersionUpgrade: false\r\n AutoMinorVersionUpgrade: !Ref AutoMinorVersionUpgrade\r\n DBClusterIdentifier: !Ref AuroraCluster\r\n DBInstanceClass: !Ref InstanceClass\r\n DBInstanceIdentifier: \"MyAuroraInstance\"\r\n DBParameterGroupName: !Ref ParamGroup\r\n DBSubnetGroupName: !Ref SubnetGroup\r\n DeleteAutomatedBackups: !Ref DeleteAutomatedBackups\r\n EnablePerformanceInsights: !Ref EnablePerformanceInsights\r\n Engine: aurora-postgresql\r\n EngineVersion: !Ref EngineVersion\r\n PerformanceInsightsKMSKeyId: !Ref KmsKey\r\n PerformanceInsightsRetentionPeriod: 7\r\n PubliclyAccessible: false\r\n```\r\n\n", "before_files": [{"content": "\"\"\"\nCopyright Amazon.com, Inc. or its affiliates. All Rights Reserved.\nSPDX-License-Identifier: MIT-0\n\"\"\"\nfrom cfnlint.rules import CloudFormationLintRule\nfrom cfnlint.rules import RuleMatch\n\n\nclass RetentionPeriodOnResourceTypesWithAutoExpiringContent(CloudFormationLintRule):\n \"\"\"Check for RetentionPeriod \"\"\"\n id = 'I3013'\n shortdesc = 'Check resources with auto expiring content have explicit retention period'\n description = 'The behaviour for data retention is different across AWS Services.'\\\n 'If no retention period is specified the default for some services is to delete the data after a period of time.' 
\\\n 'This check requires you to explicitly set the retention period for those resources to avoid unexpected data losses'\n source_url = 'https://github.com/aws-cloudformation/cfn-python-lint'\n tags = ['resources', 'retentionperiod']\n\n def _check_ref(self, value, parameters, resources, path): # pylint: disable=W0613\n print('called')\n print(value)\n\n def match(self, cfn):\n \"\"\"Check for RetentionPeriod\"\"\"\n matches = []\n\n retention_attributes_by_resource_type = {\n 'AWS::Kinesis::Stream': [\n {\n 'Attribute': 'RetentionPeriodHours',\n 'SourceUrl': 'http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-kinesis-stream.html#cfn-kinesis-stream-retentionperiodhours'\n }\n ],\n 'AWS::SQS::Queue': [\n {\n 'Attribute': 'MessageRetentionPeriod',\n 'SourceUrl': 'http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sqs-queues.html#aws-sqs-queue-msgretentionperiod'\n }\n ],\n 'AWS::DocDB::DBCluster': [\n {\n 'Attribute': 'BackupRetentionPeriod',\n 'SourceUrl': 'http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-docdb-dbcluster.html#cfn-docdb-dbcluster-backupretentionperiod'\n }\n ],\n 'AWS::Synthetics::Canary': [\n {\n 'Attribute': 'SuccessRetentionPeriod',\n 'SourceUrl': 'http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-synthetics-canary.html#cfn-synthetics-canary-successretentionperiod'\n },\n {\n 'Attribute': 'FailureRetentionPeriod',\n 'SourceUrl': 'http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-synthetics-canary.html#cfn-synthetics-canary-failureretentionperiod'\n }\n ],\n 'AWS::Redshift::Cluster': [\n {\n 'Attribute': 'AutomatedSnapshotRetentionPeriod',\n 'SourceUrl': 'http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-redshift-cluster.html#cfn-redshift-cluster-automatedsnapshotretentionperiod'\n }\n ],\n 'AWS::RDS::DBInstance': [\n {\n 'Attribute': 'BackupRetentionPeriod',\n 'SourceUrl': 'http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-rds-database-instance.html#cfn-rds-dbinstance-backupretentionperiod'\n }\n ],\n 'AWS::RDS::DBCluster': [\n {\n 'Attribute': 'BackupRetentionPeriod',\n 'SourceUrl': 'http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-rds-dbcluster.html#cfn-rds-dbcluster-backuprententionperiod'\n }\n ]\n }\n\n resources = cfn.get_resources()\n for r_name, r_values in resources.items():\n if r_values.get('Type') in retention_attributes_by_resource_type:\n for attr_def in retention_attributes_by_resource_type[r_values.get('Type')]:\n property_sets = r_values.get_safe('Properties')\n for property_set, path in property_sets:\n error_path = ['Resources', r_name] + path\n if not property_set:\n message = 'The default retention period will delete the data after a pre-defined time. Set an explicit values to avoid data loss on resource : %s' % '/'.join(\n str(x) for x in error_path)\n matches.append(RuleMatch(error_path, message))\n else:\n value = property_set.get(attr_def.get('Attribute'))\n if not value:\n message = 'The default retention period will delete the data after a pre-defined time. 
Set an explicit values to avoid data loss on resource : %s' % '/'.join(\n str(x) for x in error_path)\n matches.append(RuleMatch(error_path, message))\n if isinstance(value, dict):\n # pylint: disable=protected-access\n refs = cfn._search_deep_keys(\n 'Ref', value, error_path + [attr_def.get('Attribute')])\n for ref in refs:\n if ref[-1] == 'AWS::NoValue':\n message = 'The default retention period will delete the data after a pre-defined time. Set an explicit values to avoid data loss on resource : %s' % '/'.join(\n str(x) for x in ref[0:-1])\n matches.append(RuleMatch(ref[0:-1], message))\n\n return matches\n", "path": "src/cfnlint/rules/resources/RetentionPeriodOnResourceTypesWithAutoExpiringContent.py"}], "after_files": [{"content": "\"\"\"\nCopyright Amazon.com, Inc. or its affiliates. All Rights Reserved.\nSPDX-License-Identifier: MIT-0\n\"\"\"\nimport re\nfrom cfnlint.rules import CloudFormationLintRule\nfrom cfnlint.rules import RuleMatch\n\n\nclass RetentionPeriodOnResourceTypesWithAutoExpiringContent(CloudFormationLintRule):\n \"\"\"Check for RetentionPeriod \"\"\"\n id = 'I3013'\n shortdesc = 'Check resources with auto expiring content have explicit retention period'\n description = 'The behaviour for data retention is different across AWS Services.'\\\n 'If no retention period is specified the default for some services is to delete the data after a period of time.' \\\n 'This check requires you to explicitly set the retention period for those resources to avoid unexpected data losses'\n source_url = 'https://github.com/aws-cloudformation/cfn-python-lint'\n tags = ['resources', 'retentionperiod']\n\n def _check_ref(self, value, parameters, resources, path): # pylint: disable=W0613\n print(value)\n\n def match(self, cfn):\n \"\"\"Check for RetentionPeriod\"\"\"\n matches = []\n\n retention_attributes_by_resource_type = {\n 'AWS::Kinesis::Stream': [\n {\n 'Attribute': 'RetentionPeriodHours',\n 'SourceUrl': 'http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-kinesis-stream.html#cfn-kinesis-stream-retentionperiodhours'\n }\n ],\n 'AWS::SQS::Queue': [\n {\n 'Attribute': 'MessageRetentionPeriod',\n 'SourceUrl': 'http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sqs-queues.html#aws-sqs-queue-msgretentionperiod'\n }\n ],\n 'AWS::DocDB::DBCluster': [\n {\n 'Attribute': 'BackupRetentionPeriod',\n 'SourceUrl': 'http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-docdb-dbcluster.html#cfn-docdb-dbcluster-backupretentionperiod'\n }\n ],\n 'AWS::Synthetics::Canary': [\n {\n 'Attribute': 'SuccessRetentionPeriod',\n 'SourceUrl': 'http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-synthetics-canary.html#cfn-synthetics-canary-successretentionperiod'\n },\n {\n 'Attribute': 'FailureRetentionPeriod',\n 'SourceUrl': 'http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-synthetics-canary.html#cfn-synthetics-canary-failureretentionperiod'\n }\n ],\n 'AWS::Redshift::Cluster': [\n {\n 'Attribute': 'AutomatedSnapshotRetentionPeriod',\n 'SourceUrl': 'http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-redshift-cluster.html#cfn-redshift-cluster-automatedsnapshotretentionperiod'\n }\n ],\n 'AWS::RDS::DBInstance': [\n {\n 'Attribute': 'BackupRetentionPeriod',\n 'SourceUrl': 'http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-rds-database-instance.html#cfn-rds-dbinstance-backupretentionperiod',\n 'CheckAttribute': 'Engine',\n 
'CheckAttributeRegex': re.compile('^((?!aurora).)*$'),\n }\n ],\n 'AWS::RDS::DBCluster': [\n {\n 'Attribute': 'BackupRetentionPeriod',\n 'SourceUrl': 'http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-rds-dbcluster.html#cfn-rds-dbcluster-backuprententionperiod'\n }\n ]\n }\n\n resources = cfn.get_resources()\n for r_name, r_values in resources.items():\n if r_values.get('Type') in retention_attributes_by_resource_type:\n for attr_def in retention_attributes_by_resource_type[r_values.get('Type')]:\n property_sets = r_values.get_safe('Properties')\n for property_set, path in property_sets:\n error_path = ['Resources', r_name] + path\n if not property_set:\n message = f'The default retention period will delete the data after a pre-defined time. Set an explicit values to avoid data loss on resource : {\"/\".join(str(x) for x in error_path)}'\n matches.append(RuleMatch(error_path, message))\n else:\n value = property_set.get(attr_def.get('Attribute'))\n if not value:\n message = f'The default retention period will delete the data after a pre-defined time. Set an explicit values to avoid data loss on resource : {\"/\".join(str(x) for x in error_path)}'\n if attr_def.get('CheckAttribute'):\n if self._validate_property(property_set.get(attr_def.get('CheckAttribute')), attr_def.get('CheckAttributeRegex')):\n matches.append(RuleMatch(error_path, message))\n else:\n matches.append(RuleMatch(error_path, message))\n if isinstance(value, dict):\n # pylint: disable=protected-access\n refs = cfn._search_deep_keys(\n 'Ref', value, error_path + [attr_def.get('Attribute')])\n for ref in refs:\n if ref[-1] == 'AWS::NoValue':\n message = f'The default retention period will delete the data after a pre-defined time. Set an explicit values to avoid data loss on resource : {\"/\".join(str(x) for x in ref[0:-1])}'\n matches.append(RuleMatch(ref[0:-1], message))\n\n return matches\n\n def _validate_property(self, value, regex) -> bool:\n if regex.match(value):\n return True\n return False\n", "path": "src/cfnlint/rules/resources/RetentionPeriodOnResourceTypesWithAutoExpiringContent.py"}]} | 2,000 | 916 |
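The cfn-lint fix above keys on the negative-lookahead pattern `^((?!aurora).)*$` to decide whether `BackupRetentionPeriod` should be enforced. A minimal sketch of that check in isolation (the engine names below are illustrative assumptions, not taken from the record):

```python
import re

# Same pattern as the golden diff: matches only engine names that do NOT contain "aurora".
NON_AURORA = re.compile(r"^((?!aurora).)*$")

def needs_explicit_backup_retention(engine: str) -> bool:
    # Aurora instances delegate backup retention to their DB cluster, so they are skipped.
    return bool(NON_AURORA.match(engine))

# Illustrative engine names (assumed):
for engine in ("postgres", "mysql", "aurora-mysql", "aurora-postgresql"):
    print(engine, needs_explicit_backup_retention(engine))
# postgres True, mysql True, aurora-mysql False, aurora-postgresql False
```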
gh_patches_debug_7923 | rasdani/github-patches | git_diff | plotly__dash-2734 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Feature Request] Python 3.10 support
**Is your feature request related to a problem? Please describe.**
Hi, I'm wondering if Dash supports Python 3.10 or if this is on the roadmap. Thanks for all your great work!
**Describe the solution you'd like**
As a user of Dash, I would like to be able to install Dash and/or Plotly in a Python 3.10 environment.
**Describe alternatives you've considered**
I tried [searching the forums](https://community.plotly.com/search?q=python%203.10), but didn't find any results. I haven't tried building it myself yet, but plan to.
**Additional context**
n/a
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 import io
2 from setuptools import setup, find_packages
3
4 main_ns = {}
5 exec(open("dash/version.py", encoding="utf-8").read(), main_ns) # pylint: disable=exec-used, consider-using-with
6
7
8 def read_req_file(req_type):
9 with open(f"requires-{req_type}.txt", encoding="utf-8") as fp:
10 requires = (line.strip() for line in fp)
11 return [req for req in requires if req and not req.startswith("#")]
12
13
14 setup(
15 name="dash",
16 version=main_ns["__version__"],
17 author="Chris Parmer",
18 author_email="[email protected]",
19 packages=find_packages(exclude=["tests*"]),
20 include_package_data=True,
21 license="MIT",
22 description=(
23 "A Python framework for building reactive web-apps. "
24 "Developed by Plotly."
25 ),
26 long_description=io.open("README.md", encoding="utf-8").read(), # pylint: disable=consider-using-with
27 long_description_content_type="text/markdown",
28 install_requires=read_req_file("install"),
29 python_requires=">=3.6",
30 extras_require={
31 "ci": read_req_file("ci"),
32 "dev": read_req_file("dev"),
33 "testing": read_req_file("testing"),
34 "celery": read_req_file("celery"),
35 "diskcache": read_req_file("diskcache"),
36 "compress": read_req_file("compress")
37 },
38 entry_points={
39 "console_scripts": [
40 "dash-generate-components = "
41 "dash.development.component_generator:cli",
42 "renderer = dash.development.build_process:renderer",
43 "dash-update-components = dash.development.update_components:cli"
44 ],
45 "pytest11": ["dash = dash.testing.plugin"],
46 },
47 url="https://plotly.com/dash",
48 project_urls={
49 "Documentation": "https://dash.plotly.com",
50 "Source": "https://github.com/plotly/dash",
51 "Issue Tracker": "https://github.com/plotly/dash/issues",
52 },
53 classifiers=[
54 "Development Status :: 5 - Production/Stable",
55 "Environment :: Web Environment",
56 "Framework :: Dash",
57 "Framework :: Flask",
58 "Intended Audience :: Developers",
59 "Intended Audience :: Education",
60 "Intended Audience :: Financial and Insurance Industry",
61 "Intended Audience :: Healthcare Industry",
62 "Intended Audience :: Manufacturing",
63 "Intended Audience :: Science/Research",
64 "License :: OSI Approved :: MIT License",
65 "Programming Language :: Python",
66 "Programming Language :: Python :: 3",
67 "Programming Language :: Python :: 3.6",
68 "Programming Language :: Python :: 3.7",
69 "Programming Language :: Python :: 3.8",
70 "Programming Language :: Python :: 3.9",
71 "Topic :: Database :: Front-Ends",
72 "Topic :: Office/Business :: Financial :: Spreadsheet",
73 "Topic :: Scientific/Engineering :: Visualization",
74 "Topic :: Software Development :: Libraries :: Application Frameworks",
75 "Topic :: Software Development :: Widget Sets",
76 ],
77 data_files=[
78 # like `jupyter nbextension install --sys-prefix`
79 ("share/jupyter/nbextensions/dash", [
80 "dash/nbextension/main.js",
81 ]),
82 # like `jupyter nbextension enable --sys-prefix`
83 ("etc/jupyter/nbconfig/notebook.d", [
84 "dash/nbextension/dash.json"
85 ]),
86 # Place jupyterlab extension in extension directory
87 ("share/jupyter/lab/extensions", [
88 "dash/labextension/dist/dash-jupyterlab.tgz"
89 ]),
90 ],
91 )
92
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -68,6 +68,7 @@
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
+ "Programming Language :: Python :: 3.10",
"Topic :: Database :: Front-Ends",
"Topic :: Office/Business :: Financial :: Spreadsheet",
"Topic :: Scientific/Engineering :: Visualization",
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -68,6 +68,7 @@\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n+ \"Programming Language :: Python :: 3.10\",\n \"Topic :: Database :: Front-Ends\",\n \"Topic :: Office/Business :: Financial :: Spreadsheet\",\n \"Topic :: Scientific/Engineering :: Visualization\",\n", "issue": "[Feature Request] Python 3.10 support\n**Is your feature request related to a problem? Please describe.**\r\nHi, I'm wondering if Dash supports Python 3.10 or if this is on the roadmap. Thanks for all your great work!\r\n\r\n**Describe the solution you'd like**\r\nAs a user of Dash, I would like to be able to install Dash and/or Plotly in a Python 3.10 environment.\r\n\r\n**Describe alternatives you've considered**\r\nI tried [searching the forums](https://community.plotly.com/search?q=python%203.10), but didn't find any results. I haven't tried building it myself yet, but plan to.\r\n\r\n**Additional context**\r\nn/a\r\n\n", "before_files": [{"content": "import io\nfrom setuptools import setup, find_packages\n\nmain_ns = {}\nexec(open(\"dash/version.py\", encoding=\"utf-8\").read(), main_ns) # pylint: disable=exec-used, consider-using-with\n\n\ndef read_req_file(req_type):\n with open(f\"requires-{req_type}.txt\", encoding=\"utf-8\") as fp:\n requires = (line.strip() for line in fp)\n return [req for req in requires if req and not req.startswith(\"#\")]\n\n\nsetup(\n name=\"dash\",\n version=main_ns[\"__version__\"],\n author=\"Chris Parmer\",\n author_email=\"[email protected]\",\n packages=find_packages(exclude=[\"tests*\"]),\n include_package_data=True,\n license=\"MIT\",\n description=(\n \"A Python framework for building reactive web-apps. 
\"\n \"Developed by Plotly.\"\n ),\n long_description=io.open(\"README.md\", encoding=\"utf-8\").read(), # pylint: disable=consider-using-with\n long_description_content_type=\"text/markdown\",\n install_requires=read_req_file(\"install\"),\n python_requires=\">=3.6\",\n extras_require={\n \"ci\": read_req_file(\"ci\"),\n \"dev\": read_req_file(\"dev\"),\n \"testing\": read_req_file(\"testing\"),\n \"celery\": read_req_file(\"celery\"),\n \"diskcache\": read_req_file(\"diskcache\"),\n \"compress\": read_req_file(\"compress\")\n },\n entry_points={\n \"console_scripts\": [\n \"dash-generate-components = \"\n \"dash.development.component_generator:cli\",\n \"renderer = dash.development.build_process:renderer\",\n \"dash-update-components = dash.development.update_components:cli\"\n ],\n \"pytest11\": [\"dash = dash.testing.plugin\"],\n },\n url=\"https://plotly.com/dash\",\n project_urls={\n \"Documentation\": \"https://dash.plotly.com\",\n \"Source\": \"https://github.com/plotly/dash\",\n \"Issue Tracker\": \"https://github.com/plotly/dash/issues\",\n },\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Environment :: Web Environment\",\n \"Framework :: Dash\",\n \"Framework :: Flask\",\n \"Intended Audience :: Developers\",\n \"Intended Audience :: Education\",\n \"Intended Audience :: Financial and Insurance Industry\",\n \"Intended Audience :: Healthcare Industry\",\n \"Intended Audience :: Manufacturing\",\n \"Intended Audience :: Science/Research\",\n \"License :: OSI Approved :: MIT License\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Topic :: Database :: Front-Ends\",\n \"Topic :: Office/Business :: Financial :: Spreadsheet\",\n \"Topic :: Scientific/Engineering :: Visualization\",\n \"Topic :: Software Development :: Libraries :: Application Frameworks\",\n \"Topic :: Software Development :: Widget Sets\",\n ],\n data_files=[\n # like `jupyter nbextension install --sys-prefix`\n (\"share/jupyter/nbextensions/dash\", [\n \"dash/nbextension/main.js\",\n ]),\n # like `jupyter nbextension enable --sys-prefix`\n (\"etc/jupyter/nbconfig/notebook.d\", [\n \"dash/nbextension/dash.json\"\n ]),\n # Place jupyterlab extension in extension directory\n (\"share/jupyter/lab/extensions\", [\n \"dash/labextension/dist/dash-jupyterlab.tgz\"\n ]),\n ],\n)\n", "path": "setup.py"}], "after_files": [{"content": "import io\nfrom setuptools import setup, find_packages\n\nmain_ns = {}\nexec(open(\"dash/version.py\", encoding=\"utf-8\").read(), main_ns) # pylint: disable=exec-used, consider-using-with\n\n\ndef read_req_file(req_type):\n with open(f\"requires-{req_type}.txt\", encoding=\"utf-8\") as fp:\n requires = (line.strip() for line in fp)\n return [req for req in requires if req and not req.startswith(\"#\")]\n\n\nsetup(\n name=\"dash\",\n version=main_ns[\"__version__\"],\n author=\"Chris Parmer\",\n author_email=\"[email protected]\",\n packages=find_packages(exclude=[\"tests*\"]),\n include_package_data=True,\n license=\"MIT\",\n description=(\n \"A Python framework for building reactive web-apps. 
\"\n \"Developed by Plotly.\"\n ),\n long_description=io.open(\"README.md\", encoding=\"utf-8\").read(), # pylint: disable=consider-using-with\n long_description_content_type=\"text/markdown\",\n install_requires=read_req_file(\"install\"),\n python_requires=\">=3.6\",\n extras_require={\n \"ci\": read_req_file(\"ci\"),\n \"dev\": read_req_file(\"dev\"),\n \"testing\": read_req_file(\"testing\"),\n \"celery\": read_req_file(\"celery\"),\n \"diskcache\": read_req_file(\"diskcache\"),\n \"compress\": read_req_file(\"compress\")\n },\n entry_points={\n \"console_scripts\": [\n \"dash-generate-components = \"\n \"dash.development.component_generator:cli\",\n \"renderer = dash.development.build_process:renderer\",\n \"dash-update-components = dash.development.update_components:cli\"\n ],\n \"pytest11\": [\"dash = dash.testing.plugin\"],\n },\n url=\"https://plotly.com/dash\",\n project_urls={\n \"Documentation\": \"https://dash.plotly.com\",\n \"Source\": \"https://github.com/plotly/dash\",\n \"Issue Tracker\": \"https://github.com/plotly/dash/issues\",\n },\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Environment :: Web Environment\",\n \"Framework :: Dash\",\n \"Framework :: Flask\",\n \"Intended Audience :: Developers\",\n \"Intended Audience :: Education\",\n \"Intended Audience :: Financial and Insurance Industry\",\n \"Intended Audience :: Healthcare Industry\",\n \"Intended Audience :: Manufacturing\",\n \"Intended Audience :: Science/Research\",\n \"License :: OSI Approved :: MIT License\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Topic :: Database :: Front-Ends\",\n \"Topic :: Office/Business :: Financial :: Spreadsheet\",\n \"Topic :: Scientific/Engineering :: Visualization\",\n \"Topic :: Software Development :: Libraries :: Application Frameworks\",\n \"Topic :: Software Development :: Widget Sets\",\n ],\n data_files=[\n # like `jupyter nbextension install --sys-prefix`\n (\"share/jupyter/nbextensions/dash\", [\n \"dash/nbextension/main.js\",\n ]),\n # like `jupyter nbextension enable --sys-prefix`\n (\"etc/jupyter/nbconfig/notebook.d\", [\n \"dash/nbextension/dash.json\"\n ]),\n # Place jupyterlab extension in extension directory\n (\"share/jupyter/lab/extensions\", [\n \"dash/labextension/dist/dash-jupyterlab.tgz\"\n ]),\n ],\n)\n", "path": "setup.py"}]} | 1,371 | 115 |
gh_patches_debug_1562 | rasdani/github-patches | git_diff | pytorch__ignite-1629 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Images not rendered on PyPI
## 📚 Documentation
The logo and other assets on the project README.md are not rendered on the PyPI page of [pytorch-ignite](https://pypi.org/project/pytorch-ignite/)
This is because PyPI does not read the repo for the images, and we'll have to use external links for the image. (reference [here](https://stackoverflow.com/questions/41983209/how-do-i-add-images-to-a-pypi-readme-that-works-on-github))
We could use the `https://raw.githubusercontent.com/*` counterparts for the assets instead of specifying the path
By specifying the path, I mean the following
https://github.com/pytorch/ignite/blob/6753b19b74fd8d128188dd0a75b405d19aa515b5/README.md#L5
Just a suggestion, please ignore if it is unnecessary :)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 import io
2 import os
3 import re
4
5 from setuptools import find_packages, setup
6
7
8 def read(*names, **kwargs):
9 with io.open(os.path.join(os.path.dirname(__file__), *names), encoding=kwargs.get("encoding", "utf8")) as fp:
10 return fp.read()
11
12
13 def find_version(*file_paths):
14 version_file = read(*file_paths)
15 version_match = re.search(r"^__version__ = ['\"]([^'\"]*)['\"]", version_file, re.M)
16 if version_match:
17 return version_match.group(1)
18 raise RuntimeError("Unable to find version string.")
19
20
21 readme = read("README.md")
22
23 VERSION = find_version("ignite", "__init__.py")
24
25 requirements = [
26 "torch>=1.3,<2",
27 ]
28
29 setup(
30 # Metadata
31 name="pytorch-ignite",
32 version=VERSION,
33 author="PyTorch Core Team",
34 author_email="[email protected]",
35 url="https://github.com/pytorch/ignite",
36 description="A lightweight library to help with training neural networks in PyTorch.",
37 long_description_content_type="text/markdown",
38 long_description=readme,
39 license="BSD",
40 # Package info
41 packages=find_packages(exclude=("tests", "tests.*",)),
42 zip_safe=True,
43 install_requires=requirements,
44 )
45
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -18,7 +18,9 @@
raise RuntimeError("Unable to find version string.")
-readme = read("README.md")
+readme = read("README.md").replace(
+ 'src="assets/', 'src="https://raw.githubusercontent.com/pytorch/ignite/master/assets/'
+)
VERSION = find_version("ignite", "__init__.py")
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -18,7 +18,9 @@\n raise RuntimeError(\"Unable to find version string.\")\n \n \n-readme = read(\"README.md\")\n+readme = read(\"README.md\").replace(\n+ 'src=\"assets/', 'src=\"https://raw.githubusercontent.com/pytorch/ignite/master/assets/'\n+)\n \n VERSION = find_version(\"ignite\", \"__init__.py\")\n", "issue": "Images not rendered on PyPI\n## \ud83d\udcda Documentation\r\nThe logo and other assets on the project README.md are not rendered on the PyPI page of [pytorch-ignite](https://pypi.org/project/pytorch-ignite/)\r\nThis is because PyPI does not read the repo for the images, and we'll have to use external links for the image. (reference [here](https://stackoverflow.com/questions/41983209/how-do-i-add-images-to-a-pypi-readme-that-works-on-github))\r\nWe could use the `https://raw.githubusercontent.com/*` counterparts for the assets instead of specifying the path\r\n\r\nBy specifying the path, I mean the following\r\nhttps://github.com/pytorch/ignite/blob/6753b19b74fd8d128188dd0a75b405d19aa515b5/README.md#L5\r\n\r\nJust a suggestion, please ignore if it is unnecessary :)\r\n\n", "before_files": [{"content": "import io\nimport os\nimport re\n\nfrom setuptools import find_packages, setup\n\n\ndef read(*names, **kwargs):\n with io.open(os.path.join(os.path.dirname(__file__), *names), encoding=kwargs.get(\"encoding\", \"utf8\")) as fp:\n return fp.read()\n\n\ndef find_version(*file_paths):\n version_file = read(*file_paths)\n version_match = re.search(r\"^__version__ = ['\\\"]([^'\\\"]*)['\\\"]\", version_file, re.M)\n if version_match:\n return version_match.group(1)\n raise RuntimeError(\"Unable to find version string.\")\n\n\nreadme = read(\"README.md\")\n\nVERSION = find_version(\"ignite\", \"__init__.py\")\n\nrequirements = [\n \"torch>=1.3,<2\",\n]\n\nsetup(\n # Metadata\n name=\"pytorch-ignite\",\n version=VERSION,\n author=\"PyTorch Core Team\",\n author_email=\"[email protected]\",\n url=\"https://github.com/pytorch/ignite\",\n description=\"A lightweight library to help with training neural networks in PyTorch.\",\n long_description_content_type=\"text/markdown\",\n long_description=readme,\n license=\"BSD\",\n # Package info\n packages=find_packages(exclude=(\"tests\", \"tests.*\",)),\n zip_safe=True,\n install_requires=requirements,\n)\n", "path": "setup.py"}], "after_files": [{"content": "import io\nimport os\nimport re\n\nfrom setuptools import find_packages, setup\n\n\ndef read(*names, **kwargs):\n with io.open(os.path.join(os.path.dirname(__file__), *names), encoding=kwargs.get(\"encoding\", \"utf8\")) as fp:\n return fp.read()\n\n\ndef find_version(*file_paths):\n version_file = read(*file_paths)\n version_match = re.search(r\"^__version__ = ['\\\"]([^'\\\"]*)['\\\"]\", version_file, re.M)\n if version_match:\n return version_match.group(1)\n raise RuntimeError(\"Unable to find version string.\")\n\n\nreadme = read(\"README.md\").replace(\n 'src=\"assets/', 'src=\"https://raw.githubusercontent.com/pytorch/ignite/master/assets/'\n)\n\nVERSION = find_version(\"ignite\", \"__init__.py\")\n\nrequirements = [\n \"torch>=1.3,<2\",\n]\n\nsetup(\n # Metadata\n name=\"pytorch-ignite\",\n version=VERSION,\n author=\"PyTorch Core Team\",\n author_email=\"[email protected]\",\n url=\"https://github.com/pytorch/ignite\",\n description=\"A lightweight library to help with training neural networks in PyTorch.\",\n long_description_content_type=\"text/markdown\",\n long_description=readme,\n 
license=\"BSD\",\n # Package info\n packages=find_packages(exclude=(\"tests\", \"tests.*\",)),\n zip_safe=True,\n install_requires=requirements,\n)\n", "path": "setup.py"}]} | 840 | 99 |
gh_patches_debug_35572 | rasdani/github-patches | git_diff | pyro-ppl__pyro-3204 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
module 'torch.distributions.constraints' has no attribute '_CorrCholesky'
Moving here from [this PyTorch issue](https://github.com/pytorch/pytorch/issues/77555).
We should
1. fix `patch_dependency` to gracefully fail on `AttributeError`
2. remove the old `_CorrCholesky` patch at some point
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pyro/optim/pytorch_optimizers.py`
Content:
```
1 # Copyright (c) 2017-2019 Uber Technologies, Inc.
2 # SPDX-License-Identifier: Apache-2.0
3
4 import torch
5
6 from pyro.optim import PyroOptim
7 from pyro.optim.lr_scheduler import PyroLRScheduler
8
9 __all__ = []
10 # Programmatically load all optimizers from PyTorch.
11 for _name, _Optim in torch.optim.__dict__.items():
12 if not isinstance(_Optim, type):
13 continue
14 if not issubclass(_Optim, torch.optim.Optimizer):
15 continue
16 if _Optim is torch.optim.Optimizer:
17 continue
18 if _Optim is torch.optim.LBFGS:
19 # XXX LBFGS is not supported for SVI yet
20 continue
21
22 _PyroOptim = (
23 lambda _Optim: lambda optim_args, clip_args=None: PyroOptim(
24 _Optim, optim_args, clip_args
25 )
26 )(_Optim)
27 _PyroOptim.__name__ = _name
28 _PyroOptim.__doc__ = "Wraps :class:`torch.optim.{}` with :class:`~pyro.optim.optim.PyroOptim`.".format(
29 _name
30 )
31
32 locals()[_name] = _PyroOptim
33 __all__.append(_name)
34 del _PyroOptim
35
36 # Load all schedulers from PyTorch
37 # breaking change in torch >= 1.14: LRScheduler is new base class
38 if hasattr(torch.optim.lr_scheduler, "LRScheduler"):
39 _torch_scheduler_base = torch.optim.lr_scheduler.LRScheduler
40 else: # for torch < 1.13, _LRScheduler is base class
41 _torch_scheduler_base = torch.optim.lr_scheduler._LRScheduler
42
43 for _name, _Optim in torch.optim.lr_scheduler.__dict__.items():
44 if not isinstance(_Optim, type):
45 continue
46 if not issubclass(_Optim, _torch_scheduler_base) and _name != "ReduceLROnPlateau":
47 continue
48 if _Optim is torch.optim.Optimizer:
49 continue
50
51 _PyroOptim = (
52 lambda _Optim: lambda optim_args, clip_args=None: PyroLRScheduler(
53 _Optim, optim_args, clip_args
54 )
55 )(_Optim)
56 _PyroOptim.__name__ = _name
57 _PyroOptim.__doc__ = (
58 "Wraps :class:`torch.optim.{}` with ".format(_name)
59 + ":class:`~pyro.optim.lr_scheduler.PyroLRScheduler`."
60 )
61
62 locals()[_name] = _PyroOptim
63 __all__.append(_name)
64 del _PyroOptim
65
```
Path: `pyro/distributions/torch_patch.py`
Content:
```
1 # Copyright (c) 2017-2019 Uber Technologies, Inc.
2 # SPDX-License-Identifier: Apache-2.0
3
4 import functools
5 import math
6 import weakref
7
8 import torch
9
10
11 def patch_dependency(target, root_module=torch):
12 parts = target.split(".")
13 assert parts[0] == root_module.__name__
14 module = root_module
15 for part in parts[1:-1]:
16 module = getattr(module, part)
17 name = parts[-1]
18 old_fn = getattr(module, name, None)
19 old_fn = getattr(old_fn, "_pyro_unpatched", old_fn) # ensure patching is idempotent
20
21 def decorator(new_fn):
22 try:
23 functools.update_wrapper(new_fn, old_fn)
24 except Exception:
25 for attr in functools.WRAPPER_ASSIGNMENTS:
26 if hasattr(old_fn, attr):
27 setattr(new_fn, attr, getattr(old_fn, attr))
28 new_fn._pyro_unpatched = old_fn
29 setattr(module, name, new_fn)
30 return new_fn
31
32 return decorator
33
34
35 # TODO: Move upstream to allow for pickle serialization of transforms
36 @patch_dependency("torch.distributions.transforms.Transform.__getstate__")
37 def _Transform__getstate__(self):
38 attrs = {}
39 for k, v in self.__dict__.items():
40 if isinstance(v, weakref.ref):
41 attrs[k] = None
42 else:
43 attrs[k] = v
44 return attrs
45
46
47 # TODO move upstream
48 @patch_dependency("torch.distributions.transforms.Transform.clear_cache")
49 def _Transform_clear_cache(self):
50 if self._cache_size == 1:
51 self._cached_x_y = None, None
52
53
54 # TODO move upstream
55 @patch_dependency("torch.distributions.TransformedDistribution.clear_cache")
56 def _TransformedDistribution_clear_cache(self):
57 for t in self.transforms:
58 t.clear_cache()
59
60
61 # TODO fix https://github.com/pytorch/pytorch/issues/48054 upstream
62 @patch_dependency("torch.distributions.HalfCauchy.log_prob")
63 def _HalfCauchy_logprob(self, value):
64 if self._validate_args:
65 self._validate_sample(value)
66 value = torch.as_tensor(
67 value, dtype=self.base_dist.scale.dtype, device=self.base_dist.scale.device
68 )
69 log_prob = self.base_dist.log_prob(value) + math.log(2)
70 log_prob.masked_fill_(value.expand(log_prob.shape) < 0, -float("inf"))
71 return log_prob
72
73
74 @patch_dependency("torch.distributions.constraints._CorrCholesky.check")
75 def _CorrCholesky_check(self, value):
76 row_norm = torch.linalg.norm(value.detach(), dim=-1)
77 unit_row_norm = (row_norm - 1.0).abs().le(1e-4).all(dim=-1)
78 return torch.distributions.constraints.lower_cholesky.check(value) & unit_row_norm
79
80
81 # This adds a __call__ method to satisfy sphinx.
82 @patch_dependency("torch.distributions.utils.lazy_property.__call__")
83 def _lazy_property__call__(self):
84 raise NotImplementedError
85
86
87 __all__ = []
88
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pyro/distributions/torch_patch.py b/pyro/distributions/torch_patch.py
--- a/pyro/distributions/torch_patch.py
+++ b/pyro/distributions/torch_patch.py
@@ -3,31 +3,40 @@
import functools
import math
+import warnings
import weakref
import torch
def patch_dependency(target, root_module=torch):
- parts = target.split(".")
- assert parts[0] == root_module.__name__
- module = root_module
- for part in parts[1:-1]:
- module = getattr(module, part)
- name = parts[-1]
- old_fn = getattr(module, name, None)
- old_fn = getattr(old_fn, "_pyro_unpatched", old_fn) # ensure patching is idempotent
-
- def decorator(new_fn):
- try:
- functools.update_wrapper(new_fn, old_fn)
- except Exception:
- for attr in functools.WRAPPER_ASSIGNMENTS:
- if hasattr(old_fn, attr):
- setattr(new_fn, attr, getattr(old_fn, attr))
- new_fn._pyro_unpatched = old_fn
- setattr(module, name, new_fn)
- return new_fn
+ try:
+ parts = target.split(".")
+ assert parts[0] == root_module.__name__
+ module = root_module
+ for part in parts[1:-1]:
+ module = getattr(module, part)
+ name = parts[-1]
+ old_fn = getattr(module, name, None)
+ # Ensure patching is idempotent.
+ old_fn = getattr(old_fn, "_pyro_unpatched", old_fn)
+
+ def decorator(new_fn):
+ try:
+ functools.update_wrapper(new_fn, old_fn)
+ except Exception:
+ for attr in functools.WRAPPER_ASSIGNMENTS:
+ if hasattr(old_fn, attr):
+ setattr(new_fn, attr, getattr(old_fn, attr))
+ new_fn._pyro_unpatched = old_fn
+ setattr(module, name, new_fn)
+ return new_fn
+
+ except AttributeError:
+ warnings.warn(f"pyro patch_dependency target is stale: {target}")
+
+ def decorator(new_fn):
+ return new_fn
return decorator
diff --git a/pyro/optim/pytorch_optimizers.py b/pyro/optim/pytorch_optimizers.py
--- a/pyro/optim/pytorch_optimizers.py
+++ b/pyro/optim/pytorch_optimizers.py
@@ -36,9 +36,9 @@
# Load all schedulers from PyTorch
# breaking change in torch >= 1.14: LRScheduler is new base class
if hasattr(torch.optim.lr_scheduler, "LRScheduler"):
- _torch_scheduler_base = torch.optim.lr_scheduler.LRScheduler
+ _torch_scheduler_base = torch.optim.lr_scheduler.LRScheduler # type: ignore
else: # for torch < 1.13, _LRScheduler is base class
- _torch_scheduler_base = torch.optim.lr_scheduler._LRScheduler
+ _torch_scheduler_base = torch.optim.lr_scheduler._LRScheduler # type: ignore
for _name, _Optim in torch.optim.lr_scheduler.__dict__.items():
if not isinstance(_Optim, type):
| {"golden_diff": "diff --git a/pyro/distributions/torch_patch.py b/pyro/distributions/torch_patch.py\n--- a/pyro/distributions/torch_patch.py\n+++ b/pyro/distributions/torch_patch.py\n@@ -3,31 +3,40 @@\n \n import functools\n import math\n+import warnings\n import weakref\n \n import torch\n \n \n def patch_dependency(target, root_module=torch):\n- parts = target.split(\".\")\n- assert parts[0] == root_module.__name__\n- module = root_module\n- for part in parts[1:-1]:\n- module = getattr(module, part)\n- name = parts[-1]\n- old_fn = getattr(module, name, None)\n- old_fn = getattr(old_fn, \"_pyro_unpatched\", old_fn) # ensure patching is idempotent\n-\n- def decorator(new_fn):\n- try:\n- functools.update_wrapper(new_fn, old_fn)\n- except Exception:\n- for attr in functools.WRAPPER_ASSIGNMENTS:\n- if hasattr(old_fn, attr):\n- setattr(new_fn, attr, getattr(old_fn, attr))\n- new_fn._pyro_unpatched = old_fn\n- setattr(module, name, new_fn)\n- return new_fn\n+ try:\n+ parts = target.split(\".\")\n+ assert parts[0] == root_module.__name__\n+ module = root_module\n+ for part in parts[1:-1]:\n+ module = getattr(module, part)\n+ name = parts[-1]\n+ old_fn = getattr(module, name, None)\n+ # Ensure patching is idempotent.\n+ old_fn = getattr(old_fn, \"_pyro_unpatched\", old_fn)\n+\n+ def decorator(new_fn):\n+ try:\n+ functools.update_wrapper(new_fn, old_fn)\n+ except Exception:\n+ for attr in functools.WRAPPER_ASSIGNMENTS:\n+ if hasattr(old_fn, attr):\n+ setattr(new_fn, attr, getattr(old_fn, attr))\n+ new_fn._pyro_unpatched = old_fn\n+ setattr(module, name, new_fn)\n+ return new_fn\n+\n+ except AttributeError:\n+ warnings.warn(f\"pyro patch_dependency target is stale: {target}\")\n+\n+ def decorator(new_fn):\n+ return new_fn\n \n return decorator\n \ndiff --git a/pyro/optim/pytorch_optimizers.py b/pyro/optim/pytorch_optimizers.py\n--- a/pyro/optim/pytorch_optimizers.py\n+++ b/pyro/optim/pytorch_optimizers.py\n@@ -36,9 +36,9 @@\n # Load all schedulers from PyTorch\n # breaking change in torch >= 1.14: LRScheduler is new base class\n if hasattr(torch.optim.lr_scheduler, \"LRScheduler\"):\n- _torch_scheduler_base = torch.optim.lr_scheduler.LRScheduler\n+ _torch_scheduler_base = torch.optim.lr_scheduler.LRScheduler # type: ignore\n else: # for torch < 1.13, _LRScheduler is base class\n- _torch_scheduler_base = torch.optim.lr_scheduler._LRScheduler\n+ _torch_scheduler_base = torch.optim.lr_scheduler._LRScheduler # type: ignore\n \n for _name, _Optim in torch.optim.lr_scheduler.__dict__.items():\n if not isinstance(_Optim, type):\n", "issue": "module 'torch.distributions.constraints' has no attribute '_CorrCholesky'\nMoving here from [this PyTorch issue](https://github.com/pytorch/pytorch/issues/77555).\r\n\r\nWe should\r\n1. fix `patch_dependency` to gracefully fail on `AttributeError`\r\n2. 
remove the old `_CorrCholesky` patch at some point\n", "before_files": [{"content": "# Copyright (c) 2017-2019 Uber Technologies, Inc.\n# SPDX-License-Identifier: Apache-2.0\n\nimport torch\n\nfrom pyro.optim import PyroOptim\nfrom pyro.optim.lr_scheduler import PyroLRScheduler\n\n__all__ = []\n# Programmatically load all optimizers from PyTorch.\nfor _name, _Optim in torch.optim.__dict__.items():\n if not isinstance(_Optim, type):\n continue\n if not issubclass(_Optim, torch.optim.Optimizer):\n continue\n if _Optim is torch.optim.Optimizer:\n continue\n if _Optim is torch.optim.LBFGS:\n # XXX LBFGS is not supported for SVI yet\n continue\n\n _PyroOptim = (\n lambda _Optim: lambda optim_args, clip_args=None: PyroOptim(\n _Optim, optim_args, clip_args\n )\n )(_Optim)\n _PyroOptim.__name__ = _name\n _PyroOptim.__doc__ = \"Wraps :class:`torch.optim.{}` with :class:`~pyro.optim.optim.PyroOptim`.\".format(\n _name\n )\n\n locals()[_name] = _PyroOptim\n __all__.append(_name)\n del _PyroOptim\n\n# Load all schedulers from PyTorch\n# breaking change in torch >= 1.14: LRScheduler is new base class\nif hasattr(torch.optim.lr_scheduler, \"LRScheduler\"):\n _torch_scheduler_base = torch.optim.lr_scheduler.LRScheduler\nelse: # for torch < 1.13, _LRScheduler is base class\n _torch_scheduler_base = torch.optim.lr_scheduler._LRScheduler\n\nfor _name, _Optim in torch.optim.lr_scheduler.__dict__.items():\n if not isinstance(_Optim, type):\n continue\n if not issubclass(_Optim, _torch_scheduler_base) and _name != \"ReduceLROnPlateau\":\n continue\n if _Optim is torch.optim.Optimizer:\n continue\n\n _PyroOptim = (\n lambda _Optim: lambda optim_args, clip_args=None: PyroLRScheduler(\n _Optim, optim_args, clip_args\n )\n )(_Optim)\n _PyroOptim.__name__ = _name\n _PyroOptim.__doc__ = (\n \"Wraps :class:`torch.optim.{}` with \".format(_name)\n + \":class:`~pyro.optim.lr_scheduler.PyroLRScheduler`.\"\n )\n\n locals()[_name] = _PyroOptim\n __all__.append(_name)\n del _PyroOptim\n", "path": "pyro/optim/pytorch_optimizers.py"}, {"content": "# Copyright (c) 2017-2019 Uber Technologies, Inc.\n# SPDX-License-Identifier: Apache-2.0\n\nimport functools\nimport math\nimport weakref\n\nimport torch\n\n\ndef patch_dependency(target, root_module=torch):\n parts = target.split(\".\")\n assert parts[0] == root_module.__name__\n module = root_module\n for part in parts[1:-1]:\n module = getattr(module, part)\n name = parts[-1]\n old_fn = getattr(module, name, None)\n old_fn = getattr(old_fn, \"_pyro_unpatched\", old_fn) # ensure patching is idempotent\n\n def decorator(new_fn):\n try:\n functools.update_wrapper(new_fn, old_fn)\n except Exception:\n for attr in functools.WRAPPER_ASSIGNMENTS:\n if hasattr(old_fn, attr):\n setattr(new_fn, attr, getattr(old_fn, attr))\n new_fn._pyro_unpatched = old_fn\n setattr(module, name, new_fn)\n return new_fn\n\n return decorator\n\n\n# TODO: Move upstream to allow for pickle serialization of transforms\n@patch_dependency(\"torch.distributions.transforms.Transform.__getstate__\")\ndef _Transform__getstate__(self):\n attrs = {}\n for k, v in self.__dict__.items():\n if isinstance(v, weakref.ref):\n attrs[k] = None\n else:\n attrs[k] = v\n return attrs\n\n\n# TODO move upstream\n@patch_dependency(\"torch.distributions.transforms.Transform.clear_cache\")\ndef _Transform_clear_cache(self):\n if self._cache_size == 1:\n self._cached_x_y = None, None\n\n\n# TODO move upstream\n@patch_dependency(\"torch.distributions.TransformedDistribution.clear_cache\")\ndef 
_TransformedDistribution_clear_cache(self):\n for t in self.transforms:\n t.clear_cache()\n\n\n# TODO fix https://github.com/pytorch/pytorch/issues/48054 upstream\n@patch_dependency(\"torch.distributions.HalfCauchy.log_prob\")\ndef _HalfCauchy_logprob(self, value):\n if self._validate_args:\n self._validate_sample(value)\n value = torch.as_tensor(\n value, dtype=self.base_dist.scale.dtype, device=self.base_dist.scale.device\n )\n log_prob = self.base_dist.log_prob(value) + math.log(2)\n log_prob.masked_fill_(value.expand(log_prob.shape) < 0, -float(\"inf\"))\n return log_prob\n\n\n@patch_dependency(\"torch.distributions.constraints._CorrCholesky.check\")\ndef _CorrCholesky_check(self, value):\n row_norm = torch.linalg.norm(value.detach(), dim=-1)\n unit_row_norm = (row_norm - 1.0).abs().le(1e-4).all(dim=-1)\n return torch.distributions.constraints.lower_cholesky.check(value) & unit_row_norm\n\n\n# This adds a __call__ method to satisfy sphinx.\n@patch_dependency(\"torch.distributions.utils.lazy_property.__call__\")\ndef _lazy_property__call__(self):\n raise NotImplementedError\n\n\n__all__ = []\n", "path": "pyro/distributions/torch_patch.py"}], "after_files": [{"content": "# Copyright (c) 2017-2019 Uber Technologies, Inc.\n# SPDX-License-Identifier: Apache-2.0\n\nimport torch\n\nfrom pyro.optim import PyroOptim\nfrom pyro.optim.lr_scheduler import PyroLRScheduler\n\n__all__ = []\n# Programmatically load all optimizers from PyTorch.\nfor _name, _Optim in torch.optim.__dict__.items():\n if not isinstance(_Optim, type):\n continue\n if not issubclass(_Optim, torch.optim.Optimizer):\n continue\n if _Optim is torch.optim.Optimizer:\n continue\n if _Optim is torch.optim.LBFGS:\n # XXX LBFGS is not supported for SVI yet\n continue\n\n _PyroOptim = (\n lambda _Optim: lambda optim_args, clip_args=None: PyroOptim(\n _Optim, optim_args, clip_args\n )\n )(_Optim)\n _PyroOptim.__name__ = _name\n _PyroOptim.__doc__ = \"Wraps :class:`torch.optim.{}` with :class:`~pyro.optim.optim.PyroOptim`.\".format(\n _name\n )\n\n locals()[_name] = _PyroOptim\n __all__.append(_name)\n del _PyroOptim\n\n# Load all schedulers from PyTorch\n# breaking change in torch >= 1.14: LRScheduler is new base class\nif hasattr(torch.optim.lr_scheduler, \"LRScheduler\"):\n _torch_scheduler_base = torch.optim.lr_scheduler.LRScheduler # type: ignore\nelse: # for torch < 1.13, _LRScheduler is base class\n _torch_scheduler_base = torch.optim.lr_scheduler._LRScheduler # type: ignore\n\nfor _name, _Optim in torch.optim.lr_scheduler.__dict__.items():\n if not isinstance(_Optim, type):\n continue\n if not issubclass(_Optim, _torch_scheduler_base) and _name != \"ReduceLROnPlateau\":\n continue\n if _Optim is torch.optim.Optimizer:\n continue\n\n _PyroOptim = (\n lambda _Optim: lambda optim_args, clip_args=None: PyroLRScheduler(\n _Optim, optim_args, clip_args\n )\n )(_Optim)\n _PyroOptim.__name__ = _name\n _PyroOptim.__doc__ = (\n \"Wraps :class:`torch.optim.{}` with \".format(_name)\n + \":class:`~pyro.optim.lr_scheduler.PyroLRScheduler`.\"\n )\n\n locals()[_name] = _PyroOptim\n __all__.append(_name)\n del _PyroOptim\n", "path": "pyro/optim/pytorch_optimizers.py"}, {"content": "# Copyright (c) 2017-2019 Uber Technologies, Inc.\n# SPDX-License-Identifier: Apache-2.0\n\nimport functools\nimport math\nimport warnings\nimport weakref\n\nimport torch\n\n\ndef patch_dependency(target, root_module=torch):\n try:\n parts = target.split(\".\")\n assert parts[0] == root_module.__name__\n module = root_module\n for part in parts[1:-1]:\n module = 
getattr(module, part)\n name = parts[-1]\n old_fn = getattr(module, name, None)\n # Ensure patching is idempotent.\n old_fn = getattr(old_fn, \"_pyro_unpatched\", old_fn)\n\n def decorator(new_fn):\n try:\n functools.update_wrapper(new_fn, old_fn)\n except Exception:\n for attr in functools.WRAPPER_ASSIGNMENTS:\n if hasattr(old_fn, attr):\n setattr(new_fn, attr, getattr(old_fn, attr))\n new_fn._pyro_unpatched = old_fn\n setattr(module, name, new_fn)\n return new_fn\n\n except AttributeError:\n warnings.warn(f\"pyro patch_dependency target is stale: {target}\")\n\n def decorator(new_fn):\n return new_fn\n\n return decorator\n\n\n# TODO: Move upstream to allow for pickle serialization of transforms\n@patch_dependency(\"torch.distributions.transforms.Transform.__getstate__\")\ndef _Transform__getstate__(self):\n attrs = {}\n for k, v in self.__dict__.items():\n if isinstance(v, weakref.ref):\n attrs[k] = None\n else:\n attrs[k] = v\n return attrs\n\n\n# TODO move upstream\n@patch_dependency(\"torch.distributions.transforms.Transform.clear_cache\")\ndef _Transform_clear_cache(self):\n if self._cache_size == 1:\n self._cached_x_y = None, None\n\n\n# TODO move upstream\n@patch_dependency(\"torch.distributions.TransformedDistribution.clear_cache\")\ndef _TransformedDistribution_clear_cache(self):\n for t in self.transforms:\n t.clear_cache()\n\n\n# TODO fix https://github.com/pytorch/pytorch/issues/48054 upstream\n@patch_dependency(\"torch.distributions.HalfCauchy.log_prob\")\ndef _HalfCauchy_logprob(self, value):\n if self._validate_args:\n self._validate_sample(value)\n value = torch.as_tensor(\n value, dtype=self.base_dist.scale.dtype, device=self.base_dist.scale.device\n )\n log_prob = self.base_dist.log_prob(value) + math.log(2)\n log_prob.masked_fill_(value.expand(log_prob.shape) < 0, -float(\"inf\"))\n return log_prob\n\n\n@patch_dependency(\"torch.distributions.constraints._CorrCholesky.check\")\ndef _CorrCholesky_check(self, value):\n row_norm = torch.linalg.norm(value.detach(), dim=-1)\n unit_row_norm = (row_norm - 1.0).abs().le(1e-4).all(dim=-1)\n return torch.distributions.constraints.lower_cholesky.check(value) & unit_row_norm\n\n\n# This adds a __call__ method to satisfy sphinx.\n@patch_dependency(\"torch.distributions.utils.lazy_property.__call__\")\ndef _lazy_property__call__(self):\n raise NotImplementedError\n\n\n__all__ = []\n", "path": "pyro/distributions/torch_patch.py"}]} | 1,934 | 734 |
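The pyro fix above converts a hard `AttributeError` into a warning when a patch target no longer exists in the installed torch. A stripped-down sketch of that pattern (simplified from the golden diff, not the full pyro implementation):

```python
import warnings

def patch_dependency(target: str, root_module):
    """Return a decorator that installs a replacement at `target`,
    or a no-op decorator when the dotted path is stale upstream."""
    try:
        parts = target.split(".")
        module = root_module
        for part in parts[1:-1]:
            module = getattr(module, part)  # raises AttributeError for removed classes
        name = parts[-1]

        def decorator(new_fn):
            setattr(module, name, new_fn)
            return new_fn

    except AttributeError:
        warnings.warn(f"patch_dependency target is stale: {target}")

        def decorator(new_fn):
            return new_fn  # leave the library untouched

    return decorator
```

With this shape, a decorator such as `@patch_dependency("torch.distributions.constraints._CorrCholesky.check")` simply becomes a no-op on torch builds where `_CorrCholesky` has been removed, instead of failing at import time.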
gh_patches_debug_2725 | rasdani/github-patches | git_diff | beeware__toga-569 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Error looking for icon for tutorial for 0.3.0.dev9
This is with Python 3.6.5 in a clean venv:
```
(.venv) PS C:\Users\_\Desktop\toga_tutorial> python .\helloworld.py
[Winforms] No valid icon format available for C:\Users\brcan\Desktop\toga_tutorial\.venv\lib\site-packages\toga\resources\tiberius; fall back on Tiberius instead
Unhandled Exception: Python.Runtime.PythonException: FileNotFoundException : Could not find file 'C:\Users\brcan\Desktop\toga_tutorial\.venv\lib\site-packages\toga\resources\tiberius.ico'.
at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath)
at System.IO.FileStream.Init(String path, FileMode mode, FileAccess access, Int32 rights, Boolean useRights, FileShare share, Int32 bufferSize, FileOptions options, SECURITY_ATTRIBUTES secAttrs, String msgPath, Boolean bFromProxy, Boolean useLongPath, Boolean checkHost)
at System.IO.FileStream..ctor(String path, FileMode mode, FileAccess access, FileShare share)
at System.Drawing.Icon..ctor(String fileName, Int32 width, Int32 height)
at Python.Runtime.Dispatcher.Dispatch(ArrayList args)
at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
at System.Threading.ThreadHelper.ThreadStart()
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/core/setup.py`
Content:
```
1 #/usr/bin/env python
2 import io
3 import re
4
5 from setuptools import setup, find_packages
6
7 with io.open('toga/__init__.py', encoding='utf8') as version_file:
8 version_match = re.search(r"^__version__ = ['\"]([^'\"]*)['\"]", version_file.read(), re.M)
9 if version_match:
10 version = version_match.group(1)
11 else:
12 raise RuntimeError("Unable to find version string.")
13
14
15 with io.open('README.rst', encoding='utf8') as readme:
16 long_description = readme.read()
17
18
19 setup(
20 name='toga-core',
21 version=version,
22 description='A Python native, OS native GUI toolkit.',
23 long_description=long_description,
24 author='Russell Keith-Magee',
25 author_email='[email protected]',
26 url='http://pybee.org/toga',
27 packages=find_packages(exclude='tests'),
28 python_requires='>=3.5',
29 package_data={
30 'toga': ['resources/*.icns', 'resources/*.png'],
31 },
32 include_package_data=True,
33 install_requires=[
34 'travertino>=0.1.0'
35 ],
36 tests_require=[
37 'toga-dummy==%s' % version
38 ],
39 license='New BSD',
40 classifiers=[
41 'Development Status :: 3 - Alpha',
42 'Intended Audience :: Developers',
43 'License :: OSI Approved :: BSD License',
44 'Operating System :: OS Independent',
45 'Programming Language :: Python :: 3',
46 'Programming Language :: Python :: 3.5',
47 'Programming Language :: Python :: 3.6',
48 'Programming Language :: Python :: 3.7',
49 'Programming Language :: Python :: 3 :: Only',
50 'Topic :: Software Development',
51 'Topic :: Software Development :: User Interfaces',
52 'Topic :: Software Development :: Widget Sets',
53 ],
54 test_suite='tests',
55 zip_safe=False,
56 )
57
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/core/setup.py b/src/core/setup.py
--- a/src/core/setup.py
+++ b/src/core/setup.py
@@ -27,7 +27,7 @@
packages=find_packages(exclude='tests'),
python_requires='>=3.5',
package_data={
- 'toga': ['resources/*.icns', 'resources/*.png'],
+ 'toga': ['resources/*.icns', 'resources/*.ico', 'resources/*.png'],
},
include_package_data=True,
install_requires=[
| {"golden_diff": "diff --git a/src/core/setup.py b/src/core/setup.py\n--- a/src/core/setup.py\n+++ b/src/core/setup.py\n@@ -27,7 +27,7 @@\n packages=find_packages(exclude='tests'),\n python_requires='>=3.5',\n package_data={\n- 'toga': ['resources/*.icns', 'resources/*.png'],\n+ 'toga': ['resources/*.icns', 'resources/*.ico', 'resources/*.png'],\n },\n include_package_data=True,\n install_requires=[\n", "issue": "Error looking for icon for tutorial for 0.3.0.dev9\nThis is with Python 3.6.5 in a clean venv:\r\n```\r\n(.venv) PS C:\\Users\\_\\Desktop\\toga_tutorial> python .\\helloworld.py\r\n[Winforms] No valid icon format available for C:\\Users\\brcan\\Desktop\\toga_tutorial\\.venv\\lib\\site-packages\\toga\\resources\\tiberius; fall back on Tiberius instead\r\n\r\nUnhandled Exception: Python.Runtime.PythonException: FileNotFoundException : Could not find file 'C:\\Users\\brcan\\Desktop\\toga_tutorial\\.venv\\lib\\site-packages\\toga\\resources\\tiberius.ico'.\r\n at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath)\r\n at System.IO.FileStream.Init(String path, FileMode mode, FileAccess access, Int32 rights, Boolean useRights, FileShare share, Int32 bufferSize, FileOptions options, SECURITY_ATTRIBUTES secAttrs, String msgPath, Boolean bFromProxy, Boolean useLongPath, Boolean checkHost)\r\n at System.IO.FileStream..ctor(String path, FileMode mode, FileAccess access, FileShare share)\r\n at System.Drawing.Icon..ctor(String fileName, Int32 width, Int32 height)\r\n at Python.Runtime.Dispatcher.Dispatch(ArrayList args)\r\n at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)\r\n at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)\r\n at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)\r\n at System.Threading.ThreadHelper.ThreadStart()\r\n```\n", "before_files": [{"content": "#/usr/bin/env python\nimport io\nimport re\n\nfrom setuptools import setup, find_packages\n\nwith io.open('toga/__init__.py', encoding='utf8') as version_file:\n version_match = re.search(r\"^__version__ = ['\\\"]([^'\\\"]*)['\\\"]\", version_file.read(), re.M)\n if version_match:\n version = version_match.group(1)\n else:\n raise RuntimeError(\"Unable to find version string.\")\n\n\nwith io.open('README.rst', encoding='utf8') as readme:\n long_description = readme.read()\n\n\nsetup(\n name='toga-core',\n version=version,\n description='A Python native, OS native GUI toolkit.',\n long_description=long_description,\n author='Russell Keith-Magee',\n author_email='[email protected]',\n url='http://pybee.org/toga',\n packages=find_packages(exclude='tests'),\n python_requires='>=3.5',\n package_data={\n 'toga': ['resources/*.icns', 'resources/*.png'],\n },\n include_package_data=True,\n install_requires=[\n 'travertino>=0.1.0'\n ],\n tests_require=[\n 'toga-dummy==%s' % version\n ],\n license='New BSD',\n classifiers=[\n 'Development Status :: 3 - Alpha',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: BSD License',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3 :: Only',\n 'Topic :: Software Development',\n 'Topic :: Software Development :: User 
Interfaces',\n 'Topic :: Software Development :: Widget Sets',\n ],\n test_suite='tests',\n zip_safe=False,\n)\n", "path": "src/core/setup.py"}], "after_files": [{"content": "#/usr/bin/env python\nimport io\nimport re\n\nfrom setuptools import setup, find_packages\n\nwith io.open('toga/__init__.py', encoding='utf8') as version_file:\n version_match = re.search(r\"^__version__ = ['\\\"]([^'\\\"]*)['\\\"]\", version_file.read(), re.M)\n if version_match:\n version = version_match.group(1)\n else:\n raise RuntimeError(\"Unable to find version string.\")\n\n\nwith io.open('README.rst', encoding='utf8') as readme:\n long_description = readme.read()\n\n\nsetup(\n name='toga-core',\n version=version,\n description='A Python native, OS native GUI toolkit.',\n long_description=long_description,\n author='Russell Keith-Magee',\n author_email='[email protected]',\n url='http://pybee.org/toga',\n packages=find_packages(exclude='tests'),\n python_requires='>=3.5',\n package_data={\n 'toga': ['resources/*.icns', 'resources/*.ico', 'resources/*.png'],\n },\n include_package_data=True,\n install_requires=[\n 'travertino>=0.1.0'\n ],\n tests_require=[\n 'toga-dummy==%s' % version\n ],\n license='New BSD',\n classifiers=[\n 'Development Status :: 3 - Alpha',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: BSD License',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3 :: Only',\n 'Topic :: Software Development',\n 'Topic :: Software Development :: User Interfaces',\n 'Topic :: Software Development :: Widget Sets',\n ],\n test_suite='tests',\n zip_safe=False,\n)\n", "path": "src/core/setup.py"}]} | 1,139 | 113 |
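The fix in this record only adds `resources/*.ico` to `package_data`, so the Winforms backend can find `tiberius.ico` at runtime. A quick, hedged way to confirm the icon actually ships with an installed `toga-core` build is sketched below; the check itself is not part of the record and assumes `toga` is importable.

```python
# Post-install sanity check for the packaged fallback icon (illustrative).
import os

import toga

ico_path = os.path.join(os.path.dirname(toga.__file__), "resources", "tiberius.ico")
print(ico_path, "exists:", os.path.isfile(ico_path))
```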
gh_patches_debug_29550 | rasdani/github-patches | git_diff | DDMAL__CantusDB-334 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Automatically run populate_next_chant_fields in staging/production
We need to have some automation to run `python manage.py populate_next_chant_fields` from time to time on our servers; otherwise, the function in /next_chants.py will not work.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `django/cantusdb_project/main_app/signals.py`
Content:
```
1 import operator
2 from functools import reduce
3
4 from django.contrib.postgres.search import SearchVector
5 from django.db import models
6 from django.db.models import Value
7 from django.db.models.signals import post_save, post_delete
8 from django.dispatch import receiver
9
10 from main_app.models import Chant
11 from main_app.models import Sequence
12
13
14 @receiver(post_save, sender=Chant)
15 def update_chant_search_vector(instance, **kwargs):
16 """When saving an instance of Chant, update its search vector field."""
17 index_components = instance.index_components()
18 pk = instance.pk
19 search_vectors = []
20
21 for weight, data in index_components.items():
22 search_vectors.append(
23 SearchVector(
24 Value(data, output_field=models.TextField()), weight=weight
25 )
26 )
27 instance.__class__.objects.filter(pk=pk).update(
28 search_vector=reduce(operator.add, search_vectors)
29 )
30
31 @receiver(post_save, sender=Chant)
32 @receiver(post_save, sender=Sequence)
33 @receiver(post_delete, sender=Chant)
34 @receiver(post_delete, sender=Sequence)
35 def update_source_chant_count(instance, **kwargs):
36 """When saving or deleting a Chant or Sequence, update its Source's number_of_chants field"""
37 source = instance.source
38 source.number_of_chants = source.chant_set.count() + source.sequence_set.count()
39 source.save()
40
41 @receiver(post_save, sender=Chant)
42 @receiver(post_delete, sender=Chant)
43 def update_source_melody_count(instance, **kwargs):
44 """When saving or deleting a Chant, update its Source's number_of_melodies field"""
45 source = instance.source
46 source.number_of_melodies = source.chant_set.filter(volpiano__isnull=False).count()
47 source.save()
48
```
Path: `django/cantusdb_project/main_app/management/commands/populate_next_chant_fields.py`
Content:
```
1 from main_app.models import Chant
2 from django.core.exceptions import ValidationError
3 from django.core.management.base import BaseCommand
4
5 class Command(BaseCommand):
6 def add_arguments(self, parser):
7 parser.add_argument(
8 "-o",
9 "--overwrite",
10 action="store_true",
11 help="Overwrites next_chant of chants that already have a next_chant set."
12 )
13
14 def handle(self, *args, **kwargs):
15 CHUNK_SIZE = 1_000
16 overwrite = kwargs["overwrite"]
17 chants = Chant.objects.all()
18 chants_count = chants.count()
19 start_index = 0
20 while start_index <= chants_count:
21 print("processing chunk with start_index of", start_index)
22 chunk = chants[start_index:start_index+CHUNK_SIZE]
23 for chant in chunk:
24 if chant.next_chant and not overwrite: # unless -o or -overwrite flag has been supplied, skip chants that already have a next_chant
25 continue
26 try:
27 chant.next_chant = chant.get_next_chant()
28 chant.save()
29 except ValidationError: # another chant's next_chant already points to this chant's next_chant
30 pass
31 del chunk # make sure we don't use too much RAM
32 start_index += CHUNK_SIZE
33
34
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/django/cantusdb_project/main_app/management/commands/populate_next_chant_fields.py b/django/cantusdb_project/main_app/management/commands/populate_next_chant_fields.py
--- a/django/cantusdb_project/main_app/management/commands/populate_next_chant_fields.py
+++ b/django/cantusdb_project/main_app/management/commands/populate_next_chant_fields.py
@@ -2,6 +2,17 @@
from django.core.exceptions import ValidationError
from django.core.management.base import BaseCommand
+# This script memoizes the result of Chant.get_next_chant(), which is expensive
+# to calculate on the fly, saving it as the Chant's .next_chant property.
+# This script populates the next_chant field for all chants in the database.
+# Once it has been calculated once (for example, after importing data
+# from OldCantus), it shouldn't need to be run again - whenever chants are
+# created, updated or deleted, the field should be recalculated for all chants
+# in the source by update_next_chant_fields() in main_app/signals.py.
+
+# to calculate all chants' next_chants from scratch: `python manage.py populate_next_chant_fields --overwrite`
+# to calculate next_chants for all chants that don't already have a next_chant specified: `python manage.py populate_next_chant_fields`
+
class Command(BaseCommand):
def add_arguments(self, parser):
parser.add_argument(
diff --git a/django/cantusdb_project/main_app/signals.py b/django/cantusdb_project/main_app/signals.py
--- a/django/cantusdb_project/main_app/signals.py
+++ b/django/cantusdb_project/main_app/signals.py
@@ -45,3 +45,15 @@
source = instance.source
source.number_of_melodies = source.chant_set.filter(volpiano__isnull=False).count()
source.save()
+
+@receiver(post_save, sender=Chant)
+@receiver(post_delete, sender=Chant)
+def update_next_chant_fields(instance, **kwargs):
+ """When saving or deleting a Chant, make sure the next_chant of each chant in the source is up-to-date"""
+ source = instance.source
+ for chant in source.chant_set.all():
+ next_chant = chant.get_next_chant()
+ # use .update() instead of .save() to prevent RecursionError
+ # (otherwise, saving would trigger @receiver(post_save, ...) again)
+ Chant.objects.filter(id=chant.id).update(next_chant=next_chant)
+
| {"golden_diff": "diff --git a/django/cantusdb_project/main_app/management/commands/populate_next_chant_fields.py b/django/cantusdb_project/main_app/management/commands/populate_next_chant_fields.py\n--- a/django/cantusdb_project/main_app/management/commands/populate_next_chant_fields.py\n+++ b/django/cantusdb_project/main_app/management/commands/populate_next_chant_fields.py\n@@ -2,6 +2,17 @@\n from django.core.exceptions import ValidationError\n from django.core.management.base import BaseCommand\n \n+# This script memoizes the result of Chant.get_next_chant(), which is expensive\n+# to calculate on the fly, saving it as the Chant's .next_chant property.\n+# This script populates the next_chant field for all chants in the database.\n+# Once it has been calculated once (for example, after importing data\n+# from OldCantus), it shouldn't need to be run again - whenever chants are\n+# created, updated or deleted, the field should be recalculated for all chants\n+# in the source by update_next_chant_fields() in main_app/signals.py.\n+\n+# to calculate all chants' next_chants from scratch: `python manage.py populate_next_chant_fields --overwrite`\n+# to calculate next_chants for all chants that don't already have a next_chant specified: `python manage.py populate_next_chant_fields`\n+\n class Command(BaseCommand):\n def add_arguments(self, parser):\n parser.add_argument(\ndiff --git a/django/cantusdb_project/main_app/signals.py b/django/cantusdb_project/main_app/signals.py\n--- a/django/cantusdb_project/main_app/signals.py\n+++ b/django/cantusdb_project/main_app/signals.py\n@@ -45,3 +45,15 @@\n source = instance.source\n source.number_of_melodies = source.chant_set.filter(volpiano__isnull=False).count()\n source.save()\n+\n+@receiver(post_save, sender=Chant)\n+@receiver(post_delete, sender=Chant)\n+def update_next_chant_fields(instance, **kwargs):\n+ \"\"\"When saving or deleting a Chant, make sure the next_chant of each chant in the source is up-to-date\"\"\"\n+ source = instance.source\n+ for chant in source.chant_set.all():\n+ next_chant = chant.get_next_chant()\n+ # use .update() instead of .save() to prevent RecursionError\n+ # (otherwise, saving would trigger @receiver(post_save, ...) 
again)\n+ Chant.objects.filter(id=chant.id).update(next_chant=next_chant)\n+\n", "issue": "Automatically run populate_next_chant_fields in staging/production\nWe need to have some automation to run `python manage.py populate_next_chant_fields` from time to time on our servers; otherwise, the function in /next_chants.py will not work.\n", "before_files": [{"content": "import operator\nfrom functools import reduce\n\nfrom django.contrib.postgres.search import SearchVector\nfrom django.db import models\nfrom django.db.models import Value\nfrom django.db.models.signals import post_save, post_delete\nfrom django.dispatch import receiver\n\nfrom main_app.models import Chant\nfrom main_app.models import Sequence\n\n\n@receiver(post_save, sender=Chant)\ndef update_chant_search_vector(instance, **kwargs):\n \"\"\"When saving an instance of Chant, update its search vector field.\"\"\"\n index_components = instance.index_components()\n pk = instance.pk\n search_vectors = []\n\n for weight, data in index_components.items():\n search_vectors.append(\n SearchVector(\n Value(data, output_field=models.TextField()), weight=weight\n )\n )\n instance.__class__.objects.filter(pk=pk).update(\n search_vector=reduce(operator.add, search_vectors)\n )\n\n@receiver(post_save, sender=Chant)\n@receiver(post_save, sender=Sequence)\n@receiver(post_delete, sender=Chant)\n@receiver(post_delete, sender=Sequence)\ndef update_source_chant_count(instance, **kwargs):\n \"\"\"When saving or deleting a Chant or Sequence, update its Source's number_of_chants field\"\"\"\n source = instance.source\n source.number_of_chants = source.chant_set.count() + source.sequence_set.count()\n source.save()\n\n@receiver(post_save, sender=Chant)\n@receiver(post_delete, sender=Chant)\ndef update_source_melody_count(instance, **kwargs):\n \"\"\"When saving or deleting a Chant, update its Source's number_of_melodies field\"\"\"\n source = instance.source\n source.number_of_melodies = source.chant_set.filter(volpiano__isnull=False).count()\n source.save()\n", "path": "django/cantusdb_project/main_app/signals.py"}, {"content": "from main_app.models import Chant\nfrom django.core.exceptions import ValidationError\nfrom django.core.management.base import BaseCommand\n\nclass Command(BaseCommand):\n def add_arguments(self, parser):\n parser.add_argument(\n \"-o\",\n \"--overwrite\", \n action=\"store_true\", \n help=\"Overwrites next_chant of chants that already have a next_chant set.\"\n )\n\n def handle(self, *args, **kwargs):\n CHUNK_SIZE = 1_000\n overwrite = kwargs[\"overwrite\"]\n chants = Chant.objects.all()\n chants_count = chants.count()\n start_index = 0\n while start_index <= chants_count:\n print(\"processing chunk with start_index of\", start_index)\n chunk = chants[start_index:start_index+CHUNK_SIZE]\n for chant in chunk:\n if chant.next_chant and not overwrite: # unless -o or -overwrite flag has been supplied, skip chants that already have a next_chant\n continue\n try:\n chant.next_chant = chant.get_next_chant()\n chant.save()\n except ValidationError: # another chant's next_chant already points to this chant's next_chant\n pass\n del chunk # make sure we don't use too much RAM\n start_index += CHUNK_SIZE\n\n", "path": "django/cantusdb_project/main_app/management/commands/populate_next_chant_fields.py"}], "after_files": [{"content": "import operator\nfrom functools import reduce\n\nfrom django.contrib.postgres.search import SearchVector\nfrom django.db import models\nfrom django.db.models import Value\nfrom django.db.models.signals 
import post_save, post_delete\nfrom django.dispatch import receiver\n\nfrom main_app.models import Chant\nfrom main_app.models import Sequence\n\n\n@receiver(post_save, sender=Chant)\ndef update_chant_search_vector(instance, **kwargs):\n \"\"\"When saving an instance of Chant, update its search vector field.\"\"\"\n index_components = instance.index_components()\n pk = instance.pk\n search_vectors = []\n\n for weight, data in index_components.items():\n search_vectors.append(\n SearchVector(\n Value(data, output_field=models.TextField()), weight=weight\n )\n )\n instance.__class__.objects.filter(pk=pk).update(\n search_vector=reduce(operator.add, search_vectors)\n )\n\n@receiver(post_save, sender=Chant)\n@receiver(post_save, sender=Sequence)\n@receiver(post_delete, sender=Chant)\n@receiver(post_delete, sender=Sequence)\ndef update_source_chant_count(instance, **kwargs):\n \"\"\"When saving or deleting a Chant or Sequence, update its Source's number_of_chants field\"\"\"\n source = instance.source\n source.number_of_chants = source.chant_set.count() + source.sequence_set.count()\n source.save()\n\n@receiver(post_save, sender=Chant)\n@receiver(post_delete, sender=Chant)\ndef update_source_melody_count(instance, **kwargs):\n \"\"\"When saving or deleting a Chant, update its Source's number_of_melodies field\"\"\"\n source = instance.source\n source.number_of_melodies = source.chant_set.filter(volpiano__isnull=False).count()\n source.save()\n\n@receiver(post_save, sender=Chant)\n@receiver(post_delete, sender=Chant)\ndef update_next_chant_fields(instance, **kwargs):\n \"\"\"When saving or deleting a Chant, make sure the next_chant of each chant in the source is up-to-date\"\"\"\n source = instance.source\n for chant in source.chant_set.all():\n next_chant = chant.get_next_chant()\n # use .update() instead of .save() to prevent RecursionError\n # (otherwise, saving would trigger @receiver(post_save, ...) 
again)\n Chant.objects.filter(id=chant.id).update(next_chant=next_chant)\n\n", "path": "django/cantusdb_project/main_app/signals.py"}, {"content": "from main_app.models import Chant\nfrom django.core.exceptions import ValidationError\nfrom django.core.management.base import BaseCommand\n\n# This script memoizes the result of Chant.get_next_chant(), which is expensive\n# to calculate on the fly, saving it as the Chant's .next_chant property.\n# This script populates the next_chant field for all chants in the database.\n# Once it has been calculated once (for example, after importing data\n# from OldCantus), it shouldn't need to be run again - whenever chants are\n# created, updated or deleted, the field should be recalculated for all chants\n# in the source by update_next_chant_fields() in main_app/signals.py.\n\n# to calculate all chants' next_chants from scratch: `python manage.py populate_next_chant_fields --overwrite`\n# to calculate next_chants for all chants that don't already have a next_chant specified: `python manage.py populate_next_chant_fields`\n\nclass Command(BaseCommand):\n def add_arguments(self, parser):\n parser.add_argument(\n \"-o\",\n \"--overwrite\", \n action=\"store_true\", \n help=\"Overwrites next_chant of chants that already have a next_chant set.\"\n )\n\n def handle(self, *args, **kwargs):\n CHUNK_SIZE = 1_000\n overwrite = kwargs[\"overwrite\"]\n chants = Chant.objects.all()\n chants_count = chants.count()\n start_index = 0\n while start_index <= chants_count:\n print(\"processing chunk with start_index of\", start_index)\n chunk = chants[start_index:start_index+CHUNK_SIZE]\n for chant in chunk:\n if chant.next_chant and not overwrite: # unless -o or -overwrite flag has been supplied, skip chants that already have a next_chant\n continue\n try:\n chant.next_chant = chant.get_next_chant()\n chant.save()\n except ValidationError: # another chant's next_chant already points to this chant's next_chant\n pass\n del chunk # make sure we don't use too much RAM\n start_index += CHUNK_SIZE\n\n", "path": "django/cantusdb_project/main_app/management/commands/populate_next_chant_fields.py"}]} | 1,140 | 574 |
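Note that the accepted patch answers the automation request indirectly: `next_chant` is now recalculated in signal handlers whenever a chant is saved or deleted, so the management command only needs to run once after a bulk import. If a scheduled run is still wanted, it can be driven from a small script along these lines; the settings module name is an assumption.

```python
# Sketch of a cron-driven invocation of the management command.
import os

# Assumed settings path - substitute the project's real DJANGO_SETTINGS_MODULE.
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "cantusdb_project.settings")

import django

django.setup()

from django.core.management import call_command

# Equivalent to: python manage.py populate_next_chant_fields --overwrite
call_command("populate_next_chant_fields", overwrite=True)
```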
gh_patches_debug_51925 | rasdani/github-patches | git_diff | cisagov__manage.get.gov-1551 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
User/Contact bug: Signals double saving, overwriting data for test cases
### Current Behavior
A bug was introduced [with a PR](https://github.com/cisagov/manage.get.gov/pull/1491) that fixed our oidc login clearing out user information. To fix this, the Contact object was linked to the User object, such that they would remain in sync.
However, this introduced a sneaky double-save bug wherein if signals get called more than once for the same contact object (as we found happens in our test cases), it overrides information when it should not do so.


### Expected Behavior
When a `Contact` object is saved, the `save()` function checks to see if `self.user` is not None. When it is not, it will update the `self.user` object with whatever value is located at `self`. It appears that in certain situations, (such as our test cases), the `User` field is behaving as if it were a one-to-many relationship (one Contact object to many User objects). This should not be the case, and `Contact` should only update one `User` field.
### Steps to Reproduce
1. Create a new test case in `test_admin.py`, and create 3 fake User objects. Populate them with unique data. Then, log their values. Note that their data seems to get overwritten.
### Environment
_No response_
### Additional Context
This bug seems to be originating in the `signals.py` file in the ` handle_profile` class. By passing in a flag that disables the save behavior (or just commenting it out), this issue seems to resolve.

### Issue Links
🔄 Relates to: [#1464 / #1468](https://github.com/cisagov/manage.get.gov/pull/1491)
🔄 Relates to: [this PR](https://github.com/cisagov/manage.get.gov/pull/1543) (pinpoints where the behavior is occurring but does not solve it)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/registrar/models/contact.py`
Content:
```
1 from django.db import models
2
3 from phonenumber_field.modelfields import PhoneNumberField # type: ignore
4
5 from .utility.time_stamped_model import TimeStampedModel
6
7
8 class Contact(TimeStampedModel):
9
10 """Contact information follows a similar pattern for each contact."""
11
12 user = models.OneToOneField(
13 "registrar.User",
14 null=True,
15 blank=True,
16 on_delete=models.SET_NULL,
17 )
18
19 first_name = models.TextField(
20 null=True,
21 blank=True,
22 help_text="First name",
23 verbose_name="first name / given name",
24 db_index=True,
25 )
26 middle_name = models.TextField(
27 null=True,
28 blank=True,
29 help_text="Middle name (optional)",
30 )
31 last_name = models.TextField(
32 null=True,
33 blank=True,
34 help_text="Last name",
35 verbose_name="last name / family name",
36 db_index=True,
37 )
38 title = models.TextField(
39 null=True,
40 blank=True,
41 help_text="Title",
42 verbose_name="title or role in your organization",
43 )
44 email = models.EmailField(
45 null=True,
46 blank=True,
47 help_text="Email",
48 db_index=True,
49 )
50 phone = PhoneNumberField(
51 null=True,
52 blank=True,
53 help_text="Phone",
54 db_index=True,
55 )
56
57 def get_formatted_name(self):
58 """Returns the contact's name in Western order."""
59 names = [n for n in [self.first_name, self.middle_name, self.last_name] if n]
60 return " ".join(names) if names else "Unknown"
61
62 def save(self, *args, **kwargs):
63 # Call the parent class's save method to perform the actual save
64 super().save(*args, **kwargs)
65
66 # Update the related User object's first_name and last_name
67 if self.user:
68 self.user.first_name = self.first_name
69 self.user.last_name = self.last_name
70 self.user.save()
71
72 def __str__(self):
73 if self.first_name or self.last_name:
74 return self.get_formatted_name()
75 elif self.email:
76 return self.email
77 elif self.pk:
78 return str(self.pk)
79 else:
80 return ""
81
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/registrar/models/contact.py b/src/registrar/models/contact.py
--- a/src/registrar/models/contact.py
+++ b/src/registrar/models/contact.py
@@ -64,7 +64,7 @@
super().save(*args, **kwargs)
# Update the related User object's first_name and last_name
- if self.user:
+ if self.user and (not self.user.first_name or not self.user.last_name):
self.user.first_name = self.first_name
self.user.last_name = self.last_name
self.user.save()
| {"golden_diff": "diff --git a/src/registrar/models/contact.py b/src/registrar/models/contact.py\n--- a/src/registrar/models/contact.py\n+++ b/src/registrar/models/contact.py\n@@ -64,7 +64,7 @@\n super().save(*args, **kwargs)\n \n # Update the related User object's first_name and last_name\n- if self.user:\n+ if self.user and (not self.user.first_name or not self.user.last_name):\n self.user.first_name = self.first_name\n self.user.last_name = self.last_name\n self.user.save()\n", "issue": "User/Contact bug: Signals double saving, overwriting data for test cases\n### Current Behavior\r\n\r\nA bug was introduced [with a PR](https://github.com/cisagov/manage.get.gov/pull/1491) that fixed our oidc login clearing out user information. To fix this, the Contact object was linked to the User object, such that they would remain in sync.\r\n\r\nHowever, this introduced a sneaky double-save bug wherein if signals get called more than once for the same contact object (as we found happens in our test cases), it overrides information when it should not do so.\r\n\r\n\r\n\r\n\r\n### Expected Behavior\r\n\r\nWhen a `Contact` object is saved, the `save()` function checks to see if `self.user` is not None. When it is not, it will update the `self.user` object with whatever value is located at `self`. It appears that in certain situations, (such as our test cases), the `User` field is behaving as if it were a one-to-many relationship (one Contact object to many User objects). This should not be the case, and `Contact` should only update one `User` field.\r\n\r\n### Steps to Reproduce\r\n\r\n1. Create a new test case in `test_admin.py`, and create 3 fake User objects. Populate them with unique data. Then, log their values. Note that their data seems to get overwritten.\r\n\r\n\r\n### Environment\r\n\r\n_No response_\r\n\r\n### Additional Context\r\n\r\nThis bug seems to be originating in the `signals.py` file in the ` handle_profile` class. By passing in a flag that disables the save behavior (or just commenting it out), this issue seems to resolve. 
\r\n\r\n\r\n\r\n### Issue Links\r\n\r\n\ud83d\udd04 Relates to: [#1464 / #1468](https://github.com/cisagov/manage.get.gov/pull/1491)\r\n\ud83d\udd04 Relates to: [this PR](https://github.com/cisagov/manage.get.gov/pull/1543) (pinpoints where the behavior is occurring but does not solve it)\n", "before_files": [{"content": "from django.db import models\n\nfrom phonenumber_field.modelfields import PhoneNumberField # type: ignore\n\nfrom .utility.time_stamped_model import TimeStampedModel\n\n\nclass Contact(TimeStampedModel):\n\n \"\"\"Contact information follows a similar pattern for each contact.\"\"\"\n\n user = models.OneToOneField(\n \"registrar.User\",\n null=True,\n blank=True,\n on_delete=models.SET_NULL,\n )\n\n first_name = models.TextField(\n null=True,\n blank=True,\n help_text=\"First name\",\n verbose_name=\"first name / given name\",\n db_index=True,\n )\n middle_name = models.TextField(\n null=True,\n blank=True,\n help_text=\"Middle name (optional)\",\n )\n last_name = models.TextField(\n null=True,\n blank=True,\n help_text=\"Last name\",\n verbose_name=\"last name / family name\",\n db_index=True,\n )\n title = models.TextField(\n null=True,\n blank=True,\n help_text=\"Title\",\n verbose_name=\"title or role in your organization\",\n )\n email = models.EmailField(\n null=True,\n blank=True,\n help_text=\"Email\",\n db_index=True,\n )\n phone = PhoneNumberField(\n null=True,\n blank=True,\n help_text=\"Phone\",\n db_index=True,\n )\n\n def get_formatted_name(self):\n \"\"\"Returns the contact's name in Western order.\"\"\"\n names = [n for n in [self.first_name, self.middle_name, self.last_name] if n]\n return \" \".join(names) if names else \"Unknown\"\n\n def save(self, *args, **kwargs):\n # Call the parent class's save method to perform the actual save\n super().save(*args, **kwargs)\n\n # Update the related User object's first_name and last_name\n if self.user:\n self.user.first_name = self.first_name\n self.user.last_name = self.last_name\n self.user.save()\n\n def __str__(self):\n if self.first_name or self.last_name:\n return self.get_formatted_name()\n elif self.email:\n return self.email\n elif self.pk:\n return str(self.pk)\n else:\n return \"\"\n", "path": "src/registrar/models/contact.py"}], "after_files": [{"content": "from django.db import models\n\nfrom phonenumber_field.modelfields import PhoneNumberField # type: ignore\n\nfrom .utility.time_stamped_model import TimeStampedModel\n\n\nclass Contact(TimeStampedModel):\n\n \"\"\"Contact information follows a similar pattern for each contact.\"\"\"\n\n user = models.OneToOneField(\n \"registrar.User\",\n null=True,\n blank=True,\n on_delete=models.SET_NULL,\n )\n\n first_name = models.TextField(\n null=True,\n blank=True,\n help_text=\"First name\",\n verbose_name=\"first name / given name\",\n db_index=True,\n )\n middle_name = models.TextField(\n null=True,\n blank=True,\n help_text=\"Middle name (optional)\",\n )\n last_name = models.TextField(\n null=True,\n blank=True,\n help_text=\"Last name\",\n verbose_name=\"last name / family name\",\n db_index=True,\n )\n title = models.TextField(\n null=True,\n blank=True,\n help_text=\"Title\",\n verbose_name=\"title or role in your organization\",\n )\n email = models.EmailField(\n null=True,\n blank=True,\n help_text=\"Email\",\n db_index=True,\n )\n phone = PhoneNumberField(\n null=True,\n blank=True,\n help_text=\"Phone\",\n db_index=True,\n )\n\n def get_formatted_name(self):\n \"\"\"Returns the contact's name in Western order.\"\"\"\n names = [n for n in 
[self.first_name, self.middle_name, self.last_name] if n]\n return \" \".join(names) if names else \"Unknown\"\n\n def save(self, *args, **kwargs):\n # Call the parent class's save method to perform the actual save\n super().save(*args, **kwargs)\n\n # Update the related User object's first_name and last_name\n if self.user and (not self.user.first_name or not self.user.last_name):\n self.user.first_name = self.first_name\n self.user.last_name = self.last_name\n self.user.save()\n\n def __str__(self):\n if self.first_name or self.last_name:\n return self.get_formatted_name()\n elif self.email:\n return self.email\n elif self.pk:\n return str(self.pk)\n else:\n return \"\"\n", "path": "src/registrar/models/contact.py"}]} | 1,487 | 123 |
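The guard added here (`if self.user and (not self.user.first_name or not self.user.last_name)`) is what stops repeated signal-driven saves from clobbering user data in the test suite. A regression test in the spirit of the fix could look like the sketch below; the import path and the `User` constructor arguments are assumptions, since the real suite builds users through its own fixtures.

```python
# Hedged regression-test sketch for the patched Contact.save() guard.
from django.test import TestCase

from registrar.models import Contact, User  # import path assumed


class ContactSaveGuardTest(TestCase):
    def test_contact_save_keeps_existing_user_names(self):
        user = User.objects.create(
            username="someone", first_name="Original", last_name="Name"
        )
        Contact.objects.create(user=user, first_name="Other", last_name="Person")

        user.refresh_from_db()
        # With the fix applied, an already-populated User is left untouched.
        self.assertEqual(user.first_name, "Original")
        self.assertEqual(user.last_name, "Name")
```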
gh_patches_debug_12221 | rasdani/github-patches | git_diff | OpenEnergyPlatform__oeplatform-690 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Database: Open data only.
Where do we need / want to remind the users?
I added this rule to the [DatabaseRules](http://wiki.openmod-initiative.org/wiki/DatabaseRules).
Do we want to add a reminder to the upload interface?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `base/urls.py`
Content:
```
1 from django.conf.urls import url
2
3 from base import views
4
5 urlpatterns = [
6 url(r"^robots.txt$", views.robot),
7 url(r"^$", views.Welcome.as_view(), name="index"),
8 url(r"^about/$", views.redir, {"target": "about"}, name="index"),
9 url(r"^faq/$", views.redir, {"target": "faq"}, name="index"),
10 url(r"^discussion/$", views.redir, {"target": "discussion"}, name="index"),
11 url(r"^contact/$", views.ContactView.as_view(), name="index"),
12 url(r"^legal/impressum/$", views.redir, {"target": "impressum"}, name="index"),
13 url(r"^legal/datasec/$", views.redir, {"target": "datasecurity"}, name="index"),
14 url(r"^legal/tou/$", views.redir, {"target": "terms_of_use"}, name="index"),
15 ]
16
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/base/urls.py b/base/urls.py
--- a/base/urls.py
+++ b/base/urls.py
@@ -9,7 +9,6 @@
url(r"^faq/$", views.redir, {"target": "faq"}, name="index"),
url(r"^discussion/$", views.redir, {"target": "discussion"}, name="index"),
url(r"^contact/$", views.ContactView.as_view(), name="index"),
- url(r"^legal/impressum/$", views.redir, {"target": "impressum"}, name="index"),
- url(r"^legal/datasec/$", views.redir, {"target": "datasecurity"}, name="index"),
+ url(r"^legal/privacy_policy/$", views.redir, {"target": "privacy_policy"}, name="index"),
url(r"^legal/tou/$", views.redir, {"target": "terms_of_use"}, name="index"),
]
| {"golden_diff": "diff --git a/base/urls.py b/base/urls.py\n--- a/base/urls.py\n+++ b/base/urls.py\n@@ -9,7 +9,6 @@\n url(r\"^faq/$\", views.redir, {\"target\": \"faq\"}, name=\"index\"),\n url(r\"^discussion/$\", views.redir, {\"target\": \"discussion\"}, name=\"index\"),\n url(r\"^contact/$\", views.ContactView.as_view(), name=\"index\"),\n- url(r\"^legal/impressum/$\", views.redir, {\"target\": \"impressum\"}, name=\"index\"),\n- url(r\"^legal/datasec/$\", views.redir, {\"target\": \"datasecurity\"}, name=\"index\"),\n+ url(r\"^legal/privacy_policy/$\", views.redir, {\"target\": \"privacy_policy\"}, name=\"index\"),\n url(r\"^legal/tou/$\", views.redir, {\"target\": \"terms_of_use\"}, name=\"index\"),\n ]\n", "issue": "Database: Open data only.\nWhere do we need / want to remind the users?\nI added this rule to the [DatabaseRules](http://wiki.openmod-initiative.org/wiki/DatabaseRules).\nDo we want to add a reminder to the upload interface?\n\n", "before_files": [{"content": "from django.conf.urls import url\n\nfrom base import views\n\nurlpatterns = [\n url(r\"^robots.txt$\", views.robot),\n url(r\"^$\", views.Welcome.as_view(), name=\"index\"),\n url(r\"^about/$\", views.redir, {\"target\": \"about\"}, name=\"index\"),\n url(r\"^faq/$\", views.redir, {\"target\": \"faq\"}, name=\"index\"),\n url(r\"^discussion/$\", views.redir, {\"target\": \"discussion\"}, name=\"index\"),\n url(r\"^contact/$\", views.ContactView.as_view(), name=\"index\"),\n url(r\"^legal/impressum/$\", views.redir, {\"target\": \"impressum\"}, name=\"index\"),\n url(r\"^legal/datasec/$\", views.redir, {\"target\": \"datasecurity\"}, name=\"index\"),\n url(r\"^legal/tou/$\", views.redir, {\"target\": \"terms_of_use\"}, name=\"index\"),\n]\n", "path": "base/urls.py"}], "after_files": [{"content": "from django.conf.urls import url\n\nfrom base import views\n\nurlpatterns = [\n url(r\"^robots.txt$\", views.robot),\n url(r\"^$\", views.Welcome.as_view(), name=\"index\"),\n url(r\"^about/$\", views.redir, {\"target\": \"about\"}, name=\"index\"),\n url(r\"^faq/$\", views.redir, {\"target\": \"faq\"}, name=\"index\"),\n url(r\"^discussion/$\", views.redir, {\"target\": \"discussion\"}, name=\"index\"),\n url(r\"^contact/$\", views.ContactView.as_view(), name=\"index\"),\n url(r\"^legal/privacy_policy/$\", views.redir, {\"target\": \"privacy_policy\"}, name=\"index\"),\n url(r\"^legal/tou/$\", views.redir, {\"target\": \"terms_of_use\"}, name=\"index\"),\n]\n", "path": "base/urls.py"}]} | 533 | 203 |
gh_patches_debug_14744 | rasdani/github-patches | git_diff | webkom__lego-1279 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
EmailAddress case sensitivity
When creating an email address for a user (for gsuite sync) that includes capital letters, the user will be suspended. All input should be lowercased, since that is what Google is doing.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lego/apps/email/validators.py`
Content:
```
1 from django.core.exceptions import ValidationError
2 from django.core.validators import EmailValidator, RegexValidator
3
4 from lego.utils.validators import ReservedNameValidator
5
6
7 def validate_email_address(email_address):
8
9 if email_address.is_assigned():
10 raise ValidationError('The address is already assigned')
11
12
13 def validate_email_address_content(email_address):
14 """Make sure we only create valid emails."""
15
16 regex_validator = RegexValidator(regex=EmailValidator.user_regex)
17 reserved_valdator = ReservedNameValidator()
18
19 regex_validator(email_address.email)
20 reserved_valdator(email_address.email)
21
```
Path: `lego/apps/email/fields.py`
Content:
```
1 from django.core.exceptions import ObjectDoesNotExist
2 from rest_framework import serializers
3
4 from lego.apps.email.validators import validate_email_address, validate_email_address_content
5
6
7 class EmailAddressField(serializers.PrimaryKeyRelatedField):
8 """
9 Manage the email address as a string.
10 """
11
12 def __init__(self, **kwargs):
13 validators = kwargs.get('validators')
14 kwargs['validators'] = validators if validators is not None else [validate_email_address]
15 kwargs['validators'].append(validate_email_address_content)
16 super().__init__(**kwargs)
17
18 def to_internal_value(self, data):
19 """
20 Create email if not exists.
21 """
22 try:
23 email_address, _ = self.get_queryset().get_or_create(pk=data)
24 return email_address
25 except ObjectDoesNotExist:
26 self.fail('does_not_exist', pk_value=data)
27 except (TypeError, ValueError):
28 self.fail('incorrect_type', data_type=type(data).__name__)
29
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/lego/apps/email/fields.py b/lego/apps/email/fields.py
--- a/lego/apps/email/fields.py
+++ b/lego/apps/email/fields.py
@@ -20,6 +20,7 @@
Create email if not exists.
"""
try:
+ data = data.lower()
email_address, _ = self.get_queryset().get_or_create(pk=data)
return email_address
except ObjectDoesNotExist:
diff --git a/lego/apps/email/validators.py b/lego/apps/email/validators.py
--- a/lego/apps/email/validators.py
+++ b/lego/apps/email/validators.py
@@ -5,6 +5,8 @@
def validate_email_address(email_address):
+ if email_address.email != email_address.email.lower():
+ raise ValidationError('Email is not lowercased')
if email_address.is_assigned():
raise ValidationError('The address is already assigned')
| {"golden_diff": "diff --git a/lego/apps/email/fields.py b/lego/apps/email/fields.py\n--- a/lego/apps/email/fields.py\n+++ b/lego/apps/email/fields.py\n@@ -20,6 +20,7 @@\n Create email if not exists.\n \"\"\"\n try:\n+ data = data.lower()\n email_address, _ = self.get_queryset().get_or_create(pk=data)\n return email_address\n except ObjectDoesNotExist:\ndiff --git a/lego/apps/email/validators.py b/lego/apps/email/validators.py\n--- a/lego/apps/email/validators.py\n+++ b/lego/apps/email/validators.py\n@@ -5,6 +5,8 @@\n \n \n def validate_email_address(email_address):\n+ if email_address.email != email_address.email.lower():\n+ raise ValidationError('Email is not lowercased')\n \n if email_address.is_assigned():\n raise ValidationError('The address is already assigned')\n", "issue": "EmailAddress case sensitivity\nWhen creating a email address for a user (for gsuite sync) that includes capital letters, the user will be suspended. All input should be lowercased, since that is ehat google is doing.\n", "before_files": [{"content": "from django.core.exceptions import ValidationError\nfrom django.core.validators import EmailValidator, RegexValidator\n\nfrom lego.utils.validators import ReservedNameValidator\n\n\ndef validate_email_address(email_address):\n\n if email_address.is_assigned():\n raise ValidationError('The address is already assigned')\n\n\ndef validate_email_address_content(email_address):\n \"\"\"Make sure we only create valid emails.\"\"\"\n\n regex_validator = RegexValidator(regex=EmailValidator.user_regex)\n reserved_valdator = ReservedNameValidator()\n\n regex_validator(email_address.email)\n reserved_valdator(email_address.email)\n", "path": "lego/apps/email/validators.py"}, {"content": "from django.core.exceptions import ObjectDoesNotExist\nfrom rest_framework import serializers\n\nfrom lego.apps.email.validators import validate_email_address, validate_email_address_content\n\n\nclass EmailAddressField(serializers.PrimaryKeyRelatedField):\n \"\"\"\n Manage the email address as a string.\n \"\"\"\n\n def __init__(self, **kwargs):\n validators = kwargs.get('validators')\n kwargs['validators'] = validators if validators is not None else [validate_email_address]\n kwargs['validators'].append(validate_email_address_content)\n super().__init__(**kwargs)\n\n def to_internal_value(self, data):\n \"\"\"\n Create email if not exists.\n \"\"\"\n try:\n email_address, _ = self.get_queryset().get_or_create(pk=data)\n return email_address\n except ObjectDoesNotExist:\n self.fail('does_not_exist', pk_value=data)\n except (TypeError, ValueError):\n self.fail('incorrect_type', data_type=type(data).__name__)\n", "path": "lego/apps/email/fields.py"}], "after_files": [{"content": "from django.core.exceptions import ValidationError\nfrom django.core.validators import EmailValidator, RegexValidator\n\nfrom lego.utils.validators import ReservedNameValidator\n\n\ndef validate_email_address(email_address):\n if email_address.email != email_address.email.lower():\n raise ValidationError('Email is not lowercased')\n\n if email_address.is_assigned():\n raise ValidationError('The address is already assigned')\n\n\ndef validate_email_address_content(email_address):\n \"\"\"Make sure we only create valid emails.\"\"\"\n\n regex_validator = RegexValidator(regex=EmailValidator.user_regex)\n reserved_valdator = ReservedNameValidator()\n\n regex_validator(email_address.email)\n reserved_valdator(email_address.email)\n", "path": "lego/apps/email/validators.py"}, {"content": "from django.core.exceptions import 
ObjectDoesNotExist\nfrom rest_framework import serializers\n\nfrom lego.apps.email.validators import validate_email_address, validate_email_address_content\n\n\nclass EmailAddressField(serializers.PrimaryKeyRelatedField):\n \"\"\"\n Manage the email address as a string.\n \"\"\"\n\n def __init__(self, **kwargs):\n validators = kwargs.get('validators')\n kwargs['validators'] = validators if validators is not None else [validate_email_address]\n kwargs['validators'].append(validate_email_address_content)\n super().__init__(**kwargs)\n\n def to_internal_value(self, data):\n \"\"\"\n Create email if not exists.\n \"\"\"\n try:\n data = data.lower()\n email_address, _ = self.get_queryset().get_or_create(pk=data)\n return email_address\n except ObjectDoesNotExist:\n self.fail('does_not_exist', pk_value=data)\n except (TypeError, ValueError):\n self.fail('incorrect_type', data_type=type(data).__name__)\n", "path": "lego/apps/email/fields.py"}]} | 720 | 205 |
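The change is deliberately small: incoming addresses are lower-cased before `get_or_create`, and a validator now rejects any stored address that is not already lowercase. The stand-alone snippet below mirrors that rule so the intended behaviour is easy to see; the sample address is invented and the exception class is a local stand-in for Django's `ValidationError`.

```python
# Stand-alone illustration of the added lowercase rule (sample address made up).
class NotLowercased(Exception):
    pass


def validate_lowercase(address: str) -> str:
    if address != address.lower():
        raise NotLowercased("Email is not lowercased")
    return address


print(validate_lowercase("[email protected]"))        # accepted as-is
# validate_lowercase("[email protected]")     # would raise NotLowercased
```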
gh_patches_debug_8994 | rasdani/github-patches | git_diff | getnikola__nikola-760 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
USE_BUNDLES, attribute 'file_dep' must be {list, tuple}
In Py3K map() and filter() return iterators.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `nikola/plugins/task/bundles.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 # Copyright © 2012-2013 Roberto Alsina and others.
4
5 # Permission is hereby granted, free of charge, to any
6 # person obtaining a copy of this software and associated
7 # documentation files (the "Software"), to deal in the
8 # Software without restriction, including without limitation
9 # the rights to use, copy, modify, merge, publish,
10 # distribute, sublicense, and/or sell copies of the
11 # Software, and to permit persons to whom the Software is
12 # furnished to do so, subject to the following conditions:
13 #
14 # The above copyright notice and this permission notice
15 # shall be included in all copies or substantial portions of
16 # the Software.
17 #
18 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY
19 # KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
20 # WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR
21 # PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS
22 # OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
23 # OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
24 # OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
25 # SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
26
27 from __future__ import unicode_literals
28
29 import os
30
31 try:
32 import webassets
33 except ImportError:
34 webassets = None # NOQA
35
36 from nikola.plugin_categories import LateTask
37 from nikola import utils
38
39
40 class BuildBundles(LateTask):
41 """Bundle assets using WebAssets."""
42
43 name = "build_bundles"
44
45 def set_site(self, site):
46 super(BuildBundles, self).set_site(site)
47 if webassets is None:
48 self.site.config['USE_BUNDLES'] = False
49
50 def gen_tasks(self):
51 """Bundle assets using WebAssets."""
52
53 kw = {
54 'filters': self.site.config['FILTERS'],
55 'output_folder': self.site.config['OUTPUT_FOLDER'],
56 'cache_folder': self.site.config['CACHE_FOLDER'],
57 'theme_bundles': get_theme_bundles(self.site.THEMES),
58 'themes': self.site.THEMES,
59 'files_folders': self.site.config['FILES_FOLDERS'],
60 'code_color_scheme': self.site.config['CODE_COLOR_SCHEME'],
61 }
62
63 def build_bundle(output, inputs):
64 out_dir = os.path.join(kw['output_folder'],
65 os.path.dirname(output))
66 inputs = [i for i in inputs if os.path.isfile(
67 os.path.join(out_dir, i))]
68 cache_dir = os.path.join(kw['cache_folder'], 'webassets')
69 utils.makedirs(cache_dir)
70 env = webassets.Environment(out_dir, os.path.dirname(output),
71 cache=cache_dir)
72 bundle = webassets.Bundle(*inputs, output=os.path.basename(output))
73 env.register(output, bundle)
74 # This generates the file
75 env[output].urls()
76
77 flag = False
78 if (webassets is not None and self.site.config['USE_BUNDLES'] is not
79 False):
80 for name, files in kw['theme_bundles'].items():
81 output_path = os.path.join(kw['output_folder'], name)
82 dname = os.path.dirname(name)
83 file_dep = [os.path.join(kw['output_folder'], dname, fname)
84 for fname in files]
85 file_dep = filter(os.path.isfile, file_dep) # removes missing files
86 task = {
87 'file_dep': file_dep,
88 'task_dep': ['copy_assets'],
89 'basename': str(self.name),
90 'name': str(output_path),
91 'actions': [(build_bundle, (name, files))],
92 'targets': [output_path],
93 'uptodate': [utils.config_changed(kw)],
94 'clean': True,
95 }
96 flag = True
97 yield utils.apply_filters(task, kw['filters'])
98 if flag is False: # No page rendered, yield a dummy task
99 yield {
100 'basename': self.name,
101 'uptodate': [True],
102 'name': 'None',
103 'actions': [],
104 }
105
106
107 def get_theme_bundles(themes):
108 """Given a theme chain, return the bundle definitions."""
109 bundles = {}
110 for theme_name in themes:
111 bundles_path = os.path.join(
112 utils.get_theme_path(theme_name), 'bundles')
113 if os.path.isfile(bundles_path):
114 with open(bundles_path) as fd:
115 for line in fd:
116 name, files = line.split('=')
117 files = [f.strip() for f in files.split(',')]
118 bundles[name.strip()] = files
119 break
120 return bundles
121
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/nikola/plugins/task/bundles.py b/nikola/plugins/task/bundles.py
--- a/nikola/plugins/task/bundles.py
+++ b/nikola/plugins/task/bundles.py
@@ -84,7 +84,7 @@
for fname in files]
file_dep = filter(os.path.isfile, file_dep) # removes missing files
task = {
- 'file_dep': file_dep,
+ 'file_dep': list(file_dep),
'task_dep': ['copy_assets'],
'basename': str(self.name),
'name': str(output_path),
| {"golden_diff": "diff --git a/nikola/plugins/task/bundles.py b/nikola/plugins/task/bundles.py\n--- a/nikola/plugins/task/bundles.py\n+++ b/nikola/plugins/task/bundles.py\n@@ -84,7 +84,7 @@\n for fname in files]\n file_dep = filter(os.path.isfile, file_dep) # removes missing files\n task = {\n- 'file_dep': file_dep,\n+ 'file_dep': list(file_dep),\n 'task_dep': ['copy_assets'],\n 'basename': str(self.name),\n 'name': str(output_path),\n", "issue": "USE_BUNDLES, attribute 'file_dep' must be {list, tuple}\nIn Py3K map() and filter() return iterators.\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Copyright \u00a9 2012-2013 Roberto Alsina and others.\n\n# Permission is hereby granted, free of charge, to any\n# person obtaining a copy of this software and associated\n# documentation files (the \"Software\"), to deal in the\n# Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the\n# Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice\n# shall be included in all copies or substantial portions of\n# the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY\n# KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE\n# WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR\n# PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS\n# OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR\n# OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR\n# OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\nfrom __future__ import unicode_literals\n\nimport os\n\ntry:\n import webassets\nexcept ImportError:\n webassets = None # NOQA\n\nfrom nikola.plugin_categories import LateTask\nfrom nikola import utils\n\n\nclass BuildBundles(LateTask):\n \"\"\"Bundle assets using WebAssets.\"\"\"\n\n name = \"build_bundles\"\n\n def set_site(self, site):\n super(BuildBundles, self).set_site(site)\n if webassets is None:\n self.site.config['USE_BUNDLES'] = False\n\n def gen_tasks(self):\n \"\"\"Bundle assets using WebAssets.\"\"\"\n\n kw = {\n 'filters': self.site.config['FILTERS'],\n 'output_folder': self.site.config['OUTPUT_FOLDER'],\n 'cache_folder': self.site.config['CACHE_FOLDER'],\n 'theme_bundles': get_theme_bundles(self.site.THEMES),\n 'themes': self.site.THEMES,\n 'files_folders': self.site.config['FILES_FOLDERS'],\n 'code_color_scheme': self.site.config['CODE_COLOR_SCHEME'],\n }\n\n def build_bundle(output, inputs):\n out_dir = os.path.join(kw['output_folder'],\n os.path.dirname(output))\n inputs = [i for i in inputs if os.path.isfile(\n os.path.join(out_dir, i))]\n cache_dir = os.path.join(kw['cache_folder'], 'webassets')\n utils.makedirs(cache_dir)\n env = webassets.Environment(out_dir, os.path.dirname(output),\n cache=cache_dir)\n bundle = webassets.Bundle(*inputs, output=os.path.basename(output))\n env.register(output, bundle)\n # This generates the file\n env[output].urls()\n\n flag = False\n if (webassets is not None and self.site.config['USE_BUNDLES'] is not\n False):\n for name, files in kw['theme_bundles'].items():\n output_path = os.path.join(kw['output_folder'], name)\n dname = os.path.dirname(name)\n file_dep = [os.path.join(kw['output_folder'], dname, fname)\n for fname in files]\n file_dep = filter(os.path.isfile, file_dep) # removes missing files\n 
task = {\n 'file_dep': file_dep,\n 'task_dep': ['copy_assets'],\n 'basename': str(self.name),\n 'name': str(output_path),\n 'actions': [(build_bundle, (name, files))],\n 'targets': [output_path],\n 'uptodate': [utils.config_changed(kw)],\n 'clean': True,\n }\n flag = True\n yield utils.apply_filters(task, kw['filters'])\n if flag is False: # No page rendered, yield a dummy task\n yield {\n 'basename': self.name,\n 'uptodate': [True],\n 'name': 'None',\n 'actions': [],\n }\n\n\ndef get_theme_bundles(themes):\n \"\"\"Given a theme chain, return the bundle definitions.\"\"\"\n bundles = {}\n for theme_name in themes:\n bundles_path = os.path.join(\n utils.get_theme_path(theme_name), 'bundles')\n if os.path.isfile(bundles_path):\n with open(bundles_path) as fd:\n for line in fd:\n name, files = line.split('=')\n files = [f.strip() for f in files.split(',')]\n bundles[name.strip()] = files\n break\n return bundles\n", "path": "nikola/plugins/task/bundles.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Copyright \u00a9 2012-2013 Roberto Alsina and others.\n\n# Permission is hereby granted, free of charge, to any\n# person obtaining a copy of this software and associated\n# documentation files (the \"Software\"), to deal in the\n# Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the\n# Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice\n# shall be included in all copies or substantial portions of\n# the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY\n# KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE\n# WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR\n# PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS\n# OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR\n# OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR\n# OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\nfrom __future__ import unicode_literals\n\nimport os\n\ntry:\n import webassets\nexcept ImportError:\n webassets = None # NOQA\n\nfrom nikola.plugin_categories import LateTask\nfrom nikola import utils\n\n\nclass BuildBundles(LateTask):\n \"\"\"Bundle assets using WebAssets.\"\"\"\n\n name = \"build_bundles\"\n\n def set_site(self, site):\n super(BuildBundles, self).set_site(site)\n if webassets is None:\n self.site.config['USE_BUNDLES'] = False\n\n def gen_tasks(self):\n \"\"\"Bundle assets using WebAssets.\"\"\"\n\n kw = {\n 'filters': self.site.config['FILTERS'],\n 'output_folder': self.site.config['OUTPUT_FOLDER'],\n 'cache_folder': self.site.config['CACHE_FOLDER'],\n 'theme_bundles': get_theme_bundles(self.site.THEMES),\n 'themes': self.site.THEMES,\n 'files_folders': self.site.config['FILES_FOLDERS'],\n 'code_color_scheme': self.site.config['CODE_COLOR_SCHEME'],\n }\n\n def build_bundle(output, inputs):\n out_dir = os.path.join(kw['output_folder'],\n os.path.dirname(output))\n inputs = [i for i in inputs if os.path.isfile(\n os.path.join(out_dir, i))]\n cache_dir = os.path.join(kw['cache_folder'], 'webassets')\n utils.makedirs(cache_dir)\n env = webassets.Environment(out_dir, os.path.dirname(output),\n cache=cache_dir)\n bundle = webassets.Bundle(*inputs, output=os.path.basename(output))\n env.register(output, bundle)\n # This generates the file\n env[output].urls()\n\n flag = False\n if (webassets is not None and self.site.config['USE_BUNDLES'] is not\n False):\n for name, files in kw['theme_bundles'].items():\n output_path = os.path.join(kw['output_folder'], name)\n dname = os.path.dirname(name)\n file_dep = [os.path.join(kw['output_folder'], dname, fname)\n for fname in files]\n file_dep = filter(os.path.isfile, file_dep) # removes missing files\n task = {\n 'file_dep': list(file_dep),\n 'task_dep': ['copy_assets'],\n 'basename': str(self.name),\n 'name': str(output_path),\n 'actions': [(build_bundle, (name, files))],\n 'targets': [output_path],\n 'uptodate': [utils.config_changed(kw)],\n 'clean': True,\n }\n flag = True\n yield utils.apply_filters(task, kw['filters'])\n if flag is False: # No page rendered, yield a dummy task\n yield {\n 'basename': self.name,\n 'uptodate': [True],\n 'name': 'None',\n 'actions': [],\n }\n\n\ndef get_theme_bundles(themes):\n \"\"\"Given a theme chain, return the bundle definitions.\"\"\"\n bundles = {}\n for theme_name in themes:\n bundles_path = os.path.join(\n utils.get_theme_path(theme_name), 'bundles')\n if os.path.isfile(bundles_path):\n with open(bundles_path) as fd:\n for line in fd:\n name, files = line.split('=')\n files = [f.strip() for f in files.split(',')]\n bundles[name.strip()] = files\n break\n return bundles\n", "path": "nikola/plugins/task/bundles.py"}]} | 1,532 | 128 |
gh_patches_debug_38719 | rasdani/github-patches | git_diff | alltheplaces__alltheplaces-2460 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Spider afcurgentcare is broken
During the global build at 2021-05-21-20-28-08, spider **afcurgentcare** failed with **0 features** and **0 errors**.
Here's [the log](https://data.alltheplaces.xyz/runs/2021-05-21-20-28-08/logs/afcurgentcare.log) and [the output](https://data.alltheplaces.xyz/runs/2021-05-21-20-28-08/output/afcurgentcare.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-05-21-20-28-08/output/afcurgentcare.geojson))
--- END ISSUE ---
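Before the spider code, a hedged note on how one might confirm the "0 features and 0 errors" symptom. The snippet below is a generic requests/parsel probe, not project code; it reuses the XPath selector from the spider shown below, and a zero count would mean the listing page no longer exposes the markup the spider expects — consistent with the accepted patch further down, which switches to the site's JSON `multilocation` endpoint instead of scraping HTML.

```python
# Hypothetical debugging probe, not project code: does the start URL still
# contain the <li class="location" data-href=...> entries the spider scrapes?
import requests
from parsel import Selector

resp = requests.get("https://www.afcurgentcare.com/locations/", timeout=30)
sel = Selector(text=resp.text)
links = sel.xpath('//li[@class="location"]/@data-href').getall()
print(len(links))  # 0 would match the "0 features and 0 errors" build result
```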
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `locations/spiders/afcurgentcare.py`
Content:
```
1 import json
2 import re
3 import scrapy
4 from locations.items import GeojsonPointItem
5 from locations.hours import OpeningHours
6
7 class AfcUrgentCareSpider(scrapy.Spider):
8 name = "afcurgentcare"
9 item_attributes = { 'brand': "AFC Urgent Care" }
10 allowed_domains = ["afcurgentcare.com"]
11 download_delay = 0.2
12 start_urls = (
13 'https://www.afcurgentcare.com/locations/',
14 )
15
16 def parse(self, response):
17 for url in response.xpath('//li[@class="location"]/@data-href').extract():
18 yield scrapy.Request(
19 response.urljoin(url),
20 callback=self.parse_store,
21 )
22
23 def parse_store(self, response):
24 properties = {
25 'ref': response.url,
26 'lat': response.xpath('//div[@class="map-container"]/div/@data-latitude').extract_first(),
27 'lon': response.xpath('//div[@class="map-container"]/div/@data-longitude').extract_first(),
28 'phone': response.xpath('//a[@class="phone-link"]/span/text()').extract_first(),
29 'addr_full': response.xpath('//span[@itemprop="streetAddress"]/text()').extract_first().strip(),
30 'name': response.xpath('//meta[@itemprop="name legalName"]/@content').extract_first(),
31 'city': response.xpath('//span[@itemprop="addressLocality"]/text()').extract_first()[:-1],
32 'state': response.xpath('//span[@itemprop="addressRegion"]/text()').extract_first().strip(),
33 'postcode': response.xpath('//span[@itemprop="postalCode"]/text()').extract_first().strip(),
34 }
35
36 o = OpeningHours()
37 for h in response.css('#LocalMapAreaOpenHourBanner li.h-day'):
38 day = h.xpath('em/span/text()').extract_first().strip()[:2]
39 day_range = h.xpath('em/text()').extract_first().strip(':').strip()
40 open_time, close_time = day_range.split(' - ')
41
42 o.add_range(day, open_time, close_time, '%I:%M %p')
43 properties['opening_hours'] = o.as_opening_hours()
44
45 yield GeojsonPointItem(**properties)
46
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/locations/spiders/afcurgentcare.py b/locations/spiders/afcurgentcare.py
--- a/locations/spiders/afcurgentcare.py
+++ b/locations/spiders/afcurgentcare.py
@@ -1,45 +1,48 @@
import json
-import re
+import urllib.parse
+
import scrapy
+
from locations.items import GeojsonPointItem
-from locations.hours import OpeningHours
+from locations.hours import OpeningHours, DAYS
+
class AfcUrgentCareSpider(scrapy.Spider):
name = "afcurgentcare"
- item_attributes = { 'brand': "AFC Urgent Care" }
+ item_attributes = {"brand": "AFC Urgent Care"}
allowed_domains = ["afcurgentcare.com"]
- download_delay = 0.2
start_urls = (
- 'https://www.afcurgentcare.com/locations/',
+ "https://www.afcurgentcare.com/modules/multilocation/?near_lat=39&near_lon=-98",
)
def parse(self, response):
- for url in response.xpath('//li[@class="location"]/@data-href').extract():
- yield scrapy.Request(
- response.urljoin(url),
- callback=self.parse_store,
- )
+ j = json.loads(response.body)
+ if j["meta"]["next"] is not None:
+ qs = "?" + urllib.parse.urlparse(j["meta"]["next"]).query
+ yield scrapy.Request(urllib.parse.urljoin(response.url, qs))
+ for obj in j["objects"]:
+ yield from self.parse_store(obj)
- def parse_store(self, response):
+ def parse_store(self, obj):
properties = {
- 'ref': response.url,
- 'lat': response.xpath('//div[@class="map-container"]/div/@data-latitude').extract_first(),
- 'lon': response.xpath('//div[@class="map-container"]/div/@data-longitude').extract_first(),
- 'phone': response.xpath('//a[@class="phone-link"]/span/text()').extract_first(),
- 'addr_full': response.xpath('//span[@itemprop="streetAddress"]/text()').extract_first().strip(),
- 'name': response.xpath('//meta[@itemprop="name legalName"]/@content').extract_first(),
- 'city': response.xpath('//span[@itemprop="addressLocality"]/text()').extract_first()[:-1],
- 'state': response.xpath('//span[@itemprop="addressRegion"]/text()').extract_first().strip(),
- 'postcode': response.xpath('//span[@itemprop="postalCode"]/text()').extract_first().strip(),
+ "ref": obj["id"],
+ "lat": obj["lat"],
+ "lon": obj["lon"],
+ "phone": obj["phonemap_e164"].get("phone"),
+ "addr_full": obj["street"],
+ "name": obj["location_name"],
+ "city": obj["city"],
+ "state": obj["state"],
+ "postcode": obj["postal_code"],
+ "website": obj["location_url"],
}
o = OpeningHours()
- for h in response.css('#LocalMapAreaOpenHourBanner li.h-day'):
- day = h.xpath('em/span/text()').extract_first().strip()[:2]
- day_range = h.xpath('em/text()').extract_first().strip(':').strip()
- open_time, close_time = day_range.split(' - ')
-
- o.add_range(day, open_time, close_time, '%I:%M %p')
- properties['opening_hours'] = o.as_opening_hours()
+ for ([h, _], day) in zip(obj["hours_of_operation"], DAYS):
+ if not h:
+ continue
+ open_time, close_time = h
+ o.add_range(day, open_time, close_time, "%H:%M:%S")
+ properties["opening_hours"] = o.as_opening_hours()
yield GeojsonPointItem(**properties)
| {"golden_diff": "diff --git a/locations/spiders/afcurgentcare.py b/locations/spiders/afcurgentcare.py\n--- a/locations/spiders/afcurgentcare.py\n+++ b/locations/spiders/afcurgentcare.py\n@@ -1,45 +1,48 @@\n import json\n-import re\n+import urllib.parse\n+\n import scrapy\n+\n from locations.items import GeojsonPointItem\n-from locations.hours import OpeningHours\n+from locations.hours import OpeningHours, DAYS\n+\n \n class AfcUrgentCareSpider(scrapy.Spider):\n name = \"afcurgentcare\"\n- item_attributes = { 'brand': \"AFC Urgent Care\" }\n+ item_attributes = {\"brand\": \"AFC Urgent Care\"}\n allowed_domains = [\"afcurgentcare.com\"]\n- download_delay = 0.2\n start_urls = (\n- 'https://www.afcurgentcare.com/locations/',\n+ \"https://www.afcurgentcare.com/modules/multilocation/?near_lat=39&near_lon=-98\",\n )\n \n def parse(self, response):\n- for url in response.xpath('//li[@class=\"location\"]/@data-href').extract():\n- yield scrapy.Request(\n- response.urljoin(url),\n- callback=self.parse_store,\n- )\n+ j = json.loads(response.body)\n+ if j[\"meta\"][\"next\"] is not None:\n+ qs = \"?\" + urllib.parse.urlparse(j[\"meta\"][\"next\"]).query\n+ yield scrapy.Request(urllib.parse.urljoin(response.url, qs))\n+ for obj in j[\"objects\"]:\n+ yield from self.parse_store(obj)\n \n- def parse_store(self, response):\n+ def parse_store(self, obj):\n properties = {\n- 'ref': response.url,\n- 'lat': response.xpath('//div[@class=\"map-container\"]/div/@data-latitude').extract_first(),\n- 'lon': response.xpath('//div[@class=\"map-container\"]/div/@data-longitude').extract_first(),\n- 'phone': response.xpath('//a[@class=\"phone-link\"]/span/text()').extract_first(),\n- 'addr_full': response.xpath('//span[@itemprop=\"streetAddress\"]/text()').extract_first().strip(),\n- 'name': response.xpath('//meta[@itemprop=\"name legalName\"]/@content').extract_first(),\n- 'city': response.xpath('//span[@itemprop=\"addressLocality\"]/text()').extract_first()[:-1],\n- 'state': response.xpath('//span[@itemprop=\"addressRegion\"]/text()').extract_first().strip(),\n- 'postcode': response.xpath('//span[@itemprop=\"postalCode\"]/text()').extract_first().strip(),\n+ \"ref\": obj[\"id\"],\n+ \"lat\": obj[\"lat\"],\n+ \"lon\": obj[\"lon\"],\n+ \"phone\": obj[\"phonemap_e164\"].get(\"phone\"),\n+ \"addr_full\": obj[\"street\"],\n+ \"name\": obj[\"location_name\"],\n+ \"city\": obj[\"city\"],\n+ \"state\": obj[\"state\"],\n+ \"postcode\": obj[\"postal_code\"],\n+ \"website\": obj[\"location_url\"],\n }\n \n o = OpeningHours()\n- for h in response.css('#LocalMapAreaOpenHourBanner li.h-day'):\n- day = h.xpath('em/span/text()').extract_first().strip()[:2]\n- day_range = h.xpath('em/text()').extract_first().strip(':').strip()\n- open_time, close_time = day_range.split(' - ')\n-\n- o.add_range(day, open_time, close_time, '%I:%M %p')\n- properties['opening_hours'] = o.as_opening_hours()\n+ for ([h, _], day) in zip(obj[\"hours_of_operation\"], DAYS):\n+ if not h:\n+ continue\n+ open_time, close_time = h\n+ o.add_range(day, open_time, close_time, \"%H:%M:%S\")\n+ properties[\"opening_hours\"] = o.as_opening_hours()\n \n yield GeojsonPointItem(**properties)\n", "issue": "Spider afcurgentcare is broken\nDuring the global build at 2021-05-21-20-28-08, spider **afcurgentcare** failed with **0 features** and **0 errors**.\n\nHere's [the log](https://data.alltheplaces.xyz/runs/2021-05-21-20-28-08/logs/afcurgentcare.log) and [the output](https://data.alltheplaces.xyz/runs/2021-05-21-20-28-08/output/afcurgentcare.geojson) ([on a 
map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-05-21-20-28-08/output/afcurgentcare.geojson))\n", "before_files": [{"content": "import json\nimport re\nimport scrapy\nfrom locations.items import GeojsonPointItem\nfrom locations.hours import OpeningHours\n\nclass AfcUrgentCareSpider(scrapy.Spider):\n name = \"afcurgentcare\"\n item_attributes = { 'brand': \"AFC Urgent Care\" }\n allowed_domains = [\"afcurgentcare.com\"]\n download_delay = 0.2\n start_urls = (\n 'https://www.afcurgentcare.com/locations/',\n )\n\n def parse(self, response):\n for url in response.xpath('//li[@class=\"location\"]/@data-href').extract():\n yield scrapy.Request(\n response.urljoin(url),\n callback=self.parse_store,\n )\n\n def parse_store(self, response):\n properties = {\n 'ref': response.url,\n 'lat': response.xpath('//div[@class=\"map-container\"]/div/@data-latitude').extract_first(),\n 'lon': response.xpath('//div[@class=\"map-container\"]/div/@data-longitude').extract_first(),\n 'phone': response.xpath('//a[@class=\"phone-link\"]/span/text()').extract_first(),\n 'addr_full': response.xpath('//span[@itemprop=\"streetAddress\"]/text()').extract_first().strip(),\n 'name': response.xpath('//meta[@itemprop=\"name legalName\"]/@content').extract_first(),\n 'city': response.xpath('//span[@itemprop=\"addressLocality\"]/text()').extract_first()[:-1],\n 'state': response.xpath('//span[@itemprop=\"addressRegion\"]/text()').extract_first().strip(),\n 'postcode': response.xpath('//span[@itemprop=\"postalCode\"]/text()').extract_first().strip(),\n }\n\n o = OpeningHours()\n for h in response.css('#LocalMapAreaOpenHourBanner li.h-day'):\n day = h.xpath('em/span/text()').extract_first().strip()[:2]\n day_range = h.xpath('em/text()').extract_first().strip(':').strip()\n open_time, close_time = day_range.split(' - ')\n\n o.add_range(day, open_time, close_time, '%I:%M %p')\n properties['opening_hours'] = o.as_opening_hours()\n\n yield GeojsonPointItem(**properties)\n", "path": "locations/spiders/afcurgentcare.py"}], "after_files": [{"content": "import json\nimport urllib.parse\n\nimport scrapy\n\nfrom locations.items import GeojsonPointItem\nfrom locations.hours import OpeningHours, DAYS\n\n\nclass AfcUrgentCareSpider(scrapy.Spider):\n name = \"afcurgentcare\"\n item_attributes = {\"brand\": \"AFC Urgent Care\"}\n allowed_domains = [\"afcurgentcare.com\"]\n start_urls = (\n \"https://www.afcurgentcare.com/modules/multilocation/?near_lat=39&near_lon=-98\",\n )\n\n def parse(self, response):\n j = json.loads(response.body)\n if j[\"meta\"][\"next\"] is not None:\n qs = \"?\" + urllib.parse.urlparse(j[\"meta\"][\"next\"]).query\n yield scrapy.Request(urllib.parse.urljoin(response.url, qs))\n for obj in j[\"objects\"]:\n yield from self.parse_store(obj)\n\n def parse_store(self, obj):\n properties = {\n \"ref\": obj[\"id\"],\n \"lat\": obj[\"lat\"],\n \"lon\": obj[\"lon\"],\n \"phone\": obj[\"phonemap_e164\"].get(\"phone\"),\n \"addr_full\": obj[\"street\"],\n \"name\": obj[\"location_name\"],\n \"city\": obj[\"city\"],\n \"state\": obj[\"state\"],\n \"postcode\": obj[\"postal_code\"],\n \"website\": obj[\"location_url\"],\n }\n\n o = OpeningHours()\n for ([h, _], day) in zip(obj[\"hours_of_operation\"], DAYS):\n if not h:\n continue\n open_time, close_time = h\n o.add_range(day, open_time, close_time, \"%H:%M:%S\")\n properties[\"opening_hours\"] = o.as_opening_hours()\n\n yield GeojsonPointItem(**properties)\n", "path": "locations/spiders/afcurgentcare.py"}]} | 1,014 | 884 |
gh_patches_debug_16472 | rasdani/github-patches | git_diff | getsentry__sentry-python-1812 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
SQLAlchemy integration fails to parse version `2.0.0rc1`
### How do you use Sentry?
Sentry Saas (sentry.io)
### Version
1.12.1
### Steps to Reproduce
Call `sentry_sdk.init()` with SQLAlchemy integration and install SQLAlchemy==2.0.0rc1.
### Expected Result
no error
### Actual Result
```
@staticmethod
def setup_once():
# type: () -> None
try:
version = tuple(map(int, SQLALCHEMY_VERSION.split("b")[0].split(".")))
except (TypeError, ValueError):
> raise DidNotEnable(
"Unparsable SQLAlchemy version: {}".format(SQLALCHEMY_VERSION)
)
E sentry_sdk.integrations.DidNotEnable: Unparsable SQLAlchemy version: 2.0.0rc1
```
--- END ISSUE ---
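To make the traceback above concrete, here is a stand-alone sketch of the parsing failure and of the fix that the reference diff later in this record applies (splitting on either pre-release marker). The version strings are just examples.

```python
# Stand-alone sketch: why split("b") cannot handle "rc" pre-releases, and how
# the re.split("b|rc", ...) change used in the fix handles both styles.
import re

version_string = "2.0.0rc1"

try:
    tuple(map(int, version_string.split("b")[0].split(".")))
except ValueError as exc:
    # int("0rc1") fails, so setup_once() raises DidNotEnable
    print("old parser:", exc)

print(tuple(map(int, re.split("b|rc", version_string)[0].split("."))))  # (2, 0, 0)
print(tuple(map(int, re.split("b|rc", "1.4.39b1")[0].split("."))))      # (1, 4, 39)
```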
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sentry_sdk/integrations/sqlalchemy.py`
Content:
```
1 from __future__ import absolute_import
2
3 from sentry_sdk._types import MYPY
4 from sentry_sdk.hub import Hub
5 from sentry_sdk.integrations import Integration, DidNotEnable
6 from sentry_sdk.tracing_utils import record_sql_queries
7
8 try:
9 from sqlalchemy.engine import Engine # type: ignore
10 from sqlalchemy.event import listen # type: ignore
11 from sqlalchemy import __version__ as SQLALCHEMY_VERSION # type: ignore
12 except ImportError:
13 raise DidNotEnable("SQLAlchemy not installed.")
14
15 if MYPY:
16 from typing import Any
17 from typing import ContextManager
18 from typing import Optional
19
20 from sentry_sdk.tracing import Span
21
22
23 class SqlalchemyIntegration(Integration):
24 identifier = "sqlalchemy"
25
26 @staticmethod
27 def setup_once():
28 # type: () -> None
29
30 try:
31 version = tuple(map(int, SQLALCHEMY_VERSION.split("b")[0].split(".")))
32 except (TypeError, ValueError):
33 raise DidNotEnable(
34 "Unparsable SQLAlchemy version: {}".format(SQLALCHEMY_VERSION)
35 )
36
37 if version < (1, 2):
38 raise DidNotEnable("SQLAlchemy 1.2 or newer required.")
39
40 listen(Engine, "before_cursor_execute", _before_cursor_execute)
41 listen(Engine, "after_cursor_execute", _after_cursor_execute)
42 listen(Engine, "handle_error", _handle_error)
43
44
45 def _before_cursor_execute(
46 conn, cursor, statement, parameters, context, executemany, *args
47 ):
48 # type: (Any, Any, Any, Any, Any, bool, *Any) -> None
49 hub = Hub.current
50 if hub.get_integration(SqlalchemyIntegration) is None:
51 return
52
53 ctx_mgr = record_sql_queries(
54 hub,
55 cursor,
56 statement,
57 parameters,
58 paramstyle=context and context.dialect and context.dialect.paramstyle or None,
59 executemany=executemany,
60 )
61 context._sentry_sql_span_manager = ctx_mgr
62
63 span = ctx_mgr.__enter__()
64
65 if span is not None:
66 context._sentry_sql_span = span
67
68
69 def _after_cursor_execute(conn, cursor, statement, parameters, context, *args):
70 # type: (Any, Any, Any, Any, Any, *Any) -> None
71 ctx_mgr = getattr(
72 context, "_sentry_sql_span_manager", None
73 ) # type: Optional[ContextManager[Any]]
74
75 if ctx_mgr is not None:
76 context._sentry_sql_span_manager = None
77 ctx_mgr.__exit__(None, None, None)
78
79
80 def _handle_error(context, *args):
81 # type: (Any, *Any) -> None
82 execution_context = context.execution_context
83 if execution_context is None:
84 return
85
86 span = getattr(execution_context, "_sentry_sql_span", None) # type: Optional[Span]
87
88 if span is not None:
89 span.set_status("internal_error")
90
91 # _after_cursor_execute does not get called for crashing SQL stmts. Judging
92 # from SQLAlchemy codebase it does seem like any error coming into this
93 # handler is going to be fatal.
94 ctx_mgr = getattr(
95 execution_context, "_sentry_sql_span_manager", None
96 ) # type: Optional[ContextManager[Any]]
97
98 if ctx_mgr is not None:
99 execution_context._sentry_sql_span_manager = None
100 ctx_mgr.__exit__(None, None, None)
101
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/sentry_sdk/integrations/sqlalchemy.py b/sentry_sdk/integrations/sqlalchemy.py
--- a/sentry_sdk/integrations/sqlalchemy.py
+++ b/sentry_sdk/integrations/sqlalchemy.py
@@ -1,5 +1,7 @@
from __future__ import absolute_import
+import re
+
from sentry_sdk._types import MYPY
from sentry_sdk.hub import Hub
from sentry_sdk.integrations import Integration, DidNotEnable
@@ -28,7 +30,9 @@
# type: () -> None
try:
- version = tuple(map(int, SQLALCHEMY_VERSION.split("b")[0].split(".")))
+ version = tuple(
+ map(int, re.split("b|rc", SQLALCHEMY_VERSION)[0].split("."))
+ )
except (TypeError, ValueError):
raise DidNotEnable(
"Unparsable SQLAlchemy version: {}".format(SQLALCHEMY_VERSION)
| {"golden_diff": "diff --git a/sentry_sdk/integrations/sqlalchemy.py b/sentry_sdk/integrations/sqlalchemy.py\n--- a/sentry_sdk/integrations/sqlalchemy.py\n+++ b/sentry_sdk/integrations/sqlalchemy.py\n@@ -1,5 +1,7 @@\n from __future__ import absolute_import\n \n+import re\n+\n from sentry_sdk._types import MYPY\n from sentry_sdk.hub import Hub\n from sentry_sdk.integrations import Integration, DidNotEnable\n@@ -28,7 +30,9 @@\n # type: () -> None\n \n try:\n- version = tuple(map(int, SQLALCHEMY_VERSION.split(\"b\")[0].split(\".\")))\n+ version = tuple(\n+ map(int, re.split(\"b|rc\", SQLALCHEMY_VERSION)[0].split(\".\"))\n+ )\n except (TypeError, ValueError):\n raise DidNotEnable(\n \"Unparsable SQLAlchemy version: {}\".format(SQLALCHEMY_VERSION)\n", "issue": "SQLAlchemy integration fails to parse version `2.0.0rc1`\n### How do you use Sentry?\n\nSentry Saas (sentry.io)\n\n### Version\n\n1.12.1\n\n### Steps to Reproduce\n\nCall `sentry_sdk.init()` with SQLAlchemy integration and install SQLAlchemy==2.0.0rc1.\n\n### Expected Result\n\nno error\n\n### Actual Result\n\n```\r\n @staticmethod\r\n def setup_once():\r\n # type: () -> None\r\n \r\n try:\r\n version = tuple(map(int, SQLALCHEMY_VERSION.split(\"b\")[0].split(\".\")))\r\n except (TypeError, ValueError):\r\n > raise DidNotEnable(\r\n \"Unparsable SQLAlchemy version: {}\".format(SQLALCHEMY_VERSION)\r\n )\r\n E sentry_sdk.integrations.DidNotEnable: Unparsable SQLAlchemy version: 2.0.0rc1\r\n```\n", "before_files": [{"content": "from __future__ import absolute_import\n\nfrom sentry_sdk._types import MYPY\nfrom sentry_sdk.hub import Hub\nfrom sentry_sdk.integrations import Integration, DidNotEnable\nfrom sentry_sdk.tracing_utils import record_sql_queries\n\ntry:\n from sqlalchemy.engine import Engine # type: ignore\n from sqlalchemy.event import listen # type: ignore\n from sqlalchemy import __version__ as SQLALCHEMY_VERSION # type: ignore\nexcept ImportError:\n raise DidNotEnable(\"SQLAlchemy not installed.\")\n\nif MYPY:\n from typing import Any\n from typing import ContextManager\n from typing import Optional\n\n from sentry_sdk.tracing import Span\n\n\nclass SqlalchemyIntegration(Integration):\n identifier = \"sqlalchemy\"\n\n @staticmethod\n def setup_once():\n # type: () -> None\n\n try:\n version = tuple(map(int, SQLALCHEMY_VERSION.split(\"b\")[0].split(\".\")))\n except (TypeError, ValueError):\n raise DidNotEnable(\n \"Unparsable SQLAlchemy version: {}\".format(SQLALCHEMY_VERSION)\n )\n\n if version < (1, 2):\n raise DidNotEnable(\"SQLAlchemy 1.2 or newer required.\")\n\n listen(Engine, \"before_cursor_execute\", _before_cursor_execute)\n listen(Engine, \"after_cursor_execute\", _after_cursor_execute)\n listen(Engine, \"handle_error\", _handle_error)\n\n\ndef _before_cursor_execute(\n conn, cursor, statement, parameters, context, executemany, *args\n):\n # type: (Any, Any, Any, Any, Any, bool, *Any) -> None\n hub = Hub.current\n if hub.get_integration(SqlalchemyIntegration) is None:\n return\n\n ctx_mgr = record_sql_queries(\n hub,\n cursor,\n statement,\n parameters,\n paramstyle=context and context.dialect and context.dialect.paramstyle or None,\n executemany=executemany,\n )\n context._sentry_sql_span_manager = ctx_mgr\n\n span = ctx_mgr.__enter__()\n\n if span is not None:\n context._sentry_sql_span = span\n\n\ndef _after_cursor_execute(conn, cursor, statement, parameters, context, *args):\n # type: (Any, Any, Any, Any, Any, *Any) -> None\n ctx_mgr = getattr(\n context, \"_sentry_sql_span_manager\", None\n ) # type: 
Optional[ContextManager[Any]]\n\n if ctx_mgr is not None:\n context._sentry_sql_span_manager = None\n ctx_mgr.__exit__(None, None, None)\n\n\ndef _handle_error(context, *args):\n # type: (Any, *Any) -> None\n execution_context = context.execution_context\n if execution_context is None:\n return\n\n span = getattr(execution_context, \"_sentry_sql_span\", None) # type: Optional[Span]\n\n if span is not None:\n span.set_status(\"internal_error\")\n\n # _after_cursor_execute does not get called for crashing SQL stmts. Judging\n # from SQLAlchemy codebase it does seem like any error coming into this\n # handler is going to be fatal.\n ctx_mgr = getattr(\n execution_context, \"_sentry_sql_span_manager\", None\n ) # type: Optional[ContextManager[Any]]\n\n if ctx_mgr is not None:\n execution_context._sentry_sql_span_manager = None\n ctx_mgr.__exit__(None, None, None)\n", "path": "sentry_sdk/integrations/sqlalchemy.py"}], "after_files": [{"content": "from __future__ import absolute_import\n\nimport re\n\nfrom sentry_sdk._types import MYPY\nfrom sentry_sdk.hub import Hub\nfrom sentry_sdk.integrations import Integration, DidNotEnable\nfrom sentry_sdk.tracing_utils import record_sql_queries\n\ntry:\n from sqlalchemy.engine import Engine # type: ignore\n from sqlalchemy.event import listen # type: ignore\n from sqlalchemy import __version__ as SQLALCHEMY_VERSION # type: ignore\nexcept ImportError:\n raise DidNotEnable(\"SQLAlchemy not installed.\")\n\nif MYPY:\n from typing import Any\n from typing import ContextManager\n from typing import Optional\n\n from sentry_sdk.tracing import Span\n\n\nclass SqlalchemyIntegration(Integration):\n identifier = \"sqlalchemy\"\n\n @staticmethod\n def setup_once():\n # type: () -> None\n\n try:\n version = tuple(\n map(int, re.split(\"b|rc\", SQLALCHEMY_VERSION)[0].split(\".\"))\n )\n except (TypeError, ValueError):\n raise DidNotEnable(\n \"Unparsable SQLAlchemy version: {}\".format(SQLALCHEMY_VERSION)\n )\n\n if version < (1, 2):\n raise DidNotEnable(\"SQLAlchemy 1.2 or newer required.\")\n\n listen(Engine, \"before_cursor_execute\", _before_cursor_execute)\n listen(Engine, \"after_cursor_execute\", _after_cursor_execute)\n listen(Engine, \"handle_error\", _handle_error)\n\n\ndef _before_cursor_execute(\n conn, cursor, statement, parameters, context, executemany, *args\n):\n # type: (Any, Any, Any, Any, Any, bool, *Any) -> None\n hub = Hub.current\n if hub.get_integration(SqlalchemyIntegration) is None:\n return\n\n ctx_mgr = record_sql_queries(\n hub,\n cursor,\n statement,\n parameters,\n paramstyle=context and context.dialect and context.dialect.paramstyle or None,\n executemany=executemany,\n )\n context._sentry_sql_span_manager = ctx_mgr\n\n span = ctx_mgr.__enter__()\n\n if span is not None:\n context._sentry_sql_span = span\n\n\ndef _after_cursor_execute(conn, cursor, statement, parameters, context, *args):\n # type: (Any, Any, Any, Any, Any, *Any) -> None\n ctx_mgr = getattr(\n context, \"_sentry_sql_span_manager\", None\n ) # type: Optional[ContextManager[Any]]\n\n if ctx_mgr is not None:\n context._sentry_sql_span_manager = None\n ctx_mgr.__exit__(None, None, None)\n\n\ndef _handle_error(context, *args):\n # type: (Any, *Any) -> None\n execution_context = context.execution_context\n if execution_context is None:\n return\n\n span = getattr(execution_context, \"_sentry_sql_span\", None) # type: Optional[Span]\n\n if span is not None:\n span.set_status(\"internal_error\")\n\n # _after_cursor_execute does not get called for crashing SQL stmts. 
Judging\n # from SQLAlchemy codebase it does seem like any error coming into this\n # handler is going to be fatal.\n ctx_mgr = getattr(\n execution_context, \"_sentry_sql_span_manager\", None\n ) # type: Optional[ContextManager[Any]]\n\n if ctx_mgr is not None:\n execution_context._sentry_sql_span_manager = None\n ctx_mgr.__exit__(None, None, None)\n", "path": "sentry_sdk/integrations/sqlalchemy.py"}]} | 1,404 | 205 |
gh_patches_debug_3445 | rasdani/github-patches | git_diff | mathesar-foundation__mathesar-2671 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
1.0 is not a boolean
## Description
The software detects `1.0` as a boolean instead of a numeric, which makes the system fail.
UI: `When making an XHR request, the server responded with an error, but the response body was not valid JSON.`
Database: `ERROR: 1.0 is not a boolean`
## Expected behavior
The software should cast `1.0` to numeric.
If the auto-casting mechanism fails, an informative error should be shown.
## To Reproduce
Create a test CSV file and import it.
```column
1.0
```
--- END ISSUE ---
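For orientation before the code: the type-inference DAG in the file below lists `BOOLEAN` as a follow-up candidate for `NUMERIC`, so a column that has just been inferred as numeric gets one more alteration attempt to boolean, and that boolean cast matches the database error quoted above. The snippet is a rough, simplified simulation of that lookup order (plain strings, truncated lists), not Mathesar's real execution path; the accepted fix shown later in this record simply empties `NUMERIC`'s candidate list.

```python
# Rough simulation of the TYPE_INFERENCE_DAG lookup order (simplified to
# strings and truncated); not Mathesar's actual code path.
TYPE_INFERENCE_DAG = {
    "boolean": [],
    "numeric": ["boolean"],                  # entry removed by the accepted fix
    "text": ["boolean", "date", "numeric"],  # truncated for brevity
}

def candidate_casts(current_type):
    """Types the inferrer will try to alter the column to, in order."""
    return TYPE_INFERENCE_DAG.get(current_type, [])

print(candidate_casts("text"))     # ['boolean', 'date', 'numeric']
print(candidate_casts("numeric"))  # ['boolean'] -> the cast behind the error
```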
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `db/columns/operations/infer_types.py`
Content:
```
1 import logging
2
3 from sqlalchemy import VARCHAR, TEXT, Text
4 from sqlalchemy.exc import DatabaseError
5
6 from db.columns.exceptions import DagCycleError
7 from db.columns.operations.alter import alter_column_type
8 from db.tables.operations.select import get_oid_from_table, reflect_table
9 from db.types.base import PostgresType, MathesarCustomType, get_available_known_db_types
10 from db.metadata import get_empty_metadata
11
12
13 logger = logging.getLogger(__name__)
14
15 MAX_INFERENCE_DAG_DEPTH = 100
16
17 TYPE_INFERENCE_DAG = {
18 PostgresType.BOOLEAN: [],
19 MathesarCustomType.EMAIL: [],
20 PostgresType.INTERVAL: [],
21 PostgresType.NUMERIC: [
22 PostgresType.BOOLEAN,
23 ],
24 PostgresType.TEXT: [
25 PostgresType.BOOLEAN,
26 PostgresType.DATE,
27 PostgresType.NUMERIC,
28 MathesarCustomType.MATHESAR_MONEY,
29 PostgresType.TIMESTAMP_WITHOUT_TIME_ZONE,
30 PostgresType.TIMESTAMP_WITH_TIME_ZONE,
31 # We only infer to TIME_WITHOUT_TIME_ZONE as time zones don't make much sense
32 # without additional date information. See postgres documentation for further
33 # details: https://www.postgresql.org/docs/13/datatype-datetime.html
34 PostgresType.TIME_WITHOUT_TIME_ZONE,
35 PostgresType.INTERVAL,
36 MathesarCustomType.EMAIL,
37 MathesarCustomType.URI,
38 MathesarCustomType.MATHESAR_JSON_ARRAY,
39 MathesarCustomType.MATHESAR_JSON_OBJECT,
40 ],
41 }
42
43
44 def infer_column_type(schema, table_name, column_name, engine, depth=0, type_inference_dag=None, metadata=None, columns_might_have_defaults=True):
45 """
46 Attempts to cast the column to the best type for it, given the mappings defined in TYPE_INFERENCE_DAG
47 and _get_type_classes_mapped_to_dag_nodes. Returns the resulting column type's class.
48
49 Algorithm:
50 1. reflect the column's type class;
51 2. use _get_type_classes_mapped_to_dag_nodes to map it to a TYPE_INFERENCE_DAG key;
52 3. look up the sequence of types referred to by that key on the TYPE_INFERENCE_DAG;
53 - if there's no such key on the TYPE_INFERENCE_DAG dict, or if its value is an empty
54 list, return the current column type's class;
55 4. iterate through that sequence of types trying to alter the column's type to them;
56 - if the column's type is altered successfully, break iteration and return the output
57 of running infer_column_type again (trigger tail recursion);
58 - if none of the column type alterations succeed, return the current column type's
59 class.
60 """
61 if type_inference_dag is None:
62 type_inference_dag = TYPE_INFERENCE_DAG
63 metadata = metadata if metadata else get_empty_metadata()
64 if depth > MAX_INFERENCE_DAG_DEPTH:
65 raise DagCycleError("The type_inference_dag likely has a cycle")
66 type_classes_to_dag_nodes = _get_type_classes_mapped_to_dag_nodes(engine)
67 column_type_class = _get_column_class(
68 engine=engine,
69 schema=schema,
70 table_name=table_name,
71 column_name=column_name,
72 metadata=metadata,
73 )
74 # a DAG node will be a DatabaseType Enum
75 dag_node = type_classes_to_dag_nodes.get(column_type_class)
76 logger.debug(f"dag_node: {dag_node}")
77 types_to_cast_to = type_inference_dag.get(dag_node, [])
78 table_oid = get_oid_from_table(table_name, schema, engine)
79 for db_type in types_to_cast_to:
80 try:
81 with engine.begin() as conn:
82 alter_column_type(
83 table_oid,
84 column_name,
85 engine,
86 conn,
87 db_type,
88 metadata=metadata,
89 columns_might_have_defaults=columns_might_have_defaults,
90 )
91 logger.info(f"Column {column_name} altered to type {db_type.id}")
92 column_type_class = infer_column_type(
93 schema,
94 table_name,
95 column_name,
96 engine,
97 depth=depth + 1,
98 type_inference_dag=type_inference_dag,
99 metadata=metadata
100 )
101 break
102 # It's expected we catch this error when the test to see whether
103 # a type is appropriate for a column fails.
104 except DatabaseError:
105 logger.info(
106 f"Cannot alter column {column_name} to type {db_type.id}"
107 )
108 return column_type_class
109
110
111 def _get_column_class(engine, schema, table_name, column_name, metadata):
112 # Metadata can be reused because reflect_table fetches the table details again
113 table = reflect_table(table_name, schema, engine, metadata=metadata)
114 column_type_class = table.columns[column_name].type.__class__
115 return column_type_class
116
117
118 def _get_type_classes_mapped_to_dag_nodes(engine):
119 """
120 Returns SA type classes mapped to TYPE_INFERENCE_DAG nodes.
121
122 Purpose of this mapping is to find the wanted position on the TYPE_INFERENCE_DAG, given a
123 column's SA type class.
124 """
125 type_classes_to_enums = {
126 db_type.get_sa_class(engine): db_type
127 for db_type
128 in get_available_known_db_types(engine)
129 }
130 # NOTE: below dict merge sets some keys to PostgresType.TEXT, which, in infer_column_type,
131 # maps these classes to the types grouped under TYPE_INFERENCE_DAG[PostgresType.TEXT].
132 type_classes_to_dag_nodes = (
133 type_classes_to_enums | {
134 Text: PostgresType.TEXT,
135 TEXT: PostgresType.TEXT,
136 VARCHAR: PostgresType.TEXT,
137 }
138 )
139 return type_classes_to_dag_nodes
140
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/db/columns/operations/infer_types.py b/db/columns/operations/infer_types.py
--- a/db/columns/operations/infer_types.py
+++ b/db/columns/operations/infer_types.py
@@ -18,9 +18,7 @@
PostgresType.BOOLEAN: [],
MathesarCustomType.EMAIL: [],
PostgresType.INTERVAL: [],
- PostgresType.NUMERIC: [
- PostgresType.BOOLEAN,
- ],
+ PostgresType.NUMERIC: [],
PostgresType.TEXT: [
PostgresType.BOOLEAN,
PostgresType.DATE,
| {"golden_diff": "diff --git a/db/columns/operations/infer_types.py b/db/columns/operations/infer_types.py\n--- a/db/columns/operations/infer_types.py\n+++ b/db/columns/operations/infer_types.py\n@@ -18,9 +18,7 @@\n PostgresType.BOOLEAN: [],\n MathesarCustomType.EMAIL: [],\n PostgresType.INTERVAL: [],\n- PostgresType.NUMERIC: [\n- PostgresType.BOOLEAN,\n- ],\n+ PostgresType.NUMERIC: [],\n PostgresType.TEXT: [\n PostgresType.BOOLEAN,\n PostgresType.DATE,\n", "issue": "1.0 is not a boolean\n## Description\r\nThe software detects `1.0` as a boolean instead of a numeric, which makes the system fail.\r\n\r\nUI: `When making an XHR request, the server responded with an error, but the response body was not valid JSON.`\r\nDatabase: `ERROR: 1.0 is not a boolean`\r\n\r\n## Expected behavior\r\n\r\nThe software should cast `1.0` to numeric.\r\nIf the auto-casting mechanism fails, an informative error should be shown.\r\n\r\n## To Reproduce\r\n\r\nCreate a test CSV file and import it.\r\n```column\r\n1.0\r\n```\n", "before_files": [{"content": "import logging\n\nfrom sqlalchemy import VARCHAR, TEXT, Text\nfrom sqlalchemy.exc import DatabaseError\n\nfrom db.columns.exceptions import DagCycleError\nfrom db.columns.operations.alter import alter_column_type\nfrom db.tables.operations.select import get_oid_from_table, reflect_table\nfrom db.types.base import PostgresType, MathesarCustomType, get_available_known_db_types\nfrom db.metadata import get_empty_metadata\n\n\nlogger = logging.getLogger(__name__)\n\nMAX_INFERENCE_DAG_DEPTH = 100\n\nTYPE_INFERENCE_DAG = {\n PostgresType.BOOLEAN: [],\n MathesarCustomType.EMAIL: [],\n PostgresType.INTERVAL: [],\n PostgresType.NUMERIC: [\n PostgresType.BOOLEAN,\n ],\n PostgresType.TEXT: [\n PostgresType.BOOLEAN,\n PostgresType.DATE,\n PostgresType.NUMERIC,\n MathesarCustomType.MATHESAR_MONEY,\n PostgresType.TIMESTAMP_WITHOUT_TIME_ZONE,\n PostgresType.TIMESTAMP_WITH_TIME_ZONE,\n # We only infer to TIME_WITHOUT_TIME_ZONE as time zones don't make much sense\n # without additional date information. See postgres documentation for further\n # details: https://www.postgresql.org/docs/13/datatype-datetime.html\n PostgresType.TIME_WITHOUT_TIME_ZONE,\n PostgresType.INTERVAL,\n MathesarCustomType.EMAIL,\n MathesarCustomType.URI,\n MathesarCustomType.MATHESAR_JSON_ARRAY,\n MathesarCustomType.MATHESAR_JSON_OBJECT,\n ],\n}\n\n\ndef infer_column_type(schema, table_name, column_name, engine, depth=0, type_inference_dag=None, metadata=None, columns_might_have_defaults=True):\n \"\"\"\n Attempts to cast the column to the best type for it, given the mappings defined in TYPE_INFERENCE_DAG\n and _get_type_classes_mapped_to_dag_nodes. Returns the resulting column type's class.\n\n Algorithm:\n 1. reflect the column's type class;\n 2. use _get_type_classes_mapped_to_dag_nodes to map it to a TYPE_INFERENCE_DAG key;\n 3. look up the sequence of types referred to by that key on the TYPE_INFERENCE_DAG;\n - if there's no such key on the TYPE_INFERENCE_DAG dict, or if its value is an empty\n list, return the current column type's class;\n 4. 
iterate through that sequence of types trying to alter the column's type to them;\n - if the column's type is altered successfully, break iteration and return the output\n of running infer_column_type again (trigger tail recursion);\n - if none of the column type alterations succeed, return the current column type's\n class.\n \"\"\"\n if type_inference_dag is None:\n type_inference_dag = TYPE_INFERENCE_DAG\n metadata = metadata if metadata else get_empty_metadata()\n if depth > MAX_INFERENCE_DAG_DEPTH:\n raise DagCycleError(\"The type_inference_dag likely has a cycle\")\n type_classes_to_dag_nodes = _get_type_classes_mapped_to_dag_nodes(engine)\n column_type_class = _get_column_class(\n engine=engine,\n schema=schema,\n table_name=table_name,\n column_name=column_name,\n metadata=metadata,\n )\n # a DAG node will be a DatabaseType Enum\n dag_node = type_classes_to_dag_nodes.get(column_type_class)\n logger.debug(f\"dag_node: {dag_node}\")\n types_to_cast_to = type_inference_dag.get(dag_node, [])\n table_oid = get_oid_from_table(table_name, schema, engine)\n for db_type in types_to_cast_to:\n try:\n with engine.begin() as conn:\n alter_column_type(\n table_oid,\n column_name,\n engine,\n conn,\n db_type,\n metadata=metadata,\n columns_might_have_defaults=columns_might_have_defaults,\n )\n logger.info(f\"Column {column_name} altered to type {db_type.id}\")\n column_type_class = infer_column_type(\n schema,\n table_name,\n column_name,\n engine,\n depth=depth + 1,\n type_inference_dag=type_inference_dag,\n metadata=metadata\n )\n break\n # It's expected we catch this error when the test to see whether\n # a type is appropriate for a column fails.\n except DatabaseError:\n logger.info(\n f\"Cannot alter column {column_name} to type {db_type.id}\"\n )\n return column_type_class\n\n\ndef _get_column_class(engine, schema, table_name, column_name, metadata):\n # Metadata can be reused because reflect_table fetches the table details again\n table = reflect_table(table_name, schema, engine, metadata=metadata)\n column_type_class = table.columns[column_name].type.__class__\n return column_type_class\n\n\ndef _get_type_classes_mapped_to_dag_nodes(engine):\n \"\"\"\n Returns SA type classes mapped to TYPE_INFERENCE_DAG nodes.\n\n Purpose of this mapping is to find the wanted position on the TYPE_INFERENCE_DAG, given a\n column's SA type class.\n \"\"\"\n type_classes_to_enums = {\n db_type.get_sa_class(engine): db_type\n for db_type\n in get_available_known_db_types(engine)\n }\n # NOTE: below dict merge sets some keys to PostgresType.TEXT, which, in infer_column_type,\n # maps these classes to the types grouped under TYPE_INFERENCE_DAG[PostgresType.TEXT].\n type_classes_to_dag_nodes = (\n type_classes_to_enums | {\n Text: PostgresType.TEXT,\n TEXT: PostgresType.TEXT,\n VARCHAR: PostgresType.TEXT,\n }\n )\n return type_classes_to_dag_nodes\n", "path": "db/columns/operations/infer_types.py"}], "after_files": [{"content": "import logging\n\nfrom sqlalchemy import VARCHAR, TEXT, Text\nfrom sqlalchemy.exc import DatabaseError\n\nfrom db.columns.exceptions import DagCycleError\nfrom db.columns.operations.alter import alter_column_type\nfrom db.tables.operations.select import get_oid_from_table, reflect_table\nfrom db.types.base import PostgresType, MathesarCustomType, get_available_known_db_types\nfrom db.metadata import get_empty_metadata\n\n\nlogger = logging.getLogger(__name__)\n\nMAX_INFERENCE_DAG_DEPTH = 100\n\nTYPE_INFERENCE_DAG = {\n PostgresType.BOOLEAN: [],\n MathesarCustomType.EMAIL: [],\n 
PostgresType.INTERVAL: [],\n PostgresType.NUMERIC: [],\n PostgresType.TEXT: [\n PostgresType.BOOLEAN,\n PostgresType.DATE,\n PostgresType.NUMERIC,\n MathesarCustomType.MATHESAR_MONEY,\n PostgresType.TIMESTAMP_WITHOUT_TIME_ZONE,\n PostgresType.TIMESTAMP_WITH_TIME_ZONE,\n # We only infer to TIME_WITHOUT_TIME_ZONE as time zones don't make much sense\n # without additional date information. See postgres documentation for further\n # details: https://www.postgresql.org/docs/13/datatype-datetime.html\n PostgresType.TIME_WITHOUT_TIME_ZONE,\n PostgresType.INTERVAL,\n MathesarCustomType.EMAIL,\n MathesarCustomType.URI,\n MathesarCustomType.MATHESAR_JSON_ARRAY,\n MathesarCustomType.MATHESAR_JSON_OBJECT,\n ],\n}\n\n\ndef infer_column_type(schema, table_name, column_name, engine, depth=0, type_inference_dag=None, metadata=None, columns_might_have_defaults=True):\n \"\"\"\n Attempts to cast the column to the best type for it, given the mappings defined in TYPE_INFERENCE_DAG\n and _get_type_classes_mapped_to_dag_nodes. Returns the resulting column type's class.\n\n Algorithm:\n 1. reflect the column's type class;\n 2. use _get_type_classes_mapped_to_dag_nodes to map it to a TYPE_INFERENCE_DAG key;\n 3. look up the sequence of types referred to by that key on the TYPE_INFERENCE_DAG;\n - if there's no such key on the TYPE_INFERENCE_DAG dict, or if its value is an empty\n list, return the current column type's class;\n 4. iterate through that sequence of types trying to alter the column's type to them;\n - if the column's type is altered successfully, break iteration and return the output\n of running infer_column_type again (trigger tail recursion);\n - if none of the column type alterations succeed, return the current column type's\n class.\n \"\"\"\n if type_inference_dag is None:\n type_inference_dag = TYPE_INFERENCE_DAG\n metadata = metadata if metadata else get_empty_metadata()\n if depth > MAX_INFERENCE_DAG_DEPTH:\n raise DagCycleError(\"The type_inference_dag likely has a cycle\")\n type_classes_to_dag_nodes = _get_type_classes_mapped_to_dag_nodes(engine)\n column_type_class = _get_column_class(\n engine=engine,\n schema=schema,\n table_name=table_name,\n column_name=column_name,\n metadata=metadata,\n )\n # a DAG node will be a DatabaseType Enum\n dag_node = type_classes_to_dag_nodes.get(column_type_class)\n logger.debug(f\"dag_node: {dag_node}\")\n types_to_cast_to = type_inference_dag.get(dag_node, [])\n table_oid = get_oid_from_table(table_name, schema, engine)\n for db_type in types_to_cast_to:\n try:\n with engine.begin() as conn:\n alter_column_type(\n table_oid,\n column_name,\n engine,\n conn,\n db_type,\n metadata=metadata,\n columns_might_have_defaults=columns_might_have_defaults,\n )\n logger.info(f\"Column {column_name} altered to type {db_type.id}\")\n column_type_class = infer_column_type(\n schema,\n table_name,\n column_name,\n engine,\n depth=depth + 1,\n type_inference_dag=type_inference_dag,\n metadata=metadata\n )\n break\n # It's expected we catch this error when the test to see whether\n # a type is appropriate for a column fails.\n except DatabaseError:\n logger.info(\n f\"Cannot alter column {column_name} to type {db_type.id}\"\n )\n return column_type_class\n\n\ndef _get_column_class(engine, schema, table_name, column_name, metadata):\n # Metadata can be reused because reflect_table fetches the table details again\n table = reflect_table(table_name, schema, engine, metadata=metadata)\n column_type_class = table.columns[column_name].type.__class__\n return 
column_type_class\n\n\ndef _get_type_classes_mapped_to_dag_nodes(engine):\n \"\"\"\n Returns SA type classes mapped to TYPE_INFERENCE_DAG nodes.\n\n Purpose of this mapping is to find the wanted position on the TYPE_INFERENCE_DAG, given a\n column's SA type class.\n \"\"\"\n type_classes_to_enums = {\n db_type.get_sa_class(engine): db_type\n for db_type\n in get_available_known_db_types(engine)\n }\n # NOTE: below dict merge sets some keys to PostgresType.TEXT, which, in infer_column_type,\n # maps these classes to the types grouped under TYPE_INFERENCE_DAG[PostgresType.TEXT].\n type_classes_to_dag_nodes = (\n type_classes_to_enums | {\n Text: PostgresType.TEXT,\n TEXT: PostgresType.TEXT,\n VARCHAR: PostgresType.TEXT,\n }\n )\n return type_classes_to_dag_nodes\n", "path": "db/columns/operations/infer_types.py"}]} | 1,942 | 133 |
gh_patches_debug_27444 | rasdani/github-patches | git_diff | instadeepai__Mava-433 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[FEATURE] Abstract builder class for Jax-based systems
### Feature
Abstract class for system building for Jax-based Mava systems.
### Proposal
The builder should take care of building the essential elements of a MARL system that run in different processes. These include the data server, variable server, executor (and evaluator) and trainer. 
### Testing
Tests will only consider the proper inheritance of the abstract builder class.
### Definition of done
All abstract methods are defined and have input and return types specified.
### Mandatory checklist before making a PR
* [x] The success criteria laid down in “Definition of done” are met.
* [x] Code is documented - docstrings for methods and classes, static types for arguments.
* [x] Code is tested - unit, integration and/or functional tests are added.
* [x] Documentation is updated - README, CONTRIBUTING, or other documentation.
* [x] All functional tests are green.
--- END ISSUE ---
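For orientation before the existing interface file: a minimal sketch of the kind of abstract builder the proposal describes. The method names and signatures follow the reference diff reproduced later in this record; only a subset of the methods is shown, and it is a sketch rather than the final design.

```python
# Minimal sketch of the proposed abstract builder (subset of the methods in
# the reference diff later in this record).
import abc
from typing import Any, List


class SystemBuilder(abc.ABC):
    """Abstract builder for the distributed processes of a MARL system."""

    @abc.abstractmethod
    def data_server(self) -> List[Any]:
        """Build the data server that stores and serves transition data."""

    @abc.abstractmethod
    def executor(
        self, executor_id: str, data_server_client: Any, parameter_server_client: Any
    ) -> Any:
        """Build an executor that gathers experience in the environment."""

    @abc.abstractmethod
    def trainer(
        self, trainer_id: str, data_server_client: Any, parameter_server_client: Any
    ) -> Any:
        """Build a trainer that updates agent network parameters."""
```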
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mava/core_jax.py`
Content:
```
1 # python3
2 # Copyright 2021 InstaDeep Ltd. All rights reserved.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15
16
17 """Core Mava interfaces for Jax systems."""
18
19 import abc
20 from types import SimpleNamespace
21 from typing import Any, List
22
23
24 class BaseSystem(abc.ABC):
25 """Abstract system object."""
26
27 @abc.abstractmethod
28 def design(self) -> SimpleNamespace:
29 """System design specifying the list of components to use.
30
31 Returns:
32 system callback components
33 """
34
35 @abc.abstractmethod
36 def update(self, component: Any) -> None:
37 """Update a component that has already been added to the system.
38
39 Args:
40 component : system callback component
41 """
42
43 @abc.abstractmethod
44 def add(self, component: Any) -> None:
45 """Add a new component to the system.
46
47 Args:
48 component : system callback component
49 """
50
51 @abc.abstractmethod
52 def configure(self, **kwargs: Any) -> None:
53 """Configure system hyperparameters."""
54
55 @abc.abstractmethod
56 def launch(
57 self,
58 num_executors: int,
59 nodes_on_gpu: List[str],
60 multi_process: bool = True,
61 name: str = "system",
62 ) -> None:
63 """Run the system.
64
65 Args:
66 num_executors : number of executor processes to run in parallel
67 nodes_on_gpu : which processes to run on gpu
68 multi_process : whether to run single or multi process, single process runs
69 are primarily for debugging
70 name : name of the system
71 """
72
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mava/core_jax.py b/mava/core_jax.py
--- a/mava/core_jax.py
+++ b/mava/core_jax.py
@@ -69,3 +69,59 @@
are primarily for debugging
name : name of the system
"""
+
+
+class SystemBuilder(abc.ABC):
+ """Abstract system builder."""
+
+ @abc.abstractmethod
+ def data_server(self) -> List[Any]:
+ """Data server to store and serve transition data from and to system.
+
+ Returns:
+ System data server
+ """
+
+ @abc.abstractmethod
+ def parameter_server(self) -> Any:
+ """Parameter server to store and serve system network parameters.
+
+ Returns:
+ System parameter server
+ """
+
+ @abc.abstractmethod
+ def executor(
+ self, executor_id: str, data_server_client: Any, parameter_server_client: Any
+ ) -> Any:
+ """Executor, a collection of agents in an environment to gather experience.
+
+ Args:
+ executor_id : id to identify the executor process for logging purposes
+ data_server_client : data server client for pushing transition data
+ parameter_server_client : parameter server client for pulling parameters
+ Returns:
+ System executor
+ """
+
+ @abc.abstractmethod
+ def trainer(
+ self, trainer_id: str, data_server_client: Any, parameter_server_client: Any
+ ) -> Any:
+ """Trainer, a system process for updating agent specific network parameters.
+
+ Args:
+ trainer_id : id to identify the trainer process for logging purposes
+ data_server_client : data server client for pulling transition data
+ parameter_server_client : parameter server client for pushing parameters
+ Returns:
+ System trainer
+ """
+
+ @abc.abstractmethod
+ def build(self) -> None:
+ """Construct program nodes."""
+
+ @abc.abstractmethod
+ def launch(self) -> None:
+ """Run the graph program."""
| {"golden_diff": "diff --git a/mava/core_jax.py b/mava/core_jax.py\n--- a/mava/core_jax.py\n+++ b/mava/core_jax.py\n@@ -69,3 +69,59 @@\n are primarily for debugging\n name : name of the system\n \"\"\"\n+\n+\n+class SystemBuilder(abc.ABC):\n+ \"\"\"Abstract system builder.\"\"\"\n+\n+ @abc.abstractmethod\n+ def data_server(self) -> List[Any]:\n+ \"\"\"Data server to store and serve transition data from and to system.\n+\n+ Returns:\n+ System data server\n+ \"\"\"\n+\n+ @abc.abstractmethod\n+ def parameter_server(self) -> Any:\n+ \"\"\"Parameter server to store and serve system network parameters.\n+\n+ Returns:\n+ System parameter server\n+ \"\"\"\n+\n+ @abc.abstractmethod\n+ def executor(\n+ self, executor_id: str, data_server_client: Any, parameter_server_client: Any\n+ ) -> Any:\n+ \"\"\"Executor, a collection of agents in an environment to gather experience.\n+\n+ Args:\n+ executor_id : id to identify the executor process for logging purposes\n+ data_server_client : data server client for pushing transition data\n+ parameter_server_client : parameter server client for pulling parameters\n+ Returns:\n+ System executor\n+ \"\"\"\n+\n+ @abc.abstractmethod\n+ def trainer(\n+ self, trainer_id: str, data_server_client: Any, parameter_server_client: Any\n+ ) -> Any:\n+ \"\"\"Trainer, a system process for updating agent specific network parameters.\n+\n+ Args:\n+ trainer_id : id to identify the trainer process for logging purposes\n+ data_server_client : data server client for pulling transition data\n+ parameter_server_client : parameter server client for pushing parameters\n+ Returns:\n+ System trainer\n+ \"\"\"\n+\n+ @abc.abstractmethod\n+ def build(self) -> None:\n+ \"\"\"Construct program nodes.\"\"\"\n+\n+ @abc.abstractmethod\n+ def launch(self) -> None:\n+ \"\"\"Run the graph program.\"\"\"\n", "issue": "[FEATURE] Abstract builder class for Jax-based systems\n### Feature\r\nAbstract class for system building for Jax-based Mava systems.\r\n\r\n### Proposal\r\nThe builder should take care of building essentially elements of a MARL system that run in different processes. These include the data server, variable server, executor (and evaluator) and trainer. \r\n\r\n### Testing\r\nTests will only consider the proper inheritance of the abstract builder class.\r\n\r\n### Definition of done\r\nAll abstract methods are defined and have input and return types specified. \r\n\r\n### Mandatory checklist before making a PR\r\n* [x] The success criteria laid down in \u201cDefinition of done\u201d are met.\r\n* [x] Code is documented - docstrings for methods and classes, static types for arguments.\r\n* [x] Code is tested - unit, integration and/or functional tests are added.\r\n* [x] Documentation is updated - README, CONTRIBUTING, or other documentation.\r\n* [x] All functional tests are green.\r\n\n", "before_files": [{"content": "# python3\n# Copyright 2021 InstaDeep Ltd. 
All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\n\"\"\"Core Mava interfaces for Jax systems.\"\"\"\n\nimport abc\nfrom types import SimpleNamespace\nfrom typing import Any, List\n\n\nclass BaseSystem(abc.ABC):\n \"\"\"Abstract system object.\"\"\"\n\n @abc.abstractmethod\n def design(self) -> SimpleNamespace:\n \"\"\"System design specifying the list of components to use.\n\n Returns:\n system callback components\n \"\"\"\n\n @abc.abstractmethod\n def update(self, component: Any) -> None:\n \"\"\"Update a component that has already been added to the system.\n\n Args:\n component : system callback component\n \"\"\"\n\n @abc.abstractmethod\n def add(self, component: Any) -> None:\n \"\"\"Add a new component to the system.\n\n Args:\n component : system callback component\n \"\"\"\n\n @abc.abstractmethod\n def configure(self, **kwargs: Any) -> None:\n \"\"\"Configure system hyperparameters.\"\"\"\n\n @abc.abstractmethod\n def launch(\n self,\n num_executors: int,\n nodes_on_gpu: List[str],\n multi_process: bool = True,\n name: str = \"system\",\n ) -> None:\n \"\"\"Run the system.\n\n Args:\n num_executors : number of executor processes to run in parallel\n nodes_on_gpu : which processes to run on gpu\n multi_process : whether to run single or multi process, single process runs\n are primarily for debugging\n name : name of the system\n \"\"\"\n", "path": "mava/core_jax.py"}], "after_files": [{"content": "# python3\n# Copyright 2021 InstaDeep Ltd. 
All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\n\"\"\"Core Mava interfaces for Jax systems.\"\"\"\n\nimport abc\nfrom types import SimpleNamespace\nfrom typing import Any, List\n\n\nclass BaseSystem(abc.ABC):\n \"\"\"Abstract system object.\"\"\"\n\n @abc.abstractmethod\n def design(self) -> SimpleNamespace:\n \"\"\"System design specifying the list of components to use.\n\n Returns:\n system callback components\n \"\"\"\n\n @abc.abstractmethod\n def update(self, component: Any) -> None:\n \"\"\"Update a component that has already been added to the system.\n\n Args:\n component : system callback component\n \"\"\"\n\n @abc.abstractmethod\n def add(self, component: Any) -> None:\n \"\"\"Add a new component to the system.\n\n Args:\n component : system callback component\n \"\"\"\n\n @abc.abstractmethod\n def configure(self, **kwargs: Any) -> None:\n \"\"\"Configure system hyperparameters.\"\"\"\n\n @abc.abstractmethod\n def launch(\n self,\n num_executors: int,\n nodes_on_gpu: List[str],\n multi_process: bool = True,\n name: str = \"system\",\n ) -> None:\n \"\"\"Run the system.\n\n Args:\n num_executors : number of executor processes to run in parallel\n nodes_on_gpu : which processes to run on gpu\n multi_process : whether to run single or multi process, single process runs\n are primarily for debugging\n name : name of the system\n \"\"\"\n\n\nclass SystemBuilder(abc.ABC):\n \"\"\"Abstract system builder.\"\"\"\n\n @abc.abstractmethod\n def data_server(self) -> List[Any]:\n \"\"\"Data server to store and serve transition data from and to system.\n\n Returns:\n System data server\n \"\"\"\n\n @abc.abstractmethod\n def parameter_server(self) -> Any:\n \"\"\"Parameter server to store and serve system network parameters.\n\n Returns:\n System parameter server\n \"\"\"\n\n @abc.abstractmethod\n def executor(\n self, executor_id: str, data_server_client: Any, parameter_server_client: Any\n ) -> Any:\n \"\"\"Executor, a collection of agents in an environment to gather experience.\n\n Args:\n executor_id : id to identify the executor process for logging purposes\n data_server_client : data server client for pushing transition data\n parameter_server_client : parameter server client for pulling parameters\n Returns:\n System executor\n \"\"\"\n\n @abc.abstractmethod\n def trainer(\n self, trainer_id: str, data_server_client: Any, parameter_server_client: Any\n ) -> Any:\n \"\"\"Trainer, a system process for updating agent specific network parameters.\n\n Args:\n trainer_id : id to identify the trainer process for logging purposes\n data_server_client : data server client for pulling transition data\n parameter_server_client : parameter server client for pushing parameters\n Returns:\n System trainer\n \"\"\"\n\n @abc.abstractmethod\n def build(self) -> None:\n \"\"\"Construct program nodes.\"\"\"\n\n @abc.abstractmethod\n def launch(self) -> None:\n \"\"\"Run the graph program.\"\"\"\n", "path": "mava/core_jax.py"}]} | 1,049 | 456 |
gh_patches_debug_31759 | rasdani/github-patches | git_diff | dotkom__onlineweb4-1513 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Users applying for membership changes with field of study gets expiredate in the past
If you apply for membership today with a Bachelor started in 2011, you will get an expiry date of 2014-09-16.
The given expiration date for a membership should be adjusted upward if the suggested duration of a field of study puts this date in the past.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `apps/approval/views.py`
Content:
```
1 # -*- encoding: utf-8 -*-
2
3 import datetime
4
5 from django.contrib import messages
6 from django.contrib.auth.decorators import login_required
7 from django.http import Http404
8 from django.shortcuts import get_object_or_404, redirect
9 from django.utils.translation import ugettext as _
10
11 from apps.approval.forms import FieldOfStudyApplicationForm
12 from apps.approval.models import MembershipApproval
13 from apps.authentication.models import AllowedUsername, get_length_of_field_of_study
14
15
16 @login_required
17 def create_fos_application(request):
18 if request.method == 'POST':
19 if not request.user.ntnu_username:
20 messages.error(request, _("Du må knytte et NTNU-brukernavn til kontoen din."))
21 return redirect('profiles_active', active_tab='membership')
22
23 form = FieldOfStudyApplicationForm(request.POST)
24 if form.is_valid():
25 cleaned = form.cleaned_data
26
27 field_of_study = int(cleaned['field_of_study'])
28
29 if field_of_study == 0:
30 messages.warning(request, _("Denne studieretningen (Gjest) er ikke et gyldig alternativ."))
31 return redirect('profiles_active', active_tab='membership')
32
33 started_day = 1
34 started_month = 0
35 started_year = int(cleaned['started_year'])
36
37 if cleaned['started_semester'] == "h":
38 started_month = 7
39 if cleaned['started_semester'] == "v":
40 started_month = 1
41
42 started_date = datetime.date(started_year, started_month, started_day)
43
44 # Does the user already have a field of study and started date?
45 if request.user.started_date and request.user.field_of_study:
46 # If there is no change from the current settings, ignore the request
47 if request.user.started_date == started_date and request.user.field_of_study == field_of_study:
48 messages.error(
49 request,
50 _("Du er allerede registrert med denne studieretningen og denne startdatoen.")
51 )
52 return redirect('profiles_active', active_tab='membership')
53
54 application = MembershipApproval(
55 applicant=request.user,
56 field_of_study=field_of_study,
57 started_date=started_date
58 )
59
60 length_of_fos = get_length_of_field_of_study(field_of_study)
61 if length_of_fos > 0:
62 # Expiry dates should be 15th September, so that we have tiem to get new lists from NTNU
63 application.new_expiry_date = datetime.date(
64 started_year, 9, 16) + datetime.timedelta(days=365*length_of_fos)
65 application.save()
66
67 messages.success(request, _("Søknad om bytte av studieretning er sendt."))
68
69 return redirect('profiles_active', active_tab='membership')
70 raise Http404
71
72
73 @login_required
74 def create_membership_application(request):
75 if request.method == 'POST':
76 if not request.user.has_expiring_membership:
77 messages.error(request, _("Din bruker har ikke et utløpende medlemskap."))
78 return redirect('profiles_active', active_tab='membership')
79
80 if not request.user.ntnu_username:
81 messages.error(request, _("Du må knytte et NTNU-brukernavn til kontoen din."))
82 return redirect('profiles_active', active_tab='membership')
83
84 # Extend length of membership by 1 year
85 membership = AllowedUsername.objects.get(username=request.user.ntnu_username)
86 new_expiration_date = datetime.date(membership.expiration_date.year + 1, 9, 16)
87
88 application = MembershipApproval(
89 applicant=request.user,
90 new_expiry_date=new_expiration_date,
91 )
92 application.save()
93
94 messages.success(request, _("Søknad om ett års forlenget medlemskap er sendt."))
95
96 return redirect('profiles_active', active_tab='membership')
97 raise Http404
98
99
100 @login_required
101 def cancel_application(request, application_id):
102 app = get_object_or_404(MembershipApproval, pk=application_id)
103
104 if app.applicant != request.user:
105 messages.error(request, _("Bare søkeren selv kan slette en søknad."))
106 return redirect('profiles_active', active_tab='membership')
107
108 if app.processed:
109 messages.error(request, _("Denne søknaden er behandlet og kan ikke slettes."))
110 return redirect('profiles_active', active_tab='membership')
111
112 app.delete()
113
114 return redirect('profiles_active', active_tab='membership')
115
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/apps/approval/views.py b/apps/approval/views.py
--- a/apps/approval/views.py
+++ b/apps/approval/views.py
@@ -6,6 +6,7 @@
from django.contrib.auth.decorators import login_required
from django.http import Http404
from django.shortcuts import get_object_or_404, redirect
+from django.utils import timezone
from django.utils.translation import ugettext as _
from apps.approval.forms import FieldOfStudyApplicationForm
@@ -59,9 +60,7 @@
length_of_fos = get_length_of_field_of_study(field_of_study)
if length_of_fos > 0:
- # Expiry dates should be 15th September, so that we have tiem to get new lists from NTNU
- application.new_expiry_date = datetime.date(
- started_year, 9, 16) + datetime.timedelta(days=365*length_of_fos)
+ application.new_expiry_date = get_expiry_date(started_year, length_of_fos)
application.save()
messages.success(request, _("Søknad om bytte av studieretning er sendt."))
@@ -70,6 +69,21 @@
raise Http404
+def get_expiry_date(started_year, length_of_fos):
+ today = timezone.now().date()
+ # Expiry dates should be 15th September, so that we have time to get new lists from NTNU
+ new_expiry_date = datetime.date(
+ started_year, 9, 16) + datetime.timedelta(days=365*length_of_fos)
+ # Expiry dates in the past sets the expiry date to next september
+ if new_expiry_date < today:
+ if today < datetime.date(today.year, 9, 15):
+ new_expiry_date = datetime.date(today.year, 9, 15)
+ else:
+ new_expiry_date = datetime.date(
+ today.year, 9, 16) + datetime.timedelta(days=365)
+ return new_expiry_date
+
+
@login_required
def create_membership_application(request):
if request.method == 'POST':
| {"golden_diff": "diff --git a/apps/approval/views.py b/apps/approval/views.py\n--- a/apps/approval/views.py\n+++ b/apps/approval/views.py\n@@ -6,6 +6,7 @@\n from django.contrib.auth.decorators import login_required\n from django.http import Http404\n from django.shortcuts import get_object_or_404, redirect\n+from django.utils import timezone\n from django.utils.translation import ugettext as _\n \n from apps.approval.forms import FieldOfStudyApplicationForm\n@@ -59,9 +60,7 @@\n \n length_of_fos = get_length_of_field_of_study(field_of_study)\n if length_of_fos > 0:\n- # Expiry dates should be 15th September, so that we have tiem to get new lists from NTNU\n- application.new_expiry_date = datetime.date(\n- started_year, 9, 16) + datetime.timedelta(days=365*length_of_fos)\n+ application.new_expiry_date = get_expiry_date(started_year, length_of_fos)\n application.save()\n \n messages.success(request, _(\"S\u00f8knad om bytte av studieretning er sendt.\"))\n@@ -70,6 +69,21 @@\n raise Http404\n \n \n+def get_expiry_date(started_year, length_of_fos):\n+ today = timezone.now().date()\n+ # Expiry dates should be 15th September, so that we have time to get new lists from NTNU\n+ new_expiry_date = datetime.date(\n+ started_year, 9, 16) + datetime.timedelta(days=365*length_of_fos)\n+ # Expiry dates in the past sets the expiry date to next september\n+ if new_expiry_date < today:\n+ if today < datetime.date(today.year, 9, 15):\n+ new_expiry_date = datetime.date(today.year, 9, 15)\n+ else:\n+ new_expiry_date = datetime.date(\n+ today.year, 9, 16) + datetime.timedelta(days=365)\n+ return new_expiry_date\n+\n+\n @login_required\n def create_membership_application(request):\n if request.method == 'POST':\n", "issue": "Users applying for membership changes with field of study gets expiredate in the past\nIf you apply for membership today with a Bachelor started in 2011, you will get an expiry date of 2014-09-16.\n\nThe given expiration date for a membership should be adjusted upward if the suggested duration of a field of study puts this date in the past.\n\n", "before_files": [{"content": "# -*- encoding: utf-8 -*-\n\nimport datetime\n\nfrom django.contrib import messages\nfrom django.contrib.auth.decorators import login_required\nfrom django.http import Http404\nfrom django.shortcuts import get_object_or_404, redirect\nfrom django.utils.translation import ugettext as _\n\nfrom apps.approval.forms import FieldOfStudyApplicationForm\nfrom apps.approval.models import MembershipApproval\nfrom apps.authentication.models import AllowedUsername, get_length_of_field_of_study\n\n\n@login_required\ndef create_fos_application(request):\n if request.method == 'POST':\n if not request.user.ntnu_username:\n messages.error(request, _(\"Du m\u00e5 knytte et NTNU-brukernavn til kontoen din.\"))\n return redirect('profiles_active', active_tab='membership')\n\n form = FieldOfStudyApplicationForm(request.POST)\n if form.is_valid():\n cleaned = form.cleaned_data\n\n field_of_study = int(cleaned['field_of_study'])\n\n if field_of_study == 0:\n messages.warning(request, _(\"Denne studieretningen (Gjest) er ikke et gyldig alternativ.\"))\n return redirect('profiles_active', active_tab='membership')\n\n started_day = 1\n started_month = 0\n started_year = int(cleaned['started_year'])\n\n if cleaned['started_semester'] == \"h\":\n started_month = 7\n if cleaned['started_semester'] == \"v\":\n started_month = 1\n\n started_date = datetime.date(started_year, started_month, started_day)\n\n # Does the user already have a field of 
study and started date?\n if request.user.started_date and request.user.field_of_study:\n # If there is no change from the current settings, ignore the request\n if request.user.started_date == started_date and request.user.field_of_study == field_of_study:\n messages.error(\n request,\n _(\"Du er allerede registrert med denne studieretningen og denne startdatoen.\")\n )\n return redirect('profiles_active', active_tab='membership')\n\n application = MembershipApproval(\n applicant=request.user,\n field_of_study=field_of_study,\n started_date=started_date\n )\n\n length_of_fos = get_length_of_field_of_study(field_of_study)\n if length_of_fos > 0:\n # Expiry dates should be 15th September, so that we have tiem to get new lists from NTNU\n application.new_expiry_date = datetime.date(\n started_year, 9, 16) + datetime.timedelta(days=365*length_of_fos)\n application.save()\n\n messages.success(request, _(\"S\u00f8knad om bytte av studieretning er sendt.\"))\n\n return redirect('profiles_active', active_tab='membership')\n raise Http404\n\n\n@login_required\ndef create_membership_application(request):\n if request.method == 'POST':\n if not request.user.has_expiring_membership:\n messages.error(request, _(\"Din bruker har ikke et utl\u00f8pende medlemskap.\"))\n return redirect('profiles_active', active_tab='membership')\n\n if not request.user.ntnu_username:\n messages.error(request, _(\"Du m\u00e5 knytte et NTNU-brukernavn til kontoen din.\"))\n return redirect('profiles_active', active_tab='membership')\n\n # Extend length of membership by 1 year\n membership = AllowedUsername.objects.get(username=request.user.ntnu_username)\n new_expiration_date = datetime.date(membership.expiration_date.year + 1, 9, 16)\n\n application = MembershipApproval(\n applicant=request.user,\n new_expiry_date=new_expiration_date,\n )\n application.save()\n\n messages.success(request, _(\"S\u00f8knad om ett \u00e5rs forlenget medlemskap er sendt.\"))\n\n return redirect('profiles_active', active_tab='membership')\n raise Http404\n\n\n@login_required\ndef cancel_application(request, application_id):\n app = get_object_or_404(MembershipApproval, pk=application_id)\n\n if app.applicant != request.user:\n messages.error(request, _(\"Bare s\u00f8keren selv kan slette en s\u00f8knad.\"))\n return redirect('profiles_active', active_tab='membership')\n\n if app.processed:\n messages.error(request, _(\"Denne s\u00f8knaden er behandlet og kan ikke slettes.\"))\n return redirect('profiles_active', active_tab='membership')\n\n app.delete()\n\n return redirect('profiles_active', active_tab='membership')\n", "path": "apps/approval/views.py"}], "after_files": [{"content": "# -*- encoding: utf-8 -*-\n\nimport datetime\n\nfrom django.contrib import messages\nfrom django.contrib.auth.decorators import login_required\nfrom django.http import Http404\nfrom django.shortcuts import get_object_or_404, redirect\nfrom django.utils import timezone\nfrom django.utils.translation import ugettext as _\n\nfrom apps.approval.forms import FieldOfStudyApplicationForm\nfrom apps.approval.models import MembershipApproval\nfrom apps.authentication.models import AllowedUsername, get_length_of_field_of_study\n\n\n@login_required\ndef create_fos_application(request):\n if request.method == 'POST':\n if not request.user.ntnu_username:\n messages.error(request, _(\"Du m\u00e5 knytte et NTNU-brukernavn til kontoen din.\"))\n return redirect('profiles_active', active_tab='membership')\n\n form = FieldOfStudyApplicationForm(request.POST)\n if 
form.is_valid():\n cleaned = form.cleaned_data\n\n field_of_study = int(cleaned['field_of_study'])\n\n if field_of_study == 0:\n messages.warning(request, _(\"Denne studieretningen (Gjest) er ikke et gyldig alternativ.\"))\n return redirect('profiles_active', active_tab='membership')\n\n started_day = 1\n started_month = 0\n started_year = int(cleaned['started_year'])\n\n if cleaned['started_semester'] == \"h\":\n started_month = 7\n if cleaned['started_semester'] == \"v\":\n started_month = 1\n\n started_date = datetime.date(started_year, started_month, started_day)\n\n # Does the user already have a field of study and started date?\n if request.user.started_date and request.user.field_of_study:\n # If there is no change from the current settings, ignore the request\n if request.user.started_date == started_date and request.user.field_of_study == field_of_study:\n messages.error(\n request,\n _(\"Du er allerede registrert med denne studieretningen og denne startdatoen.\")\n )\n return redirect('profiles_active', active_tab='membership')\n\n application = MembershipApproval(\n applicant=request.user,\n field_of_study=field_of_study,\n started_date=started_date\n )\n\n length_of_fos = get_length_of_field_of_study(field_of_study)\n if length_of_fos > 0:\n application.new_expiry_date = get_expiry_date(started_year, length_of_fos)\n application.save()\n\n messages.success(request, _(\"S\u00f8knad om bytte av studieretning er sendt.\"))\n\n return redirect('profiles_active', active_tab='membership')\n raise Http404\n\n\ndef get_expiry_date(started_year, length_of_fos):\n today = timezone.now().date()\n # Expiry dates should be 15th September, so that we have time to get new lists from NTNU\n new_expiry_date = datetime.date(\n started_year, 9, 16) + datetime.timedelta(days=365*length_of_fos)\n # Expiry dates in the past sets the expiry date to next september\n if new_expiry_date < today:\n if today < datetime.date(today.year, 9, 15):\n new_expiry_date = datetime.date(today.year, 9, 15)\n else:\n new_expiry_date = datetime.date(\n today.year, 9, 16) + datetime.timedelta(days=365)\n return new_expiry_date\n\n\n@login_required\ndef create_membership_application(request):\n if request.method == 'POST':\n if not request.user.has_expiring_membership:\n messages.error(request, _(\"Din bruker har ikke et utl\u00f8pende medlemskap.\"))\n return redirect('profiles_active', active_tab='membership')\n\n if not request.user.ntnu_username:\n messages.error(request, _(\"Du m\u00e5 knytte et NTNU-brukernavn til kontoen din.\"))\n return redirect('profiles_active', active_tab='membership')\n\n # Extend length of membership by 1 year\n membership = AllowedUsername.objects.get(username=request.user.ntnu_username)\n new_expiration_date = datetime.date(membership.expiration_date.year + 1, 9, 16)\n\n application = MembershipApproval(\n applicant=request.user,\n new_expiry_date=new_expiration_date,\n )\n application.save()\n\n messages.success(request, _(\"S\u00f8knad om ett \u00e5rs forlenget medlemskap er sendt.\"))\n\n return redirect('profiles_active', active_tab='membership')\n raise Http404\n\n\n@login_required\ndef cancel_application(request, application_id):\n app = get_object_or_404(MembershipApproval, pk=application_id)\n\n if app.applicant != request.user:\n messages.error(request, _(\"Bare s\u00f8keren selv kan slette en s\u00f8knad.\"))\n return redirect('profiles_active', active_tab='membership')\n\n if app.processed:\n messages.error(request, _(\"Denne s\u00f8knaden er behandlet og kan ikke 
slettes.\"))\n return redirect('profiles_active', active_tab='membership')\n\n app.delete()\n\n return redirect('profiles_active', active_tab='membership')\n", "path": "apps/approval/views.py"}]} | 1,555 | 486 |
gh_patches_debug_28422 | rasdani/github-patches | git_diff | hpcaitech__ColossalAI-1314 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[BUG]: Cannot synchronize grads of shared parameters cross pipeline stages when using ZERO-3
### 🐛 Describe the bug
@FrankLeeeee @ver217
Hi, in line 36 of _pipeline_parallel_gradient_handler.py:
https://github.com/hpcaitech/ColossalAI/blob/1aad903c1537eafb73fac1729b6df30b7006312f/colossalai/engine/gradient_handler/_pipeline_parallel_gradient_handler.py#L36
the condition "param.grad is not None" will not work properly with ZERO-3, because after ZERO-3 synchronized grads, all parameters's grads were set to "colo_attr"οΌgrads are None and buckets is empty here!
This line also has the problem:
https://github.com/hpcaitech/ColossalAI/blob/1aad903c1537eafb73fac1729b6df30b7006312f/colossalai/engine/gradient_handler/_pipeline_parallel_gradient_handler.py#L43
### Environment
colossalai latest version
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `colossalai/engine/gradient_handler/_pipeline_parallel_gradient_handler.py`
Content:
```
1 #!/usr/bin/env python
2
3 from collections import defaultdict
4
5 import torch
6 import torch.distributed as dist
7 from colossalai.core import global_context as gpc
8 from colossalai.registry import GRADIENT_HANDLER
9 from torch._utils import _flatten_dense_tensors, _unflatten_dense_tensors
10
11 from ._base_gradient_handler import BaseGradientHandler
12
13
14 @GRADIENT_HANDLER.register_module
15 class PipelineSharedModuleGradientHandler(BaseGradientHandler):
16 """A helper class to handle all-reduce operations in sub parallel groups.
17 A all-reduce collective communication will be operated in
18 :func:`handle_gradient` among all sub pipeline parallel groups.
19 For better performance, it bucketizes the gradients of all parameters that are
20 the same type to improve the efficiency of communication.
21
22 Args:
23 model (Module): Model where the gradients accumulate.
24 optimizer (Optimizer): Optimizer for updating the parameters.
25 """
26
27 def handle_gradient(self):
28 """A method running a all-reduce operation in sub pipeline parallel groups.
29 """
30 if gpc.pipeline_parallel_size > 1:
31 # bucketize and all-reduce
32 buckets = defaultdict(lambda: defaultdict(list))
33 # Pack the buckets.
34 for param in self._model.parameters():
35 group = getattr(param, 'pipeline_shared_module_pg', None)
36 if param.requires_grad and param.grad is not None and group is not None:
37 tp = param.data.type()
38 buckets[group][tp].append(param)
39
40 # For each bucket, all-reduce and copy all-reduced grads.
41 for group, group_buckets in buckets.items():
42 for tp, bucket in group_buckets.items():
43 grads = [param.grad.data for param in bucket]
44 coalesced = _flatten_dense_tensors(grads).to(torch.cuda.current_device())
45 dist.all_reduce(coalesced, op=dist.ReduceOp.SUM, group=group)
46 for buf, synced in zip(grads, _unflatten_dense_tensors(coalesced, grads)):
47 buf.copy_(synced)
48
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/colossalai/engine/gradient_handler/_pipeline_parallel_gradient_handler.py b/colossalai/engine/gradient_handler/_pipeline_parallel_gradient_handler.py
--- a/colossalai/engine/gradient_handler/_pipeline_parallel_gradient_handler.py
+++ b/colossalai/engine/gradient_handler/_pipeline_parallel_gradient_handler.py
@@ -33,14 +33,19 @@
# Pack the buckets.
for param in self._model.parameters():
group = getattr(param, 'pipeline_shared_module_pg', None)
- if param.requires_grad and param.grad is not None and group is not None:
+ if param.requires_grad and group is not None and (
+ (hasattr(param, 'colo_attr') and not param.colo_attr.saved_grad.is_null())
+ or param.grad is not None):
tp = param.data.type()
buckets[group][tp].append(param)
# For each bucket, all-reduce and copy all-reduced grads.
for group, group_buckets in buckets.items():
for tp, bucket in group_buckets.items():
- grads = [param.grad.data for param in bucket]
+ grads = [
+ param.colo_attr.grad_payload if hasattr(param, 'colo_attr') else param.grad.data
+ for param in bucket
+ ]
coalesced = _flatten_dense_tensors(grads).to(torch.cuda.current_device())
dist.all_reduce(coalesced, op=dist.ReduceOp.SUM, group=group)
for buf, synced in zip(grads, _unflatten_dense_tensors(coalesced, grads)):
| {"golden_diff": "diff --git a/colossalai/engine/gradient_handler/_pipeline_parallel_gradient_handler.py b/colossalai/engine/gradient_handler/_pipeline_parallel_gradient_handler.py\n--- a/colossalai/engine/gradient_handler/_pipeline_parallel_gradient_handler.py\n+++ b/colossalai/engine/gradient_handler/_pipeline_parallel_gradient_handler.py\n@@ -33,14 +33,19 @@\n # Pack the buckets.\n for param in self._model.parameters():\n group = getattr(param, 'pipeline_shared_module_pg', None)\n- if param.requires_grad and param.grad is not None and group is not None:\n+ if param.requires_grad and group is not None and (\n+ (hasattr(param, 'colo_attr') and not param.colo_attr.saved_grad.is_null())\n+ or param.grad is not None):\n tp = param.data.type()\n buckets[group][tp].append(param)\n \n # For each bucket, all-reduce and copy all-reduced grads.\n for group, group_buckets in buckets.items():\n for tp, bucket in group_buckets.items():\n- grads = [param.grad.data for param in bucket]\n+ grads = [\n+ param.colo_attr.grad_payload if hasattr(param, 'colo_attr') else param.grad.data\n+ for param in bucket\n+ ]\n coalesced = _flatten_dense_tensors(grads).to(torch.cuda.current_device())\n dist.all_reduce(coalesced, op=dist.ReduceOp.SUM, group=group)\n for buf, synced in zip(grads, _unflatten_dense_tensors(coalesced, grads)):\n", "issue": "[BUG]: Cannot synchronize grads of shared parameters cross pipeline stages when using ZERO-3\n### \ud83d\udc1b Describe the bug\r\n\r\n @FrankLeeeee @ver217 \r\nHi, in line 36 of _pipeline_parallel_gradient_handler.py:\r\nhttps://github.com/hpcaitech/ColossalAI/blob/1aad903c1537eafb73fac1729b6df30b7006312f/colossalai/engine/gradient_handler/_pipeline_parallel_gradient_handler.py#L36\r\n\r\nthe condition \"param.grad is not None\" will not work properly with ZERO-3, because after ZERO-3 synchronized grads, all parameters's grads were set to \"colo_attr\"\uff0cgrads are None and buckets is empty here! 
\r\n\r\nThis line also has the problem:\r\nhttps://github.com/hpcaitech/ColossalAI/blob/1aad903c1537eafb73fac1729b6df30b7006312f/colossalai/engine/gradient_handler/_pipeline_parallel_gradient_handler.py#L43\r\n\r\n### Environment\r\n\r\ncolossalai latest version\n", "before_files": [{"content": "#!/usr/bin/env python\n\nfrom collections import defaultdict\n\nimport torch\nimport torch.distributed as dist\nfrom colossalai.core import global_context as gpc\nfrom colossalai.registry import GRADIENT_HANDLER\nfrom torch._utils import _flatten_dense_tensors, _unflatten_dense_tensors\n\nfrom ._base_gradient_handler import BaseGradientHandler\n\n\n@GRADIENT_HANDLER.register_module\nclass PipelineSharedModuleGradientHandler(BaseGradientHandler):\n \"\"\"A helper class to handle all-reduce operations in sub parallel groups.\n A all-reduce collective communication will be operated in \n :func:`handle_gradient` among all sub pipeline parallel groups.\n For better performance, it bucketizes the gradients of all parameters that are \n the same type to improve the efficiency of communication.\n\n Args:\n model (Module): Model where the gradients accumulate.\n optimizer (Optimizer): Optimizer for updating the parameters.\n \"\"\"\n\n def handle_gradient(self):\n \"\"\"A method running a all-reduce operation in sub pipeline parallel groups.\n \"\"\"\n if gpc.pipeline_parallel_size > 1:\n # bucketize and all-reduce\n buckets = defaultdict(lambda: defaultdict(list))\n # Pack the buckets.\n for param in self._model.parameters():\n group = getattr(param, 'pipeline_shared_module_pg', None)\n if param.requires_grad and param.grad is not None and group is not None:\n tp = param.data.type()\n buckets[group][tp].append(param)\n\n # For each bucket, all-reduce and copy all-reduced grads.\n for group, group_buckets in buckets.items():\n for tp, bucket in group_buckets.items():\n grads = [param.grad.data for param in bucket]\n coalesced = _flatten_dense_tensors(grads).to(torch.cuda.current_device())\n dist.all_reduce(coalesced, op=dist.ReduceOp.SUM, group=group)\n for buf, synced in zip(grads, _unflatten_dense_tensors(coalesced, grads)):\n buf.copy_(synced)\n", "path": "colossalai/engine/gradient_handler/_pipeline_parallel_gradient_handler.py"}], "after_files": [{"content": "#!/usr/bin/env python\n\nfrom collections import defaultdict\n\nimport torch\nimport torch.distributed as dist\nfrom colossalai.core import global_context as gpc\nfrom colossalai.registry import GRADIENT_HANDLER\nfrom torch._utils import _flatten_dense_tensors, _unflatten_dense_tensors\n\nfrom ._base_gradient_handler import BaseGradientHandler\n\n\n@GRADIENT_HANDLER.register_module\nclass PipelineSharedModuleGradientHandler(BaseGradientHandler):\n \"\"\"A helper class to handle all-reduce operations in sub parallel groups.\n A all-reduce collective communication will be operated in \n :func:`handle_gradient` among all sub pipeline parallel groups.\n For better performance, it bucketizes the gradients of all parameters that are \n the same type to improve the efficiency of communication.\n\n Args:\n model (Module): Model where the gradients accumulate.\n optimizer (Optimizer): Optimizer for updating the parameters.\n \"\"\"\n\n def handle_gradient(self):\n \"\"\"A method running a all-reduce operation in sub pipeline parallel groups.\n \"\"\"\n if gpc.pipeline_parallel_size > 1:\n # bucketize and all-reduce\n buckets = defaultdict(lambda: defaultdict(list))\n # Pack the buckets.\n for param in self._model.parameters():\n group = getattr(param, 
'pipeline_shared_module_pg', None)\n if param.requires_grad and group is not None and (\n (hasattr(param, 'colo_attr') and not param.colo_attr.saved_grad.is_null())\n or param.grad is not None):\n tp = param.data.type()\n buckets[group][tp].append(param)\n\n # For each bucket, all-reduce and copy all-reduced grads.\n for group, group_buckets in buckets.items():\n for tp, bucket in group_buckets.items():\n grads = [\n param.colo_attr.grad_payload if hasattr(param, 'colo_attr') else param.grad.data\n for param in bucket\n ]\n coalesced = _flatten_dense_tensors(grads).to(torch.cuda.current_device())\n dist.all_reduce(coalesced, op=dist.ReduceOp.SUM, group=group)\n for buf, synced in zip(grads, _unflatten_dense_tensors(coalesced, grads)):\n buf.copy_(synced)\n", "path": "colossalai/engine/gradient_handler/_pipeline_parallel_gradient_handler.py"}]} | 1,025 | 335 |
gh_patches_debug_39002 | rasdani/github-patches | git_diff | hylang__hy-2565 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Stop using Read the Docs
Having ads in the manual is extraordinarily tacky. We should probably just host the web versions of Hy and Hyrule's manuals on Arfer.net, where I also host [the new Hylang.org](http://hylang.org). For simplicity, we can serve only the stable release of the manual. We would then just rebuild it as part of the release process.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docs/conf.py`
Content:
```
1 # This file is execfile()d with the current directory set to its containing dir.
2
3 import html
4 import os
5 import re
6 import sys
7 import time
8
9 sys.path.insert(0, os.path.abspath(".."))
10
11 extensions = [
12 "sphinx.ext.napoleon",
13 "sphinx.ext.intersphinx",
14 "sphinx.ext.autodoc",
15 "sphinx.ext.viewcode",
16 "sphinxcontrib.hydomain",
17 ]
18
19 import warnings; import sphinx.deprecation as SD
20 for c in (SD.RemovedInSphinx60Warning, SD.RemovedInSphinx70Warning):
21 warnings.filterwarnings('ignore', category = c)
22
23 from get_version import __version__ as hy_version
24
25 # Read the Docs might dirty its checkout, so strip the dirty flag.
26 hy_version = re.sub(r"[+.]dirty\Z", "", hy_version)
27
28 templates_path = ["_templates"]
29 source_suffix = ".rst"
30
31 master_doc = "index"
32
33 # General information about the project.
34 project = "hy"
35 copyright = "%s the authors" % time.strftime("%Y")
36
37 # The version info for the project you're documenting, acts as replacement for
38 # |version| and |release|, also used in various other places throughout the
39 # built documents.
40 #
41 # The short X.Y version.
42 version = ".".join(hy_version.split(".")[:-1])
43 # The full version, including alpha/beta/rc tags.
44 release = hy_version
45 hy_descriptive_version = html.escape(hy_version)
46 if "+" in hy_version:
47 hy_descriptive_version += " <strong style='color: red;'>(unstable)</strong>"
48
49 exclude_patterns = ["_build", "coreteam.rst"]
50 add_module_names = True
51
52 pygments_style = "sphinx"
53
54 import sphinx_rtd_theme
55
56 html_theme = "sphinx_rtd_theme"
57 html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
58
59 # Add any paths that contain custom static files (such as style sheets) here,
60 # relative to this directory. They are copied after the builtin static files,
61 # so a file named "default.css" will overwrite the builtin "default.css".
62 html_static_path = ["_static"]
63
64 html_use_smartypants = False
65 html_show_sphinx = False
66
67 html_context = dict(
68 hy_descriptive_version=hy_descriptive_version)
69
70 highlight_language = "hylang"
71
72 intersphinx_mapping = dict(
73 py=("https://docs.python.org/3/", None),
74 hyrule=("https://hyrule.readthedocs.io/en/master/", None),
75 )
76
77 import hy
78 hy.I = type(hy.I) # A trick to enable `hy:autoclass:: hy.I`
79
80
81 # ** Sphinx App Setup
82
83
84 def setup(app):
85 app.add_css_file("overrides.css")
86
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -1,20 +1,14 @@
-# This file is execfile()d with the current directory set to its containing dir.
+import os, re, sys, time, html
-import html
-import os
-import re
-import sys
-import time
+sys.path.insert(0, os.path.abspath('..'))
-sys.path.insert(0, os.path.abspath(".."))
+import hy; hy.I = type(hy.I) # A trick to enable `hy:autoclass:: hy.I`
extensions = [
- "sphinx.ext.napoleon",
- "sphinx.ext.intersphinx",
- "sphinx.ext.autodoc",
- "sphinx.ext.viewcode",
- "sphinxcontrib.hydomain",
-]
+ 'sphinx.ext.napoleon',
+ 'sphinx.ext.intersphinx',
+ 'sphinx.ext.autodoc',
+ 'sphinxcontrib.hydomain']
import warnings; import sphinx.deprecation as SD
for c in (SD.RemovedInSphinx60Warning, SD.RemovedInSphinx70Warning):
@@ -22,64 +16,33 @@
from get_version import __version__ as hy_version
-# Read the Docs might dirty its checkout, so strip the dirty flag.
-hy_version = re.sub(r"[+.]dirty\Z", "", hy_version)
-
-templates_path = ["_templates"]
-source_suffix = ".rst"
-
-master_doc = "index"
-
-# General information about the project.
-project = "hy"
-copyright = "%s the authors" % time.strftime("%Y")
-
-# The version info for the project you're documenting, acts as replacement for
-# |version| and |release|, also used in various other places throughout the
-# built documents.
-#
-# The short X.Y version.
-version = ".".join(hy_version.split(".")[:-1])
-# The full version, including alpha/beta/rc tags.
+project = 'Hy'
+copyright = '%s the authors' % time.strftime('%Y')
+html_title = f'Hy {hy_version} manual'
+version = '.'.join(hy_version.split('.')[:-1])
+ # The short dotted version identifier
release = hy_version
-hy_descriptive_version = html.escape(hy_version)
-if "+" in hy_version:
- hy_descriptive_version += " <strong style='color: red;'>(unstable)</strong>"
-
-exclude_patterns = ["_build", "coreteam.rst"]
-add_module_names = True
-
-pygments_style = "sphinx"
-
-import sphinx_rtd_theme
-
-html_theme = "sphinx_rtd_theme"
-html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
-
-# Add any paths that contain custom static files (such as style sheets) here,
-# relative to this directory. They are copied after the builtin static files,
-# so a file named "default.css" will overwrite the builtin "default.css".
-html_static_path = ["_static"]
-
+ # The full version identifier, including alpha, beta, and RC tags
+
+source_suffix = '.rst'
+master_doc = 'index'
+exclude_patterns = ['_build', 'coreteam.rst']
+
+html_theme = 'nature'
+html_theme_options = dict(
+ nosidebar = True,
+ body_min_width = 0,
+ body_max_width = 'none')
+html_css_files = ['custom.css']
+html_static_path = ['_static']
html_use_smartypants = False
+html_copy_source = False
html_show_sphinx = False
-html_context = dict(
- hy_descriptive_version=hy_descriptive_version)
+add_module_names = True
-highlight_language = "hylang"
+highlight_language = 'hylang'
intersphinx_mapping = dict(
- py=("https://docs.python.org/3/", None),
- hyrule=("https://hyrule.readthedocs.io/en/master/", None),
-)
-
-import hy
-hy.I = type(hy.I) # A trick to enable `hy:autoclass:: hy.I`
-
-
-# ** Sphinx App Setup
-
-
-def setup(app):
- app.add_css_file("overrides.css")
+ py = ('https://docs.python.org/3/', None),
+ hyrule = ('https://hyrule.readthedocs.io/en/master/', None))
| {"golden_diff": "diff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -1,20 +1,14 @@\n-# This file is execfile()d with the current directory set to its containing dir.\n+import os, re, sys, time, html\n \n-import html\n-import os\n-import re\n-import sys\n-import time\n+sys.path.insert(0, os.path.abspath('..'))\n \n-sys.path.insert(0, os.path.abspath(\"..\"))\n+import hy; hy.I = type(hy.I) # A trick to enable `hy:autoclass:: hy.I`\n \n extensions = [\n- \"sphinx.ext.napoleon\",\n- \"sphinx.ext.intersphinx\",\n- \"sphinx.ext.autodoc\",\n- \"sphinx.ext.viewcode\",\n- \"sphinxcontrib.hydomain\",\n-]\n+ 'sphinx.ext.napoleon',\n+ 'sphinx.ext.intersphinx',\n+ 'sphinx.ext.autodoc',\n+ 'sphinxcontrib.hydomain']\n \n import warnings; import sphinx.deprecation as SD\n for c in (SD.RemovedInSphinx60Warning, SD.RemovedInSphinx70Warning):\n@@ -22,64 +16,33 @@\n \n from get_version import __version__ as hy_version\n \n-# Read the Docs might dirty its checkout, so strip the dirty flag.\n-hy_version = re.sub(r\"[+.]dirty\\Z\", \"\", hy_version)\n-\n-templates_path = [\"_templates\"]\n-source_suffix = \".rst\"\n-\n-master_doc = \"index\"\n-\n-# General information about the project.\n-project = \"hy\"\n-copyright = \"%s the authors\" % time.strftime(\"%Y\")\n-\n-# The version info for the project you're documenting, acts as replacement for\n-# |version| and |release|, also used in various other places throughout the\n-# built documents.\n-#\n-# The short X.Y version.\n-version = \".\".join(hy_version.split(\".\")[:-1])\n-# The full version, including alpha/beta/rc tags.\n+project = 'Hy'\n+copyright = '%s the authors' % time.strftime('%Y')\n+html_title = f'Hy {hy_version} manual'\n+version = '.'.join(hy_version.split('.')[:-1])\n+ # The short dotted version identifier\n release = hy_version\n-hy_descriptive_version = html.escape(hy_version)\n-if \"+\" in hy_version:\n- hy_descriptive_version += \" <strong style='color: red;'>(unstable)</strong>\"\n-\n-exclude_patterns = [\"_build\", \"coreteam.rst\"]\n-add_module_names = True\n-\n-pygments_style = \"sphinx\"\n-\n-import sphinx_rtd_theme\n-\n-html_theme = \"sphinx_rtd_theme\"\n-html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]\n-\n-# Add any paths that contain custom static files (such as style sheets) here,\n-# relative to this directory. 
They are copied after the builtin static files,\n-# so a file named \"default.css\" will overwrite the builtin \"default.css\".\n-html_static_path = [\"_static\"]\n-\n+ # The full version identifier, including alpha, beta, and RC tags\n+\n+source_suffix = '.rst'\n+master_doc = 'index'\n+exclude_patterns = ['_build', 'coreteam.rst']\n+\n+html_theme = 'nature'\n+html_theme_options = dict(\n+ nosidebar = True,\n+ body_min_width = 0,\n+ body_max_width = 'none')\n+html_css_files = ['custom.css']\n+html_static_path = ['_static']\n html_use_smartypants = False\n+html_copy_source = False\n html_show_sphinx = False\n \n-html_context = dict(\n- hy_descriptive_version=hy_descriptive_version)\n+add_module_names = True\n \n-highlight_language = \"hylang\"\n+highlight_language = 'hylang'\n \n intersphinx_mapping = dict(\n- py=(\"https://docs.python.org/3/\", None),\n- hyrule=(\"https://hyrule.readthedocs.io/en/master/\", None),\n-)\n-\n-import hy\n-hy.I = type(hy.I) # A trick to enable `hy:autoclass:: hy.I`\n-\n-\n-# ** Sphinx App Setup\n-\n-\n-def setup(app):\n- app.add_css_file(\"overrides.css\")\n+ py = ('https://docs.python.org/3/', None),\n+ hyrule = ('https://hyrule.readthedocs.io/en/master/', None))\n", "issue": "Stop using Read the Docs\nHaving ads in the manual is extraordinarily tacky. We should probably just host the web versions of Hy and Hyrule's manuals on Arfer.net, where I also host [the new Hylang.org](http://hylang.org). For simplicity, we can serve only the stable release of the manual. We would then just rebuild it as part of the release process.\n", "before_files": [{"content": "# This file is execfile()d with the current directory set to its containing dir.\n\nimport html\nimport os\nimport re\nimport sys\nimport time\n\nsys.path.insert(0, os.path.abspath(\"..\"))\n\nextensions = [\n \"sphinx.ext.napoleon\",\n \"sphinx.ext.intersphinx\",\n \"sphinx.ext.autodoc\",\n \"sphinx.ext.viewcode\",\n \"sphinxcontrib.hydomain\",\n]\n\nimport warnings; import sphinx.deprecation as SD\nfor c in (SD.RemovedInSphinx60Warning, SD.RemovedInSphinx70Warning):\n warnings.filterwarnings('ignore', category = c)\n\nfrom get_version import __version__ as hy_version\n\n# Read the Docs might dirty its checkout, so strip the dirty flag.\nhy_version = re.sub(r\"[+.]dirty\\Z\", \"\", hy_version)\n\ntemplates_path = [\"_templates\"]\nsource_suffix = \".rst\"\n\nmaster_doc = \"index\"\n\n# General information about the project.\nproject = \"hy\"\ncopyright = \"%s the authors\" % time.strftime(\"%Y\")\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\n# The short X.Y version.\nversion = \".\".join(hy_version.split(\".\")[:-1])\n# The full version, including alpha/beta/rc tags.\nrelease = hy_version\nhy_descriptive_version = html.escape(hy_version)\nif \"+\" in hy_version:\n hy_descriptive_version += \" <strong style='color: red;'>(unstable)</strong>\"\n\nexclude_patterns = [\"_build\", \"coreteam.rst\"]\nadd_module_names = True\n\npygments_style = \"sphinx\"\n\nimport sphinx_rtd_theme\n\nhtml_theme = \"sphinx_rtd_theme\"\nhtml_theme_path = [sphinx_rtd_theme.get_html_theme_path()]\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. 
They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = [\"_static\"]\n\nhtml_use_smartypants = False\nhtml_show_sphinx = False\n\nhtml_context = dict(\n hy_descriptive_version=hy_descriptive_version)\n\nhighlight_language = \"hylang\"\n\nintersphinx_mapping = dict(\n py=(\"https://docs.python.org/3/\", None),\n hyrule=(\"https://hyrule.readthedocs.io/en/master/\", None),\n)\n\nimport hy\nhy.I = type(hy.I) # A trick to enable `hy:autoclass:: hy.I`\n\n\n# ** Sphinx App Setup\n\n\ndef setup(app):\n app.add_css_file(\"overrides.css\")\n", "path": "docs/conf.py"}], "after_files": [{"content": "import os, re, sys, time, html\n\nsys.path.insert(0, os.path.abspath('..'))\n\nimport hy; hy.I = type(hy.I) # A trick to enable `hy:autoclass:: hy.I`\n\nextensions = [\n 'sphinx.ext.napoleon',\n 'sphinx.ext.intersphinx',\n 'sphinx.ext.autodoc',\n 'sphinxcontrib.hydomain']\n\nimport warnings; import sphinx.deprecation as SD\nfor c in (SD.RemovedInSphinx60Warning, SD.RemovedInSphinx70Warning):\n warnings.filterwarnings('ignore', category = c)\n\nfrom get_version import __version__ as hy_version\n\nproject = 'Hy'\ncopyright = '%s the authors' % time.strftime('%Y')\nhtml_title = f'Hy {hy_version} manual'\nversion = '.'.join(hy_version.split('.')[:-1])\n # The short dotted version identifier\nrelease = hy_version\n # The full version identifier, including alpha, beta, and RC tags\n\nsource_suffix = '.rst'\nmaster_doc = 'index'\nexclude_patterns = ['_build', 'coreteam.rst']\n\nhtml_theme = 'nature'\nhtml_theme_options = dict(\n nosidebar = True,\n body_min_width = 0,\n body_max_width = 'none')\nhtml_css_files = ['custom.css']\nhtml_static_path = ['_static']\nhtml_use_smartypants = False\nhtml_copy_source = False\nhtml_show_sphinx = False\n\nadd_module_names = True\n\nhighlight_language = 'hylang'\n\nintersphinx_mapping = dict(\n py = ('https://docs.python.org/3/', None),\n hyrule = ('https://hyrule.readthedocs.io/en/master/', None))\n", "path": "docs/conf.py"}]} | 1,102 | 977 |
gh_patches_debug_5936 | rasdani/github-patches | git_diff | streamlit__streamlit-2248 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Request for tar file for lib in pypi.
_(Note, you don't have to fill out every section here. They're just here for guidance. That said, nicely detailed feature requests are more likely to get eng attention sooner)_
### Problem
As of now for streamlit , we have only wheels file in pypi. Can the streamlit team add tar file as well.
https://pypi.org/project/streamlit/#files
### Solution
Create a sdist ( tar ) of the package and make it avaliable in pypi
https://realpython.com/pypi-publish-python-package/
### Additional context
Add any other context or screenshots about the feature request here. For example, did this FR come from https://discuss.streamlit.io or another site? Link the original source here!
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lib/setup.py`
Content:
```
1 import os
2 import platform
3 import setuptools
4 import subprocess
5 import sys
6
7 from pipenv.project import Project
8 from pipenv.utils import convert_deps_to_pip
9 from setuptools.command.install import install
10
11 VERSION = "0.69.2" # PEP-440
12
13 NAME = "streamlit"
14
15 DESCRIPTION = "The fastest way to build data apps in Python"
16
17 LONG_DESCRIPTION = (
18 "Streamlit's open-source app framework is the easiest way "
19 "for data scientists and machine learning engineers to "
20 "create beautiful, performant apps in only a few hours! "
21 "All in pure Python. All for free."
22 )
23
24 pipfile = Project(chdir=False).parsed_pipfile
25
26 packages = pipfile["packages"].copy()
27 requirements = convert_deps_to_pip(packages, r=False)
28
29 # Check whether xcode tools are available before making watchdog a
30 # dependency (only if the current system is a Mac).
31 if platform.system() == "Darwin":
32 has_xcode = subprocess.call(["xcode-select", "--version"], shell=False) == 0
33 has_gcc = subprocess.call(["gcc", "--version"], shell=False) == 0
34
35 if not (has_xcode and has_gcc):
36 try:
37 requirements.remove("watchdog")
38 except ValueError:
39 pass
40
41
42 class VerifyVersionCommand(install):
43 """Custom command to verify that the git tag matches our version"""
44
45 description = "verify that the git tag matches our version"
46
47 def run(self):
48 tag = os.getenv("CIRCLE_TAG")
49
50 if tag != VERSION:
51 info = "Git tag: {0} does not match the version of this app: {1}".format(
52 tag, VERSION
53 )
54 sys.exit(info)
55
56
57 setuptools.setup(
58 name=NAME,
59 version=VERSION,
60 description=DESCRIPTION,
61 long_description=LONG_DESCRIPTION,
62 url="https://streamlit.io",
63 author="Streamlit Inc",
64 author_email="[email protected]",
65 python_requires=">=3.6",
66 license="Apache 2",
67 packages=setuptools.find_packages(exclude=["tests", "tests.*"]),
68 # Requirements
69 install_requires=requirements,
70 zip_safe=False, # install source files not egg
71 include_package_data=True, # copy html and friends
72 entry_points={"console_scripts": ["streamlit = streamlit.cli:main"]},
73 # For Windows so that streamlit * commands work ie.
74 # - streamlit version
75 # - streamlit hello
76 scripts=["bin/streamlit.cmd"],
77 cmdclass={
78 "verify": VerifyVersionCommand,
79 },
80 )
81
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/lib/setup.py b/lib/setup.py
--- a/lib/setup.py
+++ b/lib/setup.py
@@ -4,10 +4,17 @@
import subprocess
import sys
-from pipenv.project import Project
-from pipenv.utils import convert_deps_to_pip
from setuptools.command.install import install
+try:
+ from pipenv.project import Project
+ from pipenv.utils import convert_deps_to_pip
+except:
+ exit_msg = (
+ "pipenv is required to package Streamlit. Please install pipenv and try again"
+ )
+ sys.exit(exit_msg)
+
VERSION = "0.69.2" # PEP-440
NAME = "streamlit"
| {"golden_diff": "diff --git a/lib/setup.py b/lib/setup.py\n--- a/lib/setup.py\n+++ b/lib/setup.py\n@@ -4,10 +4,17 @@\n import subprocess\n import sys\n \n-from pipenv.project import Project\n-from pipenv.utils import convert_deps_to_pip\n from setuptools.command.install import install\n \n+try:\n+ from pipenv.project import Project\n+ from pipenv.utils import convert_deps_to_pip\n+except:\n+ exit_msg = (\n+ \"pipenv is required to package Streamlit. Please install pipenv and try again\"\n+ )\n+ sys.exit(exit_msg)\n+\n VERSION = \"0.69.2\" # PEP-440\n \n NAME = \"streamlit\"\n", "issue": "Request for tar file for lib in pypi.\n_(Note, you don't have to fill out every section here. They're just here for guidance. That said, nicely detailed feature requests are more likely to get eng attention sooner)_\r\n\r\n### Problem\r\n\r\nAs of now for streamlit , we have only wheels file in pypi. Can the streamlit team add tar file as well. \r\n\r\nhttps://pypi.org/project/streamlit/#files \r\n\r\n### Solution\r\n\r\nCreate a sdist ( tar ) of the package and make it avaliable in pypi\r\nhttps://realpython.com/pypi-publish-python-package/ \r\n\r\n### Additional context\r\n\r\nAdd any other context or screenshots about the feature request here. For example, did this FR come from https://discuss.streamlit.io or another site? Link the original source here!\r\n\n", "before_files": [{"content": "import os\nimport platform\nimport setuptools\nimport subprocess\nimport sys\n\nfrom pipenv.project import Project\nfrom pipenv.utils import convert_deps_to_pip\nfrom setuptools.command.install import install\n\nVERSION = \"0.69.2\" # PEP-440\n\nNAME = \"streamlit\"\n\nDESCRIPTION = \"The fastest way to build data apps in Python\"\n\nLONG_DESCRIPTION = (\n \"Streamlit's open-source app framework is the easiest way \"\n \"for data scientists and machine learning engineers to \"\n \"create beautiful, performant apps in only a few hours! \"\n \"All in pure Python. 
All for free.\"\n)\n\npipfile = Project(chdir=False).parsed_pipfile\n\npackages = pipfile[\"packages\"].copy()\nrequirements = convert_deps_to_pip(packages, r=False)\n\n# Check whether xcode tools are available before making watchdog a\n# dependency (only if the current system is a Mac).\nif platform.system() == \"Darwin\":\n has_xcode = subprocess.call([\"xcode-select\", \"--version\"], shell=False) == 0\n has_gcc = subprocess.call([\"gcc\", \"--version\"], shell=False) == 0\n\n if not (has_xcode and has_gcc):\n try:\n requirements.remove(\"watchdog\")\n except ValueError:\n pass\n\n\nclass VerifyVersionCommand(install):\n \"\"\"Custom command to verify that the git tag matches our version\"\"\"\n\n description = \"verify that the git tag matches our version\"\n\n def run(self):\n tag = os.getenv(\"CIRCLE_TAG\")\n\n if tag != VERSION:\n info = \"Git tag: {0} does not match the version of this app: {1}\".format(\n tag, VERSION\n )\n sys.exit(info)\n\n\nsetuptools.setup(\n name=NAME,\n version=VERSION,\n description=DESCRIPTION,\n long_description=LONG_DESCRIPTION,\n url=\"https://streamlit.io\",\n author=\"Streamlit Inc\",\n author_email=\"[email protected]\",\n python_requires=\">=3.6\",\n license=\"Apache 2\",\n packages=setuptools.find_packages(exclude=[\"tests\", \"tests.*\"]),\n # Requirements\n install_requires=requirements,\n zip_safe=False, # install source files not egg\n include_package_data=True, # copy html and friends\n entry_points={\"console_scripts\": [\"streamlit = streamlit.cli:main\"]},\n # For Windows so that streamlit * commands work ie.\n # - streamlit version\n # - streamlit hello\n scripts=[\"bin/streamlit.cmd\"],\n cmdclass={\n \"verify\": VerifyVersionCommand,\n },\n)\n", "path": "lib/setup.py"}], "after_files": [{"content": "import os\nimport platform\nimport setuptools\nimport subprocess\nimport sys\n\nfrom setuptools.command.install import install\n\ntry:\n from pipenv.project import Project\n from pipenv.utils import convert_deps_to_pip\nexcept:\n exit_msg = (\n \"pipenv is required to package Streamlit. Please install pipenv and try again\"\n )\n sys.exit(exit_msg)\n\nVERSION = \"0.69.2\" # PEP-440\n\nNAME = \"streamlit\"\n\nDESCRIPTION = \"The fastest way to build data apps in Python\"\n\nLONG_DESCRIPTION = (\n \"Streamlit's open-source app framework is the easiest way \"\n \"for data scientists and machine learning engineers to \"\n \"create beautiful, performant apps in only a few hours! \"\n \"All in pure Python. 
All for free.\"\n)\n\npipfile = Project(chdir=False).parsed_pipfile\n\npackages = pipfile[\"packages\"].copy()\nrequirements = convert_deps_to_pip(packages, r=False)\n\n# Check whether xcode tools are available before making watchdog a\n# dependency (only if the current system is a Mac).\nif platform.system() == \"Darwin\":\n has_xcode = subprocess.call([\"xcode-select\", \"--version\"], shell=False) == 0\n has_gcc = subprocess.call([\"gcc\", \"--version\"], shell=False) == 0\n\n if not (has_xcode and has_gcc):\n try:\n requirements.remove(\"watchdog\")\n except ValueError:\n pass\n\n\nclass VerifyVersionCommand(install):\n \"\"\"Custom command to verify that the git tag matches our version\"\"\"\n\n description = \"verify that the git tag matches our version\"\n\n def run(self):\n tag = os.getenv(\"CIRCLE_TAG\")\n\n if tag != VERSION:\n info = \"Git tag: {0} does not match the version of this app: {1}\".format(\n tag, VERSION\n )\n sys.exit(info)\n\n\nsetuptools.setup(\n name=NAME,\n version=VERSION,\n description=DESCRIPTION,\n long_description=LONG_DESCRIPTION,\n url=\"https://streamlit.io\",\n author=\"Streamlit Inc\",\n author_email=\"[email protected]\",\n python_requires=\">=3.6\",\n license=\"Apache 2\",\n packages=setuptools.find_packages(exclude=[\"tests\", \"tests.*\"]),\n # Requirements\n install_requires=requirements,\n zip_safe=False, # install source files not egg\n include_package_data=True, # copy html and friends\n entry_points={\"console_scripts\": [\"streamlit = streamlit.cli:main\"]},\n # For Windows so that streamlit * commands work ie.\n # - streamlit version\n # - streamlit hello\n scripts=[\"bin/streamlit.cmd\"],\n cmdclass={\n \"verify\": VerifyVersionCommand,\n },\n)\n", "path": "lib/setup.py"}]} | 1,144 | 157 |
gh_patches_debug_41263 | rasdani/github-patches | git_diff | hydroshare__hydroshare-5233 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Management command for published resources
**Describe the feature you'd like and what it will do**
We should have a management command to report on resources published within a timeframe.
Consider also adding to our metrics system
**Additional context**
HS 2.9.2
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `hs_core/management/commands/list_published_by_year.py`
Content:
```
1 """Lists all the resources published in a given year.
2 """
3
4 from django.core.management.base import BaseCommand
5 from django.contrib.auth.models import User
6 from hs_core.models import BaseResource
7 from hs_access_control.models import PrivilegeCodes
8 from hs_core import hydroshare
9 from django.db.models import F
10 from datetime import timedelta
11 from django.utils import timezone
12 from django.core.exceptions import ObjectDoesNotExist
13
14
15 class Command(BaseCommand):
16 help = "Print resource information"
17
18 def add_arguments(self, parser):
19 # Named (optional) arguments
20 parser.add_argument(
21 '--year',
22 dest='year',
23 help='limit to resources published in a given year'
24 )
25
26 parser.add_argument('--days', type=int, dest='days', help='include resources updated in the last X days')
27
28 parser.add_argument(
29 '--type',
30 dest='type',
31 help='limit to resources of a particular type'
32 )
33
34 parser.add_argument(
35 '--owned_by',
36 dest='owned_by',
37 help='limit to resources owned by specific user'
38 )
39
40 def handle(self, *args, **options):
41 days = options['days']
42 resources = BaseResource.objects.filter(raccess__published=True)
43 owner = options['owned_by']
44 type = options['type']
45
46 if owner is not None:
47 try:
48 owner = User.objects.get(username=owner)
49 resources.filter(r2urp__user=owner,
50 r2urp__privilege=PrivilegeCodes.OWNER)
51 except ObjectDoesNotExist:
52 print(f"User matching {owner} not found")
53
54 if type is not None:
55 if type in ["CompositeResource", "CollectionResource"]:
56 resources.filter(resource_type=type)
57 else:
58 print(f"Type {type} is not supported. Must be 'CompositeResource' or 'CollectionResource'")
59
60 resources = resources.order_by(F('updated').asc(nulls_first=True))
61
62 for resource in resources:
63 pub_date = self.get_publication_date(resource)
64 if options['year']:
65 if pub_date.year != int(options['year']):
66 continue
67 if days:
68 cuttoff_time = timezone.now() - timedelta(days)
69 if not pub_date >= cuttoff_time:
70 continue
71 self.print_resource(resource, pub_date)
72
73 def get_publication_date(self, resource):
74 published_date = resource.metadata.dates.filter(type="published").first()
75 if not published_date:
76 print(f"Publication date not found for {resource.short_id}")
77 return published_date
78
79 def print_resource(self, res, pub_date):
80 site_url = hydroshare.utils.current_site_url()
81 res_url = site_url + res.absolute_url
82 funding_agencies = res.metadata.funding_agencies.all()
83 print("*" * 100)
84 print(f"{res_url}")
85 print(res.metadata.title.value)
86 print(f"Resource type: {res.resource_type}")
87 if pub_date:
88 print(f"Published on {pub_date}")
89 else:
90 print("Resource has no publication date")
91
92 if funding_agencies:
93 print("Funding agency/agencies:")
94 for f in funding_agencies:
95 print(f.agency_name)
96 else:
97 print("Resource has no funding agency")
98
99 if res.doi:
100 print(res.doi)
101 else:
102 print("Resource has no doi")
103
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/hs_core/management/commands/list_published_by_year.py b/hs_core/management/commands/list_published_by_year.py
--- a/hs_core/management/commands/list_published_by_year.py
+++ b/hs_core/management/commands/list_published_by_year.py
@@ -41,7 +41,7 @@
days = options['days']
resources = BaseResource.objects.filter(raccess__published=True)
owner = options['owned_by']
- type = options['type']
+ res_type = options['type']
if owner is not None:
try:
@@ -51,11 +51,11 @@
except ObjectDoesNotExist:
print(f"User matching {owner} not found")
- if type is not None:
- if type in ["CompositeResource", "CollectionResource"]:
- resources.filter(resource_type=type)
+ if res_type is not None:
+ if res_type in ["CompositeResource", "CollectionResource"]:
+ resources.filter(resource_type=res_type)
else:
- print(f"Type {type} is not supported. Must be 'CompositeResource' or 'CollectionResource'")
+ print(f"Type {res_type} is not supported. Must be 'CompositeResource' or 'CollectionResource'")
resources = resources.order_by(F('updated').asc(nulls_first=True))
@@ -74,14 +74,19 @@
published_date = resource.metadata.dates.filter(type="published").first()
if not published_date:
print(f"Publication date not found for {resource.short_id}")
- return published_date
+ return published_date.start_date
def print_resource(self, res, pub_date):
site_url = hydroshare.utils.current_site_url()
res_url = site_url + res.absolute_url
funding_agencies = res.metadata.funding_agencies.all()
+ print("\n")
print("*" * 100)
print(f"{res_url}")
+ if res.doi:
+ print(res.doi)
+ else:
+ print("Resource has no doi")
print(res.metadata.title.value)
print(f"Resource type: {res.resource_type}")
if pub_date:
@@ -90,13 +95,24 @@
print("Resource has no publication date")
if funding_agencies:
- print("Funding agency/agencies:")
- for f in funding_agencies:
- print(f.agency_name)
+ print(f"Found {len(funding_agencies)} funder(s):")
+ for count, f in enumerate(funding_agencies, 1):
+ print(f"--- Funder #{count} ---")
+ if f.agency_name:
+ print(f"Agency name: {f.agency_name}")
+ else:
+ print("No agency name")
+ if f.agency_url:
+ print(f"Agency url: {f.agency_url}")
+ else:
+ print("No agency url")
+ if f.award_title:
+ print(f"Award title: {f.award_title}")
+ else:
+ print("No award title")
+ if f.award_number:
+ print(f"Award number: {f.award_number}")
+ else:
+ print("No award number")
else:
- print("Resource has no funding agency")
-
- if res.doi:
- print(res.doi)
- else:
- print("Resource has no doi")
+ print("Resource has no funding information")
| {"golden_diff": "diff --git a/hs_core/management/commands/list_published_by_year.py b/hs_core/management/commands/list_published_by_year.py\n--- a/hs_core/management/commands/list_published_by_year.py\n+++ b/hs_core/management/commands/list_published_by_year.py\n@@ -41,7 +41,7 @@\n days = options['days']\n resources = BaseResource.objects.filter(raccess__published=True)\n owner = options['owned_by']\n- type = options['type']\n+ res_type = options['type']\n \n if owner is not None:\n try:\n@@ -51,11 +51,11 @@\n except ObjectDoesNotExist:\n print(f\"User matching {owner} not found\")\n \n- if type is not None:\n- if type in [\"CompositeResource\", \"CollectionResource\"]:\n- resources.filter(resource_type=type)\n+ if res_type is not None:\n+ if res_type in [\"CompositeResource\", \"CollectionResource\"]:\n+ resources.filter(resource_type=res_type)\n else:\n- print(f\"Type {type} is not supported. Must be 'CompositeResource' or 'CollectionResource'\")\n+ print(f\"Type {res_type} is not supported. Must be 'CompositeResource' or 'CollectionResource'\")\n \n resources = resources.order_by(F('updated').asc(nulls_first=True))\n \n@@ -74,14 +74,19 @@\n published_date = resource.metadata.dates.filter(type=\"published\").first()\n if not published_date:\n print(f\"Publication date not found for {resource.short_id}\")\n- return published_date\n+ return published_date.start_date\n \n def print_resource(self, res, pub_date):\n site_url = hydroshare.utils.current_site_url()\n res_url = site_url + res.absolute_url\n funding_agencies = res.metadata.funding_agencies.all()\n+ print(\"\\n\")\n print(\"*\" * 100)\n print(f\"{res_url}\")\n+ if res.doi:\n+ print(res.doi)\n+ else:\n+ print(\"Resource has no doi\")\n print(res.metadata.title.value)\n print(f\"Resource type: {res.resource_type}\")\n if pub_date:\n@@ -90,13 +95,24 @@\n print(\"Resource has no publication date\")\n \n if funding_agencies:\n- print(\"Funding agency/agencies:\")\n- for f in funding_agencies:\n- print(f.agency_name)\n+ print(f\"Found {len(funding_agencies)} funder(s):\")\n+ for count, f in enumerate(funding_agencies, 1):\n+ print(f\"--- Funder #{count} ---\")\n+ if f.agency_name:\n+ print(f\"Agency name: {f.agency_name}\")\n+ else:\n+ print(\"No agency name\")\n+ if f.agency_url:\n+ print(f\"Agency url: {f.agency_url}\")\n+ else:\n+ print(\"No agency url\")\n+ if f.award_title:\n+ print(f\"Award title: {f.award_title}\")\n+ else:\n+ print(\"No award title\")\n+ if f.award_number:\n+ print(f\"Award number: {f.award_number}\")\n+ else:\n+ print(\"No award number\")\n else:\n- print(\"Resource has no funding agency\")\n-\n- if res.doi:\n- print(res.doi)\n- else:\n- print(\"Resource has no doi\")\n+ print(\"Resource has no funding information\")\n", "issue": "Management command for published resources\n**Describe the feature you'd like and what it will do**\r\nWe should have a management command to report on resources published within a timeframe.\r\nConsider also adding to our metrics system\r\n\r\n**Additional context**\r\nHS 2.9.2\r\n\n", "before_files": [{"content": "\"\"\"Lists all the resources published in a given year.\n\"\"\"\n\nfrom django.core.management.base import BaseCommand\nfrom django.contrib.auth.models import User\nfrom hs_core.models import BaseResource\nfrom hs_access_control.models import PrivilegeCodes\nfrom hs_core import hydroshare\nfrom django.db.models import F\nfrom datetime import timedelta\nfrom django.utils import timezone\nfrom django.core.exceptions import ObjectDoesNotExist\n\n\nclass 
Command(BaseCommand):\n help = \"Print resource information\"\n\n def add_arguments(self, parser):\n # Named (optional) arguments\n parser.add_argument(\n '--year',\n dest='year',\n help='limit to resources published in a given year'\n )\n\n parser.add_argument('--days', type=int, dest='days', help='include resources updated in the last X days')\n\n parser.add_argument(\n '--type',\n dest='type',\n help='limit to resources of a particular type'\n )\n\n parser.add_argument(\n '--owned_by',\n dest='owned_by',\n help='limit to resources owned by specific user'\n )\n\n def handle(self, *args, **options):\n days = options['days']\n resources = BaseResource.objects.filter(raccess__published=True)\n owner = options['owned_by']\n type = options['type']\n\n if owner is not None:\n try:\n owner = User.objects.get(username=owner)\n resources.filter(r2urp__user=owner,\n r2urp__privilege=PrivilegeCodes.OWNER)\n except ObjectDoesNotExist:\n print(f\"User matching {owner} not found\")\n\n if type is not None:\n if type in [\"CompositeResource\", \"CollectionResource\"]:\n resources.filter(resource_type=type)\n else:\n print(f\"Type {type} is not supported. Must be 'CompositeResource' or 'CollectionResource'\")\n\n resources = resources.order_by(F('updated').asc(nulls_first=True))\n\n for resource in resources:\n pub_date = self.get_publication_date(resource)\n if options['year']:\n if pub_date.year != int(options['year']):\n continue\n if days:\n cuttoff_time = timezone.now() - timedelta(days)\n if not pub_date >= cuttoff_time:\n continue\n self.print_resource(resource, pub_date)\n\n def get_publication_date(self, resource):\n published_date = resource.metadata.dates.filter(type=\"published\").first()\n if not published_date:\n print(f\"Publication date not found for {resource.short_id}\")\n return published_date\n\n def print_resource(self, res, pub_date):\n site_url = hydroshare.utils.current_site_url()\n res_url = site_url + res.absolute_url\n funding_agencies = res.metadata.funding_agencies.all()\n print(\"*\" * 100)\n print(f\"{res_url}\")\n print(res.metadata.title.value)\n print(f\"Resource type: {res.resource_type}\")\n if pub_date:\n print(f\"Published on {pub_date}\")\n else:\n print(\"Resource has no publication date\")\n\n if funding_agencies:\n print(\"Funding agency/agencies:\")\n for f in funding_agencies:\n print(f.agency_name)\n else:\n print(\"Resource has no funding agency\")\n\n if res.doi:\n print(res.doi)\n else:\n print(\"Resource has no doi\")\n", "path": "hs_core/management/commands/list_published_by_year.py"}], "after_files": [{"content": "\"\"\"Lists all the resources published in a given year.\n\"\"\"\n\nfrom django.core.management.base import BaseCommand\nfrom django.contrib.auth.models import User\nfrom hs_core.models import BaseResource\nfrom hs_access_control.models import PrivilegeCodes\nfrom hs_core import hydroshare\nfrom django.db.models import F\nfrom datetime import timedelta\nfrom django.utils import timezone\nfrom django.core.exceptions import ObjectDoesNotExist\n\n\nclass Command(BaseCommand):\n help = \"Print resource information\"\n\n def add_arguments(self, parser):\n # Named (optional) arguments\n parser.add_argument(\n '--year',\n dest='year',\n help='limit to resources published in a given year'\n )\n\n parser.add_argument('--days', type=int, dest='days', help='include resources updated in the last X days')\n\n parser.add_argument(\n '--type',\n dest='type',\n help='limit to resources of a particular type'\n )\n\n parser.add_argument(\n '--owned_by',\n 
dest='owned_by',\n help='limit to resources owned by specific user'\n )\n\n def handle(self, *args, **options):\n days = options['days']\n resources = BaseResource.objects.filter(raccess__published=True)\n owner = options['owned_by']\n res_type = options['type']\n\n if owner is not None:\n try:\n owner = User.objects.get(username=owner)\n resources.filter(r2urp__user=owner,\n r2urp__privilege=PrivilegeCodes.OWNER)\n except ObjectDoesNotExist:\n print(f\"User matching {owner} not found\")\n\n if res_type is not None:\n if res_type in [\"CompositeResource\", \"CollectionResource\"]:\n resources.filter(resource_type=res_type)\n else:\n print(f\"Type {res_type} is not supported. Must be 'CompositeResource' or 'CollectionResource'\")\n\n resources = resources.order_by(F('updated').asc(nulls_first=True))\n\n for resource in resources:\n pub_date = self.get_publication_date(resource)\n if options['year']:\n if pub_date.year != int(options['year']):\n continue\n if days:\n cuttoff_time = timezone.now() - timedelta(days)\n if not pub_date >= cuttoff_time:\n continue\n self.print_resource(resource, pub_date)\n\n def get_publication_date(self, resource):\n published_date = resource.metadata.dates.filter(type=\"published\").first()\n if not published_date:\n print(f\"Publication date not found for {resource.short_id}\")\n return published_date.start_date\n\n def print_resource(self, res, pub_date):\n site_url = hydroshare.utils.current_site_url()\n res_url = site_url + res.absolute_url\n funding_agencies = res.metadata.funding_agencies.all()\n print(\"\\n\")\n print(\"*\" * 100)\n print(f\"{res_url}\")\n if res.doi:\n print(res.doi)\n else:\n print(\"Resource has no doi\")\n print(res.metadata.title.value)\n print(f\"Resource type: {res.resource_type}\")\n if pub_date:\n print(f\"Published on {pub_date}\")\n else:\n print(\"Resource has no publication date\")\n\n if funding_agencies:\n print(f\"Found {len(funding_agencies)} funder(s):\")\n for count, f in enumerate(funding_agencies, 1):\n print(f\"--- Funder #{count} ---\")\n if f.agency_name:\n print(f\"Agency name: {f.agency_name}\")\n else:\n print(\"No agency name\")\n if f.agency_url:\n print(f\"Agency url: {f.agency_url}\")\n else:\n print(\"No agency url\")\n if f.award_title:\n print(f\"Award title: {f.award_title}\")\n else:\n print(\"No award title\")\n if f.award_number:\n print(f\"Award number: {f.award_number}\")\n else:\n print(\"No award number\")\n else:\n print(\"Resource has no funding information\")\n", "path": "hs_core/management/commands/list_published_by_year.py"}]} | 1,237 | 767 |
gh_patches_debug_29593 | rasdani/github-patches | git_diff | ManageIQ__integration_tests-91 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add SCP support
The ssh_client fixture supports running a rake command. Some of these commands presuppose a local file exists on the appliance, such as when importing an Automate custom namespace. Extending ssh_client would be ideal.
Here's my os.system hack:
```
import os
...
os.system("sshpass -p '%s' scp %s@%s:/root/" % \
(mozwebqa.credentials['ssh']['password'],
mozwebqa.credentials['ssh']['username'],
soap_client.evm_server_hostname))
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `utils/ssh.py`
Content:
```
1 import paramiko
2
3 class SSHClient(paramiko.SSHClient):
4 """paramiko.SSHClient wrapper
5
6 Allows copying/overriding and use as a context manager
7 Constructor kwargs are handed directly to paramiko.SSHClient.connect()
8 """
9 def __init__(self, **connect_kwargs):
10 super(SSHClient, self).__init__()
11 self.set_missing_host_key_policy(paramiko.AutoAddPolicy())
12
13 # Set up some sane defaults
14 if 'timeout' not in connect_kwargs:
15 connect_kwargs['timeout'] = 10
16 if 'allow_agent' not in connect_kwargs:
17 connect_kwargs['allow_agent'] = False
18 self._connect_kwargs = connect_kwargs
19
20 def __call__(self, **connect_kwargs):
21 # Update a copy of this instance's connect kwargs with passed in kwargs,
22 # then return a new instance with the updated kwargs
23 new_connect_kwargs = dict(self._connect_kwargs)
24 new_connect_kwargs.update(connect_kwargs)
25 new_client = SSHClient(**new_connect_kwargs)
26 return new_client
27
28 def __enter__(self):
29 self.connect(**self._connect_kwargs)
30 return self
31
32 def __exit__(self, *args, **kwargs):
33 self.close()
34
35 def run_command(self, command):
36 return command_runner(self, command)
37
38 def run_rails_command(self, command):
39 return rails_runner(self, command)
40
41 def run_rake_command(self, command):
42 return rake_runner(self, command)
43
44
45 def command_runner(client, command):
46 template = '%s\n'
47 command = template % command
48 with client as ctx:
49 transport = ctx.get_transport()
50 session = transport.open_session()
51 session.set_combine_stderr(True)
52 session.exec_command(command)
53 exit_status = session.recv_exit_status()
54 output = session.recv(-1)
55 return exit_status, output
56
57 # Returning two things so tuple unpacking the return works even if the ssh client fails
58 return None, None
59
60 def rails_runner(client, command):
61 template = '/var/www/miq/vmdb/script/rails runner %s'
62 return command_runner(client, template % command)
63
64 def rake_runner(client, command):
65 template = '/var/www/miq/vmdb/script/rake -f /var/www/miq/vmdb/Rakefile %s'
66 return rails_runner(client, template % command)
67
68
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/utils/ssh.py b/utils/ssh.py
--- a/utils/ssh.py
+++ b/utils/ssh.py
@@ -1,4 +1,5 @@
import paramiko
+from scp import SCPClient
class SSHClient(paramiko.SSHClient):
"""paramiko.SSHClient wrapper
@@ -15,6 +16,8 @@
connect_kwargs['timeout'] = 10
if 'allow_agent' not in connect_kwargs:
connect_kwargs['allow_agent'] = False
+ if 'look_for_keys' not in connect_kwargs:
+ connect_kwargs['look_for_keys'] = False
self._connect_kwargs = connect_kwargs
def __call__(self, **connect_kwargs):
@@ -41,6 +44,11 @@
def run_rake_command(self, command):
return rake_runner(self, command)
+ def put_file(self, local_file, remote_file = '.'):
+ return scp_putter(self, local_file, remote_file)
+
+ def get_file(self, remote_file, local_path = ''):
+ return scp_getter(self, remote_file, local_path)
def command_runner(client, command):
template = '%s\n'
@@ -65,3 +73,13 @@
template = '/var/www/miq/vmdb/script/rake -f /var/www/miq/vmdb/Rakefile %s'
return rails_runner(client, template % command)
+def scp_putter(client, local_file, remote_file):
+ with client as ctx:
+ transport = ctx.get_transport()
+ SCPClient(transport).put(local_file, remote_file)
+
+def scp_getter(client, remote_file, local_path):
+ with client as ctx:
+ transport = ctx.get_transport()
+ SCPClient(transport).get(remote_file, local_path)
+
| {"golden_diff": "diff --git a/utils/ssh.py b/utils/ssh.py\n--- a/utils/ssh.py\n+++ b/utils/ssh.py\n@@ -1,4 +1,5 @@\n import paramiko\n+from scp import SCPClient\n \n class SSHClient(paramiko.SSHClient):\n \"\"\"paramiko.SSHClient wrapper\n@@ -15,6 +16,8 @@\n connect_kwargs['timeout'] = 10\n if 'allow_agent' not in connect_kwargs:\n connect_kwargs['allow_agent'] = False\n+ if 'look_for_keys' not in connect_kwargs:\n+ connect_kwargs['look_for_keys'] = False\n self._connect_kwargs = connect_kwargs\n \n def __call__(self, **connect_kwargs):\n@@ -41,6 +44,11 @@\n def run_rake_command(self, command):\n return rake_runner(self, command)\n \n+ def put_file(self, local_file, remote_file = '.'):\n+ return scp_putter(self, local_file, remote_file)\n+\n+ def get_file(self, remote_file, local_path = ''):\n+ return scp_getter(self, remote_file, local_path)\n \n def command_runner(client, command):\n template = '%s\\n'\n@@ -65,3 +73,13 @@\n template = '/var/www/miq/vmdb/script/rake -f /var/www/miq/vmdb/Rakefile %s'\n return rails_runner(client, template % command)\n \n+def scp_putter(client, local_file, remote_file):\n+ with client as ctx:\n+ transport = ctx.get_transport()\n+ SCPClient(transport).put(local_file, remote_file)\n+\n+def scp_getter(client, remote_file, local_path):\n+ with client as ctx:\n+ transport = ctx.get_transport()\n+ SCPClient(transport).get(remote_file, local_path)\n+\n", "issue": "Add SCP support\nThe ssh_client fixture supports running a rake command. Some of these commands presuppose a local file exists on the appliance, such as when importing an Automate custom namespace. Extending ssh_client would be ideal.\n\nHere's my os.system hack:\n\n```\nimport os\n...\nos.system(\"sshpass -p '%s' scp %s@%s:/root/\" % \\\n (mozwebqa.credentials['ssh']['password'], \n mozwebqa.credentials['ssh']['username'], \n soap_client.evm_server_hostname))\n```\n\n", "before_files": [{"content": "import paramiko\n\nclass SSHClient(paramiko.SSHClient):\n \"\"\"paramiko.SSHClient wrapper\n\n Allows copying/overriding and use as a context manager\n Constructor kwargs are handed directly to paramiko.SSHClient.connect()\n \"\"\"\n def __init__(self, **connect_kwargs):\n super(SSHClient, self).__init__()\n self.set_missing_host_key_policy(paramiko.AutoAddPolicy())\n\n # Set up some sane defaults\n if 'timeout' not in connect_kwargs:\n connect_kwargs['timeout'] = 10\n if 'allow_agent' not in connect_kwargs:\n connect_kwargs['allow_agent'] = False\n self._connect_kwargs = connect_kwargs\n\n def __call__(self, **connect_kwargs):\n # Update a copy of this instance's connect kwargs with passed in kwargs,\n # then return a new instance with the updated kwargs\n new_connect_kwargs = dict(self._connect_kwargs)\n new_connect_kwargs.update(connect_kwargs)\n new_client = SSHClient(**new_connect_kwargs)\n return new_client\n\n def __enter__(self):\n self.connect(**self._connect_kwargs)\n return self\n\n def __exit__(self, *args, **kwargs):\n self.close()\n\n def run_command(self, command):\n return command_runner(self, command)\n\n def run_rails_command(self, command):\n return rails_runner(self, command)\n\n def run_rake_command(self, command):\n return rake_runner(self, command)\n\n\ndef command_runner(client, command):\n template = '%s\\n'\n command = template % command\n with client as ctx:\n transport = ctx.get_transport()\n session = transport.open_session()\n session.set_combine_stderr(True)\n session.exec_command(command)\n exit_status = session.recv_exit_status()\n output = session.recv(-1)\n return 
exit_status, output\n\n # Returning two things so tuple unpacking the return works even if the ssh client fails\n return None, None\n\ndef rails_runner(client, command):\n template = '/var/www/miq/vmdb/script/rails runner %s'\n return command_runner(client, template % command)\n\ndef rake_runner(client, command):\n template = '/var/www/miq/vmdb/script/rake -f /var/www/miq/vmdb/Rakefile %s'\n return rails_runner(client, template % command)\n\n", "path": "utils/ssh.py"}], "after_files": [{"content": "import paramiko\nfrom scp import SCPClient\n\nclass SSHClient(paramiko.SSHClient):\n \"\"\"paramiko.SSHClient wrapper\n\n Allows copying/overriding and use as a context manager\n Constructor kwargs are handed directly to paramiko.SSHClient.connect()\n \"\"\"\n def __init__(self, **connect_kwargs):\n super(SSHClient, self).__init__()\n self.set_missing_host_key_policy(paramiko.AutoAddPolicy())\n\n # Set up some sane defaults\n if 'timeout' not in connect_kwargs:\n connect_kwargs['timeout'] = 10\n if 'allow_agent' not in connect_kwargs:\n connect_kwargs['allow_agent'] = False\n if 'look_for_keys' not in connect_kwargs:\n connect_kwargs['look_for_keys'] = False\n self._connect_kwargs = connect_kwargs\n\n def __call__(self, **connect_kwargs):\n # Update a copy of this instance's connect kwargs with passed in kwargs,\n # then return a new instance with the updated kwargs\n new_connect_kwargs = dict(self._connect_kwargs)\n new_connect_kwargs.update(connect_kwargs)\n new_client = SSHClient(**new_connect_kwargs)\n return new_client\n\n def __enter__(self):\n self.connect(**self._connect_kwargs)\n return self\n\n def __exit__(self, *args, **kwargs):\n self.close()\n\n def run_command(self, command):\n return command_runner(self, command)\n\n def run_rails_command(self, command):\n return rails_runner(self, command)\n\n def run_rake_command(self, command):\n return rake_runner(self, command)\n\n def put_file(self, local_file, remote_file = '.'):\n return scp_putter(self, local_file, remote_file)\n\n def get_file(self, remote_file, local_path = ''):\n return scp_getter(self, remote_file, local_path)\n\ndef command_runner(client, command):\n template = '%s\\n'\n command = template % command\n with client as ctx:\n transport = ctx.get_transport()\n session = transport.open_session()\n session.set_combine_stderr(True)\n session.exec_command(command)\n exit_status = session.recv_exit_status()\n output = session.recv(-1)\n return exit_status, output\n\n # Returning two things so tuple unpacking the return works even if the ssh client fails\n return None, None\n\ndef rails_runner(client, command):\n template = '/var/www/miq/vmdb/script/rails runner %s'\n return command_runner(client, template % command)\n\ndef rake_runner(client, command):\n template = '/var/www/miq/vmdb/script/rake -f /var/www/miq/vmdb/Rakefile %s'\n return rails_runner(client, template % command)\n\ndef scp_putter(client, local_file, remote_file):\n with client as ctx:\n transport = ctx.get_transport()\n SCPClient(transport).put(local_file, remote_file)\n\ndef scp_getter(client, remote_file, local_path):\n with client as ctx:\n transport = ctx.get_transport()\n SCPClient(transport).get(remote_file, local_path)\n\n", "path": "utils/ssh.py"}]} | 1,002 | 400 |
gh_patches_debug_6010 | rasdani/github-patches | git_diff | ethereum__web3.py-460 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add field name of formatter failure to apply_formatters_to_dict
* Version: 3.x & 4.x
* Python: 2.7/3.4/3.5
* OS: osx/linux/win
### What was wrong?
When applying formatters to a dict, if one of the values fails, it's impossible to tell from the trace which one did.
### How can it be fixed?
Catch ValueError exceptions in `apply_formatters_to_dict` and rethrow with the name of the field that failed.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `web3/utils/formatters.py`
Content:
```
1 from collections import (
2 Iterable,
3 Mapping,
4 )
5 import sys
6
7 from cytoolz.functoolz import (
8 curry,
9 compose,
10 )
11
12 from eth_utils import (
13 is_string,
14 to_list,
15 to_dict,
16 )
17
18 from web3.utils.decorators import (
19 reject_recursive_repeats,
20 )
21
22
23 def hex_to_integer(value):
24 return int(value, 16)
25
26
27 if sys.version_info.major == 2:
28 def integer_to_hex(value):
29 return hex(value).rstrip('L')
30 else:
31 integer_to_hex = hex
32
33
34 @curry
35 @to_list
36 def apply_formatter_at_index(formatter, at_index, value):
37 if at_index + 1 > len(value):
38 raise IndexError(
39 "Not enough values in iterable to apply formatter. Got: {0}. "
40 "Need: {1}".format(len(value), at_index)
41 )
42 for index, item in enumerate(value):
43 if index == at_index:
44 yield formatter(item)
45 else:
46 yield item
47
48
49 def apply_formatters_to_args(*formatters):
50 return compose(*(
51 apply_formatter_at_index(formatter, index)
52 for index, formatter
53 in enumerate(formatters)
54 ))
55
56
57 @curry
58 def apply_formatter_if(condition, formatter, value):
59 if condition(value):
60 return formatter(value)
61 else:
62 return value
63
64
65 @curry
66 @to_dict
67 def apply_formatters_to_dict(formatters, value):
68 for key, item in value.items():
69 if key in formatters:
70 yield key, formatters[key](item)
71 else:
72 yield key, item
73
74
75 @curry
76 @to_list
77 def apply_formatter_to_array(formatter, value):
78 for item in value:
79 yield formatter(item)
80
81
82 @curry
83 def apply_one_of_formatters(formatter_condition_pairs, value):
84 for formatter, condition in formatter_condition_pairs:
85 if condition(value):
86 return formatter(value)
87 else:
88 raise ValueError("The provided value did not satisfy any of the formatter conditions")
89
90
91 def map_collection(func, collection):
92 '''
93 Apply func to each element of a collection, or value of a dictionary.
94 If the value is not a collection, return it unmodified
95 '''
96 datatype = type(collection)
97 if isinstance(collection, Mapping):
98 return datatype((key, func(val)) for key, val in collection.items())
99 if is_string(collection):
100 return collection
101 elif isinstance(collection, Iterable):
102 return datatype(map(func, collection))
103 else:
104 return collection
105
106
107 @reject_recursive_repeats
108 def recursive_map(func, data):
109 '''
110 Apply func to data, and any collection items inside data (using map_collection).
111 Define func so that it only applies to the type of value that you want it to apply to.
112 '''
113 def recurse(item):
114 return recursive_map(func, item)
115 items_mapped = map_collection(recurse, data)
116 return func(items_mapped)
117
118
119 def static_return(value):
120 def inner(*args, **kwargs):
121 return value
122 return inner
123
124
125 def static_result(value):
126 def inner(*args, **kwargs):
127 return {'result': value}
128 return inner
129
130
131 @curry
132 @to_dict
133 def apply_key_map(key_mappings, value):
134 for key, item in value.items():
135 if key in key_mappings:
136 yield key_mappings[key], item
137 else:
138 yield key, item
139
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/web3/utils/formatters.py b/web3/utils/formatters.py
--- a/web3/utils/formatters.py
+++ b/web3/utils/formatters.py
@@ -67,7 +67,10 @@
def apply_formatters_to_dict(formatters, value):
for key, item in value.items():
if key in formatters:
- yield key, formatters[key](item)
+ try:
+ yield key, formatters[key](item)
+ except (TypeError, ValueError) as exc:
+ raise type(exc)("Could not format value %r as field %r" % (item, key)) from exc
else:
yield key, item
| {"golden_diff": "diff --git a/web3/utils/formatters.py b/web3/utils/formatters.py\n--- a/web3/utils/formatters.py\n+++ b/web3/utils/formatters.py\n@@ -67,7 +67,10 @@\n def apply_formatters_to_dict(formatters, value):\n for key, item in value.items():\n if key in formatters:\n- yield key, formatters[key](item)\n+ try:\n+ yield key, formatters[key](item)\n+ except (TypeError, ValueError) as exc:\n+ raise type(exc)(\"Could not format value %r as field %r\" % (item, key)) from exc\n else:\n yield key, item\n", "issue": "Add field name of formatter failure to apply_formatters_to_dict\n* Version: 3.x & 4.x\r\n* Python: 2.7/3.4/3.5\r\n* OS: osx/linux/win\r\n\r\n\r\n### What was wrong?\r\n\r\nWhen applying formatters to a dict, if one of the values fails, it's impossible to tell from the trace which one did.\r\n\r\n### How can it be fixed?\r\n\r\nCatch ValueError exceptions in `apply_formatters_to_dict` and rethrow with the name of the field that failed.\n", "before_files": [{"content": "from collections import (\n Iterable,\n Mapping,\n)\nimport sys\n\nfrom cytoolz.functoolz import (\n curry,\n compose,\n)\n\nfrom eth_utils import (\n is_string,\n to_list,\n to_dict,\n)\n\nfrom web3.utils.decorators import (\n reject_recursive_repeats,\n)\n\n\ndef hex_to_integer(value):\n return int(value, 16)\n\n\nif sys.version_info.major == 2:\n def integer_to_hex(value):\n return hex(value).rstrip('L')\nelse:\n integer_to_hex = hex\n\n\n@curry\n@to_list\ndef apply_formatter_at_index(formatter, at_index, value):\n if at_index + 1 > len(value):\n raise IndexError(\n \"Not enough values in iterable to apply formatter. Got: {0}. \"\n \"Need: {1}\".format(len(value), at_index)\n )\n for index, item in enumerate(value):\n if index == at_index:\n yield formatter(item)\n else:\n yield item\n\n\ndef apply_formatters_to_args(*formatters):\n return compose(*(\n apply_formatter_at_index(formatter, index)\n for index, formatter\n in enumerate(formatters)\n ))\n\n\n@curry\ndef apply_formatter_if(condition, formatter, value):\n if condition(value):\n return formatter(value)\n else:\n return value\n\n\n@curry\n@to_dict\ndef apply_formatters_to_dict(formatters, value):\n for key, item in value.items():\n if key in formatters:\n yield key, formatters[key](item)\n else:\n yield key, item\n\n\n@curry\n@to_list\ndef apply_formatter_to_array(formatter, value):\n for item in value:\n yield formatter(item)\n\n\n@curry\ndef apply_one_of_formatters(formatter_condition_pairs, value):\n for formatter, condition in formatter_condition_pairs:\n if condition(value):\n return formatter(value)\n else:\n raise ValueError(\"The provided value did not satisfy any of the formatter conditions\")\n\n\ndef map_collection(func, collection):\n '''\n Apply func to each element of a collection, or value of a dictionary.\n If the value is not a collection, return it unmodified\n '''\n datatype = type(collection)\n if isinstance(collection, Mapping):\n return datatype((key, func(val)) for key, val in collection.items())\n if is_string(collection):\n return collection\n elif isinstance(collection, Iterable):\n return datatype(map(func, collection))\n else:\n return collection\n\n\n@reject_recursive_repeats\ndef recursive_map(func, data):\n '''\n Apply func to data, and any collection items inside data (using map_collection).\n Define func so that it only applies to the type of value that you want it to apply to.\n '''\n def recurse(item):\n return recursive_map(func, item)\n items_mapped = map_collection(recurse, data)\n return 
func(items_mapped)\n\n\ndef static_return(value):\n def inner(*args, **kwargs):\n return value\n return inner\n\n\ndef static_result(value):\n def inner(*args, **kwargs):\n return {'result': value}\n return inner\n\n\n@curry\n@to_dict\ndef apply_key_map(key_mappings, value):\n for key, item in value.items():\n if key in key_mappings:\n yield key_mappings[key], item\n else:\n yield key, item\n", "path": "web3/utils/formatters.py"}], "after_files": [{"content": "from collections import (\n Iterable,\n Mapping,\n)\nimport sys\n\nfrom cytoolz.functoolz import (\n curry,\n compose,\n)\n\nfrom eth_utils import (\n is_string,\n to_list,\n to_dict,\n)\n\nfrom web3.utils.decorators import (\n reject_recursive_repeats,\n)\n\n\ndef hex_to_integer(value):\n return int(value, 16)\n\n\nif sys.version_info.major == 2:\n def integer_to_hex(value):\n return hex(value).rstrip('L')\nelse:\n integer_to_hex = hex\n\n\n@curry\n@to_list\ndef apply_formatter_at_index(formatter, at_index, value):\n if at_index + 1 > len(value):\n raise IndexError(\n \"Not enough values in iterable to apply formatter. Got: {0}. \"\n \"Need: {1}\".format(len(value), at_index)\n )\n for index, item in enumerate(value):\n if index == at_index:\n yield formatter(item)\n else:\n yield item\n\n\ndef apply_formatters_to_args(*formatters):\n return compose(*(\n apply_formatter_at_index(formatter, index)\n for index, formatter\n in enumerate(formatters)\n ))\n\n\n@curry\ndef apply_formatter_if(condition, formatter, value):\n if condition(value):\n return formatter(value)\n else:\n return value\n\n\n@curry\n@to_dict\ndef apply_formatters_to_dict(formatters, value):\n for key, item in value.items():\n if key in formatters:\n try:\n yield key, formatters[key](item)\n except (TypeError, ValueError) as exc:\n raise type(exc)(\"Could not format value %r as field %r\" % (item, key)) from exc\n else:\n yield key, item\n\n\n@curry\n@to_list\ndef apply_formatter_to_array(formatter, value):\n for item in value:\n yield formatter(item)\n\n\n@curry\ndef apply_one_of_formatters(formatter_condition_pairs, value):\n for formatter, condition in formatter_condition_pairs:\n if condition(value):\n return formatter(value)\n else:\n raise ValueError(\"The provided value did not satisfy any of the formatter conditions\")\n\n\ndef map_collection(func, collection):\n '''\n Apply func to each element of a collection, or value of a dictionary.\n If the value is not a collection, return it unmodified\n '''\n datatype = type(collection)\n if isinstance(collection, Mapping):\n return datatype((key, func(val)) for key, val in collection.items())\n if is_string(collection):\n return collection\n elif isinstance(collection, Iterable):\n return datatype(map(func, collection))\n else:\n return collection\n\n\n@reject_recursive_repeats\ndef recursive_map(func, data):\n '''\n Apply func to data, and any collection items inside data (using map_collection).\n Define func so that it only applies to the type of value that you want it to apply to.\n '''\n def recurse(item):\n return recursive_map(func, item)\n items_mapped = map_collection(recurse, data)\n return func(items_mapped)\n\n\ndef static_return(value):\n def inner(*args, **kwargs):\n return value\n return inner\n\n\ndef static_result(value):\n def inner(*args, **kwargs):\n return {'result': value}\n return inner\n\n\n@curry\n@to_dict\ndef apply_key_map(key_mappings, value):\n for key, item in value.items():\n if key in key_mappings:\n yield key_mappings[key], item\n else:\n yield key, item\n", "path": 
"web3/utils/formatters.py"}]} | 1,411 | 148 |
gh_patches_debug_37604 | rasdani/github-patches | git_diff | OpenEnergyPlatform__oeplatform-974 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Logos in base/static/logos should be more organized
Currently, all logos (partner+project logos) are stored together in the `static/logos` directory. Introduce two sub-directories called `partner` and `project`.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `base/views.py`
Content:
```
1 import os
2 import re
3
4 import markdown2
5 from django.core.mail import send_mail
6 from django.shortcuts import render
7 from django.views.generic import View
8
9 try:
10 import oeplatform.securitysettings as sec
11 except:
12 import logging
13 logging.error("No securitysettings found. Triggerd in base/views.py")
14
15 from base.forms import ContactForm
16
17 # Create your views here.
18
19 SITE_ROOT = os.path.dirname(os.path.realpath(__file__))
20
21
22 class Welcome(View):
23 def get(self, request):
24 os.path.dirname(os.path.realpath(__file__))
25 version_expr = r"^(?P<major>\d+)\.(?P<minor>\d+)+\.(?P<patch>\d+)$"
26 markdowner = markdown2.Markdown()
27 with open(os.path.join(SITE_ROOT, "..", "VERSION")) as version_file:
28 match = re.match(version_expr, version_file.read())
29 major, minor, patch = match.groups()
30 with open(
31 os.path.join(
32 SITE_ROOT,
33 "..",
34 "versions/changelogs/%s_%s_%s.md" % (major, minor, patch),
35 )
36 ) as change_file:
37 changes = markdowner.convert(
38 "\n".join(line for line in change_file.readlines())
39 )
40 return render(
41 request,
42 "base/index.html",
43 {"version": "%s.%s.%s" % (major, minor, patch), "changes": changes},
44 )
45
46
47 def get_logs(request):
48 version_expr = r"^(?P<major>\d+)_(?P<major>\d+)+_(?P<major>\d+)\.md$"
49 for file in os.listdir("../versions/changelogs"):
50 match = re.match(version_expr, file)
51 markdowner = markdown2.Markdown()
52 if match:
53 major, minor, patch = match.groups()
54 with open("versions/changelogs" + file) as f:
55 logs[(major, minor, patch)] = markdowner.convert(
56 "\n".join(line for line in f.readlines())
57 )
58
59
60 def redir(request, target):
61 return render(request, "base/{target}.html".format(target=target), {})
62
63
64 class ContactView(View):
65 error_css_class = "error"
66 required_css_class = "required"
67
68 def post(self, request):
69 form = ContactForm(data=request.POST)
70 if form.is_valid():
71 receps = sec.CONTACT_ADDRESSES.get(
72 request.POST["contact_category"], "technical"
73 )
74 send_mail(
75 request.POST.get("contact_topic"),
76 f"{request.POST.get('contact_name')} ({request.POST.get('contact_email')}) wrote: \n"
77 + request.POST.get("content"),
78 sec.DEFAULT_FROM_EMAIL,
79 receps,
80 fail_silently=False,
81 )
82 return render(
83 request, "base/contact.html", {"form": ContactForm(), "success": True}
84 )
85 else:
86 return render(
87 request, "base/contact.html", {"form": form, "success": False}
88 )
89
90 def get(self, request):
91 return render(
92 request, "base/contact.html", {"form": ContactForm(), "success": False}
93 )
94
95
96 def robot(request):
97 return render(request, "base/robots.txt", {}, content_type="text/plain")
98
99
100 def handler500(request):
101 response = render(request, "base/500.html", {})
102 response.status_code = 500
103 return response
104
105
106 def handler404(request, exception):
107 response = render(request, "base/404.html", {})
108 response.status_code = 404
109 return response
110
```
Path: `base/urls.py`
Content:
```
1 from django.conf.urls import url, include
2 from django.urls import path
3 from base import views
4
5 urlpatterns = [
6 url(r"^robots.txt$", views.robot),
7 url(r"^$", views.Welcome.as_view(), name="index"),
8 url(r"^about/$", views.redir, {"target": "about"}, name="index"),
9 url(r"^faq/$", views.redir, {"target": "faq"}, name="index"),
10 url(r"^discussion/$", views.redir, {"target": "discussion"}, name="index"),
11 url(r"^contact/$", views.ContactView.as_view(), name="index"),
12 url(r"^legal/privacy_policy/$", views.redir, {"target": "privacy_policy"}, name="index"),
13 url(r"^legal/tou/$", views.redir, {"target": "terms_of_use"}, name="index"),
14 ] + [path('captcha/', include('captcha.urls'))]
15
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/base/urls.py b/base/urls.py
--- a/base/urls.py
+++ b/base/urls.py
@@ -5,7 +5,8 @@
urlpatterns = [
url(r"^robots.txt$", views.robot),
url(r"^$", views.Welcome.as_view(), name="index"),
- url(r"^about/$", views.redir, {"target": "about"}, name="index"),
+ url(r"^about/$", views.AboutPage.as_view(), name="index"),
+ url(r"^about/project-detail/(?P<project_id>[\w\-]+)/$", views.AboutProjectDetail.as_view(), name="project_detail"),
url(r"^faq/$", views.redir, {"target": "faq"}, name="index"),
url(r"^discussion/$", views.redir, {"target": "discussion"}, name="index"),
url(r"^contact/$", views.ContactView.as_view(), name="index"),
diff --git a/base/views.py b/base/views.py
--- a/base/views.py
+++ b/base/views.py
@@ -1,5 +1,8 @@
+import json
import os
import re
+import pathlib
+from django.apps import apps
import markdown2
from django.core.mail import send_mail
@@ -107,3 +110,55 @@
response = render(request, "base/404.html", {})
response.status_code = 404
return response
+
+
+def get_json_content(path, json_id=None):
+ """ Parse all jsons from given path and return as
+ list or return a single parsed json by id ->
+ The json must have a field called id.
+
+ Args:
+ path (string): path to directory like 'static/project_pages_content/'
+ json_id (string, optional): ID value that must match the value of json[id]. Defaults to None.
+
+ Returns:
+ list[object]: List of all deserialized json files in path
+ or
+ object: single json python object
+ """
+
+ if path is not None:
+ all_jsons=[]
+ for _json in os.listdir(path=path):
+ with open(os.path.join(path, _json), "r", encoding='utf-8') as json_content:
+ content = json.load(json_content)
+ all_jsons.append(content)
+
+ if json_id is None:
+ return all_jsons
+ else:
+ content_by_id = [i for i in all_jsons if json_id == i["id"] and "template" != i["id"]]
+ return content_by_id[0]
+ # TODO: catch the exception if path is none
+ else:
+ return {"error": "Path cant be None. Please provide the path to '/static/project_detail_pages_content/' . You can create a new Project by adding an JSON file like the '/static/project_detail_pages_content/PROJECT_TEMPLATE.json'."}
+
+class AboutPage(View):
+# docstring
+ projects_content_static = "project_detail_pages_content"
+ projects_content_path = os.path.join(sec.STATIC_ROOT, projects_content_static)
+
+ def get(self, request, projects_content_path=projects_content_path):
+ projects = get_json_content(path=projects_content_path)
+
+ return render(request, "base/about.html", {"projects": projects})
+
+class AboutProjectDetail(AboutPage):
+# docstring
+
+ def get(self, request, project_id):
+ project = get_json_content(path=self.projects_content_path, json_id=project_id)
+
+ return render(request, "base/project-detail.html", {"project": project})
+
+
\ No newline at end of file
| {"golden_diff": "diff --git a/base/urls.py b/base/urls.py\n--- a/base/urls.py\n+++ b/base/urls.py\n@@ -5,7 +5,8 @@\n urlpatterns = [\n url(r\"^robots.txt$\", views.robot),\n url(r\"^$\", views.Welcome.as_view(), name=\"index\"),\n- url(r\"^about/$\", views.redir, {\"target\": \"about\"}, name=\"index\"),\n+ url(r\"^about/$\", views.AboutPage.as_view(), name=\"index\"),\n+ url(r\"^about/project-detail/(?P<project_id>[\\w\\-]+)/$\", views.AboutProjectDetail.as_view(), name=\"project_detail\"),\n url(r\"^faq/$\", views.redir, {\"target\": \"faq\"}, name=\"index\"),\n url(r\"^discussion/$\", views.redir, {\"target\": \"discussion\"}, name=\"index\"),\n url(r\"^contact/$\", views.ContactView.as_view(), name=\"index\"),\ndiff --git a/base/views.py b/base/views.py\n--- a/base/views.py\n+++ b/base/views.py\n@@ -1,5 +1,8 @@\n+import json\n import os\n import re\n+import pathlib\n+from django.apps import apps\n \n import markdown2\n from django.core.mail import send_mail\n@@ -107,3 +110,55 @@\n response = render(request, \"base/404.html\", {})\n response.status_code = 404\n return response\n+\n+\n+def get_json_content(path, json_id=None):\n+ \"\"\" Parse all jsons from given path and return as \n+ list or return a single parsed json by id -> \n+ The json must have a field called id. \n+\n+ Args:\n+ path (string): path to directory like 'static/project_pages_content/'\n+ json_id (string, optional): ID value that must match the value of json[id]. Defaults to None.\n+\n+ Returns:\n+ list[object]: List of all deserialized json files in path \n+ or\n+ object: single json python object\n+ \"\"\"\n+ \n+ if path is not None:\n+ all_jsons=[]\n+ for _json in os.listdir(path=path):\n+ with open(os.path.join(path, _json), \"r\", encoding='utf-8') as json_content:\n+ content = json.load(json_content)\n+ all_jsons.append(content)\n+\n+ if json_id is None:\n+ return all_jsons\n+ else:\n+ content_by_id = [i for i in all_jsons if json_id == i[\"id\"] and \"template\" != i[\"id\"]]\n+ return content_by_id[0]\n+ # TODO: catch the exception if path is none \n+ else:\n+ return {\"error\": \"Path cant be None. Please provide the path to '/static/project_detail_pages_content/' . You can create a new Project by adding an JSON file like the '/static/project_detail_pages_content/PROJECT_TEMPLATE.json'.\"}\n+\n+class AboutPage(View):\n+# docstring\n+ projects_content_static = \"project_detail_pages_content\"\n+ projects_content_path = os.path.join(sec.STATIC_ROOT, projects_content_static)\n+\n+ def get(self, request, projects_content_path=projects_content_path):\n+ projects = get_json_content(path=projects_content_path)\n+\n+ return render(request, \"base/about.html\", {\"projects\": projects})\n+\n+class AboutProjectDetail(AboutPage):\n+# docstring\n+\n+ def get(self, request, project_id):\n+ project = get_json_content(path=self.projects_content_path, json_id=project_id)\n+\n+ return render(request, \"base/project-detail.html\", {\"project\": project})\n+ \n+ \n\\ No newline at end of file\n", "issue": "Logos in base/static/logos should be more organized\nCurrently, all logos (partner+project logos) are stored together in the `static/logos` directory. Introduce two sub-dirs. called `partern` and `project` .\n", "before_files": [{"content": "import os\nimport re\n\nimport markdown2\nfrom django.core.mail import send_mail\nfrom django.shortcuts import render\nfrom django.views.generic import View\n\ntry:\n import oeplatform.securitysettings as sec\nexcept:\n import logging\n logging.error(\"No securitysettings found. 
Triggerd in base/views.py\")\n\nfrom base.forms import ContactForm\n\n# Create your views here.\n\nSITE_ROOT = os.path.dirname(os.path.realpath(__file__))\n\n\nclass Welcome(View):\n def get(self, request):\n os.path.dirname(os.path.realpath(__file__))\n version_expr = r\"^(?P<major>\\d+)\\.(?P<minor>\\d+)+\\.(?P<patch>\\d+)$\"\n markdowner = markdown2.Markdown()\n with open(os.path.join(SITE_ROOT, \"..\", \"VERSION\")) as version_file:\n match = re.match(version_expr, version_file.read())\n major, minor, patch = match.groups()\n with open(\n os.path.join(\n SITE_ROOT,\n \"..\",\n \"versions/changelogs/%s_%s_%s.md\" % (major, minor, patch),\n )\n ) as change_file:\n changes = markdowner.convert(\n \"\\n\".join(line for line in change_file.readlines())\n )\n return render(\n request,\n \"base/index.html\",\n {\"version\": \"%s.%s.%s\" % (major, minor, patch), \"changes\": changes},\n )\n\n\ndef get_logs(request):\n version_expr = r\"^(?P<major>\\d+)_(?P<major>\\d+)+_(?P<major>\\d+)\\.md$\"\n for file in os.listdir(\"../versions/changelogs\"):\n match = re.match(version_expr, file)\n markdowner = markdown2.Markdown()\n if match:\n major, minor, patch = match.groups()\n with open(\"versions/changelogs\" + file) as f:\n logs[(major, minor, patch)] = markdowner.convert(\n \"\\n\".join(line for line in f.readlines())\n )\n\n\ndef redir(request, target):\n return render(request, \"base/{target}.html\".format(target=target), {})\n\n\nclass ContactView(View):\n error_css_class = \"error\"\n required_css_class = \"required\"\n\n def post(self, request):\n form = ContactForm(data=request.POST)\n if form.is_valid():\n receps = sec.CONTACT_ADDRESSES.get(\n request.POST[\"contact_category\"], \"technical\"\n )\n send_mail(\n request.POST.get(\"contact_topic\"),\n f\"{request.POST.get('contact_name')} ({request.POST.get('contact_email')}) wrote: \\n\"\n + request.POST.get(\"content\"),\n sec.DEFAULT_FROM_EMAIL,\n receps,\n fail_silently=False,\n )\n return render(\n request, \"base/contact.html\", {\"form\": ContactForm(), \"success\": True}\n )\n else:\n return render(\n request, \"base/contact.html\", {\"form\": form, \"success\": False}\n )\n\n def get(self, request):\n return render(\n request, \"base/contact.html\", {\"form\": ContactForm(), \"success\": False}\n )\n\n\ndef robot(request):\n return render(request, \"base/robots.txt\", {}, content_type=\"text/plain\")\n\n\ndef handler500(request):\n response = render(request, \"base/500.html\", {})\n response.status_code = 500\n return response\n\n\ndef handler404(request, exception):\n response = render(request, \"base/404.html\", {})\n response.status_code = 404\n return response\n", "path": "base/views.py"}, {"content": "from django.conf.urls import url, include\nfrom django.urls import path\nfrom base import views\n\nurlpatterns = [\n url(r\"^robots.txt$\", views.robot),\n url(r\"^$\", views.Welcome.as_view(), name=\"index\"),\n url(r\"^about/$\", views.redir, {\"target\": \"about\"}, name=\"index\"),\n url(r\"^faq/$\", views.redir, {\"target\": \"faq\"}, name=\"index\"),\n url(r\"^discussion/$\", views.redir, {\"target\": \"discussion\"}, name=\"index\"),\n url(r\"^contact/$\", views.ContactView.as_view(), name=\"index\"),\n url(r\"^legal/privacy_policy/$\", views.redir, {\"target\": \"privacy_policy\"}, name=\"index\"),\n url(r\"^legal/tou/$\", views.redir, {\"target\": \"terms_of_use\"}, name=\"index\"),\n] + [path('captcha/', include('captcha.urls'))]\n", "path": "base/urls.py"}], "after_files": [{"content": "import json\nimport os\nimport 
re\nimport pathlib\nfrom django.apps import apps\n\nimport markdown2\nfrom django.core.mail import send_mail\nfrom django.shortcuts import render\nfrom django.views.generic import View\n\ntry:\n import oeplatform.securitysettings as sec\nexcept:\n import logging\n logging.error(\"No securitysettings found. Triggerd in base/views.py\")\n\nfrom base.forms import ContactForm\n\n# Create your views here.\n\nSITE_ROOT = os.path.dirname(os.path.realpath(__file__))\n\n\nclass Welcome(View):\n def get(self, request):\n os.path.dirname(os.path.realpath(__file__))\n version_expr = r\"^(?P<major>\\d+)\\.(?P<minor>\\d+)+\\.(?P<patch>\\d+)$\"\n markdowner = markdown2.Markdown()\n with open(os.path.join(SITE_ROOT, \"..\", \"VERSION\")) as version_file:\n match = re.match(version_expr, version_file.read())\n major, minor, patch = match.groups()\n with open(\n os.path.join(\n SITE_ROOT,\n \"..\",\n \"versions/changelogs/%s_%s_%s.md\" % (major, minor, patch),\n )\n ) as change_file:\n changes = markdowner.convert(\n \"\\n\".join(line for line in change_file.readlines())\n )\n return render(\n request,\n \"base/index.html\",\n {\"version\": \"%s.%s.%s\" % (major, minor, patch), \"changes\": changes},\n )\n\n\ndef get_logs(request):\n version_expr = r\"^(?P<major>\\d+)_(?P<major>\\d+)+_(?P<major>\\d+)\\.md$\"\n for file in os.listdir(\"../versions/changelogs\"):\n match = re.match(version_expr, file)\n markdowner = markdown2.Markdown()\n if match:\n major, minor, patch = match.groups()\n with open(\"versions/changelogs\" + file) as f:\n logs[(major, minor, patch)] = markdowner.convert(\n \"\\n\".join(line for line in f.readlines())\n )\n\n\ndef redir(request, target):\n return render(request, \"base/{target}.html\".format(target=target), {})\n\n\nclass ContactView(View):\n error_css_class = \"error\"\n required_css_class = \"required\"\n\n def post(self, request):\n form = ContactForm(data=request.POST)\n if form.is_valid():\n receps = sec.CONTACT_ADDRESSES.get(\n request.POST[\"contact_category\"], \"technical\"\n )\n send_mail(\n request.POST.get(\"contact_topic\"),\n f\"{request.POST.get('contact_name')} ({request.POST.get('contact_email')}) wrote: \\n\"\n + request.POST.get(\"content\"),\n sec.DEFAULT_FROM_EMAIL,\n receps,\n fail_silently=False,\n )\n return render(\n request, \"base/contact.html\", {\"form\": ContactForm(), \"success\": True}\n )\n else:\n return render(\n request, \"base/contact.html\", {\"form\": form, \"success\": False}\n )\n\n def get(self, request):\n return render(\n request, \"base/contact.html\", {\"form\": ContactForm(), \"success\": False}\n )\n\n\ndef robot(request):\n return render(request, \"base/robots.txt\", {}, content_type=\"text/plain\")\n\n\ndef handler500(request):\n response = render(request, \"base/500.html\", {})\n response.status_code = 500\n return response\n\n\ndef handler404(request, exception):\n response = render(request, \"base/404.html\", {})\n response.status_code = 404\n return response\n\n\ndef get_json_content(path, json_id=None):\n \"\"\" Parse all jsons from given path and return as \n list or return a single parsed json by id -> \n The json must have a field called id. \n\n Args:\n path (string): path to directory like 'static/project_pages_content/'\n json_id (string, optional): ID value that must match the value of json[id]. 
Defaults to None.\n\n Returns:\n list[object]: List of all deserialized json files in path \n or\n object: single json python object\n \"\"\"\n \n if path is not None:\n all_jsons=[]\n for _json in os.listdir(path=path):\n with open(os.path.join(path, _json), \"r\", encoding='utf-8') as json_content:\n content = json.load(json_content)\n all_jsons.append(content)\n\n if json_id is None:\n return all_jsons\n else:\n content_by_id = [i for i in all_jsons if json_id == i[\"id\"] and \"template\" != i[\"id\"]]\n return content_by_id[0]\n # TODO: catch the exception if path is none \n else:\n return {\"error\": \"Path cant be None. Please provide the path to '/static/project_detail_pages_content/' . You can create a new Project by adding an JSON file like the '/static/project_detail_pages_content/PROJECT_TEMPLATE.json'.\"}\n\nclass AboutPage(View):\n# docstring\n projects_content_static = \"project_detail_pages_content\"\n projects_content_path = os.path.join(sec.STATIC_ROOT, projects_content_static)\n\n def get(self, request, projects_content_path=projects_content_path):\n projects = get_json_content(path=projects_content_path)\n\n return render(request, \"base/about.html\", {\"projects\": projects})\n\nclass AboutProjectDetail(AboutPage):\n# docstring\n\n def get(self, request, project_id):\n project = get_json_content(path=self.projects_content_path, json_id=project_id)\n\n return render(request, \"base/project-detail.html\", {\"project\": project})\n \n ", "path": "base/views.py"}, {"content": "from django.conf.urls import url, include\nfrom django.urls import path\nfrom base import views\n\nurlpatterns = [\n url(r\"^robots.txt$\", views.robot),\n url(r\"^$\", views.Welcome.as_view(), name=\"index\"),\n url(r\"^about/$\", views.AboutPage.as_view(), name=\"index\"),\n url(r\"^about/project-detail/(?P<project_id>[\\w\\-]+)/$\", views.AboutProjectDetail.as_view(), name=\"project_detail\"),\n url(r\"^faq/$\", views.redir, {\"target\": \"faq\"}, name=\"index\"),\n url(r\"^discussion/$\", views.redir, {\"target\": \"discussion\"}, name=\"index\"),\n url(r\"^contact/$\", views.ContactView.as_view(), name=\"index\"),\n url(r\"^legal/privacy_policy/$\", views.redir, {\"target\": \"privacy_policy\"}, name=\"index\"),\n url(r\"^legal/tou/$\", views.redir, {\"target\": \"terms_of_use\"}, name=\"index\"),\n] + [path('captcha/', include('captcha.urls'))]\n", "path": "base/urls.py"}]} | 1,542 | 803 |
gh_patches_debug_19853 | rasdani/github-patches | git_diff | urllib3__urllib3-1855 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Raw HTML in README.rst causing problems with uploading dists
See: https://travis-ci.org/github/urllib3/urllib3/builds/675807537
For now I've manually uploaded the dists.
We should strip this section from our `long_description` field and maybe run `twine check` within our CI to make sure we don't run into this issue on release time in the future.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2
3 from setuptools import setup
4
5 import os
6 import re
7 import codecs
8
9 base_path = os.path.dirname(__file__)
10
11 # Get the version (borrowed from SQLAlchemy)
12 with open(os.path.join(base_path, "src", "urllib3", "__init__.py")) as fp:
13 VERSION = (
14 re.compile(r""".*__version__ = ["'](.*?)['"]""", re.S).match(fp.read()).group(1)
15 )
16
17
18 with codecs.open("README.rst", encoding="utf-8") as fp:
19 readme = fp.read()
20
21 with codecs.open("CHANGES.rst", encoding="utf-8") as fp:
22 changes = fp.read()
23
24 version = VERSION
25
26 setup(
27 name="urllib3",
28 version=version,
29 description="HTTP library with thread-safe connection pooling, file post, and more.",
30 long_description=u"\n\n".join([readme, changes]),
31 classifiers=[
32 "Environment :: Web Environment",
33 "Intended Audience :: Developers",
34 "License :: OSI Approved :: MIT License",
35 "Operating System :: OS Independent",
36 "Programming Language :: Python",
37 "Programming Language :: Python :: 2",
38 "Programming Language :: Python :: 2.7",
39 "Programming Language :: Python :: 3",
40 "Programming Language :: Python :: 3.5",
41 "Programming Language :: Python :: 3.6",
42 "Programming Language :: Python :: 3.7",
43 "Programming Language :: Python :: 3.8",
44 "Programming Language :: Python :: 3.9",
45 "Programming Language :: Python :: Implementation :: CPython",
46 "Programming Language :: Python :: Implementation :: PyPy",
47 "Topic :: Internet :: WWW/HTTP",
48 "Topic :: Software Development :: Libraries",
49 ],
50 keywords="urllib httplib threadsafe filepost http https ssl pooling",
51 author="Andrey Petrov",
52 author_email="[email protected]",
53 url="https://urllib3.readthedocs.io/",
54 project_urls={
55 "Documentation": "https://urllib3.readthedocs.io/",
56 "Code": "https://github.com/urllib3/urllib3",
57 "Issue tracker": "https://github.com/urllib3/urllib3/issues",
58 },
59 license="MIT",
60 packages=[
61 "urllib3",
62 "urllib3.packages",
63 "urllib3.packages.ssl_match_hostname",
64 "urllib3.packages.backports",
65 "urllib3.contrib",
66 "urllib3.contrib._securetransport",
67 "urllib3.util",
68 ],
69 package_dir={"": "src"},
70 requires=[],
71 python_requires=">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, <4",
72 extras_require={
73 "brotli": ["brotlipy>=0.6.0"],
74 "secure": [
75 "pyOpenSSL>=0.14",
76 "cryptography>=1.3.4",
77 "idna>=2.0.0",
78 "certifi",
79 "ipaddress; python_version=='2.7'",
80 ],
81 "socks": ["PySocks>=1.5.6,<2.0,!=1.5.7"],
82 },
83 )
84
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -16,7 +16,18 @@
with codecs.open("README.rst", encoding="utf-8") as fp:
- readme = fp.read()
+ # remove reST raw directive from README
+ mode = None
+ lines = []
+ for line in fp:
+ if line.startswith(".. raw"):
+ mode = "ignore_raw"
+ elif line == "\n":
+ mode = None
+
+ if mode != "ignore_raw":
+ lines.append(line)
+ readme = "".join(lines)
with codecs.open("CHANGES.rst", encoding="utf-8") as fp:
changes = fp.read()
@@ -28,6 +39,7 @@
version=version,
description="HTTP library with thread-safe connection pooling, file post, and more.",
long_description=u"\n\n".join([readme, changes]),
+ long_description_content_type="text/x-rst",
classifiers=[
"Environment :: Web Environment",
"Intended Audience :: Developers",
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -16,7 +16,18 @@\n \n \n with codecs.open(\"README.rst\", encoding=\"utf-8\") as fp:\n- readme = fp.read()\n+ # remove reST raw directive from README\n+ mode = None\n+ lines = []\n+ for line in fp:\n+ if line.startswith(\".. raw\"):\n+ mode = \"ignore_raw\"\n+ elif line == \"\\n\":\n+ mode = None\n+\n+ if mode != \"ignore_raw\":\n+ lines.append(line)\n+ readme = \"\".join(lines)\n \n with codecs.open(\"CHANGES.rst\", encoding=\"utf-8\") as fp:\n changes = fp.read()\n@@ -28,6 +39,7 @@\n version=version,\n description=\"HTTP library with thread-safe connection pooling, file post, and more.\",\n long_description=u\"\\n\\n\".join([readme, changes]),\n+ long_description_content_type=\"text/x-rst\",\n classifiers=[\n \"Environment :: Web Environment\",\n \"Intended Audience :: Developers\",\n", "issue": "Raw HTML in README.rst causing problems with uploading dists\nSee: https://travis-ci.org/github/urllib3/urllib3/builds/675807537\r\n\r\nFor now I've manually uploaded the dists.\r\n\r\nWe should strip this section from our `long_description` field and maybe run `twine check` within our CI to make sure we don't run into this issue on release time in the future.\n", "before_files": [{"content": "#!/usr/bin/env python\n\nfrom setuptools import setup\n\nimport os\nimport re\nimport codecs\n\nbase_path = os.path.dirname(__file__)\n\n# Get the version (borrowed from SQLAlchemy)\nwith open(os.path.join(base_path, \"src\", \"urllib3\", \"__init__.py\")) as fp:\n VERSION = (\n re.compile(r\"\"\".*__version__ = [\"'](.*?)['\"]\"\"\", re.S).match(fp.read()).group(1)\n )\n\n\nwith codecs.open(\"README.rst\", encoding=\"utf-8\") as fp:\n readme = fp.read()\n\nwith codecs.open(\"CHANGES.rst\", encoding=\"utf-8\") as fp:\n changes = fp.read()\n\nversion = VERSION\n\nsetup(\n name=\"urllib3\",\n version=version,\n description=\"HTTP library with thread-safe connection pooling, file post, and more.\",\n long_description=u\"\\n\\n\".join([readme, changes]),\n classifiers=[\n \"Environment :: Web Environment\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: MIT License\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Programming Language :: Python :: Implementation :: PyPy\",\n \"Topic :: Internet :: WWW/HTTP\",\n \"Topic :: Software Development :: Libraries\",\n ],\n keywords=\"urllib httplib threadsafe filepost http https ssl pooling\",\n author=\"Andrey Petrov\",\n author_email=\"[email protected]\",\n url=\"https://urllib3.readthedocs.io/\",\n project_urls={\n \"Documentation\": \"https://urllib3.readthedocs.io/\",\n \"Code\": \"https://github.com/urllib3/urllib3\",\n \"Issue tracker\": \"https://github.com/urllib3/urllib3/issues\",\n },\n license=\"MIT\",\n packages=[\n \"urllib3\",\n \"urllib3.packages\",\n \"urllib3.packages.ssl_match_hostname\",\n \"urllib3.packages.backports\",\n \"urllib3.contrib\",\n \"urllib3.contrib._securetransport\",\n \"urllib3.util\",\n ],\n package_dir={\"\": \"src\"},\n requires=[],\n python_requires=\">=2.7, !=3.0.*, 
!=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, <4\",\n extras_require={\n \"brotli\": [\"brotlipy>=0.6.0\"],\n \"secure\": [\n \"pyOpenSSL>=0.14\",\n \"cryptography>=1.3.4\",\n \"idna>=2.0.0\",\n \"certifi\",\n \"ipaddress; python_version=='2.7'\",\n ],\n \"socks\": [\"PySocks>=1.5.6,<2.0,!=1.5.7\"],\n },\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\n\nfrom setuptools import setup\n\nimport os\nimport re\nimport codecs\n\nbase_path = os.path.dirname(__file__)\n\n# Get the version (borrowed from SQLAlchemy)\nwith open(os.path.join(base_path, \"src\", \"urllib3\", \"__init__.py\")) as fp:\n VERSION = (\n re.compile(r\"\"\".*__version__ = [\"'](.*?)['\"]\"\"\", re.S).match(fp.read()).group(1)\n )\n\n\nwith codecs.open(\"README.rst\", encoding=\"utf-8\") as fp:\n # remove reST raw directive from README\n mode = None\n lines = []\n for line in fp:\n if line.startswith(\".. raw\"):\n mode = \"ignore_raw\"\n elif line == \"\\n\":\n mode = None\n\n if mode != \"ignore_raw\":\n lines.append(line)\n readme = \"\".join(lines)\n\nwith codecs.open(\"CHANGES.rst\", encoding=\"utf-8\") as fp:\n changes = fp.read()\n\nversion = VERSION\n\nsetup(\n name=\"urllib3\",\n version=version,\n description=\"HTTP library with thread-safe connection pooling, file post, and more.\",\n long_description=u\"\\n\\n\".join([readme, changes]),\n long_description_content_type=\"text/x-rst\",\n classifiers=[\n \"Environment :: Web Environment\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: MIT License\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Programming Language :: Python :: Implementation :: PyPy\",\n \"Topic :: Internet :: WWW/HTTP\",\n \"Topic :: Software Development :: Libraries\",\n ],\n keywords=\"urllib httplib threadsafe filepost http https ssl pooling\",\n author=\"Andrey Petrov\",\n author_email=\"[email protected]\",\n url=\"https://urllib3.readthedocs.io/\",\n project_urls={\n \"Documentation\": \"https://urllib3.readthedocs.io/\",\n \"Code\": \"https://github.com/urllib3/urllib3\",\n \"Issue tracker\": \"https://github.com/urllib3/urllib3/issues\",\n },\n license=\"MIT\",\n packages=[\n \"urllib3\",\n \"urllib3.packages\",\n \"urllib3.packages.ssl_match_hostname\",\n \"urllib3.packages.backports\",\n \"urllib3.contrib\",\n \"urllib3.contrib._securetransport\",\n \"urllib3.util\",\n ],\n package_dir={\"\": \"src\"},\n requires=[],\n python_requires=\">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, <4\",\n extras_require={\n \"brotli\": [\"brotlipy>=0.6.0\"],\n \"secure\": [\n \"pyOpenSSL>=0.14\",\n \"cryptography>=1.3.4\",\n \"idna>=2.0.0\",\n \"certifi\",\n \"ipaddress; python_version=='2.7'\",\n ],\n \"socks\": [\"PySocks>=1.5.6,<2.0,!=1.5.7\"],\n },\n)\n", "path": "setup.py"}]} | 1,230 | 245 |
gh_patches_debug_2875 | rasdani/github-patches | git_diff | TheAlgorithms__Python-7556 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[PYTEST WARNING] QasmSimulator will be deprecated
### Feature description
The use of `q.Aer.get_backend("qasm_simulator")` raises the warning
```
/opt/hostedtoolcache/Python/3.10.7/x64/lib/python3.10/site-packages/qiskit_aer/backends/qasm_simulator.py:360: PendingDeprecationWarning: The `QasmSimulator` backend will be deprecated in the future. It has been superseded by the `AerSimulator` backend.
warn('The `QasmSimulator` backend will be deprecated in the'
```
This code is found in the following files:
- deutsch_jozsa @abhishekjiitr
- half_adder @abhishekjiitr
- not_gate @abhishekjiitr
- single_quibit_measure @abhishekjiitr
origin: #7211
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `quantum/superdense_coding.py`
Content:
```
1 """
2 Build the superdense coding protocol. This quantum
3 circuit can send two classical bits using one quantum
4 bit. This circuit is designed using the Qiskit
5 framework. This experiment run in IBM Q simulator
6 with 1000 shots.
7 .
8 References:
9 https://qiskit.org/textbook/ch-algorithms/superdense-coding.html
10 https://en.wikipedia.org/wiki/Superdense_coding
11 """
12
13 import math
14
15 import qiskit
16 from qiskit import Aer, ClassicalRegister, QuantumCircuit, QuantumRegister, execute
17
18
19 def superdense_coding(bit_1: int = 1, bit_2: int = 1) -> qiskit.result.counts.Counts:
20 """
21 The input refer to the classical message
22 that you wants to send. {'00','01','10','11'}
23 result for default values: {11: 1000}
24            ┌───┐          ┌───┐
25 qr_0: ─────┤ X ├──────────┤ X ├─────
26       ┌───┐└─┬─┘┌───┐┌───┐└─┬─┘┌───┐
27 qr_1: ┤ H ├──■──┤ X ├┤ Z ├──■──┤ H ├
28       └───┘     └───┘└───┘     └───┘
29 cr: 2/══════════════════════════════
30 Args:
31 bit_1: bit 1 of classical information to send.
32 bit_2: bit 2 of classical information to send.
33 Returns:
34 qiskit.result.counts.Counts: counts of send state.
35 >>> superdense_coding(0,0)
36 {'00': 1000}
37 >>> superdense_coding(0,1)
38 {'01': 1000}
39 >>> superdense_coding(-1,0)
40 Traceback (most recent call last):
41 ...
42 ValueError: inputs must be positive.
43 >>> superdense_coding(1,'j')
44 Traceback (most recent call last):
45 ...
46 TypeError: inputs must be integers.
47 >>> superdense_coding(1,0.5)
48 Traceback (most recent call last):
49 ...
50 ValueError: inputs must be exact integers.
51 >>> superdense_coding(2,1)
52 Traceback (most recent call last):
53 ...
54 ValueError: inputs must be less or equal to 1.
55 """
56 if (type(bit_1) == str) or (type(bit_2) == str):
57 raise TypeError("inputs must be integers.")
58 if (bit_1 < 0) or (bit_2 < 0):
59 raise ValueError("inputs must be positive.")
60 if (math.floor(bit_1) != bit_1) or (math.floor(bit_2) != bit_2):
61 raise ValueError("inputs must be exact integers.")
62 if (bit_1 > 1) or (bit_2 > 1):
63 raise ValueError("inputs must be less or equal to 1.")
64
65 # build registers
66 qr = QuantumRegister(2, "qr")
67 cr = ClassicalRegister(2, "cr")
68
69 quantum_circuit = QuantumCircuit(qr, cr)
70
71 # entanglement the qubits
72 quantum_circuit.h(1)
73 quantum_circuit.cx(1, 0)
74
75 # send the information
76 c_information = str(bit_1) + str(bit_2)
77
78 if c_information == "11":
79 quantum_circuit.x(1)
80 quantum_circuit.z(1)
81 elif c_information == "10":
82 quantum_circuit.z(1)
83 elif c_information == "01":
84 quantum_circuit.x(1)
85 else:
86 quantum_circuit.i(1)
87
88 # unentangled the circuit
89 quantum_circuit.cx(1, 0)
90 quantum_circuit.h(1)
91
92 # measure the circuit
93 quantum_circuit.measure(qr, cr)
94
95 backend = Aer.get_backend("qasm_simulator")
96 job = execute(quantum_circuit, backend, shots=1000)
97
98 return job.result().get_counts(quantum_circuit)
99
100
101 if __name__ == "__main__":
102 print(f"Counts for classical state send: {superdense_coding(1,1)}")
103
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/quantum/superdense_coding.py b/quantum/superdense_coding.py
--- a/quantum/superdense_coding.py
+++ b/quantum/superdense_coding.py
@@ -92,7 +92,7 @@
# measure the circuit
quantum_circuit.measure(qr, cr)
- backend = Aer.get_backend("qasm_simulator")
+ backend = Aer.get_backend("aer_simulator")
job = execute(quantum_circuit, backend, shots=1000)
return job.result().get_counts(quantum_circuit)
| {"golden_diff": "diff --git a/quantum/superdense_coding.py b/quantum/superdense_coding.py\n--- a/quantum/superdense_coding.py\n+++ b/quantum/superdense_coding.py\n@@ -92,7 +92,7 @@\n # measure the circuit\n quantum_circuit.measure(qr, cr)\n \n- backend = Aer.get_backend(\"qasm_simulator\")\n+ backend = Aer.get_backend(\"aer_simulator\")\n job = execute(quantum_circuit, backend, shots=1000)\n \n return job.result().get_counts(quantum_circuit)\n", "issue": "[PYTEST WARNING] QasmSimulator will be deprecated\n### Feature description\n\nThe use of `q.Aer.get_backend(\"qasm_simulator\")` raises the warning\r\n```\r\n/opt/hostedtoolcache/Python/3.10.7/x64/lib/python3.10/site-packages/qiskit_aer/backends/qasm_simulator.py:360: PendingDeprecationWarning: The `QasmSimulator` backend will be deprecated in the future. It has been superseded by the `AerSimulator` backend.\r\n warn('The `QasmSimulator` backend will be deprecated in the'\r\n```\r\nThis code is found in the following files:\r\n - deutsch_jozsa @abhishekjiitr \r\n - half_adder @abhishekjiitr \r\n - not_gate @abhishekjiitr \r\n - single_quibit_measure @abhishekjiitr \r\n\r\norigin: #7211\n", "before_files": [{"content": "\"\"\"\nBuild the superdense coding protocol. This quantum\ncircuit can send two classical bits using one quantum\nbit. This circuit is designed using the Qiskit\nframework. This experiment run in IBM Q simulator\nwith 1000 shots.\n.\nReferences:\nhttps://qiskit.org/textbook/ch-algorithms/superdense-coding.html\nhttps://en.wikipedia.org/wiki/Superdense_coding\n\"\"\"\n\nimport math\n\nimport qiskit\nfrom qiskit import Aer, ClassicalRegister, QuantumCircuit, QuantumRegister, execute\n\n\ndef superdense_coding(bit_1: int = 1, bit_2: int = 1) -> qiskit.result.counts.Counts:\n \"\"\"\n The input refer to the classical message\n that you wants to send. 
{'00','01','10','11'}\n result for default values: {11: 1000}\n \u250c\u2500\u2500\u2500\u2510 \u250c\u2500\u2500\u2500\u2510\n qr_0: \u2500\u2500\u2500\u2500\u2500\u2524 X \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524 X \u251c\u2500\u2500\u2500\u2500\u2500\n \u250c\u2500\u2500\u2500\u2510\u2514\u2500\u252c\u2500\u2518\u250c\u2500\u2500\u2500\u2510\u250c\u2500\u2500\u2500\u2510\u2514\u2500\u252c\u2500\u2518\u250c\u2500\u2500\u2500\u2510\n qr_1: \u2524 H \u251c\u2500\u2500\u25a0\u2500\u2500\u2524 X \u251c\u2524 Z \u251c\u2500\u2500\u25a0\u2500\u2500\u2524 H \u251c\n \u2514\u2500\u2500\u2500\u2518 \u2514\u2500\u2500\u2500\u2518\u2514\u2500\u2500\u2500\u2518 \u2514\u2500\u2500\u2500\u2518\n cr: 2/\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\n Args:\n bit_1: bit 1 of classical information to send.\n bit_2: bit 2 of classical information to send.\n Returns:\n qiskit.result.counts.Counts: counts of send state.\n >>> superdense_coding(0,0)\n {'00': 1000}\n >>> superdense_coding(0,1)\n {'01': 1000}\n >>> superdense_coding(-1,0)\n Traceback (most recent call last):\n ...\n ValueError: inputs must be positive.\n >>> superdense_coding(1,'j')\n Traceback (most recent call last):\n ...\n TypeError: inputs must be integers.\n >>> superdense_coding(1,0.5)\n Traceback (most recent call last):\n ...\n ValueError: inputs must be exact integers.\n >>> superdense_coding(2,1)\n Traceback (most recent call last):\n ...\n ValueError: inputs must be less or equal to 1.\n \"\"\"\n if (type(bit_1) == str) or (type(bit_2) == str):\n raise TypeError(\"inputs must be integers.\")\n if (bit_1 < 0) or (bit_2 < 0):\n raise ValueError(\"inputs must be positive.\")\n if (math.floor(bit_1) != bit_1) or (math.floor(bit_2) != bit_2):\n raise ValueError(\"inputs must be exact integers.\")\n if (bit_1 > 1) or (bit_2 > 1):\n raise ValueError(\"inputs must be less or equal to 1.\")\n\n # build registers\n qr = QuantumRegister(2, \"qr\")\n cr = ClassicalRegister(2, \"cr\")\n\n quantum_circuit = QuantumCircuit(qr, cr)\n\n # entanglement the qubits\n quantum_circuit.h(1)\n quantum_circuit.cx(1, 0)\n\n # send the information\n c_information = str(bit_1) + str(bit_2)\n\n if c_information == \"11\":\n quantum_circuit.x(1)\n quantum_circuit.z(1)\n elif c_information == \"10\":\n quantum_circuit.z(1)\n elif c_information == \"01\":\n quantum_circuit.x(1)\n else:\n quantum_circuit.i(1)\n\n # unentangled the circuit\n quantum_circuit.cx(1, 0)\n quantum_circuit.h(1)\n\n # measure the circuit\n quantum_circuit.measure(qr, cr)\n\n backend = Aer.get_backend(\"qasm_simulator\")\n job = execute(quantum_circuit, backend, shots=1000)\n\n return job.result().get_counts(quantum_circuit)\n\n\nif __name__ == \"__main__\":\n print(f\"Counts for classical state send: {superdense_coding(1,1)}\")\n", "path": "quantum/superdense_coding.py"}], "after_files": [{"content": "\"\"\"\nBuild the superdense coding protocol. This quantum\ncircuit can send two classical bits using one quantum\nbit. This circuit is designed using the Qiskit\nframework. 
This experiment run in IBM Q simulator\nwith 1000 shots.\n.\nReferences:\nhttps://qiskit.org/textbook/ch-algorithms/superdense-coding.html\nhttps://en.wikipedia.org/wiki/Superdense_coding\n\"\"\"\n\nimport math\n\nimport qiskit\nfrom qiskit import Aer, ClassicalRegister, QuantumCircuit, QuantumRegister, execute\n\n\ndef superdense_coding(bit_1: int = 1, bit_2: int = 1) -> qiskit.result.counts.Counts:\n \"\"\"\n The input refer to the classical message\n that you wants to send. {'00','01','10','11'}\n result for default values: {11: 1000}\n \u250c\u2500\u2500\u2500\u2510 \u250c\u2500\u2500\u2500\u2510\n qr_0: \u2500\u2500\u2500\u2500\u2500\u2524 X \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524 X \u251c\u2500\u2500\u2500\u2500\u2500\n \u250c\u2500\u2500\u2500\u2510\u2514\u2500\u252c\u2500\u2518\u250c\u2500\u2500\u2500\u2510\u250c\u2500\u2500\u2500\u2510\u2514\u2500\u252c\u2500\u2518\u250c\u2500\u2500\u2500\u2510\n qr_1: \u2524 H \u251c\u2500\u2500\u25a0\u2500\u2500\u2524 X \u251c\u2524 Z \u251c\u2500\u2500\u25a0\u2500\u2500\u2524 H \u251c\n \u2514\u2500\u2500\u2500\u2518 \u2514\u2500\u2500\u2500\u2518\u2514\u2500\u2500\u2500\u2518 \u2514\u2500\u2500\u2500\u2518\n cr: 2/\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\n Args:\n bit_1: bit 1 of classical information to send.\n bit_2: bit 2 of classical information to send.\n Returns:\n qiskit.result.counts.Counts: counts of send state.\n >>> superdense_coding(0,0)\n {'00': 1000}\n >>> superdense_coding(0,1)\n {'01': 1000}\n >>> superdense_coding(-1,0)\n Traceback (most recent call last):\n ...\n ValueError: inputs must be positive.\n >>> superdense_coding(1,'j')\n Traceback (most recent call last):\n ...\n TypeError: inputs must be integers.\n >>> superdense_coding(1,0.5)\n Traceback (most recent call last):\n ...\n ValueError: inputs must be exact integers.\n >>> superdense_coding(2,1)\n Traceback (most recent call last):\n ...\n ValueError: inputs must be less or equal to 1.\n \"\"\"\n if (type(bit_1) == str) or (type(bit_2) == str):\n raise TypeError(\"inputs must be integers.\")\n if (bit_1 < 0) or (bit_2 < 0):\n raise ValueError(\"inputs must be positive.\")\n if (math.floor(bit_1) != bit_1) or (math.floor(bit_2) != bit_2):\n raise ValueError(\"inputs must be exact integers.\")\n if (bit_1 > 1) or (bit_2 > 1):\n raise ValueError(\"inputs must be less or equal to 1.\")\n\n # build registers\n qr = QuantumRegister(2, \"qr\")\n cr = ClassicalRegister(2, \"cr\")\n\n quantum_circuit = QuantumCircuit(qr, cr)\n\n # entanglement the qubits\n quantum_circuit.h(1)\n quantum_circuit.cx(1, 0)\n\n # send the information\n c_information = str(bit_1) + str(bit_2)\n\n if c_information == \"11\":\n quantum_circuit.x(1)\n quantum_circuit.z(1)\n elif c_information == \"10\":\n quantum_circuit.z(1)\n elif c_information == \"01\":\n quantum_circuit.x(1)\n else:\n quantum_circuit.i(1)\n\n # unentangled the circuit\n quantum_circuit.cx(1, 0)\n quantum_circuit.h(1)\n\n # measure the circuit\n quantum_circuit.measure(qr, cr)\n\n backend = Aer.get_backend(\"aer_simulator\")\n job = execute(quantum_circuit, backend, shots=1000)\n\n return job.result().get_counts(quantum_circuit)\n\n\nif __name__ == \"__main__\":\n print(f\"Counts for classical state send: {superdense_coding(1,1)}\")\n", "path": "quantum/superdense_coding.py"}]} | 1,628 | 136 |