| problem_id | source | task_type | in_source_id | prompt | golden_diff | verification_info | num_tokens | num_tokens_diff |
|---|---|---|---|---|---|---|---|---|
| string, 18-22 chars | string, 1 distinct value | string, 1 distinct value | string, 13-58 chars | string, 1.1k-10.2k chars | string, 151-4.94k chars | string, 582-21k chars | int64, 271-2.05k | int64, 47-1.02k |
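The column statistics above match the Hugging Face dataset viewer's format, so the rows below can presumably be loaded programmatically. A minimal sketch follows; it assumes the dataset is hosted on the Hugging Face Hub and that the repo id equals the `source` column value `rasdani/github-patches`, both of which are inferences from the dump rather than confirmed facts:

```python
# Minimal loading sketch. Assumptions (not confirmed by the dump):
# the dataset lives on the Hugging Face Hub, the repo id equals the
# `source` column value, and the split is named "train".
from datasets import load_dataset

ds = load_dataset("rasdani/github-patches", split="train")

row = ds[0]
print(row["problem_id"], row["in_source_id"])  # e.g. gh_patches_debug_34582, netbox-community__netbox-14903
print(row["golden_diff"][:200])                # first 200 chars of the reference patch
```

Per the sample rows, each record's `prompt` embeds the issue text plus candidate source files, `golden_diff` holds the reference patch in unified diff format, and `verification_info` carries the before/after file contents used to check a generated patch.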
gh_patches_debug_34582 | rasdani/github-patches | git_diff | netbox-community__netbox-14903 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
User Dashboard can become permanently broken when misconfiguring DEFAULT_DASHBOARD
### Deployment Type
Self-hosted
### NetBox Version
v3.7.0
### Python Version
3.9
### Steps to Reproduce
1. User johndoe modifies their dashboard over the GUI
2. Administrator changes DEFAULT_DASHBOARD in configuration.py with a broken configuration (Assigned a wrong value in a bookmarks widget)
3. Bookmark widget looks fine without logging in since it won't load any data
4. After johndoe logs in, they reset their dashboard
5. Site crashes with an Field Error Exception since the dashboard is misconfigured
6. johndoe can't reset their dashboard anymore, even after the error in DEFAULT_DASHBOARD is corrected
### Expected Behavior
When resetting the dashboard, I would expect it to now always follow the changes of DEFAULT_DASHBOARD.
### Observed Behavior
Resetting the dashboard seems to copy the current state of DEFAULT_DASHBOARD. If that state is broken, a user cannot reset it anymore.
Only the main page with the dashboard crashes.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `netbox/extras/dashboard/utils.py`
Content:
```
1 import uuid
2
3 from django.conf import settings
4 from django.core.exceptions import ObjectDoesNotExist
5
6 from netbox.registry import registry
7 from extras.constants import DEFAULT_DASHBOARD
8
9 __all__ = (
10 'get_dashboard',
11 'get_default_dashboard',
12 'get_widget_class',
13 'register_widget',
14 )
15
16
17 def register_widget(cls):
18 """
19 Decorator for registering a DashboardWidget class.
20 """
21 app_label = cls.__module__.split('.', maxsplit=1)[0]
22 label = f'{app_label}.{cls.__name__}'
23 registry['widgets'][label] = cls
24
25 return cls
26
27
28 def get_widget_class(name):
29 """
30 Return a registered DashboardWidget class identified by its name.
31 """
32 try:
33 return registry['widgets'][name]
34 except KeyError:
35 raise ValueError(f"Unregistered widget class: {name}")
36
37
38 def get_dashboard(user):
39 """
40 Return the Dashboard for a given User if one exists, or generate a default dashboard.
41 """
42 if user.is_anonymous:
43 dashboard = get_default_dashboard()
44 else:
45 try:
46 dashboard = user.dashboard
47 except ObjectDoesNotExist:
48 # Create a dashboard for this user
49 dashboard = get_default_dashboard()
50 dashboard.user = user
51 dashboard.save()
52
53 return dashboard
54
55
56 def get_default_dashboard():
57 from extras.models import Dashboard
58
59 dashboard = Dashboard()
60 default_config = settings.DEFAULT_DASHBOARD or DEFAULT_DASHBOARD
61
62 for widget in default_config:
63 id = str(uuid.uuid4())
64 dashboard.layout.append({
65 'id': id,
66 'w': widget['width'],
67 'h': widget['height'],
68 'x': widget.get('x'),
69 'y': widget.get('y'),
70 })
71 dashboard.config[id] = {
72 'class': widget['widget'],
73 'title': widget.get('title'),
74 'color': widget.get('color'),
75 'config': widget.get('config', {}),
76 }
77
78 return dashboard
79
```
Path: `netbox/netbox/views/misc.py`
Content:
```
1 import re
2 from collections import namedtuple
3
4 from django.conf import settings
5 from django.contrib.contenttypes.models import ContentType
6 from django.core.cache import cache
7 from django.shortcuts import redirect, render
8 from django.views.generic import View
9 from django_tables2 import RequestConfig
10 from packaging import version
11
12 from extras.dashboard.utils import get_dashboard
13 from netbox.forms import SearchForm
14 from netbox.search import LookupTypes
15 from netbox.search.backends import search_backend
16 from netbox.tables import SearchTable
17 from utilities.htmx import is_htmx
18 from utilities.paginator import EnhancedPaginator, get_paginate_count
19
20 __all__ = (
21 'HomeView',
22 'SearchView',
23 )
24
25 Link = namedtuple('Link', ('label', 'viewname', 'permission', 'count'))
26
27
28 class HomeView(View):
29 template_name = 'home.html'
30
31 def get(self, request):
32 if settings.LOGIN_REQUIRED and not request.user.is_authenticated:
33 return redirect('login')
34
35 # Construct the user's custom dashboard layout
36 dashboard = get_dashboard(request.user).get_layout()
37
38 # Check whether a new release is available. (Only for staff/superusers.)
39 new_release = None
40 if request.user.is_staff or request.user.is_superuser:
41 latest_release = cache.get('latest_release')
42 if latest_release:
43 release_version, release_url = latest_release
44 if release_version > version.parse(settings.VERSION):
45 new_release = {
46 'version': str(release_version),
47 'url': release_url,
48 }
49
50 return render(request, self.template_name, {
51 'dashboard': dashboard,
52 'new_release': new_release,
53 })
54
55
56 class SearchView(View):
57
58 def get(self, request):
59 results = []
60 highlight = None
61
62 # Initialize search form
63 form = SearchForm(request.GET) if 'q' in request.GET else SearchForm()
64
65 if form.is_valid():
66
67 # Restrict results by object type
68 object_types = []
69 for obj_type in form.cleaned_data['obj_types']:
70 app_label, model_name = obj_type.split('.')
71 object_types.append(ContentType.objects.get_by_natural_key(app_label, model_name))
72
73 lookup = form.cleaned_data['lookup'] or LookupTypes.PARTIAL
74 results = search_backend.search(
75 form.cleaned_data['q'],
76 user=request.user,
77 object_types=object_types,
78 lookup=lookup
79 )
80
81 # If performing a regex search, pass the highlight value as a compiled pattern
82 if form.cleaned_data['lookup'] == LookupTypes.REGEX:
83 try:
84 highlight = re.compile(f"({form.cleaned_data['q']})", flags=re.IGNORECASE)
85 except re.error:
86 pass
87 elif form.cleaned_data['lookup'] != LookupTypes.EXACT:
88 highlight = form.cleaned_data['q']
89
90 table = SearchTable(results, highlight=highlight)
91
92 # Paginate the table results
93 RequestConfig(request, {
94 'paginator_class': EnhancedPaginator,
95 'per_page': get_paginate_count(request)
96 }).configure(table)
97
98 # If this is an HTMX request, return only the rendered table HTML
99 if is_htmx(request):
100 return render(request, 'htmx/table.html', {
101 'table': table,
102 })
103
104 return render(request, 'search.html', {
105 'form': form,
106 'table': table,
107 })
108
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/netbox/extras/dashboard/utils.py b/netbox/extras/dashboard/utils.py
--- a/netbox/extras/dashboard/utils.py
+++ b/netbox/extras/dashboard/utils.py
@@ -53,13 +53,13 @@
return dashboard
-def get_default_dashboard():
+def get_default_dashboard(config=None):
from extras.models import Dashboard
dashboard = Dashboard()
- default_config = settings.DEFAULT_DASHBOARD or DEFAULT_DASHBOARD
+ config = config or settings.DEFAULT_DASHBOARD or DEFAULT_DASHBOARD
- for widget in default_config:
+ for widget in config:
id = str(uuid.uuid4())
dashboard.layout.append({
'id': id,
diff --git a/netbox/netbox/views/misc.py b/netbox/netbox/views/misc.py
--- a/netbox/netbox/views/misc.py
+++ b/netbox/netbox/views/misc.py
@@ -2,14 +2,17 @@
from collections import namedtuple
from django.conf import settings
+from django.contrib import messages
from django.contrib.contenttypes.models import ContentType
from django.core.cache import cache
from django.shortcuts import redirect, render
+from django.utils.translation import gettext_lazy as _
from django.views.generic import View
from django_tables2 import RequestConfig
from packaging import version
-from extras.dashboard.utils import get_dashboard
+from extras.constants import DEFAULT_DASHBOARD
+from extras.dashboard.utils import get_dashboard, get_default_dashboard
from netbox.forms import SearchForm
from netbox.search import LookupTypes
from netbox.search.backends import search_backend
@@ -33,7 +36,13 @@
return redirect('login')
# Construct the user's custom dashboard layout
- dashboard = get_dashboard(request.user).get_layout()
+ try:
+ dashboard = get_dashboard(request.user).get_layout()
+ except Exception:
+ messages.error(request, _(
+ "There was an error loading the dashboard configuration. A default dashboard is in use."
+ ))
+ dashboard = get_default_dashboard(config=DEFAULT_DASHBOARD).get_layout()
# Check whether a new release is available. (Only for staff/superusers.)
new_release = None
| {"golden_diff": "diff --git a/netbox/extras/dashboard/utils.py b/netbox/extras/dashboard/utils.py\n--- a/netbox/extras/dashboard/utils.py\n+++ b/netbox/extras/dashboard/utils.py\n@@ -53,13 +53,13 @@\n return dashboard\n \n \n-def get_default_dashboard():\n+def get_default_dashboard(config=None):\n from extras.models import Dashboard\n \n dashboard = Dashboard()\n- default_config = settings.DEFAULT_DASHBOARD or DEFAULT_DASHBOARD\n+ config = config or settings.DEFAULT_DASHBOARD or DEFAULT_DASHBOARD\n \n- for widget in default_config:\n+ for widget in config:\n id = str(uuid.uuid4())\n dashboard.layout.append({\n 'id': id,\ndiff --git a/netbox/netbox/views/misc.py b/netbox/netbox/views/misc.py\n--- a/netbox/netbox/views/misc.py\n+++ b/netbox/netbox/views/misc.py\n@@ -2,14 +2,17 @@\n from collections import namedtuple\n \n from django.conf import settings\n+from django.contrib import messages\n from django.contrib.contenttypes.models import ContentType\n from django.core.cache import cache\n from django.shortcuts import redirect, render\n+from django.utils.translation import gettext_lazy as _\n from django.views.generic import View\n from django_tables2 import RequestConfig\n from packaging import version\n \n-from extras.dashboard.utils import get_dashboard\n+from extras.constants import DEFAULT_DASHBOARD\n+from extras.dashboard.utils import get_dashboard, get_default_dashboard\n from netbox.forms import SearchForm\n from netbox.search import LookupTypes\n from netbox.search.backends import search_backend\n@@ -33,7 +36,13 @@\n return redirect('login')\n \n # Construct the user's custom dashboard layout\n- dashboard = get_dashboard(request.user).get_layout()\n+ try:\n+ dashboard = get_dashboard(request.user).get_layout()\n+ except Exception:\n+ messages.error(request, _(\n+ \"There was an error loading the dashboard configuration. A default dashboard is in use.\"\n+ ))\n+ dashboard = get_default_dashboard(config=DEFAULT_DASHBOARD).get_layout()\n \n # Check whether a new release is available. (Only for staff/superusers.)\n new_release = None\n", "issue": "User Dashboard can become permanently broken when misconfiguring DEFAULT_DASHBOARD\n### Deployment Type\n\nSelf-hosted\n\n### NetBox Version\n\nv3.7.0\n\n### Python Version\n\n3.9\n\n### Steps to Reproduce\n\n1. User johndoe modifies their dashboard over the GUI\r\n2. Administrator changes DEFAULT_DASHBOARD in configuration.py with a broken configuration (Assigned a wrong value in a bookmarks widget)\r\n3. Bookmark widget looks fine without logging in since it won't load any data\r\n4. After johndoe logs in, they reset their dashboard\r\n5. Site crashes with an Field Error Exception since the dashboard is misconfigured\r\n6. johndoe can't reset their dashboard anymore, even after the error in DEFAULT_DASHBOARD is corrected\n\n### Expected Behavior\n\nWhen resetting the dashboard, I would expect it to now always follow the changes of DEFAULT_DASHBOARD.\n\n### Observed Behavior\n\nResetting the dashboard seems to copy the current state of DEFAULT_DASHBOARD. 
If that state is broken, a user cannot reset it anymore.\r\nOnly the main page with the dashboard crashes.\n", "before_files": [{"content": "import uuid\n\nfrom django.conf import settings\nfrom django.core.exceptions import ObjectDoesNotExist\n\nfrom netbox.registry import registry\nfrom extras.constants import DEFAULT_DASHBOARD\n\n__all__ = (\n 'get_dashboard',\n 'get_default_dashboard',\n 'get_widget_class',\n 'register_widget',\n)\n\n\ndef register_widget(cls):\n \"\"\"\n Decorator for registering a DashboardWidget class.\n \"\"\"\n app_label = cls.__module__.split('.', maxsplit=1)[0]\n label = f'{app_label}.{cls.__name__}'\n registry['widgets'][label] = cls\n\n return cls\n\n\ndef get_widget_class(name):\n \"\"\"\n Return a registered DashboardWidget class identified by its name.\n \"\"\"\n try:\n return registry['widgets'][name]\n except KeyError:\n raise ValueError(f\"Unregistered widget class: {name}\")\n\n\ndef get_dashboard(user):\n \"\"\"\n Return the Dashboard for a given User if one exists, or generate a default dashboard.\n \"\"\"\n if user.is_anonymous:\n dashboard = get_default_dashboard()\n else:\n try:\n dashboard = user.dashboard\n except ObjectDoesNotExist:\n # Create a dashboard for this user\n dashboard = get_default_dashboard()\n dashboard.user = user\n dashboard.save()\n\n return dashboard\n\n\ndef get_default_dashboard():\n from extras.models import Dashboard\n\n dashboard = Dashboard()\n default_config = settings.DEFAULT_DASHBOARD or DEFAULT_DASHBOARD\n\n for widget in default_config:\n id = str(uuid.uuid4())\n dashboard.layout.append({\n 'id': id,\n 'w': widget['width'],\n 'h': widget['height'],\n 'x': widget.get('x'),\n 'y': widget.get('y'),\n })\n dashboard.config[id] = {\n 'class': widget['widget'],\n 'title': widget.get('title'),\n 'color': widget.get('color'),\n 'config': widget.get('config', {}),\n }\n\n return dashboard\n", "path": "netbox/extras/dashboard/utils.py"}, {"content": "import re\nfrom collections import namedtuple\n\nfrom django.conf import settings\nfrom django.contrib.contenttypes.models import ContentType\nfrom django.core.cache import cache\nfrom django.shortcuts import redirect, render\nfrom django.views.generic import View\nfrom django_tables2 import RequestConfig\nfrom packaging import version\n\nfrom extras.dashboard.utils import get_dashboard\nfrom netbox.forms import SearchForm\nfrom netbox.search import LookupTypes\nfrom netbox.search.backends import search_backend\nfrom netbox.tables import SearchTable\nfrom utilities.htmx import is_htmx\nfrom utilities.paginator import EnhancedPaginator, get_paginate_count\n\n__all__ = (\n 'HomeView',\n 'SearchView',\n)\n\nLink = namedtuple('Link', ('label', 'viewname', 'permission', 'count'))\n\n\nclass HomeView(View):\n template_name = 'home.html'\n\n def get(self, request):\n if settings.LOGIN_REQUIRED and not request.user.is_authenticated:\n return redirect('login')\n\n # Construct the user's custom dashboard layout\n dashboard = get_dashboard(request.user).get_layout()\n\n # Check whether a new release is available. 
(Only for staff/superusers.)\n new_release = None\n if request.user.is_staff or request.user.is_superuser:\n latest_release = cache.get('latest_release')\n if latest_release:\n release_version, release_url = latest_release\n if release_version > version.parse(settings.VERSION):\n new_release = {\n 'version': str(release_version),\n 'url': release_url,\n }\n\n return render(request, self.template_name, {\n 'dashboard': dashboard,\n 'new_release': new_release,\n })\n\n\nclass SearchView(View):\n\n def get(self, request):\n results = []\n highlight = None\n\n # Initialize search form\n form = SearchForm(request.GET) if 'q' in request.GET else SearchForm()\n\n if form.is_valid():\n\n # Restrict results by object type\n object_types = []\n for obj_type in form.cleaned_data['obj_types']:\n app_label, model_name = obj_type.split('.')\n object_types.append(ContentType.objects.get_by_natural_key(app_label, model_name))\n\n lookup = form.cleaned_data['lookup'] or LookupTypes.PARTIAL\n results = search_backend.search(\n form.cleaned_data['q'],\n user=request.user,\n object_types=object_types,\n lookup=lookup\n )\n\n # If performing a regex search, pass the highlight value as a compiled pattern\n if form.cleaned_data['lookup'] == LookupTypes.REGEX:\n try:\n highlight = re.compile(f\"({form.cleaned_data['q']})\", flags=re.IGNORECASE)\n except re.error:\n pass\n elif form.cleaned_data['lookup'] != LookupTypes.EXACT:\n highlight = form.cleaned_data['q']\n\n table = SearchTable(results, highlight=highlight)\n\n # Paginate the table results\n RequestConfig(request, {\n 'paginator_class': EnhancedPaginator,\n 'per_page': get_paginate_count(request)\n }).configure(table)\n\n # If this is an HTMX request, return only the rendered table HTML\n if is_htmx(request):\n return render(request, 'htmx/table.html', {\n 'table': table,\n })\n\n return render(request, 'search.html', {\n 'form': form,\n 'table': table,\n })\n", "path": "netbox/netbox/views/misc.py"}], "after_files": [{"content": "import uuid\n\nfrom django.conf import settings\nfrom django.core.exceptions import ObjectDoesNotExist\n\nfrom netbox.registry import registry\nfrom extras.constants import DEFAULT_DASHBOARD\n\n__all__ = (\n 'get_dashboard',\n 'get_default_dashboard',\n 'get_widget_class',\n 'register_widget',\n)\n\n\ndef register_widget(cls):\n \"\"\"\n Decorator for registering a DashboardWidget class.\n \"\"\"\n app_label = cls.__module__.split('.', maxsplit=1)[0]\n label = f'{app_label}.{cls.__name__}'\n registry['widgets'][label] = cls\n\n return cls\n\n\ndef get_widget_class(name):\n \"\"\"\n Return a registered DashboardWidget class identified by its name.\n \"\"\"\n try:\n return registry['widgets'][name]\n except KeyError:\n raise ValueError(f\"Unregistered widget class: {name}\")\n\n\ndef get_dashboard(user):\n \"\"\"\n Return the Dashboard for a given User if one exists, or generate a default dashboard.\n \"\"\"\n if user.is_anonymous:\n dashboard = get_default_dashboard()\n else:\n try:\n dashboard = user.dashboard\n except ObjectDoesNotExist:\n # Create a dashboard for this user\n dashboard = get_default_dashboard()\n dashboard.user = user\n dashboard.save()\n\n return dashboard\n\n\ndef get_default_dashboard(config=None):\n from extras.models import Dashboard\n\n dashboard = Dashboard()\n config = config or settings.DEFAULT_DASHBOARD or DEFAULT_DASHBOARD\n\n for widget in config:\n id = str(uuid.uuid4())\n dashboard.layout.append({\n 'id': id,\n 'w': widget['width'],\n 'h': widget['height'],\n 'x': widget.get('x'),\n 'y': 
widget.get('y'),\n })\n dashboard.config[id] = {\n 'class': widget['widget'],\n 'title': widget.get('title'),\n 'color': widget.get('color'),\n 'config': widget.get('config', {}),\n }\n\n return dashboard\n", "path": "netbox/extras/dashboard/utils.py"}, {"content": "import re\nfrom collections import namedtuple\n\nfrom django.conf import settings\nfrom django.contrib import messages\nfrom django.contrib.contenttypes.models import ContentType\nfrom django.core.cache import cache\nfrom django.shortcuts import redirect, render\nfrom django.utils.translation import gettext_lazy as _\nfrom django.views.generic import View\nfrom django_tables2 import RequestConfig\nfrom packaging import version\n\nfrom extras.constants import DEFAULT_DASHBOARD\nfrom extras.dashboard.utils import get_dashboard, get_default_dashboard\nfrom netbox.forms import SearchForm\nfrom netbox.search import LookupTypes\nfrom netbox.search.backends import search_backend\nfrom netbox.tables import SearchTable\nfrom utilities.htmx import is_htmx\nfrom utilities.paginator import EnhancedPaginator, get_paginate_count\n\n__all__ = (\n 'HomeView',\n 'SearchView',\n)\n\nLink = namedtuple('Link', ('label', 'viewname', 'permission', 'count'))\n\n\nclass HomeView(View):\n template_name = 'home.html'\n\n def get(self, request):\n if settings.LOGIN_REQUIRED and not request.user.is_authenticated:\n return redirect('login')\n\n # Construct the user's custom dashboard layout\n try:\n dashboard = get_dashboard(request.user).get_layout()\n except Exception:\n messages.error(request, _(\n \"There was an error loading the dashboard configuration. A default dashboard is in use.\"\n ))\n dashboard = get_default_dashboard(config=DEFAULT_DASHBOARD).get_layout()\n\n # Check whether a new release is available. 
(Only for staff/superusers.)\n new_release = None\n if request.user.is_staff or request.user.is_superuser:\n latest_release = cache.get('latest_release')\n if latest_release:\n release_version, release_url = latest_release\n if release_version > version.parse(settings.VERSION):\n new_release = {\n 'version': str(release_version),\n 'url': release_url,\n }\n\n return render(request, self.template_name, {\n 'dashboard': dashboard,\n 'new_release': new_release,\n })\n\n\nclass SearchView(View):\n\n def get(self, request):\n results = []\n highlight = None\n\n # Initialize search form\n form = SearchForm(request.GET) if 'q' in request.GET else SearchForm()\n\n if form.is_valid():\n\n # Restrict results by object type\n object_types = []\n for obj_type in form.cleaned_data['obj_types']:\n app_label, model_name = obj_type.split('.')\n object_types.append(ContentType.objects.get_by_natural_key(app_label, model_name))\n\n lookup = form.cleaned_data['lookup'] or LookupTypes.PARTIAL\n results = search_backend.search(\n form.cleaned_data['q'],\n user=request.user,\n object_types=object_types,\n lookup=lookup\n )\n\n # If performing a regex search, pass the highlight value as a compiled pattern\n if form.cleaned_data['lookup'] == LookupTypes.REGEX:\n try:\n highlight = re.compile(f\"({form.cleaned_data['q']})\", flags=re.IGNORECASE)\n except re.error:\n pass\n elif form.cleaned_data['lookup'] != LookupTypes.EXACT:\n highlight = form.cleaned_data['q']\n\n table = SearchTable(results, highlight=highlight)\n\n # Paginate the table results\n RequestConfig(request, {\n 'paginator_class': EnhancedPaginator,\n 'per_page': get_paginate_count(request)\n }).configure(table)\n\n # If this is an HTMX request, return only the rendered table HTML\n if is_htmx(request):\n return render(request, 'htmx/table.html', {\n 'table': table,\n })\n\n return render(request, 'search.html', {\n 'form': form,\n 'table': table,\n })\n", "path": "netbox/netbox/views/misc.py"}]} | 2,002 | 460 |
gh_patches_debug_38759 | rasdani/github-patches | git_diff | readthedocs__readthedocs.org-2501 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Update django-filter to 1.0
## Details
Sorry for deleting the issue template: This is about technical debt :) It may not be immediately critical, but the advice from the author of django-filter is that it's worth it.
django-filter 1.0 has changes that are backwards incompatible. The release notes are here:
http://django-filter.readthedocs.io/en/latest/migration.html
It means, amongst other this, that all where `Filter` object instances are iterated on, we have to [add the `.qs` method](http://django-filter.readthedocs.io/en/latest/migration.html#queryset-methods-are-no-longer-proxied).
Pin django-filter
The new 1.0 series is incompatible, and I've opened #2498 for this purpose.
Meanwhile, as the current master is broken because of this, the version should be pinned - I guess it's sort of bad practice to use the `master` branch anyways, am thinking it's possibly also an outdated decision now.
This fixes #2495 and #2490
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `readthedocs/builds/filters.py`
Content:
```
1 from django.utils.translation import ugettext_lazy as _
2
3 import django_filters
4
5 from readthedocs.builds import constants
6 from readthedocs.builds.models import Build, Version
7
8
9 ANY_REPO = (
10 ('', _('Any')),
11 )
12
13 BUILD_TYPES = ANY_REPO + constants.BUILD_TYPES
14
15
16 class VersionSlugFilter(django_filters.FilterSet):
17
18 class Meta:
19 model = Version
20 fields = {
21 'identifier': ['icontains'],
22 'slug': ['icontains'],
23 }
24
25
26 class VersionFilter(django_filters.FilterSet):
27 project = django_filters.CharFilter(name='project__slug')
28 # Allow filtering on slug= or version=
29 slug = django_filters.CharFilter(label=_("Name"), name='slug',
30 lookup_type='exact')
31 version = django_filters.CharFilter(label=_("Version"), name='slug',
32 lookup_type='exact')
33
34 class Meta:
35 model = Version
36 fields = ['project', 'slug', 'version']
37
38
39 class BuildFilter(django_filters.FilterSet):
40 date = django_filters.DateRangeFilter(label=_("Build Date"), name="date", lookup_type='range')
41 type = django_filters.ChoiceFilter(label=_("Build Type"),
42 choices=BUILD_TYPES)
43
44 class Meta:
45 model = Build
46 fields = ['type', 'date', 'success']
47
```
Path: `readthedocs/projects/filters.py`
Content:
```
1 """Project query filters"""
2
3 from django.utils.translation import ugettext_lazy as _
4
5 import django_filters
6
7 from readthedocs.projects import constants
8 from readthedocs.projects.models import Project, Domain
9
10 ANY_REPO = (
11 ('', _('Any')),
12 )
13
14 REPO_CHOICES = ANY_REPO + constants.REPO_CHOICES
15
16
17 def sort_slug(queryset, query):
18 """Fuzzy filter for slug fields
19
20 Returns sorted queryset where slug approximates ``query``
21 """
22 queryset = queryset.filter(slug__icontains=query)
23 ret = []
24 ret.extend([q.pk for q in queryset
25 if q.slug == query])
26 ret.extend([q.pk for q in queryset
27 if q.slug.startswith(query) and q.pk not in ret])
28 ret.extend([q.pk for q in queryset
29 if q.slug.endswith(query) and q.pk not in ret])
30 ret.extend([q.pk for q in queryset
31 if q.pk not in ret])
32
33 # Create a QS preserving ordering
34 clauses = ' '.join(['WHEN projects_project.id=%s THEN %s' % (pk, i)
35 for i, pk in enumerate(ret)])
36 ordering = 'CASE %s END' % clauses
37 ret_queryset = Project.objects.filter(pk__in=ret).extra(
38 select={'ordering': ordering}, order_by=('ordering',))
39 return ret_queryset
40
41
42 class ProjectFilter(django_filters.FilterSet):
43
44 """Project filter for filter views"""
45
46 name = django_filters.CharFilter(label=_("Name"), name='name',
47 lookup_type='icontains')
48 slug = django_filters.CharFilter(label=_("Slug"), name='slug',
49 lookup_type='icontains')
50 pub_date = django_filters.DateRangeFilter(label=_("Created Date"),
51 name="pub_date")
52 repo = django_filters.CharFilter(label=_("Repository URL"), name='repo',
53 lookup_type='icontains')
54 repo_type = django_filters.ChoiceFilter(
55 label=_("Repository Type"),
56 name='repo',
57 lookup_type='icontains',
58 choices=REPO_CHOICES,
59 )
60
61 class Meta:
62 model = Project
63 fields = ['name', 'slug', 'pub_date', 'repo', 'repo_type']
64
65
66 class DomainFilter(django_filters.FilterSet):
67 project = django_filters.CharFilter(label=_("Project"), name='project__slug',
68 lookup_type='exact')
69
70 class Meta:
71 model = Domain
72 fields = ['domain', 'project', 'canonical']
73
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/readthedocs/builds/filters.py b/readthedocs/builds/filters.py
--- a/readthedocs/builds/filters.py
+++ b/readthedocs/builds/filters.py
@@ -27,9 +27,9 @@
project = django_filters.CharFilter(name='project__slug')
# Allow filtering on slug= or version=
slug = django_filters.CharFilter(label=_("Name"), name='slug',
- lookup_type='exact')
+ lookup_expr='exact')
version = django_filters.CharFilter(label=_("Version"), name='slug',
- lookup_type='exact')
+ lookup_expr='exact')
class Meta:
model = Version
@@ -37,7 +37,7 @@
class BuildFilter(django_filters.FilterSet):
- date = django_filters.DateRangeFilter(label=_("Build Date"), name="date", lookup_type='range')
+ date = django_filters.DateRangeFilter(label=_("Build Date"), name="date", lookup_expr='range')
type = django_filters.ChoiceFilter(label=_("Build Type"),
choices=BUILD_TYPES)
diff --git a/readthedocs/projects/filters.py b/readthedocs/projects/filters.py
--- a/readthedocs/projects/filters.py
+++ b/readthedocs/projects/filters.py
@@ -44,17 +44,17 @@
"""Project filter for filter views"""
name = django_filters.CharFilter(label=_("Name"), name='name',
- lookup_type='icontains')
+ lookup_expr='icontains')
slug = django_filters.CharFilter(label=_("Slug"), name='slug',
- lookup_type='icontains')
+ lookup_expr='icontains')
pub_date = django_filters.DateRangeFilter(label=_("Created Date"),
name="pub_date")
repo = django_filters.CharFilter(label=_("Repository URL"), name='repo',
- lookup_type='icontains')
+ lookup_expr='icontains')
repo_type = django_filters.ChoiceFilter(
label=_("Repository Type"),
name='repo',
- lookup_type='icontains',
+ lookup_expr='icontains',
choices=REPO_CHOICES,
)
@@ -65,7 +65,7 @@
class DomainFilter(django_filters.FilterSet):
project = django_filters.CharFilter(label=_("Project"), name='project__slug',
- lookup_type='exact')
+ lookup_expr='exact')
class Meta:
model = Domain
| {"golden_diff": "diff --git a/readthedocs/builds/filters.py b/readthedocs/builds/filters.py\n--- a/readthedocs/builds/filters.py\n+++ b/readthedocs/builds/filters.py\n@@ -27,9 +27,9 @@\n project = django_filters.CharFilter(name='project__slug')\n # Allow filtering on slug= or version=\n slug = django_filters.CharFilter(label=_(\"Name\"), name='slug',\n- lookup_type='exact')\n+ lookup_expr='exact')\n version = django_filters.CharFilter(label=_(\"Version\"), name='slug',\n- lookup_type='exact')\n+ lookup_expr='exact')\n \n class Meta:\n model = Version\n@@ -37,7 +37,7 @@\n \n \n class BuildFilter(django_filters.FilterSet):\n- date = django_filters.DateRangeFilter(label=_(\"Build Date\"), name=\"date\", lookup_type='range')\n+ date = django_filters.DateRangeFilter(label=_(\"Build Date\"), name=\"date\", lookup_expr='range')\n type = django_filters.ChoiceFilter(label=_(\"Build Type\"),\n choices=BUILD_TYPES)\n \ndiff --git a/readthedocs/projects/filters.py b/readthedocs/projects/filters.py\n--- a/readthedocs/projects/filters.py\n+++ b/readthedocs/projects/filters.py\n@@ -44,17 +44,17 @@\n \"\"\"Project filter for filter views\"\"\"\n \n name = django_filters.CharFilter(label=_(\"Name\"), name='name',\n- lookup_type='icontains')\n+ lookup_expr='icontains')\n slug = django_filters.CharFilter(label=_(\"Slug\"), name='slug',\n- lookup_type='icontains')\n+ lookup_expr='icontains')\n pub_date = django_filters.DateRangeFilter(label=_(\"Created Date\"),\n name=\"pub_date\")\n repo = django_filters.CharFilter(label=_(\"Repository URL\"), name='repo',\n- lookup_type='icontains')\n+ lookup_expr='icontains')\n repo_type = django_filters.ChoiceFilter(\n label=_(\"Repository Type\"),\n name='repo',\n- lookup_type='icontains',\n+ lookup_expr='icontains',\n choices=REPO_CHOICES,\n )\n \n@@ -65,7 +65,7 @@\n \n class DomainFilter(django_filters.FilterSet):\n project = django_filters.CharFilter(label=_(\"Project\"), name='project__slug',\n- lookup_type='exact')\n+ lookup_expr='exact')\n \n class Meta:\n model = Domain\n", "issue": "Update django-filter to 1.0\n## Details\r\n\r\nSorry for deleting the issue template: This is about technical debt :) It may not be immediately critical, but the advice from the author of django-filter is that it's worth it.\r\n\r\ndjango-filter 1.0 has changes that are backwards incompatible. 
The release notes are here:\r\n\r\nhttp://django-filter.readthedocs.io/en/latest/migration.html\r\n\r\nIt means, amongst other this, that all where `Filter` object instances are iterated on, we have to [add the `.qs` method](http://django-filter.readthedocs.io/en/latest/migration.html#queryset-methods-are-no-longer-proxied).\nPin django-filter\nThe new 1.0 series is incompatible, and I've opened #2498 for this purpose.\r\n\r\nMeanwhile, as the current master is broken because of this, the version should be pinned - I guess it's sort of bad practice to use the `master` branch anyways, am thinking it's possibly also an outdated decision now.\r\n\r\nThis fixes #2495 and #2490\n", "before_files": [{"content": "from django.utils.translation import ugettext_lazy as _\n\nimport django_filters\n\nfrom readthedocs.builds import constants\nfrom readthedocs.builds.models import Build, Version\n\n\nANY_REPO = (\n ('', _('Any')),\n)\n\nBUILD_TYPES = ANY_REPO + constants.BUILD_TYPES\n\n\nclass VersionSlugFilter(django_filters.FilterSet):\n\n class Meta:\n model = Version\n fields = {\n 'identifier': ['icontains'],\n 'slug': ['icontains'],\n }\n\n\nclass VersionFilter(django_filters.FilterSet):\n project = django_filters.CharFilter(name='project__slug')\n # Allow filtering on slug= or version=\n slug = django_filters.CharFilter(label=_(\"Name\"), name='slug',\n lookup_type='exact')\n version = django_filters.CharFilter(label=_(\"Version\"), name='slug',\n lookup_type='exact')\n\n class Meta:\n model = Version\n fields = ['project', 'slug', 'version']\n\n\nclass BuildFilter(django_filters.FilterSet):\n date = django_filters.DateRangeFilter(label=_(\"Build Date\"), name=\"date\", lookup_type='range')\n type = django_filters.ChoiceFilter(label=_(\"Build Type\"),\n choices=BUILD_TYPES)\n\n class Meta:\n model = Build\n fields = ['type', 'date', 'success']\n", "path": "readthedocs/builds/filters.py"}, {"content": "\"\"\"Project query filters\"\"\"\n\nfrom django.utils.translation import ugettext_lazy as _\n\nimport django_filters\n\nfrom readthedocs.projects import constants\nfrom readthedocs.projects.models import Project, Domain\n\nANY_REPO = (\n ('', _('Any')),\n)\n\nREPO_CHOICES = ANY_REPO + constants.REPO_CHOICES\n\n\ndef sort_slug(queryset, query):\n \"\"\"Fuzzy filter for slug fields\n\n Returns sorted queryset where slug approximates ``query``\n \"\"\"\n queryset = queryset.filter(slug__icontains=query)\n ret = []\n ret.extend([q.pk for q in queryset\n if q.slug == query])\n ret.extend([q.pk for q in queryset\n if q.slug.startswith(query) and q.pk not in ret])\n ret.extend([q.pk for q in queryset\n if q.slug.endswith(query) and q.pk not in ret])\n ret.extend([q.pk for q in queryset\n if q.pk not in ret])\n\n # Create a QS preserving ordering\n clauses = ' '.join(['WHEN projects_project.id=%s THEN %s' % (pk, i)\n for i, pk in enumerate(ret)])\n ordering = 'CASE %s END' % clauses\n ret_queryset = Project.objects.filter(pk__in=ret).extra(\n select={'ordering': ordering}, order_by=('ordering',))\n return ret_queryset\n\n\nclass ProjectFilter(django_filters.FilterSet):\n\n \"\"\"Project filter for filter views\"\"\"\n\n name = django_filters.CharFilter(label=_(\"Name\"), name='name',\n lookup_type='icontains')\n slug = django_filters.CharFilter(label=_(\"Slug\"), name='slug',\n lookup_type='icontains')\n pub_date = django_filters.DateRangeFilter(label=_(\"Created Date\"),\n name=\"pub_date\")\n repo = django_filters.CharFilter(label=_(\"Repository URL\"), name='repo',\n lookup_type='icontains')\n repo_type 
= django_filters.ChoiceFilter(\n label=_(\"Repository Type\"),\n name='repo',\n lookup_type='icontains',\n choices=REPO_CHOICES,\n )\n\n class Meta:\n model = Project\n fields = ['name', 'slug', 'pub_date', 'repo', 'repo_type']\n\n\nclass DomainFilter(django_filters.FilterSet):\n project = django_filters.CharFilter(label=_(\"Project\"), name='project__slug',\n lookup_type='exact')\n\n class Meta:\n model = Domain\n fields = ['domain', 'project', 'canonical']\n", "path": "readthedocs/projects/filters.py"}], "after_files": [{"content": "from django.utils.translation import ugettext_lazy as _\n\nimport django_filters\n\nfrom readthedocs.builds import constants\nfrom readthedocs.builds.models import Build, Version\n\n\nANY_REPO = (\n ('', _('Any')),\n)\n\nBUILD_TYPES = ANY_REPO + constants.BUILD_TYPES\n\n\nclass VersionSlugFilter(django_filters.FilterSet):\n\n class Meta:\n model = Version\n fields = {\n 'identifier': ['icontains'],\n 'slug': ['icontains'],\n }\n\n\nclass VersionFilter(django_filters.FilterSet):\n project = django_filters.CharFilter(name='project__slug')\n # Allow filtering on slug= or version=\n slug = django_filters.CharFilter(label=_(\"Name\"), name='slug',\n lookup_expr='exact')\n version = django_filters.CharFilter(label=_(\"Version\"), name='slug',\n lookup_expr='exact')\n\n class Meta:\n model = Version\n fields = ['project', 'slug', 'version']\n\n\nclass BuildFilter(django_filters.FilterSet):\n date = django_filters.DateRangeFilter(label=_(\"Build Date\"), name=\"date\", lookup_expr='range')\n type = django_filters.ChoiceFilter(label=_(\"Build Type\"),\n choices=BUILD_TYPES)\n\n class Meta:\n model = Build\n fields = ['type', 'date', 'success']\n", "path": "readthedocs/builds/filters.py"}, {"content": "\"\"\"Project query filters\"\"\"\n\nfrom django.utils.translation import ugettext_lazy as _\n\nimport django_filters\n\nfrom readthedocs.projects import constants\nfrom readthedocs.projects.models import Project, Domain\n\nANY_REPO = (\n ('', _('Any')),\n)\n\nREPO_CHOICES = ANY_REPO + constants.REPO_CHOICES\n\n\ndef sort_slug(queryset, query):\n \"\"\"Fuzzy filter for slug fields\n\n Returns sorted queryset where slug approximates ``query``\n \"\"\"\n queryset = queryset.filter(slug__icontains=query)\n ret = []\n ret.extend([q.pk for q in queryset\n if q.slug == query])\n ret.extend([q.pk for q in queryset\n if q.slug.startswith(query) and q.pk not in ret])\n ret.extend([q.pk for q in queryset\n if q.slug.endswith(query) and q.pk not in ret])\n ret.extend([q.pk for q in queryset\n if q.pk not in ret])\n\n # Create a QS preserving ordering\n clauses = ' '.join(['WHEN projects_project.id=%s THEN %s' % (pk, i)\n for i, pk in enumerate(ret)])\n ordering = 'CASE %s END' % clauses\n ret_queryset = Project.objects.filter(pk__in=ret).extra(\n select={'ordering': ordering}, order_by=('ordering',))\n return ret_queryset\n\n\nclass ProjectFilter(django_filters.FilterSet):\n\n \"\"\"Project filter for filter views\"\"\"\n\n name = django_filters.CharFilter(label=_(\"Name\"), name='name',\n lookup_expr='icontains')\n slug = django_filters.CharFilter(label=_(\"Slug\"), name='slug',\n lookup_expr='icontains')\n pub_date = django_filters.DateRangeFilter(label=_(\"Created Date\"),\n name=\"pub_date\")\n repo = django_filters.CharFilter(label=_(\"Repository URL\"), name='repo',\n lookup_expr='icontains')\n repo_type = django_filters.ChoiceFilter(\n label=_(\"Repository Type\"),\n name='repo',\n lookup_expr='icontains',\n choices=REPO_CHOICES,\n )\n\n class Meta:\n model = Project\n 
fields = ['name', 'slug', 'pub_date', 'repo', 'repo_type']\n\n\nclass DomainFilter(django_filters.FilterSet):\n project = django_filters.CharFilter(label=_(\"Project\"), name='project__slug',\n lookup_expr='exact')\n\n class Meta:\n model = Domain\n fields = ['domain', 'project', 'canonical']\n", "path": "readthedocs/projects/filters.py"}]} | 1,519 | 514 |
gh_patches_debug_20195 | rasdani/github-patches | git_diff | kivy__python-for-android-1723 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Psycopg2 error after the apk installation.

I got this error while debugging the android apk. I associate this to Buildozer because I specified into the buildozer.spec requirements the psycopg2 library. It means that is not working.
How do I fix it? I know that is a recipe for psycopg2 here: https://github.com/kivy/python-for-android/blob/master/pythonforandroid/recipes/psycopg2/__init__.py
How can I add this recipe to my project, to buildozer compile it successfully ?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pythonforandroid/recipes/psycopg2/__init__.py`
Content:
```
1 from pythonforandroid.recipe import PythonRecipe
2 from pythonforandroid.toolchain import current_directory, shprint
3 import sh
4
5
6 class Psycopg2Recipe(PythonRecipe):
7 """
8 Requires `libpq-dev` system dependency e.g. for `pg_config` binary.
9 """
10 version = 'latest'
11 url = 'http://initd.org/psycopg/tarballs/psycopg2-{version}.tar.gz'
12 depends = ['libpq']
13 site_packages_name = 'psycopg2'
14 call_hostpython_via_targetpython = False
15
16 def prebuild_arch(self, arch):
17 libdir = self.ctx.get_libs_dir(arch.arch)
18 with current_directory(self.get_build_dir(arch.arch)):
19 # pg_config_helper will return the system installed libpq, but we
20 # need the one we just cross-compiled
21 shprint(sh.sed, '-i',
22 "s|pg_config_helper.query(.libdir.)|'{}'|".format(libdir),
23 'setup.py')
24
25 def get_recipe_env(self, arch):
26 env = super(Psycopg2Recipe, self).get_recipe_env(arch)
27 env['LDFLAGS'] = "{} -L{}".format(env['LDFLAGS'], self.ctx.get_libs_dir(arch.arch))
28 env['EXTRA_CFLAGS'] = "--host linux-armv"
29 return env
30
31 def install_python_package(self, arch, name=None, env=None, is_dir=True):
32 '''Automate the installation of a Python package (or a cython
33 package where the cython components are pre-built).'''
34 if env is None:
35 env = self.get_recipe_env(arch)
36
37 with current_directory(self.get_build_dir(arch.arch)):
38 hostpython = sh.Command(self.ctx.hostpython)
39
40 shprint(hostpython, 'setup.py', 'build_ext', '--static-libpq',
41 _env=env)
42 shprint(hostpython, 'setup.py', 'install', '-O2',
43 '--root={}'.format(self.ctx.get_python_install_dir()),
44 '--install-lib=lib/python2.7/site-packages', _env=env)
45
46
47 recipe = Psycopg2Recipe()
48
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pythonforandroid/recipes/psycopg2/__init__.py b/pythonforandroid/recipes/psycopg2/__init__.py
--- a/pythonforandroid/recipes/psycopg2/__init__.py
+++ b/pythonforandroid/recipes/psycopg2/__init__.py
@@ -6,6 +6,9 @@
class Psycopg2Recipe(PythonRecipe):
"""
Requires `libpq-dev` system dependency e.g. for `pg_config` binary.
+ If you get `nl_langinfo` symbol runtime error, make sure you're running on
+ `ANDROID_API` (`ndk-api`) >= 26, see:
+ https://github.com/kivy/python-for-android/issues/1711#issuecomment-465747557
"""
version = 'latest'
url = 'http://initd.org/psycopg/tarballs/psycopg2-{version}.tar.gz'
@@ -41,7 +44,7 @@
_env=env)
shprint(hostpython, 'setup.py', 'install', '-O2',
'--root={}'.format(self.ctx.get_python_install_dir()),
- '--install-lib=lib/python2.7/site-packages', _env=env)
+ '--install-lib=.', _env=env)
recipe = Psycopg2Recipe()
| {"golden_diff": "diff --git a/pythonforandroid/recipes/psycopg2/__init__.py b/pythonforandroid/recipes/psycopg2/__init__.py\n--- a/pythonforandroid/recipes/psycopg2/__init__.py\n+++ b/pythonforandroid/recipes/psycopg2/__init__.py\n@@ -6,6 +6,9 @@\n class Psycopg2Recipe(PythonRecipe):\n \"\"\"\n Requires `libpq-dev` system dependency e.g. for `pg_config` binary.\n+ If you get `nl_langinfo` symbol runtime error, make sure you're running on\n+ `ANDROID_API` (`ndk-api`) >= 26, see:\n+ https://github.com/kivy/python-for-android/issues/1711#issuecomment-465747557\n \"\"\"\n version = 'latest'\n url = 'http://initd.org/psycopg/tarballs/psycopg2-{version}.tar.gz'\n@@ -41,7 +44,7 @@\n _env=env)\n shprint(hostpython, 'setup.py', 'install', '-O2',\n '--root={}'.format(self.ctx.get_python_install_dir()),\n- '--install-lib=lib/python2.7/site-packages', _env=env)\n+ '--install-lib=.', _env=env)\n \n \n recipe = Psycopg2Recipe()\n", "issue": "Psycopg2 error after the apk installation.\n\r\n\r\nI got this error while debugging the android apk. I associate this to Buildozer because I specified into the buildozer.spec requirements the psycopg2 library. It means that is not working.\r\n\r\nHow do I fix it? I know that is a recipe for psycopg2 here: https://github.com/kivy/python-for-android/blob/master/pythonforandroid/recipes/psycopg2/__init__.py\r\n\r\nHow can I add this recipe to my project, to buildozer compile it successfully ?\n", "before_files": [{"content": "from pythonforandroid.recipe import PythonRecipe\nfrom pythonforandroid.toolchain import current_directory, shprint\nimport sh\n\n\nclass Psycopg2Recipe(PythonRecipe):\n \"\"\"\n Requires `libpq-dev` system dependency e.g. for `pg_config` binary.\n \"\"\"\n version = 'latest'\n url = 'http://initd.org/psycopg/tarballs/psycopg2-{version}.tar.gz'\n depends = ['libpq']\n site_packages_name = 'psycopg2'\n call_hostpython_via_targetpython = False\n\n def prebuild_arch(self, arch):\n libdir = self.ctx.get_libs_dir(arch.arch)\n with current_directory(self.get_build_dir(arch.arch)):\n # pg_config_helper will return the system installed libpq, but we\n # need the one we just cross-compiled\n shprint(sh.sed, '-i',\n \"s|pg_config_helper.query(.libdir.)|'{}'|\".format(libdir),\n 'setup.py')\n\n def get_recipe_env(self, arch):\n env = super(Psycopg2Recipe, self).get_recipe_env(arch)\n env['LDFLAGS'] = \"{} -L{}\".format(env['LDFLAGS'], self.ctx.get_libs_dir(arch.arch))\n env['EXTRA_CFLAGS'] = \"--host linux-armv\"\n return env\n\n def install_python_package(self, arch, name=None, env=None, is_dir=True):\n '''Automate the installation of a Python package (or a cython\n package where the cython components are pre-built).'''\n if env is None:\n env = self.get_recipe_env(arch)\n\n with current_directory(self.get_build_dir(arch.arch)):\n hostpython = sh.Command(self.ctx.hostpython)\n\n shprint(hostpython, 'setup.py', 'build_ext', '--static-libpq',\n _env=env)\n shprint(hostpython, 'setup.py', 'install', '-O2',\n '--root={}'.format(self.ctx.get_python_install_dir()),\n '--install-lib=lib/python2.7/site-packages', _env=env)\n\n\nrecipe = Psycopg2Recipe()\n", "path": "pythonforandroid/recipes/psycopg2/__init__.py"}], "after_files": [{"content": "from pythonforandroid.recipe import PythonRecipe\nfrom pythonforandroid.toolchain import current_directory, shprint\nimport sh\n\n\nclass Psycopg2Recipe(PythonRecipe):\n \"\"\"\n Requires `libpq-dev` system dependency e.g. 
for `pg_config` binary.\n If you get `nl_langinfo` symbol runtime error, make sure you're running on\n `ANDROID_API` (`ndk-api`) >= 26, see:\n https://github.com/kivy/python-for-android/issues/1711#issuecomment-465747557\n \"\"\"\n version = 'latest'\n url = 'http://initd.org/psycopg/tarballs/psycopg2-{version}.tar.gz'\n depends = ['libpq']\n site_packages_name = 'psycopg2'\n call_hostpython_via_targetpython = False\n\n def prebuild_arch(self, arch):\n libdir = self.ctx.get_libs_dir(arch.arch)\n with current_directory(self.get_build_dir(arch.arch)):\n # pg_config_helper will return the system installed libpq, but we\n # need the one we just cross-compiled\n shprint(sh.sed, '-i',\n \"s|pg_config_helper.query(.libdir.)|'{}'|\".format(libdir),\n 'setup.py')\n\n def get_recipe_env(self, arch):\n env = super(Psycopg2Recipe, self).get_recipe_env(arch)\n env['LDFLAGS'] = \"{} -L{}\".format(env['LDFLAGS'], self.ctx.get_libs_dir(arch.arch))\n env['EXTRA_CFLAGS'] = \"--host linux-armv\"\n return env\n\n def install_python_package(self, arch, name=None, env=None, is_dir=True):\n '''Automate the installation of a Python package (or a cython\n package where the cython components are pre-built).'''\n if env is None:\n env = self.get_recipe_env(arch)\n\n with current_directory(self.get_build_dir(arch.arch)):\n hostpython = sh.Command(self.ctx.hostpython)\n\n shprint(hostpython, 'setup.py', 'build_ext', '--static-libpq',\n _env=env)\n shprint(hostpython, 'setup.py', 'install', '-O2',\n '--root={}'.format(self.ctx.get_python_install_dir()),\n '--install-lib=.', _env=env)\n\n\nrecipe = Psycopg2Recipe()\n", "path": "pythonforandroid/recipes/psycopg2/__init__.py"}]} | 992 | 294 |
gh_patches_debug_7498 | rasdani/github-patches | git_diff | beeware__toga-1751 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
iOS app not showing content with Rubicon ObjC 0.4.4
### Describe the bug
When running an iOS app with Toga 0.3.0dev39 and Rubicon 0.4.4, the app isn't successfully started, and the main app content is never displayed. When the app runs, you'll see the following in the log:
```
2023-01-24 12:14:13.871494+0800 Hello World[94057:4239245] Running app module: helloworld
2023-01-24 12:14:14.399629+0800 Hello World[94057:4239245] /Users/rkm/Library/Developer/CoreSimulator/Devices/84FC86CA-1D89-46EF-9349-29DDCF840143/data/Containers/Bundle/Application/7038F3CE-2212-4C60-9067-1978A80DEC8D/Hello World.app/app_packages/toga_iOS/app.py:95: DeprecationWarning: There is no current event loop
2023-01-24 12:14:14.399738+0800 Hello World[94057:4239245] self.loop = asyncio.get_event_loop()
```
This is a warning, not an error; the app will continue to run.
### Steps to reproduce
1. Run `examples/tutorial0` on iOS
2. See error
The app won't crash; but the app window will remain black.
### Expected behavior
The app should run and window content should be displayed.
### Screenshots
_No response_
### Environment
- Operating System: iOS
- Python version: All
- Software versions:
- Briefcase: All
- Toga: <=0.3.0.dev39
- Rubicon-objc 0.4.4
### Logs
N/A
### Additional context
The error has been caused because Toga-iOS 0.3.0.dev39 [included a shim](https://github.com/beeware/toga/blob/v0.3.0.dev39/src/iOS/src/toga_iOS/app.py#L112) that reproduced the implementation of `run_forever_cooperatively()`. This was done when the iOS implementation was originally created, with the expectation that this shim would be replaced with the actual call once Rubicon 0.3 was released. This didn't happen, but the old shim continued to work as it matched the implementation in Rubicon.
However, Rubicon 0.4.4 altered the implementation of `run_forever_cooperatively()`. As a result, the shim in Toga-iOS 0.3.0.dev39 no longer does everything it needs to in order to start the app.
The issue has already been [fixed in the main branch](https://github.com/beeware/toga/blob/main/iOS/src/toga_iOS/app.py#L117) - the shim has been replaced with the actual call to `run_forever_cooperatively()`.
Two workarounds exist:
1. Use the `main` branch of Toga in your app.
2. Block the use of rubicon-objc 0.4.4. If you add `rubicon-objc!=0.4.4` to the requires list in your iOS configuration, this will prevent toga-iOS from using the new version.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `iOS/setup.py`
Content:
```
1 #!/usr/bin/env python
2 import re
3
4 from setuptools import setup
5
6 # Version handline needs to be programatic because
7 # we can't import toga_iOS to compute the version;
8 # and to support versioned subpackage dependencies
9 with open("src/toga_iOS/__init__.py", encoding="utf8") as version_file:
10 version_match = re.search(
11 r"^__version__ = ['\"]([^'\"]*)['\"]", version_file.read(), re.M
12 )
13 if version_match:
14 version = version_match.group(1)
15 else:
16 raise RuntimeError("Unable to find version string.")
17
18 setup(
19 version=version,
20 install_requires=[
21 "rubicon-objc>=0.4.4",
22 f"toga-core=={version}",
23 ],
24 )
25
```
Path: `cocoa/setup.py`
Content:
```
1 #!/usr/bin/env python
2 import re
3
4 from setuptools import setup
5
6 # Version handline needs to be programatic because
7 # we can't import toga_cocoa to compute the version;
8 # and to support versioned subpackage dependencies
9 with open("src/toga_cocoa/__init__.py", encoding="utf8") as version_file:
10 version_match = re.search(
11 r"^__version__ = ['\"]([^'\"]*)['\"]", version_file.read(), re.M
12 )
13 if version_match:
14 version = version_match.group(1)
15 else:
16 raise RuntimeError("Unable to find version string.")
17
18 setup(
19 version=version,
20 install_requires=[
21 "rubicon-objc>=0.4.4",
22 f"toga-core=={version}",
23 ],
24 )
25
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/cocoa/setup.py b/cocoa/setup.py
--- a/cocoa/setup.py
+++ b/cocoa/setup.py
@@ -18,7 +18,7 @@
setup(
version=version,
install_requires=[
- "rubicon-objc>=0.4.4",
- f"toga-core=={version}",
+ "rubicon-objc >= 0.4.5rc1, < 0.5.0",
+ f"toga-core == {version}",
],
)
diff --git a/iOS/setup.py b/iOS/setup.py
--- a/iOS/setup.py
+++ b/iOS/setup.py
@@ -18,7 +18,7 @@
setup(
version=version,
install_requires=[
- "rubicon-objc>=0.4.4",
- f"toga-core=={version}",
+ "rubicon-objc >= 0.4.5rc1, < 0.5.0",
+ f"toga-core == {version}",
],
)
| {"golden_diff": "diff --git a/cocoa/setup.py b/cocoa/setup.py\n--- a/cocoa/setup.py\n+++ b/cocoa/setup.py\n@@ -18,7 +18,7 @@\n setup(\n version=version,\n install_requires=[\n- \"rubicon-objc>=0.4.4\",\n- f\"toga-core=={version}\",\n+ \"rubicon-objc >= 0.4.5rc1, < 0.5.0\",\n+ f\"toga-core == {version}\",\n ],\n )\ndiff --git a/iOS/setup.py b/iOS/setup.py\n--- a/iOS/setup.py\n+++ b/iOS/setup.py\n@@ -18,7 +18,7 @@\n setup(\n version=version,\n install_requires=[\n- \"rubicon-objc>=0.4.4\",\n- f\"toga-core=={version}\",\n+ \"rubicon-objc >= 0.4.5rc1, < 0.5.0\",\n+ f\"toga-core == {version}\",\n ],\n )\n", "issue": "iOS app not showing content with Rubicon ObjC 0.4.4\n### Describe the bug\r\n\r\nWhen running an iOS app with Toga 0.3.0dev39 and Rubicon 0.4.4, the app isn't successfully started, and the main app content is never displayed. When the app runs, you'll see the following in the log:\r\n \r\n```\r\n2023-01-24 12:14:13.871494+0800 Hello World[94057:4239245] Running app module: helloworld\r\n2023-01-24 12:14:14.399629+0800 Hello World[94057:4239245] /Users/rkm/Library/Developer/CoreSimulator/Devices/84FC86CA-1D89-46EF-9349-29DDCF840143/data/Containers/Bundle/Application/7038F3CE-2212-4C60-9067-1978A80DEC8D/Hello World.app/app_packages/toga_iOS/app.py:95: DeprecationWarning: There is no current event loop\r\n2023-01-24 12:14:14.399738+0800 Hello World[94057:4239245] self.loop = asyncio.get_event_loop()\r\n```\r\n\r\nThis is a warning, not an error; the app will continue to run.\r\n\r\n### Steps to reproduce\r\n\r\n1. Run `examples/tutorial0` on iOS\r\n2. See error\r\n\r\nThe app won't crash; but the app window will remain black.\r\n\r\n### Expected behavior\r\n\r\nThe app should run and window content should be displayed.\r\n\r\n### Screenshots\r\n\r\n_No response_\r\n\r\n### Environment\r\n\r\n- Operating System: iOS\r\n- Python version: All\r\n- Software versions:\r\n - Briefcase: All\r\n - Toga: <=0.3.0.dev39\r\n - Rubicon-objc 0.4.4\r\n\r\n\r\n### Logs\r\n\r\nN/A\r\n\r\n### Additional context\r\n\r\nThe error has been caused because Toga-iOS 0.3.0.dev39 [included a shim](https://github.com/beeware/toga/blob/v0.3.0.dev39/src/iOS/src/toga_iOS/app.py#L112) that reproduced the implementation of `run_forever_cooperatively()`. This was done when the iOS implementation was originally created, with the expectation that this shim would be replaced with the actual call once Rubicon 0.3 was released. This didn't happen, but the old shim continued to work as it matched the implementation in Rubicon.\r\n\r\nHowever, Rubicon 0.4.4 altered the implementation of `run_forever_cooperatively()`. As a result, the shim in Toga-iOS 0.3.0.dev39 no longer does everything it needs to in order to start the app. \r\n\r\nThe issue has already been [fixed in the main branch](https://github.com/beeware/toga/blob/main/iOS/src/toga_iOS/app.py#L117) - the shim has been replaced with the actual call to `run_forever_cooperatively()`.\r\n\r\nTwo workarounds exist:\r\n1. Use the `main` branch of Toga in your app.\r\n2. Block the use of rubicon-objc 0.4.4. 
If you add `rubicon-objc!=0.4.4` to the requires list in your iOS configuration, this will prevent toga-iOS from using the new version.\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\nimport re\n\nfrom setuptools import setup\n\n# Version handline needs to be programatic because\n# we can't import toga_iOS to compute the version;\n# and to support versioned subpackage dependencies\nwith open(\"src/toga_iOS/__init__.py\", encoding=\"utf8\") as version_file:\n version_match = re.search(\n r\"^__version__ = ['\\\"]([^'\\\"]*)['\\\"]\", version_file.read(), re.M\n )\n if version_match:\n version = version_match.group(1)\n else:\n raise RuntimeError(\"Unable to find version string.\")\n\nsetup(\n version=version,\n install_requires=[\n \"rubicon-objc>=0.4.4\",\n f\"toga-core=={version}\",\n ],\n)\n", "path": "iOS/setup.py"}, {"content": "#!/usr/bin/env python\nimport re\n\nfrom setuptools import setup\n\n# Version handline needs to be programatic because\n# we can't import toga_cocoa to compute the version;\n# and to support versioned subpackage dependencies\nwith open(\"src/toga_cocoa/__init__.py\", encoding=\"utf8\") as version_file:\n version_match = re.search(\n r\"^__version__ = ['\\\"]([^'\\\"]*)['\\\"]\", version_file.read(), re.M\n )\n if version_match:\n version = version_match.group(1)\n else:\n raise RuntimeError(\"Unable to find version string.\")\n\nsetup(\n version=version,\n install_requires=[\n \"rubicon-objc>=0.4.4\",\n f\"toga-core=={version}\",\n ],\n)\n", "path": "cocoa/setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\nimport re\n\nfrom setuptools import setup\n\n# Version handline needs to be programatic because\n# we can't import toga_iOS to compute the version;\n# and to support versioned subpackage dependencies\nwith open(\"src/toga_iOS/__init__.py\", encoding=\"utf8\") as version_file:\n version_match = re.search(\n r\"^__version__ = ['\\\"]([^'\\\"]*)['\\\"]\", version_file.read(), re.M\n )\n if version_match:\n version = version_match.group(1)\n else:\n raise RuntimeError(\"Unable to find version string.\")\n\nsetup(\n version=version,\n install_requires=[\n \"rubicon-objc >= 0.4.5rc1, < 0.5.0\",\n f\"toga-core == {version}\",\n ],\n)\n", "path": "iOS/setup.py"}, {"content": "#!/usr/bin/env python\nimport re\n\nfrom setuptools import setup\n\n# Version handline needs to be programatic because\n# we can't import toga_cocoa to compute the version;\n# and to support versioned subpackage dependencies\nwith open(\"src/toga_cocoa/__init__.py\", encoding=\"utf8\") as version_file:\n version_match = re.search(\n r\"^__version__ = ['\\\"]([^'\\\"]*)['\\\"]\", version_file.read(), re.M\n )\n if version_match:\n version = version_match.group(1)\n else:\n raise RuntimeError(\"Unable to find version string.\")\n\nsetup(\n version=version,\n install_requires=[\n \"rubicon-objc >= 0.4.5rc1, < 0.5.0\",\n f\"toga-core == {version}\",\n ],\n)\n", "path": "cocoa/setup.py"}]} | 1,508 | 230 |
gh_patches_debug_5408 | rasdani/github-patches | git_diff | Mailu__Mailu-2982 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Use official clamav docker image for Mailu clamav image
With https://github.com/Cisco-Talos/clamav now having official Docker support at https://hub.docker.com/r/clamav/clamav, it might be worth considering referring to or preferring that container in the future?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `optional/clamav/start.py`
Content:
```
1 #!/usr/bin/env python3
2
3 import os
4 import logging as logger
5 import sys
6 from socrate import system
7
8 system.set_env(log_filters=r'SelfCheck: Database status OK\.$')
9
10 # Bootstrap the database if clamav is running for the first time
11 if not os.path.isfile("/data/main.cvd"):
12 logger.info("Starting primary virus DB download")
13 os.system("freshclam")
14
15 # Run the update daemon
16 logger.info("Starting the update daemon")
17 os.system("freshclam -d -c 6")
18
19 # Run clamav
20 logger.info("Starting clamav")
21 os.system("clamd")
22
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/optional/clamav/start.py b/optional/clamav/start.py
deleted file mode 100755
--- a/optional/clamav/start.py
+++ /dev/null
@@ -1,21 +0,0 @@
-#!/usr/bin/env python3
-
-import os
-import logging as logger
-import sys
-from socrate import system
-
-system.set_env(log_filters=r'SelfCheck: Database status OK\.$')
-
-# Bootstrap the database if clamav is running for the first time
-if not os.path.isfile("/data/main.cvd"):
- logger.info("Starting primary virus DB download")
- os.system("freshclam")
-
-# Run the update daemon
-logger.info("Starting the update daemon")
-os.system("freshclam -d -c 6")
-
-# Run clamav
-logger.info("Starting clamav")
-os.system("clamd")
| {"golden_diff": "diff --git a/optional/clamav/start.py b/optional/clamav/start.py\ndeleted file mode 100755\n--- a/optional/clamav/start.py\n+++ /dev/null\n@@ -1,21 +0,0 @@\n-#!/usr/bin/env python3\n-\n-import os\n-import logging as logger\n-import sys\n-from socrate import system\n-\n-system.set_env(log_filters=r'SelfCheck: Database status OK\\.$')\n-\n-# Bootstrap the database if clamav is running for the first time\n-if not os.path.isfile(\"/data/main.cvd\"):\n- logger.info(\"Starting primary virus DB download\")\n- os.system(\"freshclam\")\n-\n-# Run the update daemon\n-logger.info(\"Starting the update daemon\")\n-os.system(\"freshclam -d -c 6\")\n-\n-# Run clamav\n-logger.info(\"Starting clamav\")\n-os.system(\"clamd\")\n", "issue": "Use official clamav docker image for Mailu clamav image\nWith https://github.com/Cisco-Talos/clamav having official docker support https://hub.docker.com/r/clamav/clamav it might be worth considering referring or preferring that container in the future?\n", "before_files": [{"content": "#!/usr/bin/env python3\n\nimport os\nimport logging as logger\nimport sys\nfrom socrate import system\n\nsystem.set_env(log_filters=r'SelfCheck: Database status OK\\.$')\n\n# Bootstrap the database if clamav is running for the first time\nif not os.path.isfile(\"/data/main.cvd\"):\n logger.info(\"Starting primary virus DB download\")\n os.system(\"freshclam\")\n\n# Run the update daemon\nlogger.info(\"Starting the update daemon\")\nos.system(\"freshclam -d -c 6\")\n\n# Run clamav\nlogger.info(\"Starting clamav\")\nos.system(\"clamd\")\n", "path": "optional/clamav/start.py"}], "after_files": [{"content": null, "path": "optional/clamav/start.py"}]} | 486 | 199 |
gh_patches_debug_27959 | rasdani/github-patches | git_diff | pwndbg__pwndbg-2009 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
pattern_create & run features
It is annoying to have to keep multiple versions of gdb around to complete some of my work. I don't understand why this feature hasn't been brought over yet, given that gdb-peda has already implemented it. Reversing takes long enough; this would make our lives a little bit easier.
I would like to add the pattern_create feature to pwndbg, as well as run, so that I can quickly create cyclic values and then run the output (run < payload-100.txt) to check the registers in under 10 seconds without restarting the program.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pwndbg/commands/cyclic.py`
Content:
```
1 from __future__ import annotations
2
3 import argparse
4 import string
5
6 import gdb
7 from pwnlib.util.cyclic import cyclic
8 from pwnlib.util.cyclic import cyclic_find
9
10 import pwndbg.commands
11 import pwndbg.gdblib.arch
12 from pwndbg.color import message
13
14 parser = argparse.ArgumentParser(description="Cyclic pattern creator/finder.")
15
16 parser.add_argument(
17 "-a",
18 "--alphabet",
19 metavar="charset",
20 default=string.ascii_lowercase,
21 type=str.encode,
22 help="The alphabet to use in the cyclic pattern",
23 )
24
25 parser.add_argument(
26 "-n",
27 "--length",
28 metavar="length",
29 type=int,
30 help="Size of the unique subsequences (defaults to the pointer size for the current arch)",
31 )
32
33 group = parser.add_mutually_exclusive_group(required=False)
34 group.add_argument(
35 "-l",
36 "-o",
37 "--offset",
38 "--lookup",
39 dest="lookup",
40 metavar="lookup_value",
41 type=str,
42 help="Do a lookup instead of printing the sequence (accepts constant values as well as expressions)",
43 )
44
45 group.add_argument(
46 "count",
47 type=int,
48 nargs="?",
49 default=100,
50 help="Number of characters to print from the sequence (default: print the entire sequence)",
51 )
52
53
54 @pwndbg.commands.ArgparsedCommand(parser, command_name="cyclic")
55 def cyclic_cmd(alphabet, length, lookup, count=100) -> None:
56 if length:
57 # Convert from gdb.Value
58 length = int(length)
59 else:
60 length = pwndbg.gdblib.arch.ptrsize
61
62 if lookup:
63 lookup = pwndbg.commands.fix(lookup, sloppy=True)
64
65 if isinstance(lookup, (gdb.Value, int)):
66 lookup = int(lookup).to_bytes(length, pwndbg.gdblib.arch.endian)
67 elif isinstance(lookup, str):
68 lookup = bytes(lookup, "utf-8")
69
70 if len(lookup) != length:
71 print(
72 message.error(
73 f"Lookup pattern must be {length} bytes (use `-n <length>` to lookup pattern of different length)"
74 )
75 )
76 return
77
78 hexstr = "0x" + lookup.hex()
79 print(
80 message.notice(
81 f"Finding cyclic pattern of {length} bytes: {str(lookup)} (hex: {hexstr})"
82 )
83 )
84
85 if any(c not in alphabet for c in lookup):
86 print(message.error("Pattern contains characters not present in the alphabet"))
87 return
88
89 offset = cyclic_find(lookup, alphabet, length)
90
91 if offset == -1:
92 print(message.error("Given lookup pattern does not exist in the sequence"))
93 else:
94 print(message.success(f"Found at offset {offset}"))
95 else:
96 sequence = cyclic(int(count), alphabet, length)
97 print(sequence.decode())
98
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pwndbg/commands/cyclic.py b/pwndbg/commands/cyclic.py
--- a/pwndbg/commands/cyclic.py
+++ b/pwndbg/commands/cyclic.py
@@ -30,6 +30,7 @@
help="Size of the unique subsequences (defaults to the pointer size for the current arch)",
)
+
group = parser.add_mutually_exclusive_group(required=False)
group.add_argument(
"-l",
@@ -50,9 +51,17 @@
help="Number of characters to print from the sequence (default: print the entire sequence)",
)
+parser.add_argument(
+ "filename",
+ type=str,
+ help="Name (path) of the file to save the cyclic pattern to",
+ default="",
+ nargs="?",
+)
+
@pwndbg.commands.ArgparsedCommand(parser, command_name="cyclic")
-def cyclic_cmd(alphabet, length, lookup, count=100) -> None:
+def cyclic_cmd(alphabet, length, lookup, count=100, filename="") -> None:
if length:
# Convert from gdb.Value
length = int(length)
@@ -93,5 +102,12 @@
else:
print(message.success(f"Found at offset {offset}"))
else:
- sequence = cyclic(int(count), alphabet, length)
- print(sequence.decode())
+ count = int(count)
+ sequence = cyclic(count, alphabet, length)
+
+ if not filename:
+ print(sequence.decode())
+ else:
+ with open(filename, "wb") as f:
+ f.write(sequence)
+ print(f"Written a cyclic sequence of length {count} to file {filename}")
| {"golden_diff": "diff --git a/pwndbg/commands/cyclic.py b/pwndbg/commands/cyclic.py\n--- a/pwndbg/commands/cyclic.py\n+++ b/pwndbg/commands/cyclic.py\n@@ -30,6 +30,7 @@\n help=\"Size of the unique subsequences (defaults to the pointer size for the current arch)\",\n )\n \n+\n group = parser.add_mutually_exclusive_group(required=False)\n group.add_argument(\n \"-l\",\n@@ -50,9 +51,17 @@\n help=\"Number of characters to print from the sequence (default: print the entire sequence)\",\n )\n \n+parser.add_argument(\n+ \"filename\",\n+ type=str,\n+ help=\"Name (path) of the file to save the cyclic pattern to\",\n+ default=\"\",\n+ nargs=\"?\",\n+)\n+\n \n @pwndbg.commands.ArgparsedCommand(parser, command_name=\"cyclic\")\n-def cyclic_cmd(alphabet, length, lookup, count=100) -> None:\n+def cyclic_cmd(alphabet, length, lookup, count=100, filename=\"\") -> None:\n if length:\n # Convert from gdb.Value\n length = int(length)\n@@ -93,5 +102,12 @@\n else:\n print(message.success(f\"Found at offset {offset}\"))\n else:\n- sequence = cyclic(int(count), alphabet, length)\n- print(sequence.decode())\n+ count = int(count)\n+ sequence = cyclic(count, alphabet, length)\n+\n+ if not filename:\n+ print(sequence.decode())\n+ else:\n+ with open(filename, \"wb\") as f:\n+ f.write(sequence)\n+ print(f\"Written a cyclic sequence of length {count} to file {filename}\")\n", "issue": "pattern_create & run features\nIt is annoying to have to have multiple versions of gdb to complete some of my work. I don't understand why this feature hasn't been brought over yet like gdb-peda has implemented. Reversing takes long enough, this would make our lives a little bit easier.\r\n\r\nI would like to add the pattern_create feature into pwndbg. As well as run, so that I can quickly create our cyclic values and then run our output (run < payload-100.txt) so we can check the registers in under 10 seconds without restarting the program. 
\n", "before_files": [{"content": "from __future__ import annotations\n\nimport argparse\nimport string\n\nimport gdb\nfrom pwnlib.util.cyclic import cyclic\nfrom pwnlib.util.cyclic import cyclic_find\n\nimport pwndbg.commands\nimport pwndbg.gdblib.arch\nfrom pwndbg.color import message\n\nparser = argparse.ArgumentParser(description=\"Cyclic pattern creator/finder.\")\n\nparser.add_argument(\n \"-a\",\n \"--alphabet\",\n metavar=\"charset\",\n default=string.ascii_lowercase,\n type=str.encode,\n help=\"The alphabet to use in the cyclic pattern\",\n)\n\nparser.add_argument(\n \"-n\",\n \"--length\",\n metavar=\"length\",\n type=int,\n help=\"Size of the unique subsequences (defaults to the pointer size for the current arch)\",\n)\n\ngroup = parser.add_mutually_exclusive_group(required=False)\ngroup.add_argument(\n \"-l\",\n \"-o\",\n \"--offset\",\n \"--lookup\",\n dest=\"lookup\",\n metavar=\"lookup_value\",\n type=str,\n help=\"Do a lookup instead of printing the sequence (accepts constant values as well as expressions)\",\n)\n\ngroup.add_argument(\n \"count\",\n type=int,\n nargs=\"?\",\n default=100,\n help=\"Number of characters to print from the sequence (default: print the entire sequence)\",\n)\n\n\[email protected](parser, command_name=\"cyclic\")\ndef cyclic_cmd(alphabet, length, lookup, count=100) -> None:\n if length:\n # Convert from gdb.Value\n length = int(length)\n else:\n length = pwndbg.gdblib.arch.ptrsize\n\n if lookup:\n lookup = pwndbg.commands.fix(lookup, sloppy=True)\n\n if isinstance(lookup, (gdb.Value, int)):\n lookup = int(lookup).to_bytes(length, pwndbg.gdblib.arch.endian)\n elif isinstance(lookup, str):\n lookup = bytes(lookup, \"utf-8\")\n\n if len(lookup) != length:\n print(\n message.error(\n f\"Lookup pattern must be {length} bytes (use `-n <length>` to lookup pattern of different length)\"\n )\n )\n return\n\n hexstr = \"0x\" + lookup.hex()\n print(\n message.notice(\n f\"Finding cyclic pattern of {length} bytes: {str(lookup)} (hex: {hexstr})\"\n )\n )\n\n if any(c not in alphabet for c in lookup):\n print(message.error(\"Pattern contains characters not present in the alphabet\"))\n return\n\n offset = cyclic_find(lookup, alphabet, length)\n\n if offset == -1:\n print(message.error(\"Given lookup pattern does not exist in the sequence\"))\n else:\n print(message.success(f\"Found at offset {offset}\"))\n else:\n sequence = cyclic(int(count), alphabet, length)\n print(sequence.decode())\n", "path": "pwndbg/commands/cyclic.py"}], "after_files": [{"content": "from __future__ import annotations\n\nimport argparse\nimport string\n\nimport gdb\nfrom pwnlib.util.cyclic import cyclic\nfrom pwnlib.util.cyclic import cyclic_find\n\nimport pwndbg.commands\nimport pwndbg.gdblib.arch\nfrom pwndbg.color import message\n\nparser = argparse.ArgumentParser(description=\"Cyclic pattern creator/finder.\")\n\nparser.add_argument(\n \"-a\",\n \"--alphabet\",\n metavar=\"charset\",\n default=string.ascii_lowercase,\n type=str.encode,\n help=\"The alphabet to use in the cyclic pattern\",\n)\n\nparser.add_argument(\n \"-n\",\n \"--length\",\n metavar=\"length\",\n type=int,\n help=\"Size of the unique subsequences (defaults to the pointer size for the current arch)\",\n)\n\n\ngroup = parser.add_mutually_exclusive_group(required=False)\ngroup.add_argument(\n \"-l\",\n \"-o\",\n \"--offset\",\n \"--lookup\",\n dest=\"lookup\",\n metavar=\"lookup_value\",\n type=str,\n help=\"Do a lookup instead of printing the sequence (accepts constant values as well as 
expressions)\",\n)\n\ngroup.add_argument(\n \"count\",\n type=int,\n nargs=\"?\",\n default=100,\n help=\"Number of characters to print from the sequence (default: print the entire sequence)\",\n)\n\nparser.add_argument(\n \"filename\",\n type=str,\n help=\"Name (path) of the file to save the cyclic pattern to\",\n default=\"\",\n nargs=\"?\",\n)\n\n\[email protected](parser, command_name=\"cyclic\")\ndef cyclic_cmd(alphabet, length, lookup, count=100, filename=\"\") -> None:\n if length:\n # Convert from gdb.Value\n length = int(length)\n else:\n length = pwndbg.gdblib.arch.ptrsize\n\n if lookup:\n lookup = pwndbg.commands.fix(lookup, sloppy=True)\n\n if isinstance(lookup, (gdb.Value, int)):\n lookup = int(lookup).to_bytes(length, pwndbg.gdblib.arch.endian)\n elif isinstance(lookup, str):\n lookup = bytes(lookup, \"utf-8\")\n\n if len(lookup) != length:\n print(\n message.error(\n f\"Lookup pattern must be {length} bytes (use `-n <length>` to lookup pattern of different length)\"\n )\n )\n return\n\n hexstr = \"0x\" + lookup.hex()\n print(\n message.notice(\n f\"Finding cyclic pattern of {length} bytes: {str(lookup)} (hex: {hexstr})\"\n )\n )\n\n if any(c not in alphabet for c in lookup):\n print(message.error(\"Pattern contains characters not present in the alphabet\"))\n return\n\n offset = cyclic_find(lookup, alphabet, length)\n\n if offset == -1:\n print(message.error(\"Given lookup pattern does not exist in the sequence\"))\n else:\n print(message.success(f\"Found at offset {offset}\"))\n else:\n count = int(count)\n sequence = cyclic(count, alphabet, length)\n\n if not filename:\n print(sequence.decode())\n else:\n with open(filename, \"wb\") as f:\n f.write(sequence)\n print(f\"Written a cyclic sequence of length {count} to file {filename}\")\n", "path": "pwndbg/commands/cyclic.py"}]} | 1,190 | 378 |
gh_patches_debug_687 | rasdani/github-patches | git_diff | hylang__hy-2220 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add header notice to "stable" line documentation to point users to the alpha cycle documentation
I was reading the documentation and noticed that hy.contrib.walk is mentioned there:
https://docs.hylang.org/en/stable/contrib/walk.html
However, it appears that the hy.contrib.walk file is no longer on the master branch:
https://github.com/hylang/hy/blob/6ba90fd3f853b2ddc391aa3358f9386c41d831c4/hy/contrib/walk.hy
Is it a bug in the documentation, or am I missing something?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docs/conf.py`
Content:
```
1 # This file is execfile()d with the current directory set to its containing dir.
2
3 import re, os, sys, time, html
4
5 sys.path.insert(0, os.path.abspath('..'))
6
7 extensions = [
8 'sphinx.ext.napoleon',
9 'sphinx.ext.intersphinx',
10 'sphinx.ext.autodoc',
11 'sphinx.ext.viewcode',
12 'sphinxcontrib.hydomain',
13 ]
14
15 from get_version import __version__ as hy_version
16
17 # Read the Docs might dirty its checkout, so strip the dirty flag.
18 hy_version = re.sub(r'[+.]dirty\Z', '', hy_version)
19
20 templates_path = ['_templates']
21 source_suffix = '.rst'
22
23 master_doc = 'index'
24
25 # General information about the project.
26 project = 'hy'
27 copyright = '%s the authors' % time.strftime('%Y')
28
29 # The version info for the project you're documenting, acts as replacement for
30 # |version| and |release|, also used in various other places throughout the
31 # built documents.
32 #
33 # The short X.Y version.
34 version = ".".join(hy_version.split(".")[:-1])
35 # The full version, including alpha/beta/rc tags.
36 release = hy_version
37 hy_descriptive_version = html.escape(hy_version)
38 if "+" in hy_version:
39 hy_descriptive_version += " <strong style='color: red;'>(unstable)</strong>"
40
41 exclude_patterns = ['_build', 'coreteam.rst']
42 add_module_names = True
43
44 pygments_style = 'sphinx'
45
46 import sphinx_rtd_theme
47 html_theme = 'sphinx_rtd_theme'
48 html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
49
50 # Add any paths that contain custom static files (such as style sheets) here,
51 # relative to this directory. They are copied after the builtin static files,
52 # so a file named "default.css" will overwrite the builtin "default.css".
53 html_static_path = ['_static']
54
55 html_use_smartypants = False
56 html_show_sphinx = False
57
58 html_context = dict(
59 hy_descriptive_version = hy_descriptive_version)
60
61 highlight_language = 'clojure'
62
63 intersphinx_mapping = dict(
64 py = ('https://docs.python.org/3/', None),
65 py3_10 = ('https://docs.python.org/3.10/', None),
66 hyrule = ('https://hyrule.readthedocs.io/en/master/', None))
67 # ** Generate Cheatsheet
68 import json
69 from pathlib import Path
70 from itertools import zip_longest
71
72 def refize(spec):
73 role = ':hy:func:'
74 if isinstance(spec, dict):
75 _name = spec['name']
76 uri = spec['uri']
77 if spec.get('internal'):
78 role = ':ref:'
79 else:
80 uri = spec
81 _name = str.split(uri, '.')[-1]
82 return '{}`{} <{}>`'.format(role, _name, uri)
83
84
85 def format_refs(refs, indent):
86 args = [iter(map(refize, refs))]
87 ref_groups = zip_longest(*args, fillvalue="")
88 return str.join(
89 ' \\\n' + ' ' * (indent + 3),
90 [str.join(' ', ref_group) for ref_group in ref_groups],
91 )
92
93
94 def format_row(category, divider_loc):
95 return '{title: <{width}} | {methods}'.format(
96 width=divider_loc,
97 title=category['name'],
98 methods=format_refs(category['methods'], divider_loc)
99 )
100
101
102 def format_table(table_spec):
103 table_name = table_spec['name']
104 categories = table_spec['categories']
105 longest_cat_name = max(len(category['name']) for category in categories)
106 table = [
107 table_name,
108 '-' * len(table_name),
109 '',
110 '=' * longest_cat_name + ' ' + '=' * 25,
111 *(format_row(category, longest_cat_name) for category in categories),
112 '=' * longest_cat_name + ' ' + '=' * 25,
113 ''
114 ]
115 return '\n'.join(table)
116
117
118 # Modifications to the cheatsheet should be added in `cheatsheet.json`
119 cheatsheet_spec = json.loads(Path('./docs/cheatsheet.json').read_text())
120 cheatsheet = [
121 '..',
122 ' DO NOT MODIFY THIS FILE. IT IS AUTO GENERATED BY ``conf.py``',
123 ' If you need to change or add methods, modify ``cheatsheet_spec`` in ``conf.py``',
124 '',
125 '.. _cheatsheet:',
126 '',
127 'Cheatsheet',
128 '==========',
129 '',
130 *map(format_table, cheatsheet_spec),
131 ]
132 Path('./docs/cheatsheet.rst').write_text('\n'.join(cheatsheet))
133
134
135 # ** Sphinx App Setup
136
137
138 def setup(app):
139 app.add_css_file('overrides.css')
140
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -56,7 +56,9 @@
html_show_sphinx = False
html_context = dict(
- hy_descriptive_version = hy_descriptive_version)
+ hy_descriptive_version = hy_descriptive_version,
+ has_active_alpha = True,
+)
highlight_language = 'clojure'
| {"golden_diff": "diff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -56,7 +56,9 @@\n html_show_sphinx = False\n \n html_context = dict(\n- hy_descriptive_version = hy_descriptive_version)\n+ hy_descriptive_version = hy_descriptive_version,\n+ has_active_alpha = True,\n+)\n \n highlight_language = 'clojure'\n", "issue": "Add header notice to \"stable\" line documentation to point users to the alpha cycle documentation\nI was reading documentation and noticed that hy.contrib.walk is mentioned there:\r\nhttps://docs.hylang.org/en/stable/contrib/walk.html\r\n\r\nhowever it appears that hy.contrib.walk file is no longer on the master branch. \r\nhttps://github.com/hylang/hy/blob/6ba90fd3f853b2ddc391aa3358f9386c41d831c4/hy/contrib/walk.hy\r\n\r\nis it a bug in documentation or I'm missing something? \r\n\r\n\n", "before_files": [{"content": "# This file is execfile()d with the current directory set to its containing dir.\n\nimport re, os, sys, time, html\n\nsys.path.insert(0, os.path.abspath('..'))\n\nextensions = [\n 'sphinx.ext.napoleon',\n 'sphinx.ext.intersphinx',\n 'sphinx.ext.autodoc',\n 'sphinx.ext.viewcode',\n 'sphinxcontrib.hydomain',\n]\n\nfrom get_version import __version__ as hy_version\n\n# Read the Docs might dirty its checkout, so strip the dirty flag.\nhy_version = re.sub(r'[+.]dirty\\Z', '', hy_version)\n\ntemplates_path = ['_templates']\nsource_suffix = '.rst'\n\nmaster_doc = 'index'\n\n# General information about the project.\nproject = 'hy'\ncopyright = '%s the authors' % time.strftime('%Y')\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\n# The short X.Y version.\nversion = \".\".join(hy_version.split(\".\")[:-1])\n# The full version, including alpha/beta/rc tags.\nrelease = hy_version\nhy_descriptive_version = html.escape(hy_version)\nif \"+\" in hy_version:\n hy_descriptive_version += \" <strong style='color: red;'>(unstable)</strong>\"\n\nexclude_patterns = ['_build', 'coreteam.rst']\nadd_module_names = True\n\npygments_style = 'sphinx'\n\nimport sphinx_rtd_theme\nhtml_theme = 'sphinx_rtd_theme'\nhtml_theme_path = [sphinx_rtd_theme.get_html_theme_path()]\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. 
They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['_static']\n\nhtml_use_smartypants = False\nhtml_show_sphinx = False\n\nhtml_context = dict(\n hy_descriptive_version = hy_descriptive_version)\n\nhighlight_language = 'clojure'\n\nintersphinx_mapping = dict(\n py = ('https://docs.python.org/3/', None),\n py3_10 = ('https://docs.python.org/3.10/', None),\n hyrule = ('https://hyrule.readthedocs.io/en/master/', None))\n# ** Generate Cheatsheet\nimport json\nfrom pathlib import Path\nfrom itertools import zip_longest\n\ndef refize(spec):\n role = ':hy:func:'\n if isinstance(spec, dict):\n _name = spec['name']\n uri = spec['uri']\n if spec.get('internal'):\n role = ':ref:'\n else:\n uri = spec\n _name = str.split(uri, '.')[-1]\n return '{}`{} <{}>`'.format(role, _name, uri)\n\n\ndef format_refs(refs, indent):\n args = [iter(map(refize, refs))]\n ref_groups = zip_longest(*args, fillvalue=\"\")\n return str.join(\n ' \\\\\\n' + ' ' * (indent + 3),\n [str.join(' ', ref_group) for ref_group in ref_groups],\n )\n\n\ndef format_row(category, divider_loc):\n return '{title: <{width}} | {methods}'.format(\n width=divider_loc,\n title=category['name'],\n methods=format_refs(category['methods'], divider_loc)\n )\n\n\ndef format_table(table_spec):\n table_name = table_spec['name']\n categories = table_spec['categories']\n longest_cat_name = max(len(category['name']) for category in categories)\n table = [\n table_name,\n '-' * len(table_name),\n '',\n '=' * longest_cat_name + ' ' + '=' * 25,\n *(format_row(category, longest_cat_name) for category in categories),\n '=' * longest_cat_name + ' ' + '=' * 25,\n ''\n ]\n return '\\n'.join(table)\n\n\n# Modifications to the cheatsheet should be added in `cheatsheet.json`\ncheatsheet_spec = json.loads(Path('./docs/cheatsheet.json').read_text())\ncheatsheet = [\n '..',\n ' DO NOT MODIFY THIS FILE. IT IS AUTO GENERATED BY ``conf.py``',\n ' If you need to change or add methods, modify ``cheatsheet_spec`` in ``conf.py``',\n '',\n '.. 
_cheatsheet:',\n '',\n 'Cheatsheet',\n '==========',\n '',\n *map(format_table, cheatsheet_spec),\n]\nPath('./docs/cheatsheet.rst').write_text('\\n'.join(cheatsheet))\n\n\n# ** Sphinx App Setup\n\n\ndef setup(app):\n app.add_css_file('overrides.css')\n", "path": "docs/conf.py"}], "after_files": [{"content": "# This file is execfile()d with the current directory set to its containing dir.\n\nimport re, os, sys, time, html\n\nsys.path.insert(0, os.path.abspath('..'))\n\nextensions = [\n 'sphinx.ext.napoleon',\n 'sphinx.ext.intersphinx',\n 'sphinx.ext.autodoc',\n 'sphinx.ext.viewcode',\n 'sphinxcontrib.hydomain',\n]\n\nfrom get_version import __version__ as hy_version\n\n# Read the Docs might dirty its checkout, so strip the dirty flag.\nhy_version = re.sub(r'[+.]dirty\\Z', '', hy_version)\n\ntemplates_path = ['_templates']\nsource_suffix = '.rst'\n\nmaster_doc = 'index'\n\n# General information about the project.\nproject = 'hy'\ncopyright = '%s the authors' % time.strftime('%Y')\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\n# The short X.Y version.\nversion = \".\".join(hy_version.split(\".\")[:-1])\n# The full version, including alpha/beta/rc tags.\nrelease = hy_version\nhy_descriptive_version = html.escape(hy_version)\nif \"+\" in hy_version:\n hy_descriptive_version += \" <strong style='color: red;'>(unstable)</strong>\"\n\nexclude_patterns = ['_build', 'coreteam.rst']\nadd_module_names = True\n\npygments_style = 'sphinx'\n\nimport sphinx_rtd_theme\nhtml_theme = 'sphinx_rtd_theme'\nhtml_theme_path = [sphinx_rtd_theme.get_html_theme_path()]\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. 
They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['_static']\n\nhtml_use_smartypants = False\nhtml_show_sphinx = False\n\nhtml_context = dict(\n hy_descriptive_version = hy_descriptive_version,\n has_active_alpha = True,\n)\n\nhighlight_language = 'clojure'\n\nintersphinx_mapping = dict(\n py = ('https://docs.python.org/3/', None),\n py3_10 = ('https://docs.python.org/3.10/', None),\n hyrule = ('https://hyrule.readthedocs.io/en/master/', None))\n# ** Generate Cheatsheet\nimport json\nfrom pathlib import Path\nfrom itertools import zip_longest\n\ndef refize(spec):\n role = ':hy:func:'\n if isinstance(spec, dict):\n _name = spec['name']\n uri = spec['uri']\n if spec.get('internal'):\n role = ':ref:'\n else:\n uri = spec\n _name = str.split(uri, '.')[-1]\n return '{}`{} <{}>`'.format(role, _name, uri)\n\n\ndef format_refs(refs, indent):\n args = [iter(map(refize, refs))]\n ref_groups = zip_longest(*args, fillvalue=\"\")\n return str.join(\n ' \\\\\\n' + ' ' * (indent + 3),\n [str.join(' ', ref_group) for ref_group in ref_groups],\n )\n\n\ndef format_row(category, divider_loc):\n return '{title: <{width}} | {methods}'.format(\n width=divider_loc,\n title=category['name'],\n methods=format_refs(category['methods'], divider_loc)\n )\n\n\ndef format_table(table_spec):\n table_name = table_spec['name']\n categories = table_spec['categories']\n longest_cat_name = max(len(category['name']) for category in categories)\n table = [\n table_name,\n '-' * len(table_name),\n '',\n '=' * longest_cat_name + ' ' + '=' * 25,\n *(format_row(category, longest_cat_name) for category in categories),\n '=' * longest_cat_name + ' ' + '=' * 25,\n ''\n ]\n return '\\n'.join(table)\n\n\n# Modifications to the cheatsheet should be added in `cheatsheet.json`\ncheatsheet_spec = json.loads(Path('./docs/cheatsheet.json').read_text())\ncheatsheet = [\n '..',\n ' DO NOT MODIFY THIS FILE. IT IS AUTO GENERATED BY ``conf.py``',\n ' If you need to change or add methods, modify ``cheatsheet_spec`` in ``conf.py``',\n '',\n '.. _cheatsheet:',\n '',\n 'Cheatsheet',\n '==========',\n '',\n *map(format_table, cheatsheet_spec),\n]\nPath('./docs/cheatsheet.rst').write_text('\\n'.join(cheatsheet))\n\n\n# ** Sphinx App Setup\n\n\ndef setup(app):\n app.add_css_file('overrides.css')\n", "path": "docs/conf.py"}]} | 1,737 | 90 |
gh_patches_debug_27568 | rasdani/github-patches | git_diff | spack__spack-18325 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Installation issue: py-lxml doesn't use the spack libxslt and libexslt libraries
It looks like py-lxml should have AUTO_RPATH set to true; otherwise it picks up the OS versions of libxslt instead of the Spack-built versions. I added this to the package.py, and the library dependencies were then correct:
```
def setup_build_environment(self, env):
env.set('AUTO_RPATH', 'true')
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `var/spack/repos/builtin/packages/py-lxml/package.py`
Content:
```
1 # Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
2 # Spack Project Developers. See the top-level COPYRIGHT file for details.
3 #
4 # SPDX-License-Identifier: (Apache-2.0 OR MIT)
5
6 from spack import *
7
8
9 class PyLxml(PythonPackage):
10 """lxml is the most feature-rich and easy-to-use library for processing
11 XML and HTML in the Python language."""
12
13 homepage = "http://lxml.de/"
14 url = "https://pypi.io/packages/source/l/lxml/lxml-4.4.1.tar.gz"
15
16 version('4.4.1', sha256='c81cb40bff373ab7a7446d6bbca0190bccc5be3448b47b51d729e37799bb5692')
17 version('4.3.3', sha256='4a03dd682f8e35a10234904e0b9508d705ff98cf962c5851ed052e9340df3d90')
18 version('4.2.5', sha256='36720698c29e7a9626a0dc802ef8885f8f0239bfd1689628ecd459a061f2807f')
19 version('3.7.3', sha256='aa502d78a51ee7d127b4824ff96500f0181d3c7826e6ee7b800d068be79361c7')
20 version('2.3', sha256='eea1b8d29532739c1383cb4794c5eacd6176f0972b59e8d29348335b87ff2e66')
21
22 variant('html5', default=False, description='Enable html5lib backend')
23 variant('htmlsoup', default=False, description='Enable BeautifulSoup4 backend')
24 variant('cssselect', default=False, description='Enable cssselect module')
25
26 depends_on('[email protected]:2.8,3.5:', type=('build', 'run'))
27 depends_on('py-setuptools', type='build')
28 depends_on('libxml2', type=('build', 'run'))
29 depends_on('libxslt', type=('build', 'run'))
30 depends_on('py-html5lib', when='+html5', type=('build', 'run'))
31 depends_on('py-beautifulsoup4', when='+htmlsoup', type=('build', 'run'))
32 depends_on('[email protected]:', when='+cssselect', type=('build', 'run'))
33
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/var/spack/repos/builtin/packages/py-lxml/package.py b/var/spack/repos/builtin/packages/py-lxml/package.py
--- a/var/spack/repos/builtin/packages/py-lxml/package.py
+++ b/var/spack/repos/builtin/packages/py-lxml/package.py
@@ -13,6 +13,7 @@
homepage = "http://lxml.de/"
url = "https://pypi.io/packages/source/l/lxml/lxml-4.4.1.tar.gz"
+ version('4.5.2', sha256='cdc13a1682b2a6241080745b1953719e7fe0850b40a5c71ca574f090a1391df6')
version('4.4.1', sha256='c81cb40bff373ab7a7446d6bbca0190bccc5be3448b47b51d729e37799bb5692')
version('4.3.3', sha256='4a03dd682f8e35a10234904e0b9508d705ff98cf962c5851ed052e9340df3d90')
version('4.2.5', sha256='36720698c29e7a9626a0dc802ef8885f8f0239bfd1689628ecd459a061f2807f')
@@ -25,8 +26,8 @@
depends_on('[email protected]:2.8,3.5:', type=('build', 'run'))
depends_on('py-setuptools', type='build')
- depends_on('libxml2', type=('build', 'run'))
- depends_on('libxslt', type=('build', 'run'))
+ depends_on('libxml2', type=('build', 'link', 'run'))
+ depends_on('libxslt', type=('build', 'link', 'run'))
depends_on('py-html5lib', when='+html5', type=('build', 'run'))
depends_on('py-beautifulsoup4', when='+htmlsoup', type=('build', 'run'))
depends_on('[email protected]:', when='+cssselect', type=('build', 'run'))
| {"golden_diff": "diff --git a/var/spack/repos/builtin/packages/py-lxml/package.py b/var/spack/repos/builtin/packages/py-lxml/package.py\n--- a/var/spack/repos/builtin/packages/py-lxml/package.py\n+++ b/var/spack/repos/builtin/packages/py-lxml/package.py\n@@ -13,6 +13,7 @@\n homepage = \"http://lxml.de/\"\n url = \"https://pypi.io/packages/source/l/lxml/lxml-4.4.1.tar.gz\"\n \n+ version('4.5.2', sha256='cdc13a1682b2a6241080745b1953719e7fe0850b40a5c71ca574f090a1391df6')\n version('4.4.1', sha256='c81cb40bff373ab7a7446d6bbca0190bccc5be3448b47b51d729e37799bb5692')\n version('4.3.3', sha256='4a03dd682f8e35a10234904e0b9508d705ff98cf962c5851ed052e9340df3d90')\n version('4.2.5', sha256='36720698c29e7a9626a0dc802ef8885f8f0239bfd1689628ecd459a061f2807f')\n@@ -25,8 +26,8 @@\n \n depends_on('[email protected]:2.8,3.5:', type=('build', 'run'))\n depends_on('py-setuptools', type='build')\n- depends_on('libxml2', type=('build', 'run'))\n- depends_on('libxslt', type=('build', 'run'))\n+ depends_on('libxml2', type=('build', 'link', 'run'))\n+ depends_on('libxslt', type=('build', 'link', 'run'))\n depends_on('py-html5lib', when='+html5', type=('build', 'run'))\n depends_on('py-beautifulsoup4', when='+htmlsoup', type=('build', 'run'))\n depends_on('[email protected]:', when='+cssselect', type=('build', 'run'))\n", "issue": "Installation issue: py-lxml doesn't use the spack libxslt and libexslt libraries\nIt looks like py-lxml should have AUTO_RPATH set to true. Otherwise it picks up the OS versions of libxslt instead of the spack built versions. I added this to the package.py and the library dependencies were correct:\r\n\r\n```\r\n def setup_build_environment(self, env):\r\n env.set('AUTO_RPATH', 'true')\r\n```\n", "before_files": [{"content": "# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other\n# Spack Project Developers. 
See the top-level COPYRIGHT file for details.\n#\n# SPDX-License-Identifier: (Apache-2.0 OR MIT)\n\nfrom spack import *\n\n\nclass PyLxml(PythonPackage):\n \"\"\"lxml is the most feature-rich and easy-to-use library for processing\n XML and HTML in the Python language.\"\"\"\n\n homepage = \"http://lxml.de/\"\n url = \"https://pypi.io/packages/source/l/lxml/lxml-4.4.1.tar.gz\"\n\n version('4.4.1', sha256='c81cb40bff373ab7a7446d6bbca0190bccc5be3448b47b51d729e37799bb5692')\n version('4.3.3', sha256='4a03dd682f8e35a10234904e0b9508d705ff98cf962c5851ed052e9340df3d90')\n version('4.2.5', sha256='36720698c29e7a9626a0dc802ef8885f8f0239bfd1689628ecd459a061f2807f')\n version('3.7.3', sha256='aa502d78a51ee7d127b4824ff96500f0181d3c7826e6ee7b800d068be79361c7')\n version('2.3', sha256='eea1b8d29532739c1383cb4794c5eacd6176f0972b59e8d29348335b87ff2e66')\n\n variant('html5', default=False, description='Enable html5lib backend')\n variant('htmlsoup', default=False, description='Enable BeautifulSoup4 backend')\n variant('cssselect', default=False, description='Enable cssselect module')\n\n depends_on('[email protected]:2.8,3.5:', type=('build', 'run'))\n depends_on('py-setuptools', type='build')\n depends_on('libxml2', type=('build', 'run'))\n depends_on('libxslt', type=('build', 'run'))\n depends_on('py-html5lib', when='+html5', type=('build', 'run'))\n depends_on('py-beautifulsoup4', when='+htmlsoup', type=('build', 'run'))\n depends_on('[email protected]:', when='+cssselect', type=('build', 'run'))\n", "path": "var/spack/repos/builtin/packages/py-lxml/package.py"}], "after_files": [{"content": "# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other\n# Spack Project Developers. See the top-level COPYRIGHT file for details.\n#\n# SPDX-License-Identifier: (Apache-2.0 OR MIT)\n\nfrom spack import *\n\n\nclass PyLxml(PythonPackage):\n \"\"\"lxml is the most feature-rich and easy-to-use library for processing\n XML and HTML in the Python language.\"\"\"\n\n homepage = \"http://lxml.de/\"\n url = \"https://pypi.io/packages/source/l/lxml/lxml-4.4.1.tar.gz\"\n\n version('4.5.2', sha256='cdc13a1682b2a6241080745b1953719e7fe0850b40a5c71ca574f090a1391df6')\n version('4.4.1', sha256='c81cb40bff373ab7a7446d6bbca0190bccc5be3448b47b51d729e37799bb5692')\n version('4.3.3', sha256='4a03dd682f8e35a10234904e0b9508d705ff98cf962c5851ed052e9340df3d90')\n version('4.2.5', sha256='36720698c29e7a9626a0dc802ef8885f8f0239bfd1689628ecd459a061f2807f')\n version('3.7.3', sha256='aa502d78a51ee7d127b4824ff96500f0181d3c7826e6ee7b800d068be79361c7')\n version('2.3', sha256='eea1b8d29532739c1383cb4794c5eacd6176f0972b59e8d29348335b87ff2e66')\n\n variant('html5', default=False, description='Enable html5lib backend')\n variant('htmlsoup', default=False, description='Enable BeautifulSoup4 backend')\n variant('cssselect', default=False, description='Enable cssselect module')\n\n depends_on('[email protected]:2.8,3.5:', type=('build', 'run'))\n depends_on('py-setuptools', type='build')\n depends_on('libxml2', type=('build', 'link', 'run'))\n depends_on('libxslt', type=('build', 'link', 'run'))\n depends_on('py-html5lib', when='+html5', type=('build', 'run'))\n depends_on('py-beautifulsoup4', when='+htmlsoup', type=('build', 'run'))\n depends_on('[email protected]:', when='+cssselect', type=('build', 'run'))\n", "path": "var/spack/repos/builtin/packages/py-lxml/package.py"}]} | 1,096 | 587 |
gh_patches_debug_1252 | rasdani/github-patches | git_diff | mitmproxy__mitmproxy-4762 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
When too many requests come simultaneously, mitmdump raised an error and quit [ValueError: too many file descriptors in select()]
#### Problem Description
A clear and concise description of what the bug is.
When too many requests come simultaneously, mitmdump raised an error and quit.
Traceback (most recent call last):
File "mitmdump", line 3, in <module>
File "mitmproxy\tools\main.py", line 147, in mitmdump
File "mitmproxy\tools\main.py", line 114, in run
File "mitmproxy\master.py", line 76, in run
File "mitmproxy\master.py", line 59, in run_loop
File "mitmproxy\master.py", line 95, in shutdown
File "asyncio\base_events.py", line 629, in run_until_complete
File "asyncio\base_events.py", line 596, in run_forever
File "asyncio\base_events.py", line 1854, in _run_once
File "selectors.py", line 324, in select
File "selectors.py", line 315, in _select
ValueError: too many file descriptors in select()
[77436] Failed to execute script 'mitmdump' due to unhandled exception!
I googled the error message, and found the following answer. Don't know if it's related.
https://stackoverflow.com/questions/57182009/why-am-i-getting-an-valueerror-too-many-file-descriptors-in-select
#### Steps to reproduce the behavior:
1. I use the following command
`mitmdump.exe -p 8080 --anticomp -q -s "d:\redirect-router.py"`
In the script, I rewrite the host for a specific URL.
2.
3.
#### System Information
Paste the output of "mitmproxy --version" here.
mitmproxy --version
Mitmproxy: 7.0.2 binary
Python: 3.9.6
OpenSSL: OpenSSL 1.1.1k 25 Mar 2021
Platform: Windows-10-10.0.18363-SP0
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mitmproxy/__init__.py`
Content:
```
1 import asyncio
2 import sys
3
4 if sys.platform == 'win32':
5 # workaround for
6 # https://github.com/tornadoweb/tornado/issues/2751
7 # https://www.tornadoweb.org/en/stable/index.html#installation
8 # (copied multiple times in the codebase, please remove all occurrences)
9 asyncio.set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy())
10
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mitmproxy/__init__.py b/mitmproxy/__init__.py
--- a/mitmproxy/__init__.py
+++ b/mitmproxy/__init__.py
@@ -1,9 +0,0 @@
-import asyncio
-import sys
-
-if sys.platform == 'win32':
- # workaround for
- # https://github.com/tornadoweb/tornado/issues/2751
- # https://www.tornadoweb.org/en/stable/index.html#installation
- # (copied multiple times in the codebase, please remove all occurrences)
- asyncio.set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy())
| {"golden_diff": "diff --git a/mitmproxy/__init__.py b/mitmproxy/__init__.py\n--- a/mitmproxy/__init__.py\n+++ b/mitmproxy/__init__.py\n@@ -1,9 +0,0 @@\n-import asyncio\n-import sys\n-\n-if sys.platform == 'win32':\n- # workaround for\n- # https://github.com/tornadoweb/tornado/issues/2751\n- # https://www.tornadoweb.org/en/stable/index.html#installation\n- # (copied multiple times in the codebase, please remove all occurrences)\n- asyncio.set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy())\n", "issue": "When too many requests come simultaneously, mitmdump called an error and quited [ValueError: too many file descriptors in select()]\n#### Problem Description\r\nA clear and concise description of what the bug is.\r\nWhen too many requests come simultaneously, mitmdump called an error and quited.\r\nTraceback (most recent call last):\r\n File \"mitmdump\", line 3, in <module>\r\n File \"mitmproxy\\tools\\main.py\", line 147, in mitmdump\r\n File \"mitmproxy\\tools\\main.py\", line 114, in run\r\n File \"mitmproxy\\master.py\", line 76, in run\r\n File \"mitmproxy\\master.py\", line 59, in run_loop\r\n File \"mitmproxy\\master.py\", line 95, in shutdown\r\n File \"asyncio\\base_events.py\", line 629, in run_until_complete\r\n File \"asyncio\\base_events.py\", line 596, in run_forever\r\n File \"asyncio\\base_events.py\", line 1854, in _run_once\r\n File \"selectors.py\", line 324, in select\r\n File \"selectors.py\", line 315, in _select\r\nValueError: too many file descriptors in select()\r\n[77436] Failed to execute script 'mitmdump' due to unhandled exception!\r\n\r\nI googled the error message, and found the following answer. Don't know if it's related.\r\nhttps://stackoverflow.com/questions/57182009/why-am-i-getting-an-valueerror-too-many-file-descriptors-in-select\r\n\r\n#### Steps to reproduce the behavior:\r\n1. I use the following command\r\n`mitmdump.exe -p 8080 --anticomp -q -s \"d:\\redirect-router.py\"`\r\nIn the script, I re-write the host for a specific URL\r\n2. \r\n3. \r\n\r\n#### System Information\r\nPaste the output of \"mitmproxy --version\" here.\r\nmitmproxy --version\r\nMitmproxy: 7.0.2 binary\r\nPython: 3.9.6\r\nOpenSSL: OpenSSL 1.1.1k 25 Mar 2021\r\nPlatform: Windows-10-10.0.18363-SP0\n", "before_files": [{"content": "import asyncio\nimport sys\n\nif sys.platform == 'win32':\n # workaround for\n # https://github.com/tornadoweb/tornado/issues/2751\n # https://www.tornadoweb.org/en/stable/index.html#installation\n # (copied multiple times in the codebase, please remove all occurrences)\n asyncio.set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy())\n", "path": "mitmproxy/__init__.py"}], "after_files": [{"content": "", "path": "mitmproxy/__init__.py"}]} | 871 | 144 |
gh_patches_debug_23336 | rasdani/github-patches | git_diff | pytorch__ignite-286 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
tqdm_logger: metric_names is currently not optional
Hi,
https://github.com/pytorch/ignite/blob/master/ignite/contrib/handlers/tqdm_logger.py#L75
This line should be modified to make `metric_names` optional. Here is a suggestion:
```
if metric_names is not None and not isinstance(metric_names, list):
raise TypeError("metric_names should be a list, got {} instead".format(type(metric_names)))
```
Thanks
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ignite/contrib/handlers/tqdm_logger.py`
Content:
```
1 try:
2 from tqdm import tqdm
3 except ImportError:
4 raise RuntimeError("This contrib module requires tqdm to be installed")
5
6 from ignite.engine import Events
7
8
9 class ProgressBar:
10 """
11 TQDM progress bar handler to log training progress and computed metrics.
12
13 Examples:
14
15 Create a progress bar that shows you some metrics as they are computed,
16 by simply attaching the progress bar object to your engine.
17
18 .. code-block:: python
19
20 pbar = ProgressBar()
21 pbar.attach(trainer, ['loss'])
22
23 Note:
24 When adding attaching the progress bar to an engine, it is recommend that you replace
25 every print operation in the engine's handlers triggered every iteration with
26 ``pbar.log_message`` to guarantee the correct format of the stdout.
27 """
28
29 def __init__(self):
30 self.pbar = None
31
32 def _reset(self, engine):
33 self.pbar = tqdm(
34 total=len(engine.state.dataloader),
35 leave=False,
36 bar_format='{desc}[{n_fmt}/{total_fmt}] {percentage:3.0f}%|{bar}{postfix} [{elapsed}<{remaining}]')
37
38 def _close(self, engine):
39 self.pbar.close()
40 self.pbar = None
41
42 def _update(self, engine, metric_names=None):
43 if self.pbar is None:
44 self._reset(engine)
45
46 self.pbar.set_description('Epoch {}'.format(engine.state.epoch))
47
48 if metric_names is not None:
49 if not all(metric in engine.state.metrics for metric in metric_names):
50 raise KeyError("metrics not found in engine.state.metrics")
51
52 metrics = {name: '{:.2e}'.format(engine.state.metrics[name]) for name in metric_names}
53 self.pbar.set_postfix(**metrics)
54
55 self.pbar.update()
56
57 @staticmethod
58 def log_message(message):
59 """
60 Logs a message, preserving the progress bar correct output format
61
62 Args:
63 message (str): string you wish to log
64 """
65 tqdm.write(message)
66
67 def attach(self, engine, metric_names=None):
68 """
69 Attaches the progress bar to an engine object
70
71 Args:
72 engine (Engine): engine object
73 metric_names (list): (Optional) list of the metrics names to log as the bar progresses
74 """
75 if not isinstance(metric_names, list):
76 raise TypeError("metric_names should be a list, got {} instead".format(type(metric_names)))
77
78 engine.add_event_handler(Events.EPOCH_COMPLETED, self._close)
79 engine.add_event_handler(Events.ITERATION_COMPLETED, self._update, metric_names)
80
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ignite/contrib/handlers/tqdm_logger.py b/ignite/contrib/handlers/tqdm_logger.py
--- a/ignite/contrib/handlers/tqdm_logger.py
+++ b/ignite/contrib/handlers/tqdm_logger.py
@@ -47,6 +47,7 @@
if metric_names is not None:
if not all(metric in engine.state.metrics for metric in metric_names):
+ self._close(engine)
raise KeyError("metrics not found in engine.state.metrics")
metrics = {name: '{:.2e}'.format(engine.state.metrics[name]) for name in metric_names}
@@ -72,7 +73,7 @@
engine (Engine): engine object
metric_names (list): (Optional) list of the metrics names to log as the bar progresses
"""
- if not isinstance(metric_names, list):
+ if metric_names is not None and not isinstance(metric_names, list):
raise TypeError("metric_names should be a list, got {} instead".format(type(metric_names)))
engine.add_event_handler(Events.EPOCH_COMPLETED, self._close)
| {"golden_diff": "diff --git a/ignite/contrib/handlers/tqdm_logger.py b/ignite/contrib/handlers/tqdm_logger.py\n--- a/ignite/contrib/handlers/tqdm_logger.py\n+++ b/ignite/contrib/handlers/tqdm_logger.py\n@@ -47,6 +47,7 @@\n \n if metric_names is not None:\n if not all(metric in engine.state.metrics for metric in metric_names):\n+ self._close(engine)\n raise KeyError(\"metrics not found in engine.state.metrics\")\n \n metrics = {name: '{:.2e}'.format(engine.state.metrics[name]) for name in metric_names}\n@@ -72,7 +73,7 @@\n engine (Engine): engine object\n metric_names (list): (Optional) list of the metrics names to log as the bar progresses\n \"\"\"\n- if not isinstance(metric_names, list):\n+ if metric_names is not None and not isinstance(metric_names, list):\n raise TypeError(\"metric_names should be a list, got {} instead\".format(type(metric_names)))\n \n engine.add_event_handler(Events.EPOCH_COMPLETED, self._close)\n", "issue": "tqm_logger: metric_names is currently not optional \nHi,\r\n\r\nhttps://github.com/pytorch/ignite/blob/master/ignite/contrib/handlers/tqdm_logger.py#L75\r\nThis line should be modified to make `metric_names` optional. Here is a suggestion:\r\n```\r\nif metric_names is not None and not isinstance(metric_names, list):\r\n raise TypeError(\"metric_names should be a list, got {} instead\".format(type(metric_names)))\r\n```\r\n\r\nThanks\n", "before_files": [{"content": "try:\n from tqdm import tqdm\nexcept ImportError:\n raise RuntimeError(\"This contrib module requires tqdm to be installed\")\n\nfrom ignite.engine import Events\n\n\nclass ProgressBar:\n \"\"\"\n TQDM progress bar handler to log training progress and computed metrics.\n\n Examples:\n\n Create a progress bar that shows you some metrics as they are computed,\n by simply attaching the progress bar object to your engine.\n\n .. 
code-block:: python\n\n pbar = ProgressBar()\n pbar.attach(trainer, ['loss'])\n\n Note:\n When adding attaching the progress bar to an engine, it is recommend that you replace\n every print operation in the engine's handlers triggered every iteration with\n ``pbar.log_message`` to guarantee the correct format of the stdout.\n \"\"\"\n\n def __init__(self):\n self.pbar = None\n\n def _reset(self, engine):\n self.pbar = tqdm(\n total=len(engine.state.dataloader),\n leave=False,\n bar_format='{desc}[{n_fmt}/{total_fmt}] {percentage:3.0f}%|{bar}{postfix} [{elapsed}<{remaining}]')\n\n def _close(self, engine):\n self.pbar.close()\n self.pbar = None\n\n def _update(self, engine, metric_names=None):\n if self.pbar is None:\n self._reset(engine)\n\n self.pbar.set_description('Epoch {}'.format(engine.state.epoch))\n\n if metric_names is not None:\n if not all(metric in engine.state.metrics for metric in metric_names):\n raise KeyError(\"metrics not found in engine.state.metrics\")\n\n metrics = {name: '{:.2e}'.format(engine.state.metrics[name]) for name in metric_names}\n self.pbar.set_postfix(**metrics)\n\n self.pbar.update()\n\n @staticmethod\n def log_message(message):\n \"\"\"\n Logs a message, preserving the progress bar correct output format\n\n Args:\n message (str): string you wish to log\n \"\"\"\n tqdm.write(message)\n\n def attach(self, engine, metric_names=None):\n \"\"\"\n Attaches the progress bar to an engine object\n\n Args:\n engine (Engine): engine object\n metric_names (list): (Optional) list of the metrics names to log as the bar progresses\n \"\"\"\n if not isinstance(metric_names, list):\n raise TypeError(\"metric_names should be a list, got {} instead\".format(type(metric_names)))\n\n engine.add_event_handler(Events.EPOCH_COMPLETED, self._close)\n engine.add_event_handler(Events.ITERATION_COMPLETED, self._update, metric_names)\n", "path": "ignite/contrib/handlers/tqdm_logger.py"}], "after_files": [{"content": "try:\n from tqdm import tqdm\nexcept ImportError:\n raise RuntimeError(\"This contrib module requires tqdm to be installed\")\n\nfrom ignite.engine import Events\n\n\nclass ProgressBar:\n \"\"\"\n TQDM progress bar handler to log training progress and computed metrics.\n\n Examples:\n\n Create a progress bar that shows you some metrics as they are computed,\n by simply attaching the progress bar object to your engine.\n\n .. 
code-block:: python\n\n pbar = ProgressBar()\n pbar.attach(trainer, ['loss'])\n\n Note:\n When adding attaching the progress bar to an engine, it is recommend that you replace\n every print operation in the engine's handlers triggered every iteration with\n ``pbar.log_message`` to guarantee the correct format of the stdout.\n \"\"\"\n\n def __init__(self):\n self.pbar = None\n\n def _reset(self, engine):\n self.pbar = tqdm(\n total=len(engine.state.dataloader),\n leave=False,\n bar_format='{desc}[{n_fmt}/{total_fmt}] {percentage:3.0f}%|{bar}{postfix} [{elapsed}<{remaining}]')\n\n def _close(self, engine):\n self.pbar.close()\n self.pbar = None\n\n def _update(self, engine, metric_names=None):\n if self.pbar is None:\n self._reset(engine)\n\n self.pbar.set_description('Epoch {}'.format(engine.state.epoch))\n\n if metric_names is not None:\n if not all(metric in engine.state.metrics for metric in metric_names):\n self._close(engine)\n raise KeyError(\"metrics not found in engine.state.metrics\")\n\n metrics = {name: '{:.2e}'.format(engine.state.metrics[name]) for name in metric_names}\n self.pbar.set_postfix(**metrics)\n\n self.pbar.update()\n\n @staticmethod\n def log_message(message):\n \"\"\"\n Logs a message, preserving the progress bar correct output format\n\n Args:\n message (str): string you wish to log\n \"\"\"\n tqdm.write(message)\n\n def attach(self, engine, metric_names=None):\n \"\"\"\n Attaches the progress bar to an engine object\n\n Args:\n engine (Engine): engine object\n metric_names (list): (Optional) list of the metrics names to log as the bar progresses\n \"\"\"\n if metric_names is not None and not isinstance(metric_names, list):\n raise TypeError(\"metric_names should be a list, got {} instead\".format(type(metric_names)))\n\n engine.add_event_handler(Events.EPOCH_COMPLETED, self._close)\n engine.add_event_handler(Events.ITERATION_COMPLETED, self._update, metric_names)\n", "path": "ignite/contrib/handlers/tqdm_logger.py"}]} | 1,069 | 244 |
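The behavioural core of the fix above is twofold: `attach(engine)` must accept the default `metric_names=None`, and `_update` must close the bar before raising, so a failed metric lookup does not leave a dangling tqdm instance. A minimal standalone sketch of the corrected guard, extracted from the patch (nothing beyond the stdlib is needed to run it):

```python
def validate_metric_names(metric_names):
    # Post-patch behaviour: None is accepted; only a non-None, non-list
    # value raises. Pre-patch, the default None itself tripped the check.
    if metric_names is not None and not isinstance(metric_names, list):
        raise TypeError(
            "metric_names should be a list, got {} instead".format(type(metric_names))
        )

validate_metric_names(None)        # OK after the patch; TypeError before it
validate_metric_names(["loss"])    # OK either way
# validate_metric_names("loss")    # still a TypeError, as intended
```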
gh_patches_debug_60627 | rasdani/github-patches | git_diff | CTPUG__wafer-111 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Wafer page editing fails on Django 1.8 with an ImproperlyConfigured error
As seen on Travis, and confirmed locally, attempting to edit a page bombs out, ending with:

ImproperlyConfigured: Specifying both 'fields' and 'form_class' is not permitted.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `wafer/pages/views.py`
Content:
```
1 from django.http import Http404
2 from django.core.exceptions import PermissionDenied
3 from django.views.generic import DetailView, TemplateView, UpdateView
4
5 from wafer.pages.models import Page
6 from wafer.pages.forms import PageForm
7
8
9 class ShowPage(DetailView):
10 template_name = 'wafer.pages/page.html'
11 model = Page
12
13
14 class EditPage(UpdateView):
15 template_name = 'wafer.pages/page_form.html'
16 model = Page
17 form_class = PageForm
18 fields = ['name', 'content']
19
20
21 def slug(request, url):
22 """Look up a page by url (which is a tree of slugs)"""
23 page = None
24 for slug in url.split('/'):
25 if not slug:
26 continue
27 try:
28 page = Page.objects.get(slug=slug, parent=page)
29 except Page.DoesNotExist:
30 raise Http404
31
32 if page is None:
33 try:
34 page = Page.objects.get(slug='index')
35 except Page.DoesNotExist:
36 return TemplateView.as_view(
37 template_name='wafer/index.html')(request)
38
39 if 'edit' in request.GET.keys():
40 if not request.user.has_perm('pages.change_page'):
41 raise PermissionDenied
42 return EditPage.as_view()(request, pk=page.id)
43
44 return ShowPage.as_view()(request, pk=page.id)
45
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/wafer/pages/views.py b/wafer/pages/views.py
--- a/wafer/pages/views.py
+++ b/wafer/pages/views.py
@@ -15,7 +15,6 @@
template_name = 'wafer.pages/page_form.html'
model = Page
form_class = PageForm
- fields = ['name', 'content']
def slug(request, url):
| {"golden_diff": "diff --git a/wafer/pages/views.py b/wafer/pages/views.py\n--- a/wafer/pages/views.py\n+++ b/wafer/pages/views.py\n@@ -15,7 +15,6 @@\n template_name = 'wafer.pages/page_form.html'\n model = Page\n form_class = PageForm\n- fields = ['name', 'content']\n \n \n def slug(request, url):\n", "issue": "Wafer page editing fails on Django 1.8 with ImproperlyConfigured: error\nAs seen on Travis, and confirmed locally, attempting to edit a page bombs out, ending with\n\n\"Specifying both 'fields' and 'form_class' is not permitted.\"\nImproperlyConfigured: Specifying both 'fields' and 'form_class' is not permitted.\n\n", "before_files": [{"content": "from django.http import Http404\nfrom django.core.exceptions import PermissionDenied\nfrom django.views.generic import DetailView, TemplateView, UpdateView\n\nfrom wafer.pages.models import Page\nfrom wafer.pages.forms import PageForm\n\n\nclass ShowPage(DetailView):\n template_name = 'wafer.pages/page.html'\n model = Page\n\n\nclass EditPage(UpdateView):\n template_name = 'wafer.pages/page_form.html'\n model = Page\n form_class = PageForm\n fields = ['name', 'content']\n\n\ndef slug(request, url):\n \"\"\"Look up a page by url (which is a tree of slugs)\"\"\"\n page = None\n for slug in url.split('/'):\n if not slug:\n continue\n try:\n page = Page.objects.get(slug=slug, parent=page)\n except Page.DoesNotExist:\n raise Http404\n\n if page is None:\n try:\n page = Page.objects.get(slug='index')\n except Page.DoesNotExist:\n return TemplateView.as_view(\n template_name='wafer/index.html')(request)\n\n if 'edit' in request.GET.keys():\n if not request.user.has_perm('pages.change_page'):\n raise PermissionDenied\n return EditPage.as_view()(request, pk=page.id)\n\n return ShowPage.as_view()(request, pk=page.id)\n", "path": "wafer/pages/views.py"}], "after_files": [{"content": "from django.http import Http404\nfrom django.core.exceptions import PermissionDenied\nfrom django.views.generic import DetailView, TemplateView, UpdateView\n\nfrom wafer.pages.models import Page\nfrom wafer.pages.forms import PageForm\n\n\nclass ShowPage(DetailView):\n template_name = 'wafer.pages/page.html'\n model = Page\n\n\nclass EditPage(UpdateView):\n template_name = 'wafer.pages/page_form.html'\n model = Page\n form_class = PageForm\n\n\ndef slug(request, url):\n \"\"\"Look up a page by url (which is a tree of slugs)\"\"\"\n page = None\n for slug in url.split('/'):\n if not slug:\n continue\n try:\n page = Page.objects.get(slug=slug, parent=page)\n except Page.DoesNotExist:\n raise Http404\n\n if page is None:\n try:\n page = Page.objects.get(slug='index')\n except Page.DoesNotExist:\n return TemplateView.as_view(\n template_name='wafer/index.html')(request)\n\n if 'edit' in request.GET.keys():\n if not request.user.has_perm('pages.change_page'):\n raise PermissionDenied\n return EditPage.as_view()(request, pk=page.id)\n\n return ShowPage.as_view()(request, pk=page.id)\n", "path": "wafer/pages/views.py"}]} | 709 | 89 |
gh_patches_debug_20149 | rasdani/github-patches | git_diff | hydroshare__hydroshare-5219 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
User account deletion doesn't remove resources from SOLR index
**Description of the bug**
Deleting a user in Mezzanine also cascades to delete the user's resources. However, it looks like the resources are not removed from the SOLR index; they still show up in Discover.
Steps to reproduce the bug:
1. make a new user account
2. add a resource and make it discoverable
3. log in as an admin user and delete the account you created in step 1 (via the Mezzanine admin panel)
4. See that the resource listing persists on the Discover search page
**Expected behavior**
User account deletion should remove the user's resources from the SOLR index
**Additional information**
HS v 2.9.2
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `hs_core/hydro_realtime_signal_processor.py`
Content:
```
1 import logging
2
3 from django.conf import settings
4 from django.db import models
5 from hs_core.models import Date, BaseResource
6 from hs_access_control.models import ResourceAccess
7 from haystack.exceptions import NotHandled
8 from haystack.signals import BaseSignalProcessor
9
10 logger = logging.getLogger(__name__)
11
12
13 class HydroRealtimeSignalProcessor(BaseSignalProcessor):
14 """
15 Notes:
16 1. We assume everytime metadata is updated the modified datetime is updated
17 2. ResourceAccess does not update the modified datetime (it is not scientific metadata)
18 """
19
20 def setup(self):
21 if not getattr(settings, "DISABLE_HAYSTACK", False):
22 models.signals.post_save.connect(self.handle_update, sender=Date)
23 models.signals.post_save.connect(self.handle_access, sender=ResourceAccess)
24
25 def teardown(self):
26 if not getattr(settings, "DISABLE_HAYSTACK", False):
27 models.signals.post_save.disconnect(self.handle_update, sender=Date)
28 models.signals.post_save.disconnect(self.handle_access, sender=ResourceAccess)
29
30 def handle_update(self, sender, instance, **kwargs):
31 try:
32 # resolve the BaseResource corresponding to the metadata element.
33 newbase = instance.metadata.resource
34 index_resource(self, newbase)
35 except Exception as e:
36 logger.exception("{} exception: {}".format(type(instance), str(e)))
37
38 def handle_access(self, sender, instance, **kwargs):
39 try:
40 newbase = instance.resource
41 index_resource(self, newbase)
42 except Exception as e:
43 logger.exception("{} exception: {}".format(type(instance), str(e)))
44
45
46 def index_resource(signal_processor, instance: BaseResource):
47 if hasattr(instance, 'raccess') and hasattr(instance, 'metadata'):
48 # work around for failure of super(BaseResource, instance) to work properly.
49 # this always succeeds because this is a post-save object action.
50 newbase = BaseResource.objects.get(pk=instance.pk)
51 newsender = BaseResource
52 using_backends = signal_processor.connection_router.for_write(instance=newbase)
53 for using in using_backends:
54 # if object is public/discoverable or becoming public/discoverable, index it
55 # test whether the object should be exposed.
56 if instance.show_in_discover:
57 try:
58 index = signal_processor.connections[using].get_unified_index().get_index(newsender)
59 index.update_object(newbase, using=using)
60 except NotHandled:
61 logger.exception("Failure: changes to %s with short_id %s not added to Solr Index.",
62 str(type(instance)), newbase.short_id)
63
64 # if object is private or becoming private, delete from index
65 else: # not to be shown in discover
66 try:
67 index = signal_processor.connections[using].get_unified_index().get_index(newsender)
68 index.remove_object(newbase, using=using)
69 except NotHandled:
70 logger.exception("Failure: delete of %s with short_id %s failed.",
71 str(type(instance)), newbase.short_id)
72
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/hs_core/hydro_realtime_signal_processor.py b/hs_core/hydro_realtime_signal_processor.py
--- a/hs_core/hydro_realtime_signal_processor.py
+++ b/hs_core/hydro_realtime_signal_processor.py
@@ -21,11 +21,13 @@
if not getattr(settings, "DISABLE_HAYSTACK", False):
models.signals.post_save.connect(self.handle_update, sender=Date)
models.signals.post_save.connect(self.handle_access, sender=ResourceAccess)
+ models.signals.post_delete.connect(self.handle_delete, sender=BaseResource)
def teardown(self):
if not getattr(settings, "DISABLE_HAYSTACK", False):
models.signals.post_save.disconnect(self.handle_update, sender=Date)
models.signals.post_save.disconnect(self.handle_access, sender=ResourceAccess)
+ models.signals.post_delete.disconnect(self.handle_delete, sender=BaseResource)
def handle_update(self, sender, instance, **kwargs):
try:
| {"golden_diff": "diff --git a/hs_core/hydro_realtime_signal_processor.py b/hs_core/hydro_realtime_signal_processor.py\n--- a/hs_core/hydro_realtime_signal_processor.py\n+++ b/hs_core/hydro_realtime_signal_processor.py\n@@ -21,11 +21,13 @@\n if not getattr(settings, \"DISABLE_HAYSTACK\", False):\n models.signals.post_save.connect(self.handle_update, sender=Date)\n models.signals.post_save.connect(self.handle_access, sender=ResourceAccess)\n+ models.signals.post_delete.connect(self.handle_delete, sender=BaseResource)\n \n def teardown(self):\n if not getattr(settings, \"DISABLE_HAYSTACK\", False):\n models.signals.post_save.disconnect(self.handle_update, sender=Date)\n models.signals.post_save.disconnect(self.handle_access, sender=ResourceAccess)\n+ models.signals.post_delete.disconnect(self.handle_delete, sender=BaseResource)\n \n def handle_update(self, sender, instance, **kwargs):\n try:\n", "issue": "User account deletion doesn't remove resources from SOLR index\n**Description of the bug**\r\nDeleting a user in mezzanine also cascades to delete the user's resources. However it looks like the resources are not removed from the SOLR index. They still show up in discover.\r\n\r\nSteps to reproduce the bug:\r\n1. make a new user account\r\n2. add a resource and make it discoverable\r\n3. login as admin user and delete the account that you created in step 1 (via the mezzanine admin panel)\r\n4. See that the resource listing persists on the Discover search page\r\n\r\n**Expected behavior**\r\nUser account deletion should remove the user's resources from the SOLR index\r\n\r\n**Additional information**\r\nHS v 2.9.2\r\n\n", "before_files": [{"content": "import logging\n\nfrom django.conf import settings\nfrom django.db import models\nfrom hs_core.models import Date, BaseResource\nfrom hs_access_control.models import ResourceAccess\nfrom haystack.exceptions import NotHandled\nfrom haystack.signals import BaseSignalProcessor\n\nlogger = logging.getLogger(__name__)\n\n\nclass HydroRealtimeSignalProcessor(BaseSignalProcessor):\n \"\"\"\n Notes:\n 1. We assume everytime metadata is updated the modified datetime is updated\n 2. 
ResourceAccess does not update the modified datetime (it is not scientific metadata)\n \"\"\"\n\n def setup(self):\n if not getattr(settings, \"DISABLE_HAYSTACK\", False):\n models.signals.post_save.connect(self.handle_update, sender=Date)\n models.signals.post_save.connect(self.handle_access, sender=ResourceAccess)\n\n def teardown(self):\n if not getattr(settings, \"DISABLE_HAYSTACK\", False):\n models.signals.post_save.disconnect(self.handle_update, sender=Date)\n models.signals.post_save.disconnect(self.handle_access, sender=ResourceAccess)\n\n def handle_update(self, sender, instance, **kwargs):\n try:\n # resolve the BaseResource corresponding to the metadata element.\n newbase = instance.metadata.resource\n index_resource(self, newbase)\n except Exception as e:\n logger.exception(\"{} exception: {}\".format(type(instance), str(e)))\n\n def handle_access(self, sender, instance, **kwargs):\n try:\n newbase = instance.resource\n index_resource(self, newbase)\n except Exception as e:\n logger.exception(\"{} exception: {}\".format(type(instance), str(e)))\n\n\ndef index_resource(signal_processor, instance: BaseResource):\n if hasattr(instance, 'raccess') and hasattr(instance, 'metadata'):\n # work around for failure of super(BaseResource, instance) to work properly.\n # this always succeeds because this is a post-save object action.\n newbase = BaseResource.objects.get(pk=instance.pk)\n newsender = BaseResource\n using_backends = signal_processor.connection_router.for_write(instance=newbase)\n for using in using_backends:\n # if object is public/discoverable or becoming public/discoverable, index it\n # test whether the object should be exposed.\n if instance.show_in_discover:\n try:\n index = signal_processor.connections[using].get_unified_index().get_index(newsender)\n index.update_object(newbase, using=using)\n except NotHandled:\n logger.exception(\"Failure: changes to %s with short_id %s not added to Solr Index.\",\n str(type(instance)), newbase.short_id)\n\n # if object is private or becoming private, delete from index\n else: # not to be shown in discover\n try:\n index = signal_processor.connections[using].get_unified_index().get_index(newsender)\n index.remove_object(newbase, using=using)\n except NotHandled:\n logger.exception(\"Failure: delete of %s with short_id %s failed.\",\n str(type(instance)), newbase.short_id)\n", "path": "hs_core/hydro_realtime_signal_processor.py"}], "after_files": [{"content": "import logging\n\nfrom django.conf import settings\nfrom django.db import models\nfrom hs_core.models import Date, BaseResource\nfrom hs_access_control.models import ResourceAccess\nfrom haystack.exceptions import NotHandled\nfrom haystack.signals import BaseSignalProcessor\n\nlogger = logging.getLogger(__name__)\n\n\nclass HydroRealtimeSignalProcessor(BaseSignalProcessor):\n \"\"\"\n Notes:\n 1. We assume everytime metadata is updated the modified datetime is updated\n 2. 
ResourceAccess does not update the modified datetime (it is not scientific metadata)\n \"\"\"\n\n def setup(self):\n if not getattr(settings, \"DISABLE_HAYSTACK\", False):\n models.signals.post_save.connect(self.handle_update, sender=Date)\n models.signals.post_save.connect(self.handle_access, sender=ResourceAccess)\n models.signals.post_delete.connect(self.handle_delete, sender=BaseResource)\n\n def teardown(self):\n if not getattr(settings, \"DISABLE_HAYSTACK\", False):\n models.signals.post_save.disconnect(self.handle_update, sender=Date)\n models.signals.post_save.disconnect(self.handle_access, sender=ResourceAccess)\n models.signals.post_delete.disconnect(self.handle_delete, sender=BaseResource)\n\n def handle_update(self, sender, instance, **kwargs):\n try:\n # resolve the BaseResource corresponding to the metadata element.\n newbase = instance.metadata.resource\n index_resource(self, newbase)\n except Exception as e:\n logger.exception(\"{} exception: {}\".format(type(instance), str(e)))\n\n def handle_access(self, sender, instance, **kwargs):\n try:\n newbase = instance.resource\n index_resource(self, newbase)\n except Exception as e:\n logger.exception(\"{} exception: {}\".format(type(instance), str(e)))\n\n\ndef index_resource(signal_processor, instance: BaseResource):\n if hasattr(instance, 'raccess') and hasattr(instance, 'metadata'):\n # work around for failure of super(BaseResource, instance) to work properly.\n # this always succeeds because this is a post-save object action.\n newbase = BaseResource.objects.get(pk=instance.pk)\n newsender = BaseResource\n using_backends = signal_processor.connection_router.for_write(instance=newbase)\n for using in using_backends:\n # if object is public/discoverable or becoming public/discoverable, index it\n # test whether the object should be exposed.\n if instance.show_in_discover:\n try:\n index = signal_processor.connections[using].get_unified_index().get_index(newsender)\n index.update_object(newbase, using=using)\n except NotHandled:\n logger.exception(\"Failure: changes to %s with short_id %s not added to Solr Index.\",\n str(type(instance)), newbase.short_id)\n\n # if object is private or becoming private, delete from index\n else: # not to be shown in discover\n try:\n index = signal_processor.connections[using].get_unified_index().get_index(newsender)\n index.remove_object(newbase, using=using)\n except NotHandled:\n logger.exception(\"Failure: delete of %s with short_id %s failed.\",\n str(type(instance)), newbase.short_id)\n", "path": "hs_core/hydro_realtime_signal_processor.py"}]} | 1,182 | 207 |
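The patch connects `post_delete` on `BaseResource` to a `self.handle_delete` method, but the handler body falls outside the diff. A hypothetical sketch of what it plausibly looks like, mirroring the remove branch of `index_resource()` — the method name is the only thing the diff guarantees, so everything in the body below is an assumption:

```python
def handle_delete(self, sender, instance, **kwargs):
    # Hypothetical body (not shown in the diff): drop the deleted
    # BaseResource from the Solr index of every write backend.
    try:
        using_backends = self.connection_router.for_write(instance=instance)
        for using in using_backends:
            try:
                index = self.connections[using].get_unified_index().get_index(BaseResource)
                index.remove_object(instance, using=using)
            except NotHandled:
                logger.exception("Failure: delete of %s with short_id %s failed.",
                                 str(type(instance)), instance.short_id)
    except Exception as e:
        logger.exception("{} exception: {}".format(type(instance), str(e)))
```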
gh_patches_debug_8403 | rasdani/github-patches | git_diff | pypa__pip-10507 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
21.3 regression with legacy resolver
Assuming the following project with an empty `pyproject.toml` and the following `setup.cfg`:
```ini
[metadata]
name = pkgb
version = 1.0
[options]
install_requires =
wrapt
```
We get the following stack trace, using pip main branch, today:
```console
$ pip install --use-deprecated=legacy-resolver -e ./pkgb
Obtaining file:///home/me/tmp/brol/pkgb
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing wheel metadata ... done
Requirement already satisfied: wrapt in /home/me/.virtualenvs/tempenv-49ea1126817e6/lib/python3.8/site-packages (from pkgb==1.0) (1.12.1)
ERROR: Exception:
Traceback (most recent call last):
File "/home/me/pip/src/pip/_internal/cli/base_command.py", line 179, in exc_logging_wrapper
status = run_func(*args)
File "/home/me/pip/src/pip/_internal/cli/req_command.py", line 203, in wrapper
return func(self, options, args)
File "/home/me/pip/src/pip/_internal/commands/install.py", line 334, in run
requirement_set = resolver.resolve(
File "/home/me/pip/src/pip/_internal/resolution/legacy/resolver.py", line 181, in resolve
discovered_reqs.extend(self._resolve_one(requirement_set, req))
File "/home/me/pip/src/pip/_internal/resolution/legacy/resolver.py", line 382, in _resolve_one
_check_dist_requires_python(
File "/home/me/pip/src/pip/_internal/resolution/legacy/resolver.py", line 75, in _check_dist_requires_python
requires_python = str(dist.requires_python)
File "/home/me/pip/src/pip/_vendor/pkg_resources/__init__.py", line 2816, in __getattr__
return getattr(self._provider, attr)
AttributeError: 'PathMetadata' object has no attribute 'requires_python'
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/pip/_internal/distributions/installed.py`
Content:
```
1 from pip._internal.distributions.base import AbstractDistribution
2 from pip._internal.index.package_finder import PackageFinder
3 from pip._internal.metadata import BaseDistribution
4
5
6 class InstalledDistribution(AbstractDistribution):
7 """Represents an installed package.
8
9 This does not need any preparation as the required information has already
10 been computed.
11 """
12
13 def get_metadata_distribution(self) -> BaseDistribution:
14 assert self.req.satisfied_by is not None, "not actually installed"
15 return self.req.satisfied_by
16
17 def prepare_distribution_metadata(
18 self, finder: PackageFinder, build_isolation: bool
19 ) -> None:
20 pass
21
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/pip/_internal/distributions/installed.py b/src/pip/_internal/distributions/installed.py
--- a/src/pip/_internal/distributions/installed.py
+++ b/src/pip/_internal/distributions/installed.py
@@ -11,8 +11,10 @@
"""
def get_metadata_distribution(self) -> BaseDistribution:
+ from pip._internal.metadata.pkg_resources import Distribution as _Dist
+
assert self.req.satisfied_by is not None, "not actually installed"
- return self.req.satisfied_by
+ return _Dist(self.req.satisfied_by)
def prepare_distribution_metadata(
self, finder: PackageFinder, build_isolation: bool
| {"golden_diff": "diff --git a/src/pip/_internal/distributions/installed.py b/src/pip/_internal/distributions/installed.py\n--- a/src/pip/_internal/distributions/installed.py\n+++ b/src/pip/_internal/distributions/installed.py\n@@ -11,8 +11,10 @@\n \"\"\"\n \n def get_metadata_distribution(self) -> BaseDistribution:\n+ from pip._internal.metadata.pkg_resources import Distribution as _Dist\n+\n assert self.req.satisfied_by is not None, \"not actually installed\"\n- return self.req.satisfied_by\n+ return _Dist(self.req.satisfied_by)\n \n def prepare_distribution_metadata(\n self, finder: PackageFinder, build_isolation: bool\n", "issue": "21.3 regression with legacy resolver\nAssuming the following project with an empty `pyproject.toml` and the following `setup.cfg`:\r\n\r\n```ini\r\n[metadata]\r\nname = pkgb\r\nversion = 1.0\r\n\r\n[options]\r\ninstall_requires =\r\n wrapt\r\n```\r\n\r\nWe get the following stack trace, using pip main branch, today:\r\n\r\n```console\r\n$ pip install --use-deprecated=legacy-resolver -e ./pkgb\r\nObtaining file:///home/me/tmp/brol/pkgb\r\n Installing build dependencies ... done\r\n Getting requirements to build wheel ... done\r\n Preparing wheel metadata ... done\r\nRequirement already satisfied: wrapt in /home/me/.virtualenvs/tempenv-49ea1126817e6/lib/python3.8/site-packages (from pkgb==1.0) (1.12.1)\r\nERROR: Exception:\r\nTraceback (most recent call last):\r\n File \"/home/me/pip/src/pip/_internal/cli/base_command.py\", line 179, in exc_logging_wrapper\r\n status = run_func(*args)\r\n File \"/home/me/pip/src/pip/_internal/cli/req_command.py\", line 203, in wrapper\r\n return func(self, options, args)\r\n File \"/home/me/pip/src/pip/_internal/commands/install.py\", line 334, in run\r\n requirement_set = resolver.resolve(\r\n File \"/home/me/pip/src/pip/_internal/resolution/legacy/resolver.py\", line 181, in resolve\r\n discovered_reqs.extend(self._resolve_one(requirement_set, req))\r\n File \"/home/me/pip/src/pip/_internal/resolution/legacy/resolver.py\", line 382, in _resolve_one\r\n _check_dist_requires_python(\r\n File \"/home/me/pip/src/pip/_internal/resolution/legacy/resolver.py\", line 75, in _check_dist_requires_python\r\n requires_python = str(dist.requires_python)\r\n File \"/home/me/pip/src/pip/_vendor/pkg_resources/__init__.py\", line 2816, in __getattr__\r\n return getattr(self._provider, attr)\r\nAttributeError: 'PathMetadata' object has no attribute 'requires_python'\r\n```\r\n\n", "before_files": [{"content": "from pip._internal.distributions.base import AbstractDistribution\nfrom pip._internal.index.package_finder import PackageFinder\nfrom pip._internal.metadata import BaseDistribution\n\n\nclass InstalledDistribution(AbstractDistribution):\n \"\"\"Represents an installed package.\n\n This does not need any preparation as the required information has already\n been computed.\n \"\"\"\n\n def get_metadata_distribution(self) -> BaseDistribution:\n assert self.req.satisfied_by is not None, \"not actually installed\"\n return self.req.satisfied_by\n\n def prepare_distribution_metadata(\n self, finder: PackageFinder, build_isolation: bool\n ) -> None:\n pass\n", "path": "src/pip/_internal/distributions/installed.py"}], "after_files": [{"content": "from pip._internal.distributions.base import AbstractDistribution\nfrom pip._internal.index.package_finder import PackageFinder\nfrom pip._internal.metadata import BaseDistribution\n\n\nclass InstalledDistribution(AbstractDistribution):\n \"\"\"Represents an installed package.\n\n This does not need 
any preparation as the required information has already\n been computed.\n \"\"\"\n\n def get_metadata_distribution(self) -> BaseDistribution:\n from pip._internal.metadata.pkg_resources import Distribution as _Dist\n\n assert self.req.satisfied_by is not None, \"not actually installed\"\n return _Dist(self.req.satisfied_by)\n\n def prepare_distribution_metadata(\n self, finder: PackageFinder, build_isolation: bool\n ) -> None:\n pass\n", "path": "src/pip/_internal/distributions/installed.py"}]} | 914 | 152 |
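The traceback bottoms out in `pkg_resources.Distribution.__getattr__`, which forwards unknown attributes to the metadata provider — hence `'PathMetadata' object has no attribute 'requires_python'` when the legacy resolver evaluates `str(dist.requires_python)`. pip's own adapter class implements `requires_python` explicitly, so wrapping `satisfied_by` (as the patch does) restores the interface the resolver expects. A pip-independent illustration of that adapter shape (class names here are illustrative, not pip's real ones):

```python
class LegacyDist:
    """Stand-in for pkg_resources.Distribution: unknown attrs go to the provider."""
    def __getattr__(self, attr):
        raise AttributeError("'PathMetadata' object has no attribute %r" % attr)

class MetadataAdapter:
    """Stand-in for pip's internal wrapper around a legacy distribution."""
    def __init__(self, dist):
        self._dist = dist

    @property
    def requires_python(self):
        return ">=3.6"  # illustrative value; pip derives it from the metadata

wrapped = MetadataAdapter(LegacyDist())
print(wrapped.requires_python)   # works after wrapping
# LegacyDist().requires_python   # AttributeError, exactly as in the issue
```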
gh_patches_debug_19762 | rasdani/github-patches | git_diff | scrapy__scrapy-4676 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
startproject only allows creating the project in an existing folder if the path is specified, because the module-existence check misfires on plain directories
```
(venv) [adrian@afonsox temporal]$ mkdir existing_folder
(venv) [adrian@afonsox temporal]$ scrapy startproject existing_folder
Error: Module 'existing_folder' already exists
(venv) [adrian@afonsox temporal]$ cd existing_folder && scrapy startproject existing_folder .
New Scrapy project 'existing_folder', using template directory '/home/adrian/temporal/venv/lib/python3.8/site-packages/scrapy/templates/project', created in:
/home/adrian/temporal/existing_folder
You can start your first spider with:
cd .
scrapy genspider example example.com
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scrapy/commands/startproject.py`
Content:
```
1 import re
2 import os
3 import string
4 from importlib import import_module
5 from os.path import join, exists, abspath
6 from shutil import ignore_patterns, move, copy2, copystat
7 from stat import S_IWUSR as OWNER_WRITE_PERMISSION
8
9 import scrapy
10 from scrapy.commands import ScrapyCommand
11 from scrapy.utils.template import render_templatefile, string_camelcase
12 from scrapy.exceptions import UsageError
13
14
15 TEMPLATES_TO_RENDER = (
16 ('scrapy.cfg',),
17 ('${project_name}', 'settings.py.tmpl'),
18 ('${project_name}', 'items.py.tmpl'),
19 ('${project_name}', 'pipelines.py.tmpl'),
20 ('${project_name}', 'middlewares.py.tmpl'),
21 )
22
23 IGNORE = ignore_patterns('*.pyc', '__pycache__', '.svn')
24
25
26 def _make_writable(path):
27 current_permissions = os.stat(path).st_mode
28 os.chmod(path, current_permissions | OWNER_WRITE_PERMISSION)
29
30
31 class Command(ScrapyCommand):
32
33 requires_project = False
34 default_settings = {'LOG_ENABLED': False,
35 'SPIDER_LOADER_WARN_ONLY': True}
36
37 def syntax(self):
38 return "<project_name> [project_dir]"
39
40 def short_desc(self):
41 return "Create new project"
42
43 def _is_valid_name(self, project_name):
44 def _module_exists(module_name):
45 try:
46 import_module(module_name)
47 return True
48 except ImportError:
49 return False
50
51 if not re.search(r'^[_a-zA-Z]\w*$', project_name):
52 print('Error: Project names must begin with a letter and contain'
53 ' only\nletters, numbers and underscores')
54 elif _module_exists(project_name):
55 print(f'Error: Module {project_name!r} already exists')
56 else:
57 return True
58 return False
59
60 def _copytree(self, src, dst):
61 """
62 Since the original function always creates the directory, to resolve
63 the issue a new function had to be created. It's a simple copy and
64 was reduced for this case.
65
66 More info at:
67 https://github.com/scrapy/scrapy/pull/2005
68 """
69 ignore = IGNORE
70 names = os.listdir(src)
71 ignored_names = ignore(src, names)
72
73 if not os.path.exists(dst):
74 os.makedirs(dst)
75
76 for name in names:
77 if name in ignored_names:
78 continue
79
80 srcname = os.path.join(src, name)
81 dstname = os.path.join(dst, name)
82 if os.path.isdir(srcname):
83 self._copytree(srcname, dstname)
84 else:
85 copy2(srcname, dstname)
86 _make_writable(dstname)
87
88 copystat(src, dst)
89 _make_writable(dst)
90
91 def run(self, args, opts):
92 if len(args) not in (1, 2):
93 raise UsageError()
94
95 project_name = args[0]
96 project_dir = args[0]
97
98 if len(args) == 2:
99 project_dir = args[1]
100
101 if exists(join(project_dir, 'scrapy.cfg')):
102 self.exitcode = 1
103 print(f'Error: scrapy.cfg already exists in {abspath(project_dir)}')
104 return
105
106 if not self._is_valid_name(project_name):
107 self.exitcode = 1
108 return
109
110 self._copytree(self.templates_dir, abspath(project_dir))
111 move(join(project_dir, 'module'), join(project_dir, project_name))
112 for paths in TEMPLATES_TO_RENDER:
113 path = join(*paths)
114 tplfile = join(project_dir, string.Template(path).substitute(project_name=project_name))
115 render_templatefile(tplfile, project_name=project_name, ProjectName=string_camelcase(project_name))
116 print(f"New Scrapy project '{project_name}', using template directory "
117 f"'{self.templates_dir}', created in:")
118 print(f" {abspath(project_dir)}\n")
119 print("You can start your first spider with:")
120 print(f" cd {project_dir}")
121 print(" scrapy genspider example example.com")
122
123 @property
124 def templates_dir(self):
125 return join(
126 self.settings['TEMPLATES_DIR'] or join(scrapy.__path__[0], 'templates'),
127 'project'
128 )
129
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/scrapy/commands/startproject.py b/scrapy/commands/startproject.py
--- a/scrapy/commands/startproject.py
+++ b/scrapy/commands/startproject.py
@@ -1,7 +1,7 @@
import re
import os
import string
-from importlib import import_module
+from importlib.util import find_spec
from os.path import join, exists, abspath
from shutil import ignore_patterns, move, copy2, copystat
from stat import S_IWUSR as OWNER_WRITE_PERMISSION
@@ -42,11 +42,8 @@
def _is_valid_name(self, project_name):
def _module_exists(module_name):
- try:
- import_module(module_name)
- return True
- except ImportError:
- return False
+ spec = find_spec(module_name)
+ return spec is not None and spec.loader is not None
if not re.search(r'^[_a-zA-Z]\w*$', project_name):
print('Error: Project names must begin with a letter and contain'
| {"golden_diff": "diff --git a/scrapy/commands/startproject.py b/scrapy/commands/startproject.py\n--- a/scrapy/commands/startproject.py\n+++ b/scrapy/commands/startproject.py\n@@ -1,7 +1,7 @@\n import re\n import os\n import string\n-from importlib import import_module\n+from importlib.util import find_spec\n from os.path import join, exists, abspath\n from shutil import ignore_patterns, move, copy2, copystat\n from stat import S_IWUSR as OWNER_WRITE_PERMISSION\n@@ -42,11 +42,8 @@\n \n def _is_valid_name(self, project_name):\n def _module_exists(module_name):\n- try:\n- import_module(module_name)\n- return True\n- except ImportError:\n- return False\n+ spec = find_spec(module_name)\n+ return spec is not None and spec.loader is not None\n \n if not re.search(r'^[_a-zA-Z]\\w*$', project_name):\n print('Error: Project names must begin with a letter and contain'\n", "issue": "startproject only allows creating the project on an existing folder if the path is specified with an invalid module name\n```\r\n(venv) [adrian@afonsox temporal]$ mkdir existing_folder\r\n(venv) [adrian@afonsox temporal]$ scrapy startproject existing_folder\r\nError: Module 'existing_folder' already exists\r\n(venv) [adrian@afonsox temporal]$ cd existing_folder && scrapy startproject existing_folder .\r\nNew Scrapy project 'existing_folder', using template directory '/home/adrian/temporal/venv/lib/python3.8/site-packages/scrapy/templates/project', created in:\r\n /home/adrian/temporal/existing_folder\r\n\r\nYou can start your first spider with:\r\n cd .\r\n scrapy genspider example example.com\r\n```\n", "before_files": [{"content": "import re\nimport os\nimport string\nfrom importlib import import_module\nfrom os.path import join, exists, abspath\nfrom shutil import ignore_patterns, move, copy2, copystat\nfrom stat import S_IWUSR as OWNER_WRITE_PERMISSION\n\nimport scrapy\nfrom scrapy.commands import ScrapyCommand\nfrom scrapy.utils.template import render_templatefile, string_camelcase\nfrom scrapy.exceptions import UsageError\n\n\nTEMPLATES_TO_RENDER = (\n ('scrapy.cfg',),\n ('${project_name}', 'settings.py.tmpl'),\n ('${project_name}', 'items.py.tmpl'),\n ('${project_name}', 'pipelines.py.tmpl'),\n ('${project_name}', 'middlewares.py.tmpl'),\n)\n\nIGNORE = ignore_patterns('*.pyc', '__pycache__', '.svn')\n\n\ndef _make_writable(path):\n current_permissions = os.stat(path).st_mode\n os.chmod(path, current_permissions | OWNER_WRITE_PERMISSION)\n\n\nclass Command(ScrapyCommand):\n\n requires_project = False\n default_settings = {'LOG_ENABLED': False,\n 'SPIDER_LOADER_WARN_ONLY': True}\n\n def syntax(self):\n return \"<project_name> [project_dir]\"\n\n def short_desc(self):\n return \"Create new project\"\n\n def _is_valid_name(self, project_name):\n def _module_exists(module_name):\n try:\n import_module(module_name)\n return True\n except ImportError:\n return False\n\n if not re.search(r'^[_a-zA-Z]\\w*$', project_name):\n print('Error: Project names must begin with a letter and contain'\n ' only\\nletters, numbers and underscores')\n elif _module_exists(project_name):\n print(f'Error: Module {project_name!r} already exists')\n else:\n return True\n return False\n\n def _copytree(self, src, dst):\n \"\"\"\n Since the original function always creates the directory, to resolve\n the issue a new function had to be created. 
It's a simple copy and\n was reduced for this case.\n\n More info at:\n https://github.com/scrapy/scrapy/pull/2005\n \"\"\"\n ignore = IGNORE\n names = os.listdir(src)\n ignored_names = ignore(src, names)\n\n if not os.path.exists(dst):\n os.makedirs(dst)\n\n for name in names:\n if name in ignored_names:\n continue\n\n srcname = os.path.join(src, name)\n dstname = os.path.join(dst, name)\n if os.path.isdir(srcname):\n self._copytree(srcname, dstname)\n else:\n copy2(srcname, dstname)\n _make_writable(dstname)\n\n copystat(src, dst)\n _make_writable(dst)\n\n def run(self, args, opts):\n if len(args) not in (1, 2):\n raise UsageError()\n\n project_name = args[0]\n project_dir = args[0]\n\n if len(args) == 2:\n project_dir = args[1]\n\n if exists(join(project_dir, 'scrapy.cfg')):\n self.exitcode = 1\n print(f'Error: scrapy.cfg already exists in {abspath(project_dir)}')\n return\n\n if not self._is_valid_name(project_name):\n self.exitcode = 1\n return\n\n self._copytree(self.templates_dir, abspath(project_dir))\n move(join(project_dir, 'module'), join(project_dir, project_name))\n for paths in TEMPLATES_TO_RENDER:\n path = join(*paths)\n tplfile = join(project_dir, string.Template(path).substitute(project_name=project_name))\n render_templatefile(tplfile, project_name=project_name, ProjectName=string_camelcase(project_name))\n print(f\"New Scrapy project '{project_name}', using template directory \"\n f\"'{self.templates_dir}', created in:\")\n print(f\" {abspath(project_dir)}\\n\")\n print(\"You can start your first spider with:\")\n print(f\" cd {project_dir}\")\n print(\" scrapy genspider example example.com\")\n\n @property\n def templates_dir(self):\n return join(\n self.settings['TEMPLATES_DIR'] or join(scrapy.__path__[0], 'templates'),\n 'project'\n )\n", "path": "scrapy/commands/startproject.py"}], "after_files": [{"content": "import re\nimport os\nimport string\nfrom importlib.util import find_spec\nfrom os.path import join, exists, abspath\nfrom shutil import ignore_patterns, move, copy2, copystat\nfrom stat import S_IWUSR as OWNER_WRITE_PERMISSION\n\nimport scrapy\nfrom scrapy.commands import ScrapyCommand\nfrom scrapy.utils.template import render_templatefile, string_camelcase\nfrom scrapy.exceptions import UsageError\n\n\nTEMPLATES_TO_RENDER = (\n ('scrapy.cfg',),\n ('${project_name}', 'settings.py.tmpl'),\n ('${project_name}', 'items.py.tmpl'),\n ('${project_name}', 'pipelines.py.tmpl'),\n ('${project_name}', 'middlewares.py.tmpl'),\n)\n\nIGNORE = ignore_patterns('*.pyc', '__pycache__', '.svn')\n\n\ndef _make_writable(path):\n current_permissions = os.stat(path).st_mode\n os.chmod(path, current_permissions | OWNER_WRITE_PERMISSION)\n\n\nclass Command(ScrapyCommand):\n\n requires_project = False\n default_settings = {'LOG_ENABLED': False,\n 'SPIDER_LOADER_WARN_ONLY': True}\n\n def syntax(self):\n return \"<project_name> [project_dir]\"\n\n def short_desc(self):\n return \"Create new project\"\n\n def _is_valid_name(self, project_name):\n def _module_exists(module_name):\n spec = find_spec(module_name)\n return spec is not None and spec.loader is not None\n\n if not re.search(r'^[_a-zA-Z]\\w*$', project_name):\n print('Error: Project names must begin with a letter and contain'\n ' only\\nletters, numbers and underscores')\n elif _module_exists(project_name):\n print('Error: Module %r already exists' % project_name)\n else:\n return True\n return False\n\n def _copytree(self, src, dst):\n \"\"\"\n Since the original function always creates the directory, to resolve\n the issue a 
new function had to be created. It's a simple copy and\n was reduced for this case.\n\n More info at:\n https://github.com/scrapy/scrapy/pull/2005\n \"\"\"\n ignore = IGNORE\n names = os.listdir(src)\n ignored_names = ignore(src, names)\n\n if not os.path.exists(dst):\n os.makedirs(dst)\n\n for name in names:\n if name in ignored_names:\n continue\n\n srcname = os.path.join(src, name)\n dstname = os.path.join(dst, name)\n if os.path.isdir(srcname):\n self._copytree(srcname, dstname)\n else:\n copy2(srcname, dstname)\n _make_writable(dstname)\n\n copystat(src, dst)\n _make_writable(dst)\n\n def run(self, args, opts):\n if len(args) not in (1, 2):\n raise UsageError()\n\n project_name = args[0]\n project_dir = args[0]\n\n if len(args) == 2:\n project_dir = args[1]\n\n if exists(join(project_dir, 'scrapy.cfg')):\n self.exitcode = 1\n print('Error: scrapy.cfg already exists in %s' % abspath(project_dir))\n return\n\n if not self._is_valid_name(project_name):\n self.exitcode = 1\n return\n\n self._copytree(self.templates_dir, abspath(project_dir))\n move(join(project_dir, 'module'), join(project_dir, project_name))\n for paths in TEMPLATES_TO_RENDER:\n path = join(*paths)\n tplfile = join(project_dir, string.Template(path).substitute(project_name=project_name))\n render_templatefile(tplfile, project_name=project_name, ProjectName=string_camelcase(project_name))\n print(\"New Scrapy project '%s', using template directory '%s', \"\n \"created in:\" % (project_name, self.templates_dir))\n print(\" %s\\n\" % abspath(project_dir))\n print(\"You can start your first spider with:\")\n print(\" cd %s\" % project_dir)\n print(\" scrapy genspider example example.com\")\n\n @property\n def templates_dir(self):\n return join(\n self.settings['TEMPLATES_DIR'] or join(scrapy.__path__[0], 'templates'),\n 'project'\n )\n", "path": "scrapy/commands/startproject.py"}]} | 1,632 | 227 |
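Why swapping `import_module` for `find_spec` fixes the reproduction: a bare directory sitting on `sys.path` imports successfully as a *namespace package*, so the old probe reported "Module 'existing_folder' already exists" for any same-named folder in the working directory. `find_spec` lets the check tell the two cases apart — on the interpreter versions current when this was patched (3.10 and earlier), a namespace package yields a spec whose `loader` is `None`. A standalone sketch, best run from a scratch directory (behaviour can differ on newer Pythons, where namespace specs may carry a loader):

```python
import os
import sys
from importlib.util import find_spec

os.makedirs("existing_folder", exist_ok=True)   # plain dir, no __init__.py
sys.path.insert(0, os.getcwd())

print(find_spec("no_such_module_xyz"))          # None -> the name is free
spec = find_spec("existing_folder")
print(spec is not None, spec.loader)            # True None (on <= 3.10): a
                                                # namespace package, not a real
                                                # module, so the name is allowed
```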
gh_patches_debug_25061 | rasdani/github-patches | git_diff | elastic__apm-agent-python-813 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Flower hangs starting with version 5.4.0.
**Describe the bug**:
Flower hangs (no response over the HTTP connection to the browser) when version 5.4.0 or later of the agent is installed
**To Reproduce**
1. pip install elastic-apm==5.4.0
2. restart Flower and try to access it in the browser
**Environment (please complete the following information)**
- OS: Ubuntu 18.04
- Python version: 3.6
- Framework and version: Django 2.2
- APM Server version: NA
- Agent version: 5.4.0+
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `elasticapm/instrumentation/packages/tornado.py`
Content:
```
1 # BSD 3-Clause License
2 #
3 # Copyright (c) 2019, Elasticsearch BV
4 # All rights reserved.
5 #
6 # Redistribution and use in source and binary forms, with or without
7 # modification, are permitted provided that the following conditions are met:
8 #
9 # * Redistributions of source code must retain the above copyright notice, this
10 # list of conditions and the following disclaimer.
11 #
12 # * Redistributions in binary form must reproduce the above copyright notice,
13 # this list of conditions and the following disclaimer in the documentation
14 # and/or other materials provided with the distribution.
15 #
16 # * Neither the name of the copyright holder nor the names of its
17 # contributors may be used to endorse or promote products derived from
18 # this software without specific prior written permission.
19 #
20 # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
21 # AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
22 # IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
23 # DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
24 # FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
25 # DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
26 # SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
27 # CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
28 # OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
29 # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
30 """
31 Instrumentation for Tornado
32 """
33 import elasticapm
34 from elasticapm.conf import constants
35 from elasticapm.instrumentation.packages.asyncio.base import AbstractInstrumentedModule, AsyncAbstractInstrumentedModule
36 from elasticapm.traces import capture_span
37 from elasticapm.utils.disttracing import TraceParent
38
39
40 class TornadoRequestExecuteInstrumentation(AsyncAbstractInstrumentedModule):
41 name = "tornado_request_execute"
42 creates_transactions = True
43 instrument_list = [("tornado.web", "RequestHandler._execute")]
44
45 async def call(self, module, method, wrapped, instance, args, kwargs):
46 # Late import to avoid ImportErrors
47 from elasticapm.contrib.tornado.utils import get_data_from_request, get_data_from_response
48
49 request = instance.request
50 trace_parent = TraceParent.from_headers(request.headers)
51 client = instance.application.elasticapm_client
52 client.begin_transaction("request", trace_parent=trace_parent)
53 elasticapm.set_context(
54 lambda: get_data_from_request(instance, request, client.config, constants.TRANSACTION), "request"
55 )
56 # TODO: Can we somehow incorporate the routing rule itself here?
57 elasticapm.set_transaction_name("{} {}".format(request.method, type(instance).__name__), override=False)
58
59 ret = await wrapped(*args, **kwargs)
60
61 elasticapm.set_context(
62 lambda: get_data_from_response(instance, client.config, constants.TRANSACTION), "response"
63 )
64 result = "HTTP {}xx".format(instance.get_status() // 100)
65 elasticapm.set_transaction_result(result, override=False)
66 client.end_transaction()
67
68 return ret
69
70
71 class TornadoHandleRequestExceptionInstrumentation(AbstractInstrumentedModule):
72 name = "tornado_handle_request_exception"
73
74 instrument_list = [("tornado.web", "RequestHandler._handle_request_exception")]
75
76 def call(self, module, method, wrapped, instance, args, kwargs):
77
78 # Late import to avoid ImportErrors
79 from tornado.web import Finish, HTTPError
80 from elasticapm.contrib.tornado.utils import get_data_from_request
81
82 e = args[0]
83 if isinstance(e, Finish):
84 # Not an error; Finish is an exception that ends a request without an error response
85 return wrapped(*args, **kwargs)
86
87 client = instance.application.elasticapm_client
88 request = instance.request
89 client.capture_exception(
90 context={"request": get_data_from_request(instance, request, client.config, constants.ERROR)}
91 )
92 if isinstance(e, HTTPError):
93 elasticapm.set_transaction_result("HTTP {}xx".format(int(e.status_code / 100)), override=False)
94 elasticapm.set_context({"status_code": e.status_code}, "response")
95 else:
96 elasticapm.set_transaction_result("HTTP 5xx", override=False)
97 elasticapm.set_context({"status_code": 500}, "response")
98
99 return wrapped(*args, **kwargs)
100
101
102 class TornadoRenderInstrumentation(AbstractInstrumentedModule):
103 name = "tornado_render"
104
105 instrument_list = [("tornado.web", "RequestHandler.render")]
106
107 def call(self, module, method, wrapped, instance, args, kwargs):
108 if "template_name" in kwargs:
109 name = kwargs["template_name"]
110 else:
111 name = args[0]
112
113 with capture_span(name, span_type="template", span_subtype="tornado", span_action="render"):
114 return wrapped(*args, **kwargs)
115
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/elasticapm/instrumentation/packages/tornado.py b/elasticapm/instrumentation/packages/tornado.py
--- a/elasticapm/instrumentation/packages/tornado.py
+++ b/elasticapm/instrumentation/packages/tornado.py
@@ -43,6 +43,11 @@
instrument_list = [("tornado.web", "RequestHandler._execute")]
async def call(self, module, method, wrapped, instance, args, kwargs):
+ if not hasattr(instance.application, "elasticapm_client"):
+ # If tornado was instrumented but not as the main framework
+ # (i.e. in Flower), we should skip it.
+ return await wrapped(*args, **kwargs)
+
# Late import to avoid ImportErrors
from elasticapm.contrib.tornado.utils import get_data_from_request, get_data_from_response
@@ -74,6 +79,10 @@
instrument_list = [("tornado.web", "RequestHandler._handle_request_exception")]
def call(self, module, method, wrapped, instance, args, kwargs):
+ if not hasattr(instance.application, "elasticapm_client"):
+ # If tornado was instrumented but not as the main framework
+ # (i.e. in Flower), we should skip it.
+ return wrapped(*args, **kwargs)
# Late import to avoid ImportErrors
from tornado.web import Finish, HTTPError
| {"golden_diff": "diff --git a/elasticapm/instrumentation/packages/tornado.py b/elasticapm/instrumentation/packages/tornado.py\n--- a/elasticapm/instrumentation/packages/tornado.py\n+++ b/elasticapm/instrumentation/packages/tornado.py\n@@ -43,6 +43,11 @@\n instrument_list = [(\"tornado.web\", \"RequestHandler._execute\")]\n \n async def call(self, module, method, wrapped, instance, args, kwargs):\n+ if not hasattr(instance.application, \"elasticapm_client\"):\n+ # If tornado was instrumented but not as the main framework\n+ # (i.e. in Flower), we should skip it.\n+ return await wrapped(*args, **kwargs)\n+\n # Late import to avoid ImportErrors\n from elasticapm.contrib.tornado.utils import get_data_from_request, get_data_from_response\n \n@@ -74,6 +79,10 @@\n instrument_list = [(\"tornado.web\", \"RequestHandler._handle_request_exception\")]\n \n def call(self, module, method, wrapped, instance, args, kwargs):\n+ if not hasattr(instance.application, \"elasticapm_client\"):\n+ # If tornado was instrumented but not as the main framework\n+ # (i.e. in Flower), we should skip it.\n+ return wrapped(*args, **kwargs)\n \n # Late import to avoid ImportErrors\n from tornado.web import Finish, HTTPError\n", "issue": "Flower hangs from version 5.4.0.\n**Describe the bug**: \r\nFlower hangs (no answer from http connection to browser) when a version >= 5.4.0 is installed\r\n\r\n**To Reproduce**\r\n\r\n1. pip install elastic-apm==5.4.0\r\n2. restart flower and try to access\r\n\r\n**Environment (please complete the following information)**\r\n- OS: Ubuntu 18.04\r\n- Python version: 3.6\r\n- Framework and version: Django 2.2\r\n- APM Server version: NA\r\n- Agent version: 5.4.0+ \r\n\n", "before_files": [{"content": "# BSD 3-Clause License\n#\n# Copyright (c) 2019, Elasticsearch BV\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions are met:\n#\n# * Redistributions of source code must retain the above copyright notice, this\n# list of conditions and the following disclaimer.\n#\n# * Redistributions in binary form must reproduce the above copyright notice,\n# this list of conditions and the following disclaimer in the documentation\n# and/or other materials provided with the distribution.\n#\n# * Neither the name of the copyright holder nor the names of its\n# contributors may be used to endorse or promote products derived from\n# this software without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\n# DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\n# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\n# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\n# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\n# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\n# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\"\"\"\nInstrumentation for Tornado\n\"\"\"\nimport elasticapm\nfrom elasticapm.conf import constants\nfrom elasticapm.instrumentation.packages.asyncio.base import AbstractInstrumentedModule, AsyncAbstractInstrumentedModule\nfrom elasticapm.traces import capture_span\nfrom elasticapm.utils.disttracing import TraceParent\n\n\nclass TornadoRequestExecuteInstrumentation(AsyncAbstractInstrumentedModule):\n name = \"tornado_request_execute\"\n creates_transactions = True\n instrument_list = [(\"tornado.web\", \"RequestHandler._execute\")]\n\n async def call(self, module, method, wrapped, instance, args, kwargs):\n # Late import to avoid ImportErrors\n from elasticapm.contrib.tornado.utils import get_data_from_request, get_data_from_response\n\n request = instance.request\n trace_parent = TraceParent.from_headers(request.headers)\n client = instance.application.elasticapm_client\n client.begin_transaction(\"request\", trace_parent=trace_parent)\n elasticapm.set_context(\n lambda: get_data_from_request(instance, request, client.config, constants.TRANSACTION), \"request\"\n )\n # TODO: Can we somehow incorporate the routing rule itself here?\n elasticapm.set_transaction_name(\"{} {}\".format(request.method, type(instance).__name__), override=False)\n\n ret = await wrapped(*args, **kwargs)\n\n elasticapm.set_context(\n lambda: get_data_from_response(instance, client.config, constants.TRANSACTION), \"response\"\n )\n result = \"HTTP {}xx\".format(instance.get_status() // 100)\n elasticapm.set_transaction_result(result, override=False)\n client.end_transaction()\n\n return ret\n\n\nclass TornadoHandleRequestExceptionInstrumentation(AbstractInstrumentedModule):\n name = \"tornado_handle_request_exception\"\n\n instrument_list = [(\"tornado.web\", \"RequestHandler._handle_request_exception\")]\n\n def call(self, module, method, wrapped, instance, args, kwargs):\n\n # Late import to avoid ImportErrors\n from tornado.web import Finish, HTTPError\n from elasticapm.contrib.tornado.utils import get_data_from_request\n\n e = args[0]\n if isinstance(e, Finish):\n # Not an error; Finish is an exception that ends a request without an error response\n return wrapped(*args, **kwargs)\n\n client = instance.application.elasticapm_client\n request = instance.request\n client.capture_exception(\n context={\"request\": get_data_from_request(instance, request, client.config, constants.ERROR)}\n )\n if isinstance(e, HTTPError):\n elasticapm.set_transaction_result(\"HTTP {}xx\".format(int(e.status_code / 100)), override=False)\n elasticapm.set_context({\"status_code\": e.status_code}, \"response\")\n else:\n elasticapm.set_transaction_result(\"HTTP 5xx\", override=False)\n elasticapm.set_context({\"status_code\": 500}, \"response\")\n\n return wrapped(*args, **kwargs)\n\n\nclass TornadoRenderInstrumentation(AbstractInstrumentedModule):\n name = \"tornado_render\"\n\n instrument_list = [(\"tornado.web\", \"RequestHandler.render\")]\n\n def call(self, module, method, wrapped, instance, args, 
kwargs):\n if \"template_name\" in kwargs:\n name = kwargs[\"template_name\"]\n else:\n name = args[0]\n\n with capture_span(name, span_type=\"template\", span_subtype=\"tornado\", span_action=\"render\"):\n return wrapped(*args, **kwargs)\n", "path": "elasticapm/instrumentation/packages/tornado.py"}], "after_files": [{"content": "# BSD 3-Clause License\n#\n# Copyright (c) 2019, Elasticsearch BV\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions are met:\n#\n# * Redistributions of source code must retain the above copyright notice, this\n# list of conditions and the following disclaimer.\n#\n# * Redistributions in binary form must reproduce the above copyright notice,\n# this list of conditions and the following disclaimer in the documentation\n# and/or other materials provided with the distribution.\n#\n# * Neither the name of the copyright holder nor the names of its\n# contributors may be used to endorse or promote products derived from\n# this software without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\n# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\n# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\n# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\n# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\n# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\n# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\"\"\"\nInstrumentation for Tornado\n\"\"\"\nimport elasticapm\nfrom elasticapm.conf import constants\nfrom elasticapm.instrumentation.packages.asyncio.base import AbstractInstrumentedModule, AsyncAbstractInstrumentedModule\nfrom elasticapm.traces import capture_span\nfrom elasticapm.utils.disttracing import TraceParent\n\n\nclass TornadoRequestExecuteInstrumentation(AsyncAbstractInstrumentedModule):\n name = \"tornado_request_execute\"\n creates_transactions = True\n instrument_list = [(\"tornado.web\", \"RequestHandler._execute\")]\n\n async def call(self, module, method, wrapped, instance, args, kwargs):\n if not hasattr(instance.application, \"elasticapm_client\"):\n # If tornado was instrumented but not as the main framework\n # (i.e. 
in Flower), we should skip it.\n return await wrapped(*args, **kwargs)\n\n # Late import to avoid ImportErrors\n from elasticapm.contrib.tornado.utils import get_data_from_request, get_data_from_response\n\n request = instance.request\n trace_parent = TraceParent.from_headers(request.headers)\n client = instance.application.elasticapm_client\n client.begin_transaction(\"request\", trace_parent=trace_parent)\n elasticapm.set_context(\n lambda: get_data_from_request(instance, request, client.config, constants.TRANSACTION), \"request\"\n )\n # TODO: Can we somehow incorporate the routing rule itself here?\n elasticapm.set_transaction_name(\"{} {}\".format(request.method, type(instance).__name__), override=False)\n\n ret = await wrapped(*args, **kwargs)\n\n elasticapm.set_context(\n lambda: get_data_from_response(instance, client.config, constants.TRANSACTION), \"response\"\n )\n result = \"HTTP {}xx\".format(instance.get_status() // 100)\n elasticapm.set_transaction_result(result, override=False)\n client.end_transaction()\n\n return ret\n\n\nclass TornadoHandleRequestExceptionInstrumentation(AbstractInstrumentedModule):\n name = \"tornado_handle_request_exception\"\n\n instrument_list = [(\"tornado.web\", \"RequestHandler._handle_request_exception\")]\n\n def call(self, module, method, wrapped, instance, args, kwargs):\n if not hasattr(instance.application, \"elasticapm_client\"):\n # If tornado was instrumented but not as the main framework\n # (i.e. in Flower), we should skip it.\n return wrapped(*args, **kwargs)\n\n # Late import to avoid ImportErrors\n from tornado.web import Finish, HTTPError\n from elasticapm.contrib.tornado.utils import get_data_from_request\n\n e = args[0]\n if isinstance(e, Finish):\n # Not an error; Finish is an exception that ends a request without an error response\n return wrapped(*args, **kwargs)\n\n client = instance.application.elasticapm_client\n request = instance.request\n client.capture_exception(\n context={\"request\": get_data_from_request(instance, request, client.config, constants.ERROR)}\n )\n if isinstance(e, HTTPError):\n elasticapm.set_transaction_result(\"HTTP {}xx\".format(int(e.status_code / 100)), override=False)\n elasticapm.set_context({\"status_code\": e.status_code}, \"response\")\n else:\n elasticapm.set_transaction_result(\"HTTP 5xx\", override=False)\n elasticapm.set_context({\"status_code\": 500}, \"response\")\n\n return wrapped(*args, **kwargs)\n\n\nclass TornadoRenderInstrumentation(AbstractInstrumentedModule):\n name = \"tornado_render\"\n\n instrument_list = [(\"tornado.web\", \"RequestHandler.render\")]\n\n def call(self, module, method, wrapped, instance, args, kwargs):\n if \"template_name\" in kwargs:\n name = kwargs[\"template_name\"]\n else:\n name = args[0]\n\n with capture_span(name, span_type=\"template\", span_subtype=\"tornado\", span_action=\"render\"):\n return wrapped(*args, **kwargs)\n", "path": "elasticapm/instrumentation/packages/tornado.py"}]} | 1,717 | 310 |
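Note (editorial, not part of the record): the guard this golden diff adds is a general pattern — instrumentation should no-op when the framework object it expects is absent. A minimal, self-contained Python sketch with hypothetical stand-in classes (this is not the elastic-apm API):

```python
# Hypothetical stand-ins; the real classes live in elasticapm's tornado support.
class _App:
    pass  # deliberately has no `elasticapm_client`, as when Flower imports tornado


class _Handler:
    def __init__(self, application):
        self.application = application


def guarded_call(instance, wrapped, *args, **kwargs):
    # Mirror of the guard in the diff above: bail out early when tornado
    # was instrumented but is not the main framework.
    if not hasattr(instance.application, "elasticapm_client"):
        return wrapped(*args, **kwargs)
    # ... normal APM transaction bookkeeping would wrap `wrapped` here ...
    return wrapped(*args, **kwargs)


handler = _Handler(_App())
print(guarded_call(handler, lambda: "handled"))  # -> "handled", no APM work done
```

The design point is that the check costs one `hasattr` per request and makes the instrumentation safe to load in processes that never initialized the client.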
gh_patches_debug_36976 | rasdani/github-patches | git_diff | keras-team__keras-18977 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
chore: override item removal methods in tracking
Based on the TODO comments in keras/keras/utils/tracking.py
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `keras/utils/tracking.py`
Content:
```
1 from functools import wraps
2
3 from keras.backend.common.global_state import get_global_attribute
4 from keras.backend.common.global_state import set_global_attribute
5 from keras.utils import python_utils
6
7
8 class DotNotTrackScope:
9 def __enter__(self):
10 self.original_value = is_tracking_enabled()
11 set_global_attribute("tracking_on", False)
12
13 def __exit__(self, *args, **kwargs):
14 set_global_attribute("tracking_on", self.original_value)
15
16
17 def is_tracking_enabled():
18 return get_global_attribute("tracking_on", True)
19
20
21 def no_automatic_dependency_tracking(fn):
22 @wraps(fn)
23 def wrapper(*args, **kwargs):
24 with DotNotTrackScope():
25 return fn(*args, **kwargs)
26
27 return wrapper
28
29
30 class Tracker:
31 """Attribute tracker, used for e.g. Variable tracking.
32
33 Monitors certain attribute types
34 and put them in appropriate lists in case of a match.
35
36 Also passively tracks certain mutable collections
37 (dict, list) so that items added to them later
38 still get tracked. This is done by wrapping these
39 collections into an equivalent, tracking-aware object.
40
41 Usage:
42
43 ```python
44 def __init__(self):
45 self.tracker = Tracker(
46 # Format: `name: (test_fn, store)`
47 {
48 "variables":
49 (lambda x: isinstance(x, Variable), self._variables),
50 "metrics": (lambda x: isinstance(x, Metric), self._metrics),
51 "layers": (lambda x: isinstance(x, Layer), self._layers),
52 }
53 )
54
55 def __setattr__(self, name, value):
56 if hasattr(self, "_tracker"):
57 value = self._tracker.track(value)
58 return super().__setattr__(name, value)
59 ```
60 """
61
62 def __init__(self, config):
63 self.config = config
64 self.stored_ids = {name: set() for name in self.config.keys()}
65 self.locked = False
66 self._lock_violation_msg = None
67
68 def track(self, attr):
69 if not is_tracking_enabled():
70 return attr
71
72 for store_name, (is_attr_type, _) in self.config.items():
73 if is_attr_type(attr):
74 if id(attr) not in self.stored_ids[store_name]:
75 self.add_to_store(store_name, attr)
76 return attr
77 if isinstance(attr, tuple):
78 wrapped_attr = []
79 for e in attr:
80 wrapped_attr.append(self.track(e))
81 # This should cover tuples and nametuples
82 return attr.__class__(wrapped_attr)
83 elif isinstance(attr, list):
84 return TrackedList(attr, self)
85 elif isinstance(attr, dict):
86 # TODO: OrderedDict?
87 return TrackedDict(attr, self)
88 elif isinstance(attr, set):
89 return TrackedSet(attr, self)
90 return attr
91
92 def untrack(self, value):
93 for store_name in self.stored_ids.keys():
94 if id(value) in self.stored_ids[store_name]:
95 self.stored_ids[store_name].remove(id(value))
96 python_utils.remove_by_id(self.config[store_name][1], value)
97
98 def lock(self, msg):
99 self.locked = True
100 self._lock_violation_msg = msg
101
102 def add_to_store(self, store_name, value):
103 if self.locked:
104 raise ValueError(self._lock_violation_msg)
105 self.config[store_name][1].append(value)
106 self.stored_ids[store_name].add(id(value))
107
108
109 class TrackedList(list):
110 # TODO: override item removal methods?
111 def __init__(self, values=None, tracker=None):
112 self.tracker = tracker
113 if tracker and values:
114 values = [tracker.track(v) for v in values]
115 super().__init__(values or [])
116
117 def append(self, value):
118 if self.tracker:
119 self.tracker.track(value)
120 super().append(value)
121
122 def insert(self, value):
123 if self.tracker:
124 self.tracker.track(value)
125 super().insert(value)
126
127 def extend(self, values):
128 if self.tracker:
129 values = [self.tracker.track(v) for v in values]
130 super().extend(values)
131
132 def remove(self, value):
133 if self.tracker:
134 self.tracker.untrack(value)
135 try:
136 super().remove(value)
137 except ValueError:
138 python_utils.remove_by_id(self, value)
139
140
141 class TrackedDict(dict):
142 # TODO: override item removal methods?
143 def __init__(self, values=None, tracker=None):
144 self.tracker = tracker
145 if tracker and values:
146 values = {k: tracker.track(v) for k, v in values.items()}
147 super().__init__(values or [])
148
149 def __setitem__(self, key, value):
150 if self.tracker:
151 self.tracker.track(value)
152 super().__setitem__(key, value)
153
154 def update(self, mapping):
155 if self.tracker:
156 mapping = {k: self.tracker.track(v) for k, v in mapping.items()}
157 super().update(mapping)
158
159
160 class TrackedSet(set):
161 # TODO: override item removal methods?
162 def __init__(self, values=None, tracker=None):
163 self.tracker = tracker
164 if tracker and values:
165 values = {tracker.track(v) for v in values}
166 super().__init__(values or [])
167
168 def add(self, value):
169 if self.tracker:
170 self.tracker.track(value)
171 super().add(value)
172
173 def update(self, values):
174 if self.tracker:
175 values = [self.tracker.track(v) for v in values]
176 super().update(values)
177
178 def remove(self, value):
179 if self.tracker:
180 self.tracker.untrack(value)
181 super().remove(value)
182
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/keras/utils/tracking.py b/keras/utils/tracking.py
--- a/keras/utils/tracking.py
+++ b/keras/utils/tracking.py
@@ -107,7 +107,6 @@
class TrackedList(list):
- # TODO: override item removal methods?
def __init__(self, values=None, tracker=None):
self.tracker = tracker
if tracker and values:
@@ -137,9 +136,28 @@
except ValueError:
python_utils.remove_by_id(self, value)
+ def pop(self, index=-1):
+ if self.tracker:
+ value = self[index]
+ self.tracker.untrack(value)
+ return super().pop(index)
+ else:
+ return super().pop(index)
+
+ def clear(self):
+ if self.tracker:
+ for value in self:
+ self.tracker.untrack(value)
+ super().clear()
+
+ def __delitem__(self, index):
+ value = self[index] # Get value before removing
+ super().__delitem__(index)
+ if self.tracker:
+ self.tracker.untrack(value)
+
class TrackedDict(dict):
- # TODO: override item removal methods?
def __init__(self, values=None, tracker=None):
self.tracker = tracker
if tracker and values:
@@ -156,9 +174,29 @@
mapping = {k: self.tracker.track(v) for k, v in mapping.items()}
super().update(mapping)
+ def pop(self, key, default=None):
+ if self.tracker:
+ value = super().pop(key, default)
+ if value is not default:
+ self.tracker.untrack(value)
+ return value
+ else:
+ return super().pop(key, default)
+
+ def popitem(self):
+ key, value = super().popitem()
+ if self.tracker:
+ self.tracker.untrack(value)
+ return key, value
+
+ def clear(self):
+ if self.tracker:
+ for value in self.values():
+ self.tracker.untrack(value)
+ super().clear()
+
class TrackedSet(set):
- # TODO: override item removal methods?
def __init__(self, values=None, tracker=None):
self.tracker = tracker
if tracker and values:
@@ -179,3 +217,15 @@
if self.tracker:
self.tracker.untrack(value)
super().remove(value)
+
+ def pop(self):
+ value = super().pop()
+ if self.tracker:
+ self.tracker.untrack(value)
+ return value
+
+ def clear(self):
+ if self.tracker:
+ for value in self:
+ self.tracker.untrack(value)
+ super().clear()
| {"golden_diff": "diff --git a/keras/utils/tracking.py b/keras/utils/tracking.py\n--- a/keras/utils/tracking.py\n+++ b/keras/utils/tracking.py\n@@ -107,7 +107,6 @@\n \n \n class TrackedList(list):\n- # TODO: override item removal methods?\n def __init__(self, values=None, tracker=None):\n self.tracker = tracker\n if tracker and values:\n@@ -137,9 +136,28 @@\n except ValueError:\n python_utils.remove_by_id(self, value)\n \n+ def pop(self, index=-1):\n+ if self.tracker:\n+ value = self[index]\n+ self.tracker.untrack(value)\n+ return super().pop(index)\n+ else:\n+ return super().pop(index)\n+\n+ def clear(self):\n+ if self.tracker:\n+ for value in self:\n+ self.tracker.untrack(value)\n+ super().clear()\n+\n+ def __delitem__(self, index):\n+ value = self[index] # Get value before removing\n+ super().__delitem__(index)\n+ if self.tracker:\n+ self.tracker.untrack(value)\n+\n \n class TrackedDict(dict):\n- # TODO: override item removal methods?\n def __init__(self, values=None, tracker=None):\n self.tracker = tracker\n if tracker and values:\n@@ -156,9 +174,29 @@\n mapping = {k: self.tracker.track(v) for k, v in mapping.items()}\n super().update(mapping)\n \n+ def pop(self, key, default=None):\n+ if self.tracker:\n+ value = super().pop(key, default)\n+ if value is not default:\n+ self.tracker.untrack(value)\n+ return value\n+ else:\n+ return super().pop(key, default)\n+\n+ def popitem(self):\n+ key, value = super().popitem()\n+ if self.tracker:\n+ self.tracker.untrack(value)\n+ return key, value\n+\n+ def clear(self):\n+ if self.tracker:\n+ for value in self.values():\n+ self.tracker.untrack(value)\n+ super().clear()\n+\n \n class TrackedSet(set):\n- # TODO: override item removal methods?\n def __init__(self, values=None, tracker=None):\n self.tracker = tracker\n if tracker and values:\n@@ -179,3 +217,15 @@\n if self.tracker:\n self.tracker.untrack(value)\n super().remove(value)\n+\n+ def pop(self):\n+ value = super().pop()\n+ if self.tracker:\n+ self.tracker.untrack(value)\n+ return value\n+\n+ def clear(self):\n+ if self.tracker:\n+ for value in self:\n+ self.tracker.untrack(value)\n+ super().clear()\n", "issue": "chore: override item removal methods in tracking\nBased on the TODO comments in keras/keras/utils/tracking.py\n", "before_files": [{"content": "from functools import wraps\n\nfrom keras.backend.common.global_state import get_global_attribute\nfrom keras.backend.common.global_state import set_global_attribute\nfrom keras.utils import python_utils\n\n\nclass DotNotTrackScope:\n def __enter__(self):\n self.original_value = is_tracking_enabled()\n set_global_attribute(\"tracking_on\", False)\n\n def __exit__(self, *args, **kwargs):\n set_global_attribute(\"tracking_on\", self.original_value)\n\n\ndef is_tracking_enabled():\n return get_global_attribute(\"tracking_on\", True)\n\n\ndef no_automatic_dependency_tracking(fn):\n @wraps(fn)\n def wrapper(*args, **kwargs):\n with DotNotTrackScope():\n return fn(*args, **kwargs)\n\n return wrapper\n\n\nclass Tracker:\n \"\"\"Attribute tracker, used for e.g. Variable tracking.\n\n Monitors certain attribute types\n and put them in appropriate lists in case of a match.\n\n Also passively tracks certain mutable collections\n (dict, list) so that items added to them later\n still get tracked. 
This is done by wrapping these\n collections into an equivalent, tracking-aware object.\n\n Usage:\n\n ```python\n def __init__(self):\n self.tracker = Tracker(\n # Format: `name: (test_fn, store)`\n {\n \"variables\":\n (lambda x: isinstance(x, Variable), self._variables),\n \"metrics\": (lambda x: isinstance(x, Metric), self._metrics),\n \"layers\": (lambda x: isinstance(x, Layer), self._layers),\n }\n )\n\n def __setattr__(self, name, value):\n if hasattr(self, \"_tracker\"):\n value = self._tracker.track(value)\n return super().__setattr__(name, value)\n ```\n \"\"\"\n\n def __init__(self, config):\n self.config = config\n self.stored_ids = {name: set() for name in self.config.keys()}\n self.locked = False\n self._lock_violation_msg = None\n\n def track(self, attr):\n if not is_tracking_enabled():\n return attr\n\n for store_name, (is_attr_type, _) in self.config.items():\n if is_attr_type(attr):\n if id(attr) not in self.stored_ids[store_name]:\n self.add_to_store(store_name, attr)\n return attr\n if isinstance(attr, tuple):\n wrapped_attr = []\n for e in attr:\n wrapped_attr.append(self.track(e))\n # This should cover tuples and nametuples\n return attr.__class__(wrapped_attr)\n elif isinstance(attr, list):\n return TrackedList(attr, self)\n elif isinstance(attr, dict):\n # TODO: OrderedDict?\n return TrackedDict(attr, self)\n elif isinstance(attr, set):\n return TrackedSet(attr, self)\n return attr\n\n def untrack(self, value):\n for store_name in self.stored_ids.keys():\n if id(value) in self.stored_ids[store_name]:\n self.stored_ids[store_name].remove(id(value))\n python_utils.remove_by_id(self.config[store_name][1], value)\n\n def lock(self, msg):\n self.locked = True\n self._lock_violation_msg = msg\n\n def add_to_store(self, store_name, value):\n if self.locked:\n raise ValueError(self._lock_violation_msg)\n self.config[store_name][1].append(value)\n self.stored_ids[store_name].add(id(value))\n\n\nclass TrackedList(list):\n # TODO: override item removal methods?\n def __init__(self, values=None, tracker=None):\n self.tracker = tracker\n if tracker and values:\n values = [tracker.track(v) for v in values]\n super().__init__(values or [])\n\n def append(self, value):\n if self.tracker:\n self.tracker.track(value)\n super().append(value)\n\n def insert(self, value):\n if self.tracker:\n self.tracker.track(value)\n super().insert(value)\n\n def extend(self, values):\n if self.tracker:\n values = [self.tracker.track(v) for v in values]\n super().extend(values)\n\n def remove(self, value):\n if self.tracker:\n self.tracker.untrack(value)\n try:\n super().remove(value)\n except ValueError:\n python_utils.remove_by_id(self, value)\n\n\nclass TrackedDict(dict):\n # TODO: override item removal methods?\n def __init__(self, values=None, tracker=None):\n self.tracker = tracker\n if tracker and values:\n values = {k: tracker.track(v) for k, v in values.items()}\n super().__init__(values or [])\n\n def __setitem__(self, key, value):\n if self.tracker:\n self.tracker.track(value)\n super().__setitem__(key, value)\n\n def update(self, mapping):\n if self.tracker:\n mapping = {k: self.tracker.track(v) for k, v in mapping.items()}\n super().update(mapping)\n\n\nclass TrackedSet(set):\n # TODO: override item removal methods?\n def __init__(self, values=None, tracker=None):\n self.tracker = tracker\n if tracker and values:\n values = {tracker.track(v) for v in values}\n super().__init__(values or [])\n\n def add(self, value):\n if self.tracker:\n self.tracker.track(value)\n super().add(value)\n\n 
def update(self, values):\n if self.tracker:\n values = [self.tracker.track(v) for v in values]\n super().update(values)\n\n def remove(self, value):\n if self.tracker:\n self.tracker.untrack(value)\n super().remove(value)\n", "path": "keras/utils/tracking.py"}], "after_files": [{"content": "from functools import wraps\n\nfrom keras.backend.common.global_state import get_global_attribute\nfrom keras.backend.common.global_state import set_global_attribute\nfrom keras.utils import python_utils\n\n\nclass DotNotTrackScope:\n def __enter__(self):\n self.original_value = is_tracking_enabled()\n set_global_attribute(\"tracking_on\", False)\n\n def __exit__(self, *args, **kwargs):\n set_global_attribute(\"tracking_on\", self.original_value)\n\n\ndef is_tracking_enabled():\n return get_global_attribute(\"tracking_on\", True)\n\n\ndef no_automatic_dependency_tracking(fn):\n @wraps(fn)\n def wrapper(*args, **kwargs):\n with DotNotTrackScope():\n return fn(*args, **kwargs)\n\n return wrapper\n\n\nclass Tracker:\n \"\"\"Attribute tracker, used for e.g. Variable tracking.\n\n Monitors certain attribute types\n and put them in appropriate lists in case of a match.\n\n Also passively tracks certain mutable collections\n (dict, list) so that items added to them later\n still get tracked. This is done by wrapping these\n collections into an equivalent, tracking-aware object.\n\n Usage:\n\n ```python\n def __init__(self):\n self.tracker = Tracker(\n # Format: `name: (test_fn, store)`\n {\n \"variables\":\n (lambda x: isinstance(x, Variable), self._variables),\n \"metrics\": (lambda x: isinstance(x, Metric), self._metrics),\n \"layers\": (lambda x: isinstance(x, Layer), self._layers),\n }\n )\n\n def __setattr__(self, name, value):\n if hasattr(self, \"_tracker\"):\n value = self._tracker.track(value)\n return super().__setattr__(name, value)\n ```\n \"\"\"\n\n def __init__(self, config):\n self.config = config\n self.stored_ids = {name: set() for name in self.config.keys()}\n self.locked = False\n self._lock_violation_msg = None\n\n def track(self, attr):\n if not is_tracking_enabled():\n return attr\n\n for store_name, (is_attr_type, _) in self.config.items():\n if is_attr_type(attr):\n if id(attr) not in self.stored_ids[store_name]:\n self.add_to_store(store_name, attr)\n return attr\n if isinstance(attr, tuple):\n wrapped_attr = []\n for e in attr:\n wrapped_attr.append(self.track(e))\n # This should cover tuples and nametuples\n return attr.__class__(wrapped_attr)\n elif isinstance(attr, list):\n return TrackedList(attr, self)\n elif isinstance(attr, dict):\n # TODO: OrderedDict?\n return TrackedDict(attr, self)\n elif isinstance(attr, set):\n return TrackedSet(attr, self)\n return attr\n\n def untrack(self, value):\n for store_name in self.stored_ids.keys():\n if id(value) in self.stored_ids[store_name]:\n self.stored_ids[store_name].remove(id(value))\n python_utils.remove_by_id(self.config[store_name][1], value)\n\n def lock(self, msg):\n self.locked = True\n self._lock_violation_msg = msg\n\n def add_to_store(self, store_name, value):\n if self.locked:\n raise ValueError(self._lock_violation_msg)\n self.config[store_name][1].append(value)\n self.stored_ids[store_name].add(id(value))\n\n\nclass TrackedList(list):\n def __init__(self, values=None, tracker=None):\n self.tracker = tracker\n if tracker and values:\n values = [tracker.track(v) for v in values]\n super().__init__(values or [])\n\n def append(self, value):\n if self.tracker:\n self.tracker.track(value)\n super().append(value)\n\n def 
insert(self, value):\n if self.tracker:\n self.tracker.track(value)\n super().insert(value)\n\n def extend(self, values):\n if self.tracker:\n values = [self.tracker.track(v) for v in values]\n super().extend(values)\n\n def remove(self, value):\n if self.tracker:\n self.tracker.untrack(value)\n try:\n super().remove(value)\n except ValueError:\n python_utils.remove_by_id(self, value)\n\n def pop(self, index=-1):\n if self.tracker:\n value = self[index]\n self.tracker.untrack(value)\n return super().pop(index)\n else:\n return super().pop(index)\n\n def clear(self):\n if self.tracker:\n for value in self:\n self.tracker.untrack(value)\n super().clear()\n\n def __delitem__(self, index):\n value = self[index] # Get value before removing\n super().__delitem__(index)\n if self.tracker:\n self.tracker.untrack(value)\n\n\nclass TrackedDict(dict):\n def __init__(self, values=None, tracker=None):\n self.tracker = tracker\n if tracker and values:\n values = {k: tracker.track(v) for k, v in values.items()}\n super().__init__(values or [])\n\n def __setitem__(self, key, value):\n if self.tracker:\n self.tracker.track(value)\n super().__setitem__(key, value)\n\n def update(self, mapping):\n if self.tracker:\n mapping = {k: self.tracker.track(v) for k, v in mapping.items()}\n super().update(mapping)\n\n def pop(self, key, default=None):\n if self.tracker:\n value = super().pop(key, default)\n if value is not default:\n self.tracker.untrack(value)\n return value\n else:\n return super().pop(key, default)\n\n def popitem(self):\n key, value = super().popitem()\n if self.tracker:\n self.tracker.untrack(value)\n return key, value\n\n def clear(self):\n if self.tracker:\n for value in self.values():\n self.tracker.untrack(value)\n super().clear()\n\n\nclass TrackedSet(set):\n def __init__(self, values=None, tracker=None):\n self.tracker = tracker\n if tracker and values:\n values = {tracker.track(v) for v in values}\n super().__init__(values or [])\n\n def add(self, value):\n if self.tracker:\n self.tracker.track(value)\n super().add(value)\n\n def update(self, values):\n if self.tracker:\n values = [self.tracker.track(v) for v in values]\n super().update(values)\n\n def remove(self, value):\n if self.tracker:\n self.tracker.untrack(value)\n super().remove(value)\n\n def pop(self):\n value = super().pop()\n if self.tracker:\n self.tracker.untrack(value)\n return value\n\n def clear(self):\n if self.tracker:\n for value in self:\n self.tracker.untrack(value)\n super().clear()\n", "path": "keras/utils/tracking.py"}]} | 1,973 | 648 |
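Note (editorial, not part of the record): the keras fix above works because every removal path must mirror the tracking done on insertion. A simplified, runnable sketch of that invariant — the `MiniTracker`/`MiniTrackedList` names are illustrative, not keras API:

```python
class MiniTracker:
    """Toy tracker: keeps a side list of everything a collection holds."""

    def __init__(self):
        self.store = []

    def track(self, value):
        self.store.append(value)
        return value

    def untrack(self, value):
        self.store = [v for v in self.store if v is not value]


class MiniTrackedList(list):
    def __init__(self, values, tracker):
        self.tracker = tracker
        super().__init__(tracker.track(v) for v in values)

    def pop(self, index=-1):
        value = super().pop(index)
        self.tracker.untrack(value)  # removal must undo the tracking
        return value

    def clear(self):
        for value in self:
            self.tracker.untrack(value)
        super().clear()


t = MiniTracker()
lst = MiniTrackedList(["a", "b"], t)
lst.pop()
assert t.store == ["a"]  # without the override, "b" would leak in the store
```

Without the overridden `pop`/`clear`/`__delitem__`, the tracker's store drifts out of sync with the collection — exactly the gap the TODO comments flagged.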
gh_patches_debug_37011 | rasdani/github-patches | git_diff | huggingface__dataset-viewer-1084 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
No specific error when dataset tries to import a non-installed module
When a dataset script tries to import a module/library that is not installed, there is no informative error message.
See:
- #1067
- #1068
Related to:
- #976
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `services/worker/src/worker/job_runners/config_names.py`
Content:
```
1 # SPDX-License-Identifier: Apache-2.0
2 # Copyright 2022 The HuggingFace Authors.
3
4 import logging
5 from http import HTTPStatus
6 from typing import Any, List, Literal, Mapping, Optional, TypedDict, Union
7
8 from datasets import get_dataset_config_names
9 from datasets.data_files import EmptyDatasetError as _EmptyDatasetError
10 from libcommon.constants import PROCESSING_STEP_CONFIG_NAMES_VERSION
11 from libcommon.simple_cache import SplitFullName
12
13 from worker.job_runner import CompleteJobResult, JobRunnerError, ParameterMissingError
14 from worker.job_runners._datasets_based_job_runner import DatasetsBasedJobRunner
15
16 ConfigNamesJobRunnerErrorCode = Literal["EmptyDatasetError", "ConfigNamesError"]
17
18
19 class ConfigNamesJobRunnerError(JobRunnerError):
20 """Base class for job runner exceptions."""
21
22 def __init__(
23 self,
24 message: str,
25 status_code: HTTPStatus,
26 code: ConfigNamesJobRunnerErrorCode,
27 cause: Optional[BaseException] = None,
28 disclose_cause: bool = False,
29 ):
30 super().__init__(
31 message=message, status_code=status_code, code=code, cause=cause, disclose_cause=disclose_cause
32 )
33
34
35 class EmptyDatasetError(ConfigNamesJobRunnerError):
36 """Raised when the dataset has no data."""
37
38 def __init__(self, message: str, cause: Optional[BaseException] = None):
39 super().__init__(message, HTTPStatus.INTERNAL_SERVER_ERROR, "EmptyDatasetError", cause, True)
40
41
42 class ConfigNamesError(ConfigNamesJobRunnerError):
43 """Raised when the config names could not be fetched."""
44
45 def __init__(self, message: str, cause: Optional[BaseException] = None):
46 super().__init__(message, HTTPStatus.INTERNAL_SERVER_ERROR, "ConfigNamesError", cause, True)
47
48
49 class ConfigNameItem(TypedDict):
50 dataset: str
51 config: str
52
53
54 class ConfigNamesResponse(TypedDict):
55 config_names: List[ConfigNameItem]
56
57
58 def compute_config_names_response(
59 dataset: str,
60 hf_token: Optional[str] = None,
61 ) -> ConfigNamesResponse:
62 """
63 Get the response of /config-names for one specific dataset on huggingface.co.
64 Dataset can be private or gated if you pass an acceptable token.
65
66 It is assumed that the dataset exists and can be accessed using the token.
67
68 Args:
69 dataset (`str`):
70 A namespace (user or an organization) and a repo name separated
71 by a `/`.
72 hf_token (`str`, *optional*):
73 An authentication token (See https://huggingface.co/settings/token)
74 Returns:
75 `ConfigNamesResponse`: An object with the list of config names.
76 <Tip>
77 Raises the following errors:
78 - [`~job_runners.config_names.EmptyDatasetError`]
79 The dataset is empty.
80 - [`~job_runners.config_names.ConfigNamesError`]
81 If the list of configs could not be obtained using the datasets library.
82 </Tip>
83 """
84 logging.info(f"get config names for dataset={dataset}")
85 use_auth_token: Union[bool, str, None] = hf_token if hf_token is not None else False
86 # get the list of splits in streaming mode
87 try:
88 config_name_items: List[ConfigNameItem] = [
89 {"dataset": dataset, "config": str(config)}
90 for config in sorted(get_dataset_config_names(path=dataset, use_auth_token=use_auth_token))
91 ]
92 except _EmptyDatasetError as err:
93 raise EmptyDatasetError("The dataset is empty.", cause=err) from err
94 except Exception as err:
95 raise ConfigNamesError("Cannot get the config names for the dataset.", cause=err) from err
96 return ConfigNamesResponse(config_names=config_name_items)
97
98
99 class ConfigNamesJobRunner(DatasetsBasedJobRunner):
100 @staticmethod
101 def get_job_type() -> str:
102 return "/config-names"
103
104 @staticmethod
105 def get_job_runner_version() -> int:
106 return PROCESSING_STEP_CONFIG_NAMES_VERSION
107
108 def compute(self) -> CompleteJobResult:
109 if self.dataset is None:
110 raise ParameterMissingError("'dataset' parameter is required")
111 return CompleteJobResult(
112 compute_config_names_response(dataset=self.dataset, hf_token=self.common_config.hf_token)
113 )
114
115 def get_new_splits(self, content: Mapping[str, Any]) -> set[SplitFullName]:
116 """Get the set of new splits, from the content created by the compute."""
117 return {SplitFullName(dataset=s["dataset"], config=s["config"], split=None) for s in content["config_names"]}
118
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/services/worker/src/worker/job_runners/config_names.py b/services/worker/src/worker/job_runners/config_names.py
--- a/services/worker/src/worker/job_runners/config_names.py
+++ b/services/worker/src/worker/job_runners/config_names.py
@@ -13,7 +13,7 @@
from worker.job_runner import CompleteJobResult, JobRunnerError, ParameterMissingError
from worker.job_runners._datasets_based_job_runner import DatasetsBasedJobRunner
-ConfigNamesJobRunnerErrorCode = Literal["EmptyDatasetError", "ConfigNamesError"]
+ConfigNamesJobRunnerErrorCode = Literal["EmptyDatasetError", "DatasetModuleNotInstalledError", "ConfigNamesError"]
class ConfigNamesJobRunnerError(JobRunnerError):
@@ -39,6 +39,13 @@
super().__init__(message, HTTPStatus.INTERNAL_SERVER_ERROR, "EmptyDatasetError", cause, True)
+class DatasetModuleNotInstalledError(ConfigNamesJobRunnerError):
+ """Raised when the dataset tries to import a module that is not installed."""
+
+ def __init__(self, message: str, cause: Optional[BaseException] = None):
+ super().__init__(message, HTTPStatus.INTERNAL_SERVER_ERROR, "DatasetModuleNotInstalledError", cause, True)
+
+
class ConfigNamesError(ConfigNamesJobRunnerError):
"""Raised when the config names could not be fetched."""
@@ -77,6 +84,8 @@
Raises the following errors:
- [`~job_runners.config_names.EmptyDatasetError`]
The dataset is empty.
+ - [`~job_runners.config_names.DatasetModuleNotInstalledError`]
+ The dataset tries to import a module that is not installed.
- [`~job_runners.config_names.ConfigNamesError`]
If the list of configs could not be obtained using the datasets library.
</Tip>
@@ -91,6 +100,10 @@
]
except _EmptyDatasetError as err:
raise EmptyDatasetError("The dataset is empty.", cause=err) from err
+ except ImportError as err:
+ raise DatasetModuleNotInstalledError(
+ "The dataset tries to import a module that is not installed.", cause=err
+ ) from err
except Exception as err:
raise ConfigNamesError("Cannot get the config names for the dataset.", cause=err) from err
return ConfigNamesResponse(config_names=config_name_items)
| {"golden_diff": "diff --git a/services/worker/src/worker/job_runners/config_names.py b/services/worker/src/worker/job_runners/config_names.py\n--- a/services/worker/src/worker/job_runners/config_names.py\n+++ b/services/worker/src/worker/job_runners/config_names.py\n@@ -13,7 +13,7 @@\n from worker.job_runner import CompleteJobResult, JobRunnerError, ParameterMissingError\n from worker.job_runners._datasets_based_job_runner import DatasetsBasedJobRunner\n \n-ConfigNamesJobRunnerErrorCode = Literal[\"EmptyDatasetError\", \"ConfigNamesError\"]\n+ConfigNamesJobRunnerErrorCode = Literal[\"EmptyDatasetError\", \"DatasetModuleNotInstalledError\", \"ConfigNamesError\"]\n \n \n class ConfigNamesJobRunnerError(JobRunnerError):\n@@ -39,6 +39,13 @@\n super().__init__(message, HTTPStatus.INTERNAL_SERVER_ERROR, \"EmptyDatasetError\", cause, True)\n \n \n+class DatasetModuleNotInstalledError(ConfigNamesJobRunnerError):\n+ \"\"\"Raised when the dataset tries to import a module that is not installed.\"\"\"\n+\n+ def __init__(self, message: str, cause: Optional[BaseException] = None):\n+ super().__init__(message, HTTPStatus.INTERNAL_SERVER_ERROR, \"DatasetModuleNotInstalledError\", cause, True)\n+\n+\n class ConfigNamesError(ConfigNamesJobRunnerError):\n \"\"\"Raised when the config names could not be fetched.\"\"\"\n \n@@ -77,6 +84,8 @@\n Raises the following errors:\n - [`~job_runners.config_names.EmptyDatasetError`]\n The dataset is empty.\n+ - [`~job_runners.config_names.DatasetModuleNotInstalledError`]\n+ The dataset tries to import a module that is not installed.\n - [`~job_runners.config_names.ConfigNamesError`]\n If the list of configs could not be obtained using the datasets library.\n </Tip>\n@@ -91,6 +100,10 @@\n ]\n except _EmptyDatasetError as err:\n raise EmptyDatasetError(\"The dataset is empty.\", cause=err) from err\n+ except ImportError as err:\n+ raise DatasetModuleNotInstalledError(\n+ \"The dataset tries to import a module that is not installed.\", cause=err\n+ ) from err\n except Exception as err:\n raise ConfigNamesError(\"Cannot get the config names for the dataset.\", cause=err) from err\n return ConfigNamesResponse(config_names=config_name_items)\n", "issue": "No specific error when dataset tries to import a non-installed module\nWhen a dataset script tries to import a module/library that is not installed, there is no informative error message.\r\n\r\nSee:\r\n- #1067 \r\n- #1068\r\n\r\nRelated to:\r\n- #976\n", "before_files": [{"content": "# SPDX-License-Identifier: Apache-2.0\n# Copyright 2022 The HuggingFace Authors.\n\nimport logging\nfrom http import HTTPStatus\nfrom typing import Any, List, Literal, Mapping, Optional, TypedDict, Union\n\nfrom datasets import get_dataset_config_names\nfrom datasets.data_files import EmptyDatasetError as _EmptyDatasetError\nfrom libcommon.constants import PROCESSING_STEP_CONFIG_NAMES_VERSION\nfrom libcommon.simple_cache import SplitFullName\n\nfrom worker.job_runner import CompleteJobResult, JobRunnerError, ParameterMissingError\nfrom worker.job_runners._datasets_based_job_runner import DatasetsBasedJobRunner\n\nConfigNamesJobRunnerErrorCode = Literal[\"EmptyDatasetError\", \"ConfigNamesError\"]\n\n\nclass ConfigNamesJobRunnerError(JobRunnerError):\n \"\"\"Base class for job runner exceptions.\"\"\"\n\n def __init__(\n self,\n message: str,\n status_code: HTTPStatus,\n code: ConfigNamesJobRunnerErrorCode,\n cause: Optional[BaseException] = None,\n disclose_cause: bool = False,\n ):\n super().__init__(\n message=message, 
status_code=status_code, code=code, cause=cause, disclose_cause=disclose_cause\n )\n\n\nclass EmptyDatasetError(ConfigNamesJobRunnerError):\n \"\"\"Raised when the dataset has no data.\"\"\"\n\n def __init__(self, message: str, cause: Optional[BaseException] = None):\n super().__init__(message, HTTPStatus.INTERNAL_SERVER_ERROR, \"EmptyDatasetError\", cause, True)\n\n\nclass ConfigNamesError(ConfigNamesJobRunnerError):\n \"\"\"Raised when the config names could not be fetched.\"\"\"\n\n def __init__(self, message: str, cause: Optional[BaseException] = None):\n super().__init__(message, HTTPStatus.INTERNAL_SERVER_ERROR, \"ConfigNamesError\", cause, True)\n\n\nclass ConfigNameItem(TypedDict):\n dataset: str\n config: str\n\n\nclass ConfigNamesResponse(TypedDict):\n config_names: List[ConfigNameItem]\n\n\ndef compute_config_names_response(\n dataset: str,\n hf_token: Optional[str] = None,\n) -> ConfigNamesResponse:\n \"\"\"\n Get the response of /config-names for one specific dataset on huggingface.co.\n Dataset can be private or gated if you pass an acceptable token.\n\n It is assumed that the dataset exists and can be accessed using the token.\n\n Args:\n dataset (`str`):\n A namespace (user or an organization) and a repo name separated\n by a `/`.\n hf_token (`str`, *optional*):\n An authentication token (See https://huggingface.co/settings/token)\n Returns:\n `ConfigNamesResponse`: An object with the list of config names.\n <Tip>\n Raises the following errors:\n - [`~job_runners.config_names.EmptyDatasetError`]\n The dataset is empty.\n - [`~job_runners.config_names.ConfigNamesError`]\n If the list of configs could not be obtained using the datasets library.\n </Tip>\n \"\"\"\n logging.info(f\"get config names for dataset={dataset}\")\n use_auth_token: Union[bool, str, None] = hf_token if hf_token is not None else False\n # get the list of splits in streaming mode\n try:\n config_name_items: List[ConfigNameItem] = [\n {\"dataset\": dataset, \"config\": str(config)}\n for config in sorted(get_dataset_config_names(path=dataset, use_auth_token=use_auth_token))\n ]\n except _EmptyDatasetError as err:\n raise EmptyDatasetError(\"The dataset is empty.\", cause=err) from err\n except Exception as err:\n raise ConfigNamesError(\"Cannot get the config names for the dataset.\", cause=err) from err\n return ConfigNamesResponse(config_names=config_name_items)\n\n\nclass ConfigNamesJobRunner(DatasetsBasedJobRunner):\n @staticmethod\n def get_job_type() -> str:\n return \"/config-names\"\n\n @staticmethod\n def get_job_runner_version() -> int:\n return PROCESSING_STEP_CONFIG_NAMES_VERSION\n\n def compute(self) -> CompleteJobResult:\n if self.dataset is None:\n raise ParameterMissingError(\"'dataset' parameter is required\")\n return CompleteJobResult(\n compute_config_names_response(dataset=self.dataset, hf_token=self.common_config.hf_token)\n )\n\n def get_new_splits(self, content: Mapping[str, Any]) -> set[SplitFullName]:\n \"\"\"Get the set of new splits, from the content created by the compute.\"\"\"\n return {SplitFullName(dataset=s[\"dataset\"], config=s[\"config\"], split=None) for s in content[\"config_names\"]}\n", "path": "services/worker/src/worker/job_runners/config_names.py"}], "after_files": [{"content": "# SPDX-License-Identifier: Apache-2.0\n# Copyright 2022 The HuggingFace Authors.\n\nimport logging\nfrom http import HTTPStatus\nfrom typing import Any, List, Literal, Mapping, Optional, TypedDict, Union\n\nfrom datasets import get_dataset_config_names\nfrom datasets.data_files import 
EmptyDatasetError as _EmptyDatasetError\nfrom libcommon.constants import PROCESSING_STEP_CONFIG_NAMES_VERSION\nfrom libcommon.simple_cache import SplitFullName\n\nfrom worker.job_runner import CompleteJobResult, JobRunnerError, ParameterMissingError\nfrom worker.job_runners._datasets_based_job_runner import DatasetsBasedJobRunner\n\nConfigNamesJobRunnerErrorCode = Literal[\"EmptyDatasetError\", \"DatasetModuleNotInstalledError\", \"ConfigNamesError\"]\n\n\nclass ConfigNamesJobRunnerError(JobRunnerError):\n \"\"\"Base class for job runner exceptions.\"\"\"\n\n def __init__(\n self,\n message: str,\n status_code: HTTPStatus,\n code: ConfigNamesJobRunnerErrorCode,\n cause: Optional[BaseException] = None,\n disclose_cause: bool = False,\n ):\n super().__init__(\n message=message, status_code=status_code, code=code, cause=cause, disclose_cause=disclose_cause\n )\n\n\nclass EmptyDatasetError(ConfigNamesJobRunnerError):\n \"\"\"Raised when the dataset has no data.\"\"\"\n\n def __init__(self, message: str, cause: Optional[BaseException] = None):\n super().__init__(message, HTTPStatus.INTERNAL_SERVER_ERROR, \"EmptyDatasetError\", cause, True)\n\n\nclass DatasetModuleNotInstalledError(ConfigNamesJobRunnerError):\n \"\"\"Raised when the dataset tries to import a module that is not installed.\"\"\"\n\n def __init__(self, message: str, cause: Optional[BaseException] = None):\n super().__init__(message, HTTPStatus.INTERNAL_SERVER_ERROR, \"DatasetModuleNotInstalledError\", cause, True)\n\n\nclass ConfigNamesError(ConfigNamesJobRunnerError):\n \"\"\"Raised when the config names could not be fetched.\"\"\"\n\n def __init__(self, message: str, cause: Optional[BaseException] = None):\n super().__init__(message, HTTPStatus.INTERNAL_SERVER_ERROR, \"ConfigNamesError\", cause, True)\n\n\nclass ConfigNameItem(TypedDict):\n dataset: str\n config: str\n\n\nclass ConfigNamesResponse(TypedDict):\n config_names: List[ConfigNameItem]\n\n\ndef compute_config_names_response(\n dataset: str,\n hf_token: Optional[str] = None,\n) -> ConfigNamesResponse:\n \"\"\"\n Get the response of /config-names for one specific dataset on huggingface.co.\n Dataset can be private or gated if you pass an acceptable token.\n\n It is assumed that the dataset exists and can be accessed using the token.\n\n Args:\n dataset (`str`):\n A namespace (user or an organization) and a repo name separated\n by a `/`.\n hf_token (`str`, *optional*):\n An authentication token (See https://huggingface.co/settings/token)\n Returns:\n `ConfigNamesResponse`: An object with the list of config names.\n <Tip>\n Raises the following errors:\n - [`~job_runners.config_names.EmptyDatasetError`]\n The dataset is empty.\n - [`~job_runners.config_names.DatasetModuleNotInstalledError`]\n The dataset tries to import a module that is not installed.\n - [`~job_runners.config_names.ConfigNamesError`]\n If the list of configs could not be obtained using the datasets library.\n </Tip>\n \"\"\"\n logging.info(f\"get config names for dataset={dataset}\")\n use_auth_token: Union[bool, str, None] = hf_token if hf_token is not None else False\n # get the list of splits in streaming mode\n try:\n config_name_items: List[ConfigNameItem] = [\n {\"dataset\": dataset, \"config\": str(config)}\n for config in sorted(get_dataset_config_names(path=dataset, use_auth_token=use_auth_token))\n ]\n except _EmptyDatasetError as err:\n raise EmptyDatasetError(\"The dataset is empty.\", cause=err) from err\n except ImportError as err:\n raise DatasetModuleNotInstalledError(\n \"The dataset 
tries to import a module that is not installed.\", cause=err\n ) from err\n except Exception as err:\n raise ConfigNamesError(\"Cannot get the config names for the dataset.\", cause=err) from err\n return ConfigNamesResponse(config_names=config_name_items)\n\n\nclass ConfigNamesJobRunner(DatasetsBasedJobRunner):\n @staticmethod\n def get_job_type() -> str:\n return \"/config-names\"\n\n @staticmethod\n def get_job_runner_version() -> int:\n return PROCESSING_STEP_CONFIG_NAMES_VERSION\n\n def compute(self) -> CompleteJobResult:\n if self.dataset is None:\n raise ParameterMissingError(\"'dataset' parameter is required\")\n return CompleteJobResult(\n compute_config_names_response(dataset=self.dataset, hf_token=self.common_config.hf_token)\n )\n\n def get_new_splits(self, content: Mapping[str, Any]) -> set[SplitFullName]:\n \"\"\"Get the set of new splits, from the content created by the compute.\"\"\"\n return {SplitFullName(dataset=s[\"dataset\"], config=s[\"config\"], split=None) for s in content[\"config_names\"]}\n", "path": "services/worker/src/worker/job_runners/config_names.py"}]} | 1,562 | 524 |
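Note (editorial, not part of the record): the essence of this fix is ordering a narrow `except ImportError` before the generic catch-all so the user sees a specific error. A self-contained sketch of the pattern (names are illustrative, not the dataset-viewer codebase):

```python
class DatasetModuleNotInstalledError(RuntimeError):
    pass


class ConfigNamesError(RuntimeError):
    pass


def load_config_names(loader):
    try:
        return loader()
    except ImportError as err:  # must be checked before the generic branch
        raise DatasetModuleNotInstalledError(
            "The dataset tries to import a module that is not installed."
        ) from err
    except Exception as err:
        raise ConfigNamesError("Cannot get the config names.") from err


def bad_loader():
    import this_module_does_not_exist  # noqa: F401


try:
    load_config_names(bad_loader)
except DatasetModuleNotInstalledError as exc:
    print(type(exc).__name__, "caused by", type(exc.__cause__).__name__)
```

`ModuleNotFoundError` subclasses `ImportError`, so the single narrow clause covers both, and `raise ... from err` preserves the original cause for the error response.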
gh_patches_debug_19065 | rasdani/github-patches | git_diff | Azure__azure-cli-extensions-2850 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Is it possible to query Log Analytics via the az cli with a saved query?
I can’t tell from the documentation: is it possible to run a saved Log Analytics Query from this CLI command?
If not, a useful enhancement would be to enable the use of a saved query in addition to the ability to execute queries in-line. The queries get long and cumbersome to maintain outside of Log Analytics.
If it is, however, possible to run a saved query, would you mind updating the documentation here? Thanks.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: f0fd6a58-ac1a-fa45-8d96-579b4af36499
* Version Independent ID: 4098ca97-1b85-eb29-18e9-e6f0495fd030
* Content: [az monitor log-analytics](https://docs.microsoft.com/en-us/cli/azure/ext/log-analytics/monitor/log-analytics?view=azure-cli-latest)
* Content Source: [latest/docs-ref-autogen/ext/log-analytics/monitor/log-analytics.yml](https://github.com/MicrosoftDocs/azure-docs-cli/blob/master/latest/docs-ref-autogen/ext/log-analytics/monitor/log-analytics.yml)
* GitHub Login: @rloutlaw
* Microsoft Alias: **routlaw**
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/log-analytics/setup.py`
Content:
```
1 #!/usr/bin/env python
2
3 # --------------------------------------------------------------------------------------------
4 # Copyright (c) Microsoft Corporation. All rights reserved.
5 # Licensed under the MIT License. See License.txt in the project root for license information.
6 # --------------------------------------------------------------------------------------------
7
8 from codecs import open
9 from setuptools import setup, find_packages
10
11 VERSION = "0.2.1"
12
13 CLASSIFIERS = [
14 'Development Status :: 4 - Beta',
15 'Intended Audience :: Developers',
16 'Intended Audience :: System Administrators',
17 'Programming Language :: Python',
18 'Programming Language :: Python :: 2',
19 'Programming Language :: Python :: 2.7',
20 'Programming Language :: Python :: 3',
21 'Programming Language :: Python :: 3.4',
22 'Programming Language :: Python :: 3.5',
23 'Programming Language :: Python :: 3.6',
24 'License :: OSI Approved :: MIT License',
25 ]
26
27 DEPENDENCIES = []
28
29 with open('README.rst', 'r', encoding='utf-8') as f:
30 README = f.read()
31 with open('HISTORY.rst', 'r', encoding='utf-8') as f:
32 HISTORY = f.read()
33
34 setup(
35 name='log-analytics',
36 version=VERSION,
37 description='Support for Azure Log Analytics query capabilities.',
38 long_description=README + '\n\n' + HISTORY,
39 license='MIT',
40 author='Ace Eldeib',
41 author_email='[email protected]',
42 url='https://github.com/Azure/azure-cli-extensions/tree/master/src/log-analytics',
43 classifiers=CLASSIFIERS,
44 packages=find_packages(exclude=["tests"]),
45 package_data={'azext_loganalytics': ['azext_metadata.json']},
46 install_requires=DEPENDENCIES
47 )
48
```
Path: `src/log-analytics/azext_loganalytics/_help.py`
Content:
```
1 # --------------------------------------------------------------------------------------------
2 # Copyright (c) Microsoft Corporation. All rights reserved.
3 # Licensed under the MIT License. See License.txt in the project root for license information.
4 # --------------------------------------------------------------------------------------------
5
6 from knack.help_files import helps
7
8 # pylint: disable=line-too-long
9
10 helps['monitor log-analytics'] = """
11 type: group
12 short-summary: Commands for querying data in Log Analytics workspaces.
13 """
14
15 helps['monitor log-analytics query'] = """
16 type: command
17 short-summary: Query a Log Analytics workspace.
18 examples:
19 - name: Execute a simple query over past 3.5 days.
20 text: |
21 az monitor log-analytics query -w b8317023-66e4-4edc-8a5b-7c002b22f92f --analytics-query "AzureActivity | summarize count() by bin(timestamp, 1h)" -t P3DT12H
22 """
23
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/log-analytics/azext_loganalytics/_help.py b/src/log-analytics/azext_loganalytics/_help.py
--- a/src/log-analytics/azext_loganalytics/_help.py
+++ b/src/log-analytics/azext_loganalytics/_help.py
@@ -18,5 +18,9 @@
examples:
- name: Execute a simple query over past 3.5 days.
text: |
- az monitor log-analytics query -w b8317023-66e4-4edc-8a5b-7c002b22f92f --analytics-query "AzureActivity | summarize count() by bin(timestamp, 1h)" -t P3DT12H
+ az monitor log-analytics query -w workspace-customId --analytics-query "AzureActivity | summarize count() by bin(timestamp, 1h)" -t P3DT12H
+ - name: Execute a saved query in workspace
+ text: |
+ QUERY=$(az monitor log-analytics workspace saved-search show -g resource-group --workspace-name workspace-name -n query-name --query query --output tsv)
+ az monitor log-analytics query -w workspace-customId --analytics-query "$QUERY"
"""
diff --git a/src/log-analytics/setup.py b/src/log-analytics/setup.py
--- a/src/log-analytics/setup.py
+++ b/src/log-analytics/setup.py
@@ -8,7 +8,7 @@
from codecs import open
from setuptools import setup, find_packages
-VERSION = "0.2.1"
+VERSION = "0.2.2"
CLASSIFIERS = [
'Development Status :: 4 - Beta',
| {"golden_diff": "diff --git a/src/log-analytics/azext_loganalytics/_help.py b/src/log-analytics/azext_loganalytics/_help.py\n--- a/src/log-analytics/azext_loganalytics/_help.py\n+++ b/src/log-analytics/azext_loganalytics/_help.py\n@@ -18,5 +18,9 @@\n examples:\n - name: Execute a simple query over past 3.5 days.\n text: |\n- az monitor log-analytics query -w b8317023-66e4-4edc-8a5b-7c002b22f92f --analytics-query \"AzureActivity | summarize count() by bin(timestamp, 1h)\" -t P3DT12H\n+ az monitor log-analytics query -w workspace-customId --analytics-query \"AzureActivity | summarize count() by bin(timestamp, 1h)\" -t P3DT12H\n+ - name: Execute a saved query in workspace\n+ text: |\n+ QUERY=$(az monitor log-analytics workspace saved-search show -g resource-group --workspace-name workspace-name -n query-name --query query --output tsv)\n+ az monitor log-analytics query -w workspace-customId --analytics-query \"$QUERY\"\n \"\"\"\ndiff --git a/src/log-analytics/setup.py b/src/log-analytics/setup.py\n--- a/src/log-analytics/setup.py\n+++ b/src/log-analytics/setup.py\n@@ -8,7 +8,7 @@\n from codecs import open\n from setuptools import setup, find_packages\n \n-VERSION = \"0.2.1\"\n+VERSION = \"0.2.2\"\n \n CLASSIFIERS = [\n 'Development Status :: 4 - Beta',\n", "issue": "Is it possible to query Log Analytics via the az cli with a saved query?\n\r\nI can\u2019t tell from the documentation, is it possible to run a saved Log Analytics Query from this CLI command? \r\n\r\nIf not, a useful enhancement would be to enable the use a saved query in addition to the ability to execute queries in-line. The queries get long and cumbersome to maintain outside of Log Analytics.\r\n\r\nIf it is, however, possible to run a saved query, would you mind updating the documentation here? Thanks.\r\n\r\n\r\n---\r\n#### Document Details\r\n\r\n\u26a0 *Do not edit this section. It is required for docs.microsoft.com \u279f GitHub issue linking.*\r\n\r\n* ID: f0fd6a58-ac1a-fa45-8d96-579b4af36499\r\n* Version Independent ID: 4098ca97-1b85-eb29-18e9-e6f0495fd030\r\n* Content: [az monitor log-analytics](https://docs.microsoft.com/en-us/cli/azure/ext/log-analytics/monitor/log-analytics?view=azure-cli-latest)\r\n* Content Source: [latest/docs-ref-autogen/ext/log-analytics/monitor/log-analytics.yml](https://github.com/MicrosoftDocs/azure-docs-cli/blob/master/latest/docs-ref-autogen/ext/log-analytics/monitor/log-analytics.yml)\r\n* GitHub Login: @rloutlaw\r\n* Microsoft Alias: **routlaw**\n", "before_files": [{"content": "#!/usr/bin/env python\n\n# --------------------------------------------------------------------------------------------\n# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License. 
See License.txt in the project root for license information.\n# --------------------------------------------------------------------------------------------\n\nfrom codecs import open\nfrom setuptools import setup, find_packages\n\nVERSION = \"0.2.1\"\n\nCLASSIFIERS = [\n 'Development Status :: 4 - Beta',\n 'Intended Audience :: Developers',\n 'Intended Audience :: System Administrators',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'License :: OSI Approved :: MIT License',\n]\n\nDEPENDENCIES = []\n\nwith open('README.rst', 'r', encoding='utf-8') as f:\n README = f.read()\nwith open('HISTORY.rst', 'r', encoding='utf-8') as f:\n HISTORY = f.read()\n\nsetup(\n name='log-analytics',\n version=VERSION,\n description='Support for Azure Log Analytics query capabilities.',\n long_description=README + '\\n\\n' + HISTORY,\n license='MIT',\n author='Ace Eldeib',\n author_email='[email protected]',\n url='https://github.com/Azure/azure-cli-extensions/tree/master/src/log-analytics',\n classifiers=CLASSIFIERS,\n packages=find_packages(exclude=[\"tests\"]),\n package_data={'azext_loganalytics': ['azext_metadata.json']},\n install_requires=DEPENDENCIES\n)\n", "path": "src/log-analytics/setup.py"}, {"content": "# --------------------------------------------------------------------------------------------\n# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License. See License.txt in the project root for license information.\n# --------------------------------------------------------------------------------------------\n\nfrom knack.help_files import helps\n\n# pylint: disable=line-too-long\n\nhelps['monitor log-analytics'] = \"\"\"\n type: group\n short-summary: Commands for querying data in Log Analytics workspaces.\n\"\"\"\n\nhelps['monitor log-analytics query'] = \"\"\"\n type: command\n short-summary: Query a Log Analytics workspace.\n examples:\n - name: Execute a simple query over past 3.5 days.\n text: |\n az monitor log-analytics query -w b8317023-66e4-4edc-8a5b-7c002b22f92f --analytics-query \"AzureActivity | summarize count() by bin(timestamp, 1h)\" -t P3DT12H\n\"\"\"\n", "path": "src/log-analytics/azext_loganalytics/_help.py"}], "after_files": [{"content": "#!/usr/bin/env python\n\n# --------------------------------------------------------------------------------------------\n# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License. 
See License.txt in the project root for license information.\n# --------------------------------------------------------------------------------------------\n\nfrom codecs import open\nfrom setuptools import setup, find_packages\n\nVERSION = \"0.2.2\"\n\nCLASSIFIERS = [\n 'Development Status :: 4 - Beta',\n 'Intended Audience :: Developers',\n 'Intended Audience :: System Administrators',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'License :: OSI Approved :: MIT License',\n]\n\nDEPENDENCIES = []\n\nwith open('README.rst', 'r', encoding='utf-8') as f:\n README = f.read()\nwith open('HISTORY.rst', 'r', encoding='utf-8') as f:\n HISTORY = f.read()\n\nsetup(\n name='log-analytics',\n version=VERSION,\n description='Support for Azure Log Analytics query capabilities.',\n long_description=README + '\\n\\n' + HISTORY,\n license='MIT',\n author='Ace Eldeib',\n author_email='[email protected]',\n url='https://github.com/Azure/azure-cli-extensions/tree/master/src/log-analytics',\n classifiers=CLASSIFIERS,\n packages=find_packages(exclude=[\"tests\"]),\n package_data={'azext_loganalytics': ['azext_metadata.json']},\n install_requires=DEPENDENCIES\n)\n", "path": "src/log-analytics/setup.py"}, {"content": "# --------------------------------------------------------------------------------------------\n# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License. See License.txt in the project root for license information.\n# --------------------------------------------------------------------------------------------\n\nfrom knack.help_files import helps\n\n# pylint: disable=line-too-long\n\nhelps['monitor log-analytics'] = \"\"\"\n type: group\n short-summary: Commands for querying data in Log Analytics workspaces.\n\"\"\"\n\nhelps['monitor log-analytics query'] = \"\"\"\n type: command\n short-summary: Query a Log Analytics workspace.\n examples:\n - name: Execute a simple query over past 3.5 days.\n text: |\n az monitor log-analytics query -w workspace-customId --analytics-query \"AzureActivity | summarize count() by bin(timestamp, 1h)\" -t P3DT12H\n - name: Execute a saved query in workspace\n text: |\n QUERY=$(az monitor log-analytics workspace saved-search show -g resource-group --workspace-name workspace-name -n query-name --query query --output tsv)\n az monitor log-analytics query -w workspace-customId --analytics-query \"$QUERY\"\n\"\"\"\n", "path": "src/log-analytics/azext_loganalytics/_help.py"}]} | 1,284 | 361 |
gh_patches_debug_60894 | rasdani/github-patches | git_diff | tiangolo__fastapi-493 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
FastAPI exceptions module mistakenly references the 'requests' package
**Describe the bug**
Starting up a FastAPI 0.38.0 app displays the following error:
```python
from fastapi import FastAPI
File ".../lib/site-packages/fastapi/__init__.py", line 7, in <module>
from .applications import FastAPI
File ".../lib/site-packages/fastapi/applications.py", line 3, in <module>
from fastapi import routing
File ".../lib/site-packages/fastapi/routing.py", line 7, in <module>
from fastapi.dependencies.models import Dependant
File ".../lib/site-packages/fastapi/dependencies/models.py", line 3, in <module>
from fastapi.security.base import SecurityBase
File ".../lib/site-packages/fastapi/security/__init__.py", line 2, in <module>
from .http import (
File ".../lib/site-packages/fastapi/security/http.py", line 5, in <module>
from fastapi.exceptions import HTTPException
File ".../lib/site-packages/fastapi/exceptions.py", line 5, in <module>
from requests import Request
ModuleNotFoundError: No module named 'requests'
```
**Expected behavior**
The app should start without import errors.
**Environment:**
- OS: Linux, Windows, and macOS
- FastAPI Version: 0.38.0
**Additional context**
It's likely the `from requests import Request` should be replaced with `from starlette.requests import Request` in line 5 of `fastapi/exceptions.py`
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `fastapi/exceptions.py`
Content:
```
1 from typing import Any, Sequence
2
3 from pydantic import ValidationError
4 from pydantic.error_wrappers import ErrorList
5 from requests import Request
6 from starlette.exceptions import HTTPException as StarletteHTTPException
7 from starlette.websockets import WebSocket
8
9
10 class HTTPException(StarletteHTTPException):
11 def __init__(
12 self, status_code: int, detail: Any = None, headers: dict = None
13 ) -> None:
14 super().__init__(status_code=status_code, detail=detail)
15 self.headers = headers
16
17
18 class RequestValidationError(ValidationError):
19 def __init__(self, errors: Sequence[ErrorList]) -> None:
20 super().__init__(errors, Request)
21
22
23 class WebSocketRequestValidationError(ValidationError):
24 def __init__(self, errors: Sequence[ErrorList]) -> None:
25 super().__init__(errors, WebSocket)
26
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/fastapi/exceptions.py b/fastapi/exceptions.py
--- a/fastapi/exceptions.py
+++ b/fastapi/exceptions.py
@@ -2,8 +2,8 @@
from pydantic import ValidationError
from pydantic.error_wrappers import ErrorList
-from requests import Request
from starlette.exceptions import HTTPException as StarletteHTTPException
+from starlette.requests import Request
from starlette.websockets import WebSocket
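A quick sanity check of the patched module. This is a minimal sketch, assuming a FastAPI build that includes the fix (and therefore only `starlette`, not `requests`, on the import path):
```python
from fastapi.exceptions import HTTPException, RequestValidationError  # imports cleanly now

exc = HTTPException(status_code=404, detail="not found", headers={"X-Demo": "1"})
print(exc.status_code, exc.detail, exc.headers)  # 404 not found {'X-Demo': '1'}
```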
| {"golden_diff": "diff --git a/fastapi/exceptions.py b/fastapi/exceptions.py\n--- a/fastapi/exceptions.py\n+++ b/fastapi/exceptions.py\n@@ -2,8 +2,8 @@\n \n from pydantic import ValidationError\n from pydantic.error_wrappers import ErrorList\n-from requests import Request\n from starlette.exceptions import HTTPException as StarletteHTTPException\n+from starlette.requests import Request\n from starlette.websockets import WebSocket\n", "issue": "FastAPI exceptions module mistakenly references the 'requests' package\n**Describe the bug**\r\nStarting up a FastAPI 0.38.0 app displays the following error:\r\n\r\n```python\r\nfrom fastapi import FastAPI\r\n File \".../lib/site-packages/fastapi/__init__.py\", line 7, in <module>\r\n from .applications import FastAPI\r\n File \".../lib/site-packages/fastapi/applications.py\", line 3, in <module>\r\n from fastapi import routing\r\n File \".../lib/site-packages/fastapi/routing.py\", line 7, in <module>\r\n from fastapi.dependencies.models import Dependant\r\n File \".../lib/site-packages/fastapi/dependencies/models.py\", line 3, in <module>\r\n from fastapi.security.base import SecurityBase\r\n File \".../lib/site-packages/fastapi/security/__init__.py\", line 2, in <module>\r\n from .http import (\r\n File \".../lib/site-packages/fastapi/security/http.py\", line 5, in <module>\r\n from fastapi.exceptions import HTTPException\r\n File \".../lib/site-packages/fastapi/exceptions.py\", line 5, in <module>\r\n from requests import Request\r\nModuleNotFoundError: No module named 'requests'\r\n```\r\n\r\n**Expected behavior**\r\nThe app should start without import errors.\r\n\r\n**Environment:**\r\n - OS: Linux, Windows, and macOS\r\n - FastAPI Version: 0.38.0\r\n\r\n**Additional context**\r\nIt's likely the `from requests import Request` should be replaced with `from starlette.requests import Request` in line 5 of `fastapi/exceptions.py`\nFastAPI exceptions module mistakenly references the 'requests' package\n**Describe the bug**\r\nStarting up a FastAPI 0.38.0 app displays the following error:\r\n\r\n```python\r\nfrom fastapi import FastAPI\r\n File \".../lib/site-packages/fastapi/__init__.py\", line 7, in <module>\r\n from .applications import FastAPI\r\n File \".../lib/site-packages/fastapi/applications.py\", line 3, in <module>\r\n from fastapi import routing\r\n File \".../lib/site-packages/fastapi/routing.py\", line 7, in <module>\r\n from fastapi.dependencies.models import Dependant\r\n File \".../lib/site-packages/fastapi/dependencies/models.py\", line 3, in <module>\r\n from fastapi.security.base import SecurityBase\r\n File \".../lib/site-packages/fastapi/security/__init__.py\", line 2, in <module>\r\n from .http import (\r\n File \".../lib/site-packages/fastapi/security/http.py\", line 5, in <module>\r\n from fastapi.exceptions import HTTPException\r\n File \".../lib/site-packages/fastapi/exceptions.py\", line 5, in <module>\r\n from requests import Request\r\nModuleNotFoundError: No module named 'requests'\r\n```\r\n\r\n**Expected behavior**\r\nThe app should start without import errors.\r\n\r\n**Environment:**\r\n - OS: Linux, Windows, and macOS\r\n - FastAPI Version: 0.38.0\r\n\r\n**Additional context**\r\nIt's likely the `from requests import Request` should be replaced with `from starlette.requests import Request` in line 5 of `fastapi/exceptions.py`\n", "before_files": [{"content": "from typing import Any, Sequence\n\nfrom pydantic import ValidationError\nfrom pydantic.error_wrappers import ErrorList\nfrom requests import Request\nfrom 
starlette.exceptions import HTTPException as StarletteHTTPException\nfrom starlette.websockets import WebSocket\n\n\nclass HTTPException(StarletteHTTPException):\n def __init__(\n self, status_code: int, detail: Any = None, headers: dict = None\n ) -> None:\n super().__init__(status_code=status_code, detail=detail)\n self.headers = headers\n\n\nclass RequestValidationError(ValidationError):\n def __init__(self, errors: Sequence[ErrorList]) -> None:\n super().__init__(errors, Request)\n\n\nclass WebSocketRequestValidationError(ValidationError):\n def __init__(self, errors: Sequence[ErrorList]) -> None:\n super().__init__(errors, WebSocket)\n", "path": "fastapi/exceptions.py"}], "after_files": [{"content": "from typing import Any, Sequence\n\nfrom pydantic import ValidationError\nfrom pydantic.error_wrappers import ErrorList\nfrom starlette.exceptions import HTTPException as StarletteHTTPException\nfrom starlette.requests import Request\nfrom starlette.websockets import WebSocket\n\n\nclass HTTPException(StarletteHTTPException):\n def __init__(\n self, status_code: int, detail: Any = None, headers: dict = None\n ) -> None:\n super().__init__(status_code=status_code, detail=detail)\n self.headers = headers\n\n\nclass RequestValidationError(ValidationError):\n def __init__(self, errors: Sequence[ErrorList]) -> None:\n super().__init__(errors, Request)\n\n\nclass WebSocketRequestValidationError(ValidationError):\n def __init__(self, errors: Sequence[ErrorList]) -> None:\n super().__init__(errors, WebSocket)\n", "path": "fastapi/exceptions.py"}]} | 1,164 | 95 |
gh_patches_debug_9408 | rasdani/github-patches | git_diff | pytorch__pytorch-5108 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
torch.nn.DataParallel supporting unequal sizes
As documented [here](http://pytorch.org/docs/master/_modules/torch/nn/parallel/data_parallel.html):
```
The batch size should be larger than the number of GPUs used. It should
also be an integer multiple of the number of GPUs so that each chunk is the
same size (so that each GPU processes the same number of samples).
```
To use `torch.nn.DataParallel`, people should carefully set the batch size according to the number of GPUs they plan to use; otherwise errors will occur.
This issue becomes more subtle when using `torch.utils.data.DataLoader` with its default `drop_last=False`. Because the total number of training/validation samples varies with the dataset, the size of the last batch loaded by `torch.utils.data.DataLoader` can easily be indivisible by the number of GPUs (e.g., 2, 3, 4, 8, ...).
A feature request would be:
supporting `torch.nn.DataParallel` with batch size indivisible by the number of GPUs used.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `torch/nn/parallel/data_parallel.py`
Content:
```
1 import torch
2 from ..modules import Module
3 from .scatter_gather import scatter_kwargs, gather
4 from .replicate import replicate
5 from .parallel_apply import parallel_apply
6
7
8 class DataParallel(Module):
9 r"""Implements data parallelism at the module level.
10
11 This container parallelizes the application of the given module by
12 splitting the input across the specified devices by chunking in the batch
13 dimension. In the forward pass, the module is replicated on each device,
14 and each replica handles a portion of the input. During the backwards
15 pass, gradients from each replica are summed into the original module.
16
17 The batch size should be larger than the number of GPUs used. It should
18 also be an integer multiple of the number of GPUs so that each chunk is the
19 same size (so that each GPU processes the same number of samples).
20
21 See also: :ref:`cuda-nn-dataparallel-instead`
22
23 Arbitrary positional and keyword inputs are allowed to be passed into
24 DataParallel EXCEPT Tensors. All variables will be scattered on dim
25 specified (default 0). Primitive types will be broadcasted, but all
26 other types will be a shallow copy and can be corrupted if written to in
27 the model's forward pass.
28
29 .. warning::
30 Forward and backward hooks defined on :attr:`module` and its submodules
31 won't be invoked anymore, unless the hooks are initialized in the
32 :meth:`forward` method.
33
34 Args:
35 module: module to be parallelized
36 device_ids: CUDA devices (default: all devices)
37 output_device: device location of output (default: device_ids[0])
38
39 Example::
40
41 >>> net = torch.nn.DataParallel(model, device_ids=[0, 1, 2])
42 >>> output = net(input_var)
43 """
44
45 # TODO: update notes/cuda.rst when this class handles 8+ GPUs well
46
47 def __init__(self, module, device_ids=None, output_device=None, dim=0):
48 super(DataParallel, self).__init__()
49
50 if not torch.cuda.is_available():
51 self.module = module
52 self.device_ids = []
53 return
54
55 if device_ids is None:
56 device_ids = list(range(torch.cuda.device_count()))
57 if output_device is None:
58 output_device = device_ids[0]
59 self.dim = dim
60 self.module = module
61 self.device_ids = device_ids
62 self.output_device = output_device
63 if len(self.device_ids) == 1:
64 self.module.cuda(device_ids[0])
65
66 def forward(self, *inputs, **kwargs):
67 if not self.device_ids:
68 return self.module(*inputs, **kwargs)
69 inputs, kwargs = self.scatter(inputs, kwargs, self.device_ids)
70 if len(self.device_ids) == 1:
71 return self.module(*inputs[0], **kwargs[0])
72 replicas = self.replicate(self.module, self.device_ids[:len(inputs)])
73 outputs = self.parallel_apply(replicas, inputs, kwargs)
74 return self.gather(outputs, self.output_device)
75
76 def replicate(self, module, device_ids):
77 return replicate(module, device_ids)
78
79 def scatter(self, inputs, kwargs, device_ids):
80 return scatter_kwargs(inputs, kwargs, device_ids, dim=self.dim)
81
82 def parallel_apply(self, replicas, inputs, kwargs):
83 return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
84
85 def gather(self, outputs, output_device):
86 return gather(outputs, output_device, dim=self.dim)
87
88
89 def data_parallel(module, inputs, device_ids=None, output_device=None, dim=0, module_kwargs=None):
90 r"""Evaluates module(input) in parallel across the GPUs given in device_ids.
91
92 This is the functional version of the DataParallel module.
93
94 Args:
95 module: the module to evaluate in parallel
96 inputs: inputs to the module
97 device_ids: GPU ids on which to replicate module
98 output_device: GPU location of the output. Use -1 to indicate the CPU.
99 (default: device_ids[0])
100 Returns:
101 a Variable containing the result of module(input) located on
102 output_device
103 """
104 if not isinstance(inputs, tuple):
105 inputs = (inputs,)
106
107 if device_ids is None:
108 device_ids = list(range(torch.cuda.device_count()))
109
110 if output_device is None:
111 output_device = device_ids[0]
112
113 inputs, module_kwargs = scatter_kwargs(inputs, module_kwargs, device_ids, dim)
114 if len(device_ids) == 1:
115 return module(*inputs[0], **module_kwargs[0])
116 used_device_ids = device_ids[:len(inputs)]
117 replicas = replicate(module, used_device_ids)
118 outputs = parallel_apply(replicas, inputs, module_kwargs, used_device_ids)
119 return gather(outputs, output_device, dim)
120
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/torch/nn/parallel/data_parallel.py b/torch/nn/parallel/data_parallel.py
--- a/torch/nn/parallel/data_parallel.py
+++ b/torch/nn/parallel/data_parallel.py
@@ -14,9 +14,7 @@
and each replica handles a portion of the input. During the backwards
pass, gradients from each replica are summed into the original module.
- The batch size should be larger than the number of GPUs used. It should
- also be an integer multiple of the number of GPUs so that each chunk is the
- same size (so that each GPU processes the same number of samples).
+ The batch size should be larger than the number of GPUs used.
See also: :ref:`cuda-nn-dataparallel-instead`
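The relaxed wording matches what the scatter step already tolerates: `torch.chunk`, which `scatter_kwargs` ultimately relies on, simply produces a smaller trailing chunk when the batch size is not divisible by the device count. A CPU-only illustration of that splitting rule (illustrative only, not part of the patch):
```python
import torch

batch = torch.arange(10).reshape(10, 1)  # a batch of 10 samples
num_devices = 4

chunks = batch.chunk(num_devices, dim=0)  # the splitting rule scatter relies on
print([chunk.shape[0] for chunk in chunks])  # [3, 3, 3, 1]: the last chunk is smaller
```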
| {"golden_diff": "diff --git a/torch/nn/parallel/data_parallel.py b/torch/nn/parallel/data_parallel.py\n--- a/torch/nn/parallel/data_parallel.py\n+++ b/torch/nn/parallel/data_parallel.py\n@@ -14,9 +14,7 @@\n and each replica handles a portion of the input. During the backwards\n pass, gradients from each replica are summed into the original module.\n \n- The batch size should be larger than the number of GPUs used. It should\n- also be an integer multiple of the number of GPUs so that each chunk is the\n- same size (so that each GPU processes the same number of samples).\n+ The batch size should be larger than the number of GPUs used.\n \n See also: :ref:`cuda-nn-dataparallel-instead`\n", "issue": "torch.nn.DataParallel supporting unequal sizes\nAs documented [here](http://pytorch.org/docs/master/_modules/torch/nn/parallel/data_parallel.html):\r\n```\r\nThe batch size should be larger than the number of GPUs used. It should\r\n also be an integer multiple of the number of GPUs so that each chunk is the\r\n same size (so that each GPU processes the same number of samples).\r\n```\r\nTo use `torch.nn.DataParallel`, people should carefully set the batch size according to the number of gpus they plan to use, otherwise it will pop up errors. \r\n\r\nThis issue becomes more subtle when using `torch.utils.data.DataLoader` with `drop_last=False` by default. As the total number of training/validation samples varies with the dataset, the size of the last batch of data loaded by `torch.utils.data.DataLoader` is easy to become indivisible by the number of GPUs (e.g., 2,3,4,8,...).\r\n\r\nA feature request would be:\r\nsupporting `torch.nn.DataParallel` with batch size indivisible by the number of GPUs used.\n", "before_files": [{"content": "import torch\nfrom ..modules import Module\nfrom .scatter_gather import scatter_kwargs, gather\nfrom .replicate import replicate\nfrom .parallel_apply import parallel_apply\n\n\nclass DataParallel(Module):\n r\"\"\"Implements data parallelism at the module level.\n\n This container parallelizes the application of the given module by\n splitting the input across the specified devices by chunking in the batch\n dimension. In the forward pass, the module is replicated on each device,\n and each replica handles a portion of the input. During the backwards\n pass, gradients from each replica are summed into the original module.\n\n The batch size should be larger than the number of GPUs used. It should\n also be an integer multiple of the number of GPUs so that each chunk is the\n same size (so that each GPU processes the same number of samples).\n\n See also: :ref:`cuda-nn-dataparallel-instead`\n\n Arbitrary positional and keyword inputs are allowed to be passed into\n DataParallel EXCEPT Tensors. All variables will be scattered on dim\n specified (default 0). Primitive types will be broadcasted, but all\n other types will be a shallow copy and can be corrupted if written to in\n the model's forward pass.\n\n .. 
warning::\n Forward and backwrad hooks defined on :attr:`module` and its submodules\n won't be invoked anymore, unless the hooks are initialized in the\n :meth:`forward` method.\n\n Args:\n module: module to be parallelized\n device_ids: CUDA devices (default: all devices)\n output_device: device location of output (default: device_ids[0])\n\n Example::\n\n >>> net = torch.nn.DataParallel(model, device_ids=[0, 1, 2])\n >>> output = net(input_var)\n \"\"\"\n\n # TODO: update notes/cuda.rst when this class handles 8+ GPUs well\n\n def __init__(self, module, device_ids=None, output_device=None, dim=0):\n super(DataParallel, self).__init__()\n\n if not torch.cuda.is_available():\n self.module = module\n self.device_ids = []\n return\n\n if device_ids is None:\n device_ids = list(range(torch.cuda.device_count()))\n if output_device is None:\n output_device = device_ids[0]\n self.dim = dim\n self.module = module\n self.device_ids = device_ids\n self.output_device = output_device\n if len(self.device_ids) == 1:\n self.module.cuda(device_ids[0])\n\n def forward(self, *inputs, **kwargs):\n if not self.device_ids:\n return self.module(*inputs, **kwargs)\n inputs, kwargs = self.scatter(inputs, kwargs, self.device_ids)\n if len(self.device_ids) == 1:\n return self.module(*inputs[0], **kwargs[0])\n replicas = self.replicate(self.module, self.device_ids[:len(inputs)])\n outputs = self.parallel_apply(replicas, inputs, kwargs)\n return self.gather(outputs, self.output_device)\n\n def replicate(self, module, device_ids):\n return replicate(module, device_ids)\n\n def scatter(self, inputs, kwargs, device_ids):\n return scatter_kwargs(inputs, kwargs, device_ids, dim=self.dim)\n\n def parallel_apply(self, replicas, inputs, kwargs):\n return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])\n\n def gather(self, outputs, output_device):\n return gather(outputs, output_device, dim=self.dim)\n\n\ndef data_parallel(module, inputs, device_ids=None, output_device=None, dim=0, module_kwargs=None):\n r\"\"\"Evaluates module(input) in parallel across the GPUs given in device_ids.\n\n This is the functional version of the DataParallel module.\n\n Args:\n module: the module to evaluate in parallel\n inputs: inputs to the module\n device_ids: GPU ids on which to replicate module\n output_device: GPU location of the output Use -1 to indicate the CPU.\n (default: device_ids[0])\n Returns:\n a Variable containing the result of module(input) located on\n output_device\n \"\"\"\n if not isinstance(inputs, tuple):\n inputs = (inputs,)\n\n if device_ids is None:\n device_ids = list(range(torch.cuda.device_count()))\n\n if output_device is None:\n output_device = device_ids[0]\n\n inputs, module_kwargs = scatter_kwargs(inputs, module_kwargs, device_ids, dim)\n if len(device_ids) == 1:\n return module(*inputs[0], **module_kwargs[0])\n used_device_ids = device_ids[:len(inputs)]\n replicas = replicate(module, used_device_ids)\n outputs = parallel_apply(replicas, inputs, module_kwargs, used_device_ids)\n return gather(outputs, output_device, dim)\n", "path": "torch/nn/parallel/data_parallel.py"}], "after_files": [{"content": "import torch\nfrom ..modules import Module\nfrom .scatter_gather import scatter_kwargs, gather\nfrom .replicate import replicate\nfrom .parallel_apply import parallel_apply\n\n\nclass DataParallel(Module):\n r\"\"\"Implements data parallelism at the module level.\n\n This container parallelizes the application of the given module by\n splitting the input across the specified devices by 
chunking in the batch\n dimension. In the forward pass, the module is replicated on each device,\n and each replica handles a portion of the input. During the backwards\n pass, gradients from each replica are summed into the original module.\n\n The batch size should be larger than the number of GPUs used.\n\n See also: :ref:`cuda-nn-dataparallel-instead`\n\n Arbitrary positional and keyword inputs are allowed to be passed into\n DataParallel EXCEPT Tensors. All variables will be scattered on dim\n specified (default 0). Primitive types will be broadcasted, but all\n other types will be a shallow copy and can be corrupted if written to in\n the model's forward pass.\n\n .. warning::\n Forward and backwrad hooks defined on :attr:`module` and its submodules\n won't be invoked anymore, unless the hooks are initialized in the\n :meth:`forward` method.\n\n Args:\n module: module to be parallelized\n device_ids: CUDA devices (default: all devices)\n output_device: device location of output (default: device_ids[0])\n\n Example::\n\n >>> net = torch.nn.DataParallel(model, device_ids=[0, 1, 2])\n >>> output = net(input_var)\n \"\"\"\n\n # TODO: update notes/cuda.rst when this class handles 8+ GPUs well\n\n def __init__(self, module, device_ids=None, output_device=None, dim=0):\n super(DataParallel, self).__init__()\n\n if not torch.cuda.is_available():\n self.module = module\n self.device_ids = []\n return\n\n if device_ids is None:\n device_ids = list(range(torch.cuda.device_count()))\n if output_device is None:\n output_device = device_ids[0]\n self.dim = dim\n self.module = module\n self.device_ids = device_ids\n self.output_device = output_device\n if len(self.device_ids) == 1:\n self.module.cuda(device_ids[0])\n\n def forward(self, *inputs, **kwargs):\n if not self.device_ids:\n return self.module(*inputs, **kwargs)\n inputs, kwargs = self.scatter(inputs, kwargs, self.device_ids)\n if len(self.device_ids) == 1:\n return self.module(*inputs[0], **kwargs[0])\n replicas = self.replicate(self.module, self.device_ids[:len(inputs)])\n outputs = self.parallel_apply(replicas, inputs, kwargs)\n return self.gather(outputs, self.output_device)\n\n def replicate(self, module, device_ids):\n return replicate(module, device_ids)\n\n def scatter(self, inputs, kwargs, device_ids):\n return scatter_kwargs(inputs, kwargs, device_ids, dim=self.dim)\n\n def parallel_apply(self, replicas, inputs, kwargs):\n return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])\n\n def gather(self, outputs, output_device):\n return gather(outputs, output_device, dim=self.dim)\n\n\ndef data_parallel(module, inputs, device_ids=None, output_device=None, dim=0, module_kwargs=None):\n r\"\"\"Evaluates module(input) in parallel across the GPUs given in device_ids.\n\n This is the functional version of the DataParallel module.\n\n Args:\n module: the module to evaluate in parallel\n inputs: inputs to the module\n device_ids: GPU ids on which to replicate module\n output_device: GPU location of the output Use -1 to indicate the CPU.\n (default: device_ids[0])\n Returns:\n a Variable containing the result of module(input) located on\n output_device\n \"\"\"\n if not isinstance(inputs, tuple):\n inputs = (inputs,)\n\n if device_ids is None:\n device_ids = list(range(torch.cuda.device_count()))\n\n if output_device is None:\n output_device = device_ids[0]\n\n inputs, module_kwargs = scatter_kwargs(inputs, module_kwargs, device_ids, dim)\n if len(device_ids) == 1:\n return module(*inputs[0], **module_kwargs[0])\n 
used_device_ids = device_ids[:len(inputs)]\n replicas = replicate(module, used_device_ids)\n outputs = parallel_apply(replicas, inputs, module_kwargs, used_device_ids)\n return gather(outputs, output_device, dim)\n", "path": "torch/nn/parallel/data_parallel.py"}]} | 1,783 | 176 |
gh_patches_debug_965 | rasdani/github-patches | git_diff | tiangolo__fastapi-9468 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
FastAPI tests in pydantic failing due to flask deprecation
### Privileged issue
- [X] I'm @tiangolo or he asked me directly to create an issue here.
### Issue Content
Hope you don't mind me creating an issue; pydantic's 1.10.X tests are failing due to a new issue when running our FastAPI tests, see
https://github.com/pydantic/pydantic/actions/runs/4832692304/jobs/8611783607?pr=5628
output from pydantic's tests:
```
==================================== ERRORS ====================================
______ ERROR collecting tests/test_tutorial/test_wsgi/test_tutorial001.py ______
tests/test_tutorial/test_wsgi/test_tutorial001.py:3: in <module>
from docs_src.wsgi.tutorial001 import app
docs_src/wsgi/tutorial001.py:3: in <module>
from flask import Flask, escape, request
<frozen importlib._bootstrap>:1075: in _handle_fromlist
???
/opt/hostedtoolcache/Python/3.10.11/x64/lib/python3.10/site-packages/flask/__init__.py:71: in __getattr__
warnings.warn(
E DeprecationWarning: 'flask.escape' is deprecated and will be removed in Flask 2.4. Import 'markupsafe.escape' instead.
=========================== short test summary info ============================
ERROR tests/test_tutorial/test_wsgi/test_tutorial001.py - DeprecationWarning: 'flask.escape' is deprecated and will be removed in Flask 2.4. Import 'markupsafe.escape'
```
related to https://github.com/pydantic/pydantic/pull/5628
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docs_src/wsgi/tutorial001.py`
Content:
```
1 from fastapi import FastAPI
2 from fastapi.middleware.wsgi import WSGIMiddleware
3 from flask import Flask, escape, request
4
5 flask_app = Flask(__name__)
6
7
8 @flask_app.route("/")
9 def flask_main():
10 name = request.args.get("name", "World")
11 return f"Hello, {escape(name)} from Flask!"
12
13
14 app = FastAPI()
15
16
17 @app.get("/v2")
18 def read_main():
19 return {"message": "Hello World"}
20
21
22 app.mount("/v1", WSGIMiddleware(flask_app))
23
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/docs_src/wsgi/tutorial001.py b/docs_src/wsgi/tutorial001.py
--- a/docs_src/wsgi/tutorial001.py
+++ b/docs_src/wsgi/tutorial001.py
@@ -1,6 +1,7 @@
from fastapi import FastAPI
from fastapi.middleware.wsgi import WSGIMiddleware
-from flask import Flask, escape, request
+from flask import Flask, request
+from markupsafe import escape
flask_app = Flask(__name__)
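The swap is behavior-preserving: Flask's `escape` has always been a re-export of MarkupSafe's, and `markupsafe` is a hard dependency of Flask, so no new requirement is introduced. A minimal check of the replacement import:
```python
from markupsafe import escape  # the import the patch switches to

print(escape("World & <b>"))  # World &amp; &lt;b&gt;
```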
| {"golden_diff": "diff --git a/docs_src/wsgi/tutorial001.py b/docs_src/wsgi/tutorial001.py\n--- a/docs_src/wsgi/tutorial001.py\n+++ b/docs_src/wsgi/tutorial001.py\n@@ -1,6 +1,7 @@\n from fastapi import FastAPI\n from fastapi.middleware.wsgi import WSGIMiddleware\n-from flask import Flask, escape, request\n+from flask import Flask, request\n+from markupsafe import escape\n \n flask_app = Flask(__name__)\n", "issue": "FastAPI tests in pydantic failing due to flask deprecation\n### Privileged issue\n\n- [X] I'm @tiangolo or he asked me directly to create an issue here.\n\n### Issue Content\n\nhope you don't mind me creating an issue, pydantic's 1.10.X tests are failing due to a new issue with running our fastapi tests, see\r\n\r\nhttps://github.com/pydantic/pydantic/actions/runs/4832692304/jobs/8611783607?pr=5628\r\n\r\noutput from pydantic's tests:\r\n\r\n```\r\n==================================== ERRORS ====================================\r\n______ ERROR collecting tests/test_tutorial/test_wsgi/test_tutorial001.py ______\r\ntests/test_tutorial/test_wsgi/test_tutorial001.py:3: in <module>\r\n from docs_src.wsgi.tutorial001 import app\r\ndocs_src/wsgi/tutorial001.py:3: in <module>\r\n from flask import Flask, escape, request\r\n<frozen importlib._bootstrap>:1075: in _handle_fromlist\r\n ???\r\n/opt/hostedtoolcache/Python/3.10.11/x64/lib/python3.10/site-packages/flask/__init__.py:71: in __getattr__\r\n warnings.warn(\r\nE DeprecationWarning: 'flask.escape' is deprecated and will be removed in Flask 2.4. Import 'markupsafe.escape' instead.\r\n=========================== short test summary info ============================\r\nERROR tests/test_tutorial/test_wsgi/test_tutorial001.py - DeprecationWarning: 'flask.escape' is deprecated and will be removed in Flask 2.4. Import 'markupsafe.escape' \r\n```\r\n\r\nrelated to https://github.com/pydantic/pydantic/pull/5628\n", "before_files": [{"content": "from fastapi import FastAPI\nfrom fastapi.middleware.wsgi import WSGIMiddleware\nfrom flask import Flask, escape, request\n\nflask_app = Flask(__name__)\n\n\n@flask_app.route(\"/\")\ndef flask_main():\n name = request.args.get(\"name\", \"World\")\n return f\"Hello, {escape(name)} from Flask!\"\n\n\napp = FastAPI()\n\n\[email protected](\"/v2\")\ndef read_main():\n return {\"message\": \"Hello World\"}\n\n\napp.mount(\"/v1\", WSGIMiddleware(flask_app))\n", "path": "docs_src/wsgi/tutorial001.py"}], "after_files": [{"content": "from fastapi import FastAPI\nfrom fastapi.middleware.wsgi import WSGIMiddleware\nfrom flask import Flask, request\nfrom markupsafe import escape\n\nflask_app = Flask(__name__)\n\n\n@flask_app.route(\"/\")\ndef flask_main():\n name = request.args.get(\"name\", \"World\")\n return f\"Hello, {escape(name)} from Flask!\"\n\n\napp = FastAPI()\n\n\[email protected](\"/v2\")\ndef read_main():\n return {\"message\": \"Hello World\"}\n\n\napp.mount(\"/v1\", WSGIMiddleware(flask_app))\n", "path": "docs_src/wsgi/tutorial001.py"}]} | 819 | 109 |
gh_patches_debug_4721 | rasdani/github-patches | git_diff | opensearch-project__opensearch-build-3240 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Bug]: src/system/os.py does not correctly return architecture for bsd platform
### Describe the bug
Running `uname -m` returns the following on FreeBSD:
```
amd64
```
The code here does not support `amd64` as input:
https://github.com/opensearch-project/opensearch-build/blob/main/src/system/os.py#L12-L19
```
def current_architecture() -> str:
architecture = subprocess.check_output(["uname", "-m"]).decode().strip()
if architecture == "x86_64":
return "x64"
elif architecture == "aarch64" or architecture == "arm64":
return "arm64"
else:
raise ValueError(f"Unsupported architecture: {architecture}")
```
Thanks.
### To reproduce
Run the build process on a freebsd server and see output:
```
$ ./build.sh manifests/2.4.0/opensearch-2.4.0.yml --component OpenSearch
Installing dependencies in . ...
Installing dependencies from Pipfile.lock (b36c9c)...
To activate this project's virtualenv, run pipenv shell.
Alternatively, run a command inside the virtualenv with pipenv run.
Running ./src/run_build.py manifests/2.4.0/opensearch-2.4.0.yml --component OpenSearch ...
2023-02-23 23:15:47 INFO Building in /tmp/tmpllimwxjs
2023-02-23 23:15:47 INFO Removing /tmp/tmpllimwxjs
Traceback (most recent call last):
File "./src/run_build.py", line 81, in <module>
sys.exit(main())
File "./src/run_build.py", line 55, in main
architecture=args.architecture or manifest.build.architecture,
File "/usr/share/opensearch/opensearch-build/src/build_workflow/build_target.py", line 45, in __init__
self.architecture = architecture or current_architecture()
File "/usr/share/opensearch/opensearch-build/src/system/os.py", line 20, in current_architecture
raise ValueError(f"Unsupported architecture: {architecture}")
ValueError: Unsupported architecture: amd64
```
### Expected behavior
BSD x64 hosts should be able to run the code without specifying `--architecture x64`.
### Screenshots
If applicable, add screenshots to help explain your problem.
### Host / Environment
_No response_
### Additional context
_No response_
### Relevant log output
_No response_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/system/os.py`
Content:
```
1 # Copyright OpenSearch Contributors
2 # SPDX-License-Identifier: Apache-2.0
3 #
4 # The OpenSearch Contributors require contributions made to
5 # this file be licensed under the Apache-2.0 license or a
6 # compatible open source license.
7
8 import os
9 import subprocess
10
11
12 def current_architecture() -> str:
13 architecture = subprocess.check_output(["uname", "-m"]).decode().strip()
14 if architecture == "x86_64":
15 return "x64"
16 elif architecture == "aarch64" or architecture == "arm64":
17 return "arm64"
18 else:
19 raise ValueError(f"Unsupported architecture: {architecture}")
20
21
22 def current_platform() -> str:
23 if os.name == "nt":
24 return "windows"
25 else:
26 return subprocess.check_output(["uname", "-s"]).decode().strip().lower()
27
28
29 def deb_architecture(architecture: str) -> str:
30 # This would convert arch from "current_architecture" to deb specific architecture alternatives
31
32 deb_architecture_map = {
33 "x64": "amd64",
34 "arm64": "arm64",
35 }
36
37 return deb_architecture_map[architecture]
38
39
40 def rpm_architecture(architecture: str) -> str:
41 # This would convert arch from "current_architecture" to rpm specific architecture alternatives
42
43 rpm_architecture_map = {
44 "x64": "x86_64",
45 "arm64": "aarch64",
46 }
47
48 return rpm_architecture_map[architecture]
49
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/system/os.py b/src/system/os.py
--- a/src/system/os.py
+++ b/src/system/os.py
@@ -11,7 +11,7 @@
def current_architecture() -> str:
architecture = subprocess.check_output(["uname", "-m"]).decode().strip()
- if architecture == "x86_64":
+ if architecture == "x86_64" or architecture == "amd64":
return "x64"
elif architecture == "aarch64" or architecture == "arm64":
return "arm64"
| {"golden_diff": "diff --git a/src/system/os.py b/src/system/os.py\n--- a/src/system/os.py\n+++ b/src/system/os.py\n@@ -11,7 +11,7 @@\n \n def current_architecture() -> str:\n architecture = subprocess.check_output([\"uname\", \"-m\"]).decode().strip()\n- if architecture == \"x86_64\":\n+ if architecture == \"x86_64\" or architecture == \"amd64\":\n return \"x64\"\n elif architecture == \"aarch64\" or architecture == \"arm64\":\n return \"arm64\"\n", "issue": "[Bug]: src/system/os.py does not correctly return architecture for bsd platform\n### Describe the bug\r\n\r\n\r\nRun `uname -m` will return follow in the freebsd:\r\n```\r\namd64\r\n```\r\n\r\nThe code here does not support `amd64` as input:\r\nhttps://github.com/opensearch-project/opensearch-build/blob/main/src/system/os.py#L12-L19\r\n```\r\ndef current_architecture() -> str:\r\n architecture = subprocess.check_output([\"uname\", \"-m\"]).decode().strip()\r\n if architecture == \"x86_64\":\r\n return \"x64\"\r\n elif architecture == \"aarch64\" or architecture == \"arm64\":\r\n return \"arm64\"\r\n else:\r\n raise ValueError(f\"Unsupported architecture: {architecture}\")\r\n```\r\n\r\n\r\n\r\nThanks.\r\n\r\n\r\n### To reproduce\r\n\r\nRun the build process on a freebsd server and see output:\r\n```\r\n$ ./build.sh manifests/2.4.0/opensearch-2.4.0.yml --component OpenSearch\r\nInstalling dependencies in . ...\r\nInstalling dependencies from Pipfile.lock (b36c9c)...\r\nTo activate this project's virtualenv, run pipenv shell.\r\nAlternatively, run a command inside the virtualenv with pipenv run.\r\nRunning ./src/run_build.py manifests/2.4.0/opensearch-2.4.0.yml --component OpenSearch ...\r\n2023-02-23 23:15:47 INFO Building in /tmp/tmpllimwxjs\r\n2023-02-23 23:15:47 INFO Removing /tmp/tmpllimwxjs\r\nTraceback (most recent call last):\r\n File \"./src/run_build.py\", line 81, in <module>\r\n sys.exit(main())\r\n File \"./src/run_build.py\", line 55, in main\r\n architecture=args.architecture or manifest.build.architecture,\r\n File \"/usr/share/opensearch/opensearch-build/src/build_workflow/build_target.py\", line 45, in __init__\r\n self.architecture = architecture or current_architecture()\r\n File \"/usr/share/opensearch/opensearch-build/src/system/os.py\", line 20, in current_architecture\r\n raise ValueError(f\"Unsupported architecture: {architecture}\")\r\nValueError: Unsupported architecture: amd64\r\n\r\n```\r\n\r\n### Expected behavior\r\n\r\nThe bsd x64 hosts can run the code without specifying --architecture x64.\r\n\r\n### Screenshots\r\n\r\nIf applicable, add screenshots to help explain your problem.\r\n\r\n### Host / Environment\r\n\r\n_No response_\r\n\r\n### Additional context\r\n\r\n_No response_\r\n\r\n### Relevant log output\r\n\r\n_No response_\n", "before_files": [{"content": "# Copyright OpenSearch Contributors\n# SPDX-License-Identifier: Apache-2.0\n#\n# The OpenSearch Contributors require contributions made to\n# this file be licensed under the Apache-2.0 license or a\n# compatible open source license.\n\nimport os\nimport subprocess\n\n\ndef current_architecture() -> str:\n architecture = subprocess.check_output([\"uname\", \"-m\"]).decode().strip()\n if architecture == \"x86_64\":\n return \"x64\"\n elif architecture == \"aarch64\" or architecture == \"arm64\":\n return \"arm64\"\n else:\n raise ValueError(f\"Unsupported architecture: {architecture}\")\n\n\ndef current_platform() -> str:\n if os.name == \"nt\":\n return \"windows\"\n else:\n return subprocess.check_output([\"uname\", 
\"-s\"]).decode().strip().lower()\n\n\ndef deb_architecture(architecture: str) -> str:\n # This would convert arch from \"current_architecture\" to deb specific architecture alternatives\n\n deb_architecture_map = {\n \"x64\": \"amd64\",\n \"arm64\": \"arm64\",\n }\n\n return deb_architecture_map[architecture]\n\n\ndef rpm_architecture(architecture: str) -> str:\n # This would convert arch from \"current_architecture\" to rpm specific architecture alternatives\n\n rpm_architecture_map = {\n \"x64\": \"x86_64\",\n \"arm64\": \"aarch64\",\n }\n\n return rpm_architecture_map[architecture]\n", "path": "src/system/os.py"}], "after_files": [{"content": "# Copyright OpenSearch Contributors\n# SPDX-License-Identifier: Apache-2.0\n#\n# The OpenSearch Contributors require contributions made to\n# this file be licensed under the Apache-2.0 license or a\n# compatible open source license.\n\nimport os\nimport subprocess\n\n\ndef current_architecture() -> str:\n architecture = subprocess.check_output([\"uname\", \"-m\"]).decode().strip()\n if architecture == \"x86_64\" or architecture == \"amd64\":\n return \"x64\"\n elif architecture == \"aarch64\" or architecture == \"arm64\":\n return \"arm64\"\n else:\n raise ValueError(f\"Unsupported architecture: {architecture}\")\n\n\ndef current_platform() -> str:\n if os.name == \"nt\":\n return \"windows\"\n else:\n return subprocess.check_output([\"uname\", \"-s\"]).decode().strip().lower()\n\n\ndef deb_architecture(architecture: str) -> str:\n # This would convert arch from \"current_architecture\" to deb specific architecture alternatives\n\n deb_architecture_map = {\n \"x64\": \"amd64\",\n \"arm64\": \"arm64\",\n }\n\n return deb_architecture_map[architecture]\n\n\ndef rpm_architecture(architecture: str) -> str:\n # This would convert arch from \"current_architecture\" to rpm specific architecture alternatives\n\n rpm_architecture_map = {\n \"x64\": \"x86_64\",\n \"arm64\": \"aarch64\",\n }\n\n return rpm_architecture_map[architecture]\n", "path": "src/system/os.py"}]} | 1,243 | 131 |
gh_patches_debug_558 | rasdani/github-patches | git_diff | pex-tool__pex-691 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Release 1.6.4
On the docket:
+ [x] Restore pex.pex_bootstrapper.is_compressed API #684
+ [ ] Release more flexible pex binaries. #654
+ [x] If an `--interpreter-constraint` is set, it should always be honored. #656
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pex/version.py`
Content:
```
1 # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 __version__ = '1.6.3'
5
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pex/version.py b/pex/version.py
--- a/pex/version.py
+++ b/pex/version.py
@@ -1,4 +1,4 @@
# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
# Licensed under the Apache License, Version 2.0 (see LICENSE).
-__version__ = '1.6.3'
+__version__ = '1.6.4'
| {"golden_diff": "diff --git a/pex/version.py b/pex/version.py\n--- a/pex/version.py\n+++ b/pex/version.py\n@@ -1,4 +1,4 @@\n # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n # Licensed under the Apache License, Version 2.0 (see LICENSE).\n \n-__version__ = '1.6.3'\n+__version__ = '1.6.4'\n", "issue": "Release 1.6.4\nOn the docket:\r\n+ [x] Restore pex.pex_bootstrapper.is_compressed API #684\r\n+ [ ] Release more flexible pex binaries. #654\r\n + [x] If an `--interpreter-constraint` is set, it should always be honored. #656\n", "before_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = '1.6.3'\n", "path": "pex/version.py"}], "after_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = '1.6.4'\n", "path": "pex/version.py"}]} | 382 | 94 |
gh_patches_debug_25746 | rasdani/github-patches | git_diff | mito-ds__mito-359 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
spelling mistake during mitoinstaller install
**Describe the bug**
Small issue, very minor: found a spelling mistake when running mitoinstaller install:
Starting install...
Create mito user
Upgrade mitoinstaller
Setting up **enviornment** <--- **environment**
Check dependencies
Remove mitosheet3 if present
Install mitosheet
This might take a few moments...
**To Reproduce**
Steps to reproduce the behavior:
1. run python -m mitoinstaller install
Please include the relevant dataset if the bug you encountered is dataset specific. Make sure to anonymize the data properly.
**Expected behavior**
should be corrected to "environment"
**Screenshots**

**Desktop (please complete the following information):**
N/A
**Additional context**
N/A
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mitoinstaller/mitoinstaller/__main__.py`
Content:
```
1 """
2 The Mito Installer package contains utils for installing
3 Mito within your Python enviornment.
4
5 Long term, we aim to meet:
6 1. This package has minimal dependencies, both for speed of download and the ultimate portability.
7 2. The installation attempts to fail as early as possible, and to give the user as much
8 help as possible while doing so.
9 """
10 from colorama import init
11 from termcolor import colored # type: ignore
12
13 from mitoinstaller.install import do_install
14
15
16 def main() -> None:
17 """
18 The main function of the Mito installer, this function is responsible
19 for installing and upgrading the `mitosheet` package.
20
21 To install Mito:
22 python -m mitoinstaller install
23
24 To upgrade Mito:
25 python -m mitoinstaller upgrade
26
27 To install Mito from TestPyPi
28 python -m mitoinstaller install --test-pypi
29 """
30 import sys
31 init()
32
33 if len(sys.argv) > 1:
34 command = sys.argv[1]
35 else:
36 command = ''
37
38 if command == 'install' or command == 'upgrade':
39 do_install()
40 elif command == 'uninstall':
41 print('To uninstall, run,', colored('`pip uninstall mitosheet`', 'green'))
42 else:
43 # NOTE: we don't add upgrade_to_jupyterlab_3 to the help.
44 # We only send this command to the users who need to know this (namely, those that need to upgrade)
45 print('\nProper usage is', colored('`python -m mitoinstaller install`', 'green'), 'or', colored('`python -m mitoinstaller upgrade`', 'green'), '\n\nTry running the command ', colored('`python -m mitoinstaller install`', 'green'), '\n')
46
47
48 if __name__ == '__main__':
49 main()
50
```
Path: `mitoinstaller/mitoinstaller/installer_steps/initial_installer_steps.py`
Content:
```
1 import importlib
2 import os
3 import sys
4
5 from mitoinstaller import __version__
6 from mitoinstaller.commands import upgrade_mito_installer
7 from mitoinstaller.installer_steps.installer_step import InstallerStep
8 from mitoinstaller.jupyter_utils import set_prefered_jupyter_env_variable
9 from mitoinstaller.log_utils import identify, log
10 from mitoinstaller.user_install import (USER_JSON_PATH, go_pro,
11 try_create_user_json_file)
12
13
14 def initial_install_step_create_user():
15
16 if not os.path.exists(USER_JSON_PATH):
17 try_create_user_json_file(is_pro=('--pro' in sys.argv))
18
19 if not ('--pro' in sys.argv):
20 # Only try and log if we're not pro
21 identify()
22 log('install_started', {
23 'mitoinstaller_version': __version__
24 })
25 else:
26 # If the user is going pro, make sure they are set to pro
27 go_pro()
28
29 def initial_install_step_add_env_for_which_jupyter():
30 """
31 This install steps checks, up front, which very of jupyter we should
32 launch: lab or notebook. It then stores this as an enviornment variable
33 so that the final installer steps can launch it.
34
35 We do this up front, so that we can see which packages that user has
36 installed before installing Mito.
37 """
38 set_prefered_jupyter_env_variable()
39
40
41 INITIAL_INSTALLER_STEPS = [
42 InstallerStep(
43 'Create mito user',
44 initial_install_step_create_user
45 ),
46 InstallerStep(
47 'Upgrade mitoinstaller',
48 upgrade_mito_installer,
49 optional=True
50 ),
51 InstallerStep(
52 'Setting up enviornment',
53 initial_install_step_add_env_for_which_jupyter,
54 ),
55 ]
56
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mitoinstaller/mitoinstaller/__main__.py b/mitoinstaller/mitoinstaller/__main__.py
--- a/mitoinstaller/mitoinstaller/__main__.py
+++ b/mitoinstaller/mitoinstaller/__main__.py
@@ -1,6 +1,6 @@
"""
The Mito Installer package contains utils for installing
-Mito within your Python enviornment.
+Mito within your Python environment.
Long term, we aim to meet:
1. This package has minimal dependencies, both for speed of download and the ultimate portability.
diff --git a/mitoinstaller/mitoinstaller/installer_steps/initial_installer_steps.py b/mitoinstaller/mitoinstaller/installer_steps/initial_installer_steps.py
--- a/mitoinstaller/mitoinstaller/installer_steps/initial_installer_steps.py
+++ b/mitoinstaller/mitoinstaller/installer_steps/initial_installer_steps.py
@@ -29,7 +29,7 @@
def initial_install_step_add_env_for_which_jupyter():
"""
This install steps checks, up front, which very of jupyter we should
- launch: lab or notebook. It then stores this as an enviornment variable
+ launch: lab or notebook. It then stores this as an environment variable
so that the final installer steps can launch it.
We do this up front, so that we can see which packages that user has
@@ -49,7 +49,7 @@
optional=True
),
InstallerStep(
- 'Setting up enviornment',
+ 'Setting up environment',
initial_install_step_add_env_for_which_jupyter,
),
]
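As an aside on prevention: a fix like this invites a regression guard so the misspelling cannot creep back in. A minimal sketch, purely illustrative and not part of this record (the `mitoinstaller/` root and the single-word check are assumptions):

```python
# Hypothetical CI guard: fail fast if the known misspelling reappears.
import pathlib

MISSPELLING = "enviornment"

hits = [
    str(path)
    for path in pathlib.Path("mitoinstaller").rglob("*.py")
    if MISSPELLING in path.read_text(encoding="utf-8")
]
assert not hits, f"'{MISSPELLING}' found in: {hits}"
```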
| {"golden_diff": "diff --git a/mitoinstaller/mitoinstaller/__main__.py b/mitoinstaller/mitoinstaller/__main__.py\n--- a/mitoinstaller/mitoinstaller/__main__.py\n+++ b/mitoinstaller/mitoinstaller/__main__.py\n@@ -1,6 +1,6 @@\n \"\"\"\n The Mito Installer package contains utils for installing\n-Mito within your Python enviornment.\n+Mito within your Python environment.\n \n Long term, we aim to meet:\n 1. This package has minimal dependencies, both for speed of download and the ultimate portability.\ndiff --git a/mitoinstaller/mitoinstaller/installer_steps/initial_installer_steps.py b/mitoinstaller/mitoinstaller/installer_steps/initial_installer_steps.py\n--- a/mitoinstaller/mitoinstaller/installer_steps/initial_installer_steps.py\n+++ b/mitoinstaller/mitoinstaller/installer_steps/initial_installer_steps.py\n@@ -29,7 +29,7 @@\n def initial_install_step_add_env_for_which_jupyter():\n \"\"\"\n This install steps checks, up front, which very of jupyter we should\n- launch: lab or notebook. It then stores this as an enviornment variable\n+ launch: lab or notebook. It then stores this as an environment variable\n so that the final installer steps can launch it. \n \n We do this up front, so that we can see which packages that user has \n@@ -49,7 +49,7 @@\n optional=True\n ),\n InstallerStep(\n- 'Setting up enviornment',\n+ 'Setting up environment',\n initial_install_step_add_env_for_which_jupyter,\n ),\n ]\n", "issue": "spelling mistake during mitoinstaller install\n**Describe the bug**\r\nSmall issue, very minor, found a spelling mistake when running mitoinstaller install, \r\n\r\nStarting install...\r\nCreate mito user\r\nUpgrade mitoinstaller\r\nSetting up **enviornment** <--- **environment**\r\nCheck dependencies\r\nRemove mitosheet3 if present\r\nInstall mitosheet\r\nThis might take a few moments...\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. run python -m mitoinstaller install\r\n\r\nPlease include the relevant dataset if the bug you encountered is dataset specific. Make sure to anonymize the data properly.\r\n\r\n**Expected behavior**\r\nshould be corrected to \"environment\"\r\n\r\n**Screenshots**\r\n\r\n\r\n**Desktop (please complete the following information):**\r\nN/A\r\n\r\n**Additional context**\r\nN/A\r\n\n", "before_files": [{"content": "\"\"\"\nThe Mito Installer package contains utils for installing\nMito within your Python enviornment.\n\nLong term, we aim to meet:\n1. This package has minimal dependencies, both for speed of download and the ultimate portability.\n2. 
The installation attempts to fail as early as possible, and to give the user as much help\n help as possible while doing so.\n\"\"\"\nfrom colorama import init\nfrom termcolor import colored # type: ignore\n\nfrom mitoinstaller.install import do_install\n\n\ndef main() -> None:\n \"\"\"\n The main function of the Mito installer, this function is responsible\n for installing and upgrading the `mitosheet` package.\n\n To install Mito:\n python -m mitoinstaller install\n\n To upgrade Mito:\n python -m mitoinstaller upgrade\n\n To install Mito from TestPyPi\n python -m mitoinstaller install --test-pypi\n \"\"\"\n import sys\n init()\n\n if len(sys.argv) > 1:\n command = sys.argv[1]\n else:\n command = ''\n\n if command == 'install' or command == 'upgrade':\n do_install()\n elif command == 'uninstall':\n print('To uninstall, run,', colored('`pip uninstall mitosheet`', 'green'))\n else:\n # NOTE: we don't add upgrade_to_jupyterlab_3 to the help.\n # We only send this command to the users who need to know this (namely, those that need to upgrade)\n print('\\nProper usage is', colored('`python -m mitoinstaller install`', 'green'), 'or', colored('`python -m mitoinstaller upgrade`', 'green'), '\\n\\nTry running the command ', colored('`python -m mitoinstaller install`', 'green'), '\\n')\n \n\nif __name__ == '__main__':\n main()\n", "path": "mitoinstaller/mitoinstaller/__main__.py"}, {"content": "import importlib\nimport os\nimport sys\n\nfrom mitoinstaller import __version__\nfrom mitoinstaller.commands import upgrade_mito_installer\nfrom mitoinstaller.installer_steps.installer_step import InstallerStep\nfrom mitoinstaller.jupyter_utils import set_prefered_jupyter_env_variable\nfrom mitoinstaller.log_utils import identify, log\nfrom mitoinstaller.user_install import (USER_JSON_PATH, go_pro,\n try_create_user_json_file)\n\n\ndef initial_install_step_create_user():\n\n if not os.path.exists(USER_JSON_PATH):\n try_create_user_json_file(is_pro=('--pro' in sys.argv))\n\n if not ('--pro' in sys.argv):\n # Only try and log if we're not pro\n identify()\n log('install_started', {\n 'mitoinstaller_version': __version__\n })\n else:\n # If the user is going pro, make sure they are set to pro\n go_pro()\n\ndef initial_install_step_add_env_for_which_jupyter():\n \"\"\"\n This install steps checks, up front, which very of jupyter we should\n launch: lab or notebook. It then stores this as an enviornment variable\n so that the final installer steps can launch it. \n\n We do this up front, so that we can see which packages that user has \n installed before installing Mito.\n \"\"\"\n set_prefered_jupyter_env_variable()\n\n\nINITIAL_INSTALLER_STEPS = [\n InstallerStep(\n 'Create mito user',\n initial_install_step_create_user\n ),\n InstallerStep(\n 'Upgrade mitoinstaller',\n upgrade_mito_installer,\n optional=True\n ),\n InstallerStep(\n 'Setting up enviornment',\n initial_install_step_add_env_for_which_jupyter,\n ),\n]\n", "path": "mitoinstaller/mitoinstaller/installer_steps/initial_installer_steps.py"}], "after_files": [{"content": "\"\"\"\nThe Mito Installer package contains utils for installing\nMito within your Python environment.\n\nLong term, we aim to meet:\n1. This package has minimal dependencies, both for speed of download and the ultimate portability.\n2. 
The installation attempts to fail as early as possible, and to give the user as much help\n help as possible while doing so.\n\"\"\"\nfrom colorama import init\nfrom termcolor import colored # type: ignore\n\nfrom mitoinstaller.install import do_install\n\n\ndef main() -> None:\n \"\"\"\n The main function of the Mito installer, this function is responsible\n for installing and upgrading the `mitosheet` package.\n\n To install Mito:\n python -m mitoinstaller install\n\n To upgrade Mito:\n python -m mitoinstaller upgrade\n\n To install Mito from TestPyPi\n python -m mitoinstaller install --test-pypi\n \"\"\"\n import sys\n init()\n\n if len(sys.argv) > 1:\n command = sys.argv[1]\n else:\n command = ''\n\n if command == 'install' or command == 'upgrade':\n do_install()\n elif command == 'uninstall':\n print('To uninstall, run,', colored('`pip uninstall mitosheet`', 'green'))\n else:\n # NOTE: we don't add upgrade_to_jupyterlab_3 to the help.\n # We only send this command to the users who need to know this (namely, those that need to upgrade)\n print('\\nProper usage is', colored('`python -m mitoinstaller install`', 'green'), 'or', colored('`python -m mitoinstaller upgrade`', 'green'), '\\n\\nTry running the command ', colored('`python -m mitoinstaller install`', 'green'), '\\n')\n \n\nif __name__ == '__main__':\n main()\n", "path": "mitoinstaller/mitoinstaller/__main__.py"}, {"content": "import importlib\nimport os\nimport sys\n\nfrom mitoinstaller import __version__\nfrom mitoinstaller.commands import upgrade_mito_installer\nfrom mitoinstaller.installer_steps.installer_step import InstallerStep\nfrom mitoinstaller.jupyter_utils import set_prefered_jupyter_env_variable\nfrom mitoinstaller.log_utils import identify, log\nfrom mitoinstaller.user_install import (USER_JSON_PATH, go_pro,\n try_create_user_json_file)\n\n\ndef initial_install_step_create_user():\n\n if not os.path.exists(USER_JSON_PATH):\n try_create_user_json_file(is_pro=('--pro' in sys.argv))\n\n if not ('--pro' in sys.argv):\n # Only try and log if we're not pro\n identify()\n log('install_started', {\n 'mitoinstaller_version': __version__\n })\n else:\n # If the user is going pro, make sure they are set to pro\n go_pro()\n\ndef initial_install_step_add_env_for_which_jupyter():\n \"\"\"\n This install steps checks, up front, which very of jupyter we should\n launch: lab or notebook. It then stores this as an environment variable\n so that the final installer steps can launch it. \n\n We do this up front, so that we can see which packages that user has \n installed before installing Mito.\n \"\"\"\n set_prefered_jupyter_env_variable()\n\n\nINITIAL_INSTALLER_STEPS = [\n InstallerStep(\n 'Create mito user',\n initial_install_step_create_user\n ),\n InstallerStep(\n 'Upgrade mitoinstaller',\n upgrade_mito_installer,\n optional=True\n ),\n InstallerStep(\n 'Setting up environment',\n initial_install_step_add_env_for_which_jupyter,\n ),\n]\n", "path": "mitoinstaller/mitoinstaller/installer_steps/initial_installer_steps.py"}]} | 1,519 | 366 |
gh_patches_debug_1231 | rasdani/github-patches | git_diff | scoutapp__scout_apm_python-583 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Support Python 3.9
Python 3.9 will be released 2020-10-05.
Here are some steps before its release:
* Start testing with prerelease
After release:
* Ensure tests run with released version
* Add 3.9 PyPI classifier
* Enable Python wheel building in release
--- END ISSUE ---
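One way to sanity-check the classifier step once a release is cut is to read the installed package's metadata back. A hedged sketch, assuming `scout_apm` is installed in the environment under test (this check is not part of the original record):

```python
# Illustrative only: confirm the published metadata advertises CPython 3.9.
from importlib.metadata import metadata  # stdlib on Python 3.8+

classifiers = metadata("scout_apm").get_all("Classifier") or []
assert "Programming Language :: Python :: 3.9" in classifiers
```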
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 # coding=utf-8
2 from __future__ import absolute_import, division, print_function, unicode_literals
3
4 import os
5 import sys
6
7 from setuptools import Extension, find_packages, setup
8
9 with open("README.md", "r") as fp:
10 long_description = fp.read()
11
12 packages = find_packages("src")
13 if sys.version_info < (3, 6):
14 packages = [p for p in packages if not p.startswith("scout_apm.async_")]
15
16 compile_extensions = (
17 # Python 3+
18 sys.version_info >= (3,)
19 # Not Jython
20 and not sys.platform.startswith("java")
21 # Not PyPy
22 and "__pypy__" not in sys.builtin_module_names
23 # Not explicitly disabled
24 and (os.environ.get("SCOUT_DISABLE_EXTENSIONS", "") == "")
25 )
26 if compile_extensions:
27 ext_modules = [
28 Extension(
29 name=str("scout_apm.core._objtrace"),
30 sources=[str("src/scout_apm/core/_objtrace.c")],
31 optional=True,
32 )
33 ]
34 else:
35 ext_modules = []
36
37 setup(
38 name="scout_apm",
39 version="2.16.2",
40 description="Scout Application Performance Monitoring Agent",
41 long_description=long_description,
42 long_description_content_type="text/markdown",
43 url="https://github.com/scoutapp/scout_apm_python",
44 project_urls={
45 "Documentation": "https://docs.scoutapm.com/#python-agent",
46 "Changelog": (
47 "https://github.com/scoutapp/scout_apm_python/blob/master/CHANGELOG.md"
48 ),
49 },
50 author="Scout",
51 author_email="[email protected]",
52 license="MIT",
53 zip_safe=False,
54 python_requires=">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, <4",
55 packages=packages,
56 package_dir={str(""): str("src")},
57 ext_modules=ext_modules,
58 entry_points={
59 "console_scripts": [
60 "core-agent-manager = scout_apm.core.cli.core_agent_manager:main"
61 ]
62 },
63 install_requires=[
64 'asgiref ; python_version >= "3.5"',
65 'importlib-metadata ; python_version < "3.8"',
66 "psutil>=5,<6",
67 'urllib3[secure] < 1.25 ; python_version < "3.5"',
68 'urllib3[secure] < 2 ; python_version >= "3.5"',
69 "wrapt>=1.10,<2.0",
70 ],
71 keywords="apm performance monitoring development",
72 classifiers=[
73 "Development Status :: 5 - Production/Stable",
74 "Framework :: Bottle",
75 "Framework :: Django",
76 "Framework :: Django :: 1.8",
77 "Framework :: Django :: 1.9",
78 "Framework :: Django :: 1.10",
79 "Framework :: Django :: 1.11",
80 "Framework :: Django :: 2.0",
81 "Framework :: Django :: 2.1",
82 "Framework :: Django :: 2.2",
83 "Framework :: Django :: 3.0",
84 "Framework :: Django :: 3.1",
85 "Framework :: Flask",
86 "Framework :: Pyramid",
87 "Intended Audience :: Developers",
88 "Topic :: System :: Monitoring",
89 "License :: OSI Approved :: MIT License",
90 "Operating System :: MacOS",
91 "Operating System :: POSIX",
92 "Operating System :: POSIX :: Linux",
93 "Programming Language :: Python :: 2",
94 "Programming Language :: Python :: 2.7",
95 "Programming Language :: Python :: 3",
96 "Programming Language :: Python :: 3.4",
97 "Programming Language :: Python :: 3.5",
98 "Programming Language :: Python :: 3.6",
99 "Programming Language :: Python :: 3.7",
100 "Programming Language :: Python :: 3.8",
101 ],
102 )
103
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -98,5 +98,6 @@
"Programming Language :: Python :: 3.6",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
+ "Programming Language :: Python :: 3.9",
],
)
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -98,5 +98,6 @@\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n+ \"Programming Language :: Python :: 3.9\",\n ],\n )\n", "issue": "Support Python 3.9\nPython 3.9 will be released 2020-10-05.\r\n\r\nHere are some steps before its release:\r\n\r\n* Start testing with prerelease\r\n\r\nAfter release:\r\n* Ensure tests run with released version\r\n* Add 3.9 PyPI classifier\r\n* Enable PYthon wheel building in release\n", "before_files": [{"content": "# coding=utf-8\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport os\nimport sys\n\nfrom setuptools import Extension, find_packages, setup\n\nwith open(\"README.md\", \"r\") as fp:\n long_description = fp.read()\n\npackages = find_packages(\"src\")\nif sys.version_info < (3, 6):\n packages = [p for p in packages if not p.startswith(\"scout_apm.async_\")]\n\ncompile_extensions = (\n # Python 3+\n sys.version_info >= (3,)\n # Not Jython\n and not sys.platform.startswith(\"java\")\n # Not PyPy\n and \"__pypy__\" not in sys.builtin_module_names\n # Not explicitly disabled\n and (os.environ.get(\"SCOUT_DISABLE_EXTENSIONS\", \"\") == \"\")\n)\nif compile_extensions:\n ext_modules = [\n Extension(\n name=str(\"scout_apm.core._objtrace\"),\n sources=[str(\"src/scout_apm/core/_objtrace.c\")],\n optional=True,\n )\n ]\nelse:\n ext_modules = []\n\nsetup(\n name=\"scout_apm\",\n version=\"2.16.2\",\n description=\"Scout Application Performance Monitoring Agent\",\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n url=\"https://github.com/scoutapp/scout_apm_python\",\n project_urls={\n \"Documentation\": \"https://docs.scoutapm.com/#python-agent\",\n \"Changelog\": (\n \"https://github.com/scoutapp/scout_apm_python/blob/master/CHANGELOG.md\"\n ),\n },\n author=\"Scout\",\n author_email=\"[email protected]\",\n license=\"MIT\",\n zip_safe=False,\n python_requires=\">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, <4\",\n packages=packages,\n package_dir={str(\"\"): str(\"src\")},\n ext_modules=ext_modules,\n entry_points={\n \"console_scripts\": [\n \"core-agent-manager = scout_apm.core.cli.core_agent_manager:main\"\n ]\n },\n install_requires=[\n 'asgiref ; python_version >= \"3.5\"',\n 'importlib-metadata ; python_version < \"3.8\"',\n \"psutil>=5,<6\",\n 'urllib3[secure] < 1.25 ; python_version < \"3.5\"',\n 'urllib3[secure] < 2 ; python_version >= \"3.5\"',\n \"wrapt>=1.10,<2.0\",\n ],\n keywords=\"apm performance monitoring development\",\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Framework :: Bottle\",\n \"Framework :: Django\",\n \"Framework :: Django :: 1.8\",\n \"Framework :: Django :: 1.9\",\n \"Framework :: Django :: 1.10\",\n \"Framework :: Django :: 1.11\",\n \"Framework :: Django :: 2.0\",\n \"Framework :: Django :: 2.1\",\n \"Framework :: Django :: 2.2\",\n \"Framework :: Django :: 3.0\",\n \"Framework :: Django :: 3.1\",\n \"Framework :: Flask\",\n \"Framework :: Pyramid\",\n \"Intended Audience :: Developers\",\n \"Topic :: System :: Monitoring\",\n \"License :: OSI Approved :: MIT License\",\n \"Operating System :: MacOS\",\n \"Operating System :: POSIX\",\n \"Operating System :: POSIX :: Linux\",\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 
3.4\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n ],\n)\n", "path": "setup.py"}], "after_files": [{"content": "# coding=utf-8\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport os\nimport sys\n\nfrom setuptools import Extension, find_packages, setup\n\nwith open(\"README.md\", \"r\") as fp:\n long_description = fp.read()\n\npackages = find_packages(\"src\")\nif sys.version_info < (3, 6):\n packages = [p for p in packages if not p.startswith(\"scout_apm.async_\")]\n\ncompile_extensions = (\n # Python 3+\n sys.version_info >= (3,)\n # Not Jython\n and not sys.platform.startswith(\"java\")\n # Not PyPy\n and \"__pypy__\" not in sys.builtin_module_names\n # Not explicitly disabled\n and (os.environ.get(\"SCOUT_DISABLE_EXTENSIONS\", \"\") == \"\")\n)\nif compile_extensions:\n ext_modules = [\n Extension(\n name=str(\"scout_apm.core._objtrace\"),\n sources=[str(\"src/scout_apm/core/_objtrace.c\")],\n optional=True,\n )\n ]\nelse:\n ext_modules = []\n\nsetup(\n name=\"scout_apm\",\n version=\"2.16.2\",\n description=\"Scout Application Performance Monitoring Agent\",\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n url=\"https://github.com/scoutapp/scout_apm_python\",\n project_urls={\n \"Documentation\": \"https://docs.scoutapm.com/#python-agent\",\n \"Changelog\": (\n \"https://github.com/scoutapp/scout_apm_python/blob/master/CHANGELOG.md\"\n ),\n },\n author=\"Scout\",\n author_email=\"[email protected]\",\n license=\"MIT\",\n zip_safe=False,\n python_requires=\">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, <4\",\n packages=packages,\n package_dir={str(\"\"): str(\"src\")},\n ext_modules=ext_modules,\n entry_points={\n \"console_scripts\": [\n \"core-agent-manager = scout_apm.core.cli.core_agent_manager:main\"\n ]\n },\n install_requires=[\n 'asgiref ; python_version >= \"3.5\"',\n 'importlib-metadata ; python_version < \"3.8\"',\n \"psutil>=5,<6\",\n 'urllib3[secure] < 1.25 ; python_version < \"3.5\"',\n 'urllib3[secure] < 2 ; python_version >= \"3.5\"',\n \"wrapt>=1.10,<2.0\",\n ],\n keywords=\"apm performance monitoring development\",\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Framework :: Bottle\",\n \"Framework :: Django\",\n \"Framework :: Django :: 1.8\",\n \"Framework :: Django :: 1.9\",\n \"Framework :: Django :: 1.10\",\n \"Framework :: Django :: 1.11\",\n \"Framework :: Django :: 2.0\",\n \"Framework :: Django :: 2.1\",\n \"Framework :: Django :: 2.2\",\n \"Framework :: Django :: 3.0\",\n \"Framework :: Django :: 3.1\",\n \"Framework :: Flask\",\n \"Framework :: Pyramid\",\n \"Intended Audience :: Developers\",\n \"Topic :: System :: Monitoring\",\n \"License :: OSI Approved :: MIT License\",\n \"Operating System :: MacOS\",\n \"Operating System :: POSIX\",\n \"Operating System :: POSIX :: Linux\",\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.4\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n ],\n)\n", "path": "setup.py"}]} | 1,387 | 84 |
gh_patches_debug_34386 | rasdani/github-patches | git_diff | optuna__optuna-1678 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Use function annotation syntax for Type Hints.
After dropping Python 2.7 support at #710, we can define type hints with function annotation syntax.
~~Do you have a plan to update the coding style guideline?~~
https://github.com/optuna/optuna/wiki/Coding-Style-Conventions
## Progress
- [x] `optuna/integration/sklearn.py` (#1735)
- [x] `optuna/study.py` - assigned to harpy
## Note to the questioner
We still cannot use variable annotation syntax introduced by [PEP 526](https://www.python.org/dev/peps/pep-0526/) because we support Python 3.5.
--- END ISSUE ---
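For concreteness, the conversion the issue asks for looks like this; a minimal sketch with a made-up function name, not code taken from the repository:

```python
from typing import Optional

# Before: comment-style type hints, kept for Python 2.7 compatibility
def reseed(seed=None):
    # type: (Optional[int]) -> None
    ...

# After: function annotation syntax, possible now that Python 2.7 is dropped
def reseed(seed: Optional[int] = None) -> None:
    ...
```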
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `optuna/samplers/_random.py`
Content:
```
1 import numpy
2
3 from optuna import distributions
4 from optuna.samplers import BaseSampler
5 from optuna import type_checking
6
7 if type_checking.TYPE_CHECKING:
8 from typing import Any # NOQA
9 from typing import Dict # NOQA
10 from typing import Optional # NOQA
11
12 from optuna.distributions import BaseDistribution # NOQA
13 from optuna.study import Study # NOQA
14 from optuna.trial import FrozenTrial # NOQA
15
16
17 class RandomSampler(BaseSampler):
18 """Sampler using random sampling.
19
20 This sampler is based on *independent sampling*.
21 See also :class:`~optuna.samplers.BaseSampler` for more details of 'independent sampling'.
22
23 Example:
24
25 .. testcode::
26
27 import optuna
28 from optuna.samplers import RandomSampler
29
30 def objective(trial):
31 x = trial.suggest_uniform('x', -5, 5)
32 return x**2
33
34 study = optuna.create_study(sampler=RandomSampler())
35 study.optimize(objective, n_trials=10)
36
37 Args:
38 seed: Seed for random number generator.
39 """
40
41 def __init__(self, seed=None):
42 # type: (Optional[int]) -> None
43
44 self._rng = numpy.random.RandomState(seed)
45
46 def reseed_rng(self) -> None:
47
48 self._rng = numpy.random.RandomState()
49
50 def infer_relative_search_space(self, study, trial):
51 # type: (Study, FrozenTrial) -> Dict[str, BaseDistribution]
52
53 return {}
54
55 def sample_relative(self, study, trial, search_space):
56 # type: (Study, FrozenTrial, Dict[str, BaseDistribution]) -> Dict[str, Any]
57
58 return {}
59
60 def sample_independent(self, study, trial, param_name, param_distribution):
61 # type: (Study, FrozenTrial, str, distributions.BaseDistribution) -> Any
62
63 if isinstance(param_distribution, distributions.UniformDistribution):
64 return self._rng.uniform(param_distribution.low, param_distribution.high)
65 elif isinstance(param_distribution, distributions.LogUniformDistribution):
66 log_low = numpy.log(param_distribution.low)
67 log_high = numpy.log(param_distribution.high)
68 return float(numpy.exp(self._rng.uniform(log_low, log_high)))
69 elif isinstance(param_distribution, distributions.DiscreteUniformDistribution):
70 q = param_distribution.q
71 r = param_distribution.high - param_distribution.low
72 # [low, high] is shifted to [0, r] to align sampled values at regular intervals.
73 low = 0 - 0.5 * q
74 high = r + 0.5 * q
75 s = self._rng.uniform(low, high)
76 v = numpy.round(s / q) * q + param_distribution.low
77 # v may slightly exceed range due to round-off errors.
78 return float(min(max(v, param_distribution.low), param_distribution.high))
79 elif isinstance(param_distribution, distributions.IntUniformDistribution):
80 # [low, high] is shifted to [0, r] to align sampled values at regular intervals.
81 r = (param_distribution.high - param_distribution.low) / param_distribution.step
82 # numpy.random.randint includes low but excludes high.
83 s = self._rng.randint(0, r + 1)
84 v = s * param_distribution.step + param_distribution.low
85 return int(v)
86 elif isinstance(param_distribution, distributions.IntLogUniformDistribution):
87 log_low = numpy.log(param_distribution.low - 0.5)
88 log_high = numpy.log(param_distribution.high + 0.5)
89 s = numpy.exp(self._rng.uniform(log_low, log_high))
90 v = numpy.round(s)
91 return int(min(max(v, param_distribution.low), param_distribution.high))
92 elif isinstance(param_distribution, distributions.CategoricalDistribution):
93 choices = param_distribution.choices
94 index = self._rng.randint(0, len(choices))
95 return choices[index]
96 else:
97 raise NotImplementedError
98
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/optuna/samplers/_random.py b/optuna/samplers/_random.py
--- a/optuna/samplers/_random.py
+++ b/optuna/samplers/_random.py
@@ -1,17 +1,14 @@
+from typing import Any
+from typing import Dict
+from typing import Optional
+
import numpy
from optuna import distributions
+from optuna.distributions import BaseDistribution
from optuna.samplers import BaseSampler
-from optuna import type_checking
-
-if type_checking.TYPE_CHECKING:
- from typing import Any # NOQA
- from typing import Dict # NOQA
- from typing import Optional # NOQA
-
- from optuna.distributions import BaseDistribution # NOQA
- from optuna.study import Study # NOQA
- from optuna.trial import FrozenTrial # NOQA
+from optuna.study import Study
+from optuna.trial import FrozenTrial
class RandomSampler(BaseSampler):
@@ -38,8 +35,7 @@
seed: Seed for random number generator.
"""
- def __init__(self, seed=None):
- # type: (Optional[int]) -> None
+ def __init__(self, seed: Optional[int] = None) -> None:
self._rng = numpy.random.RandomState(seed)
@@ -47,18 +43,25 @@
self._rng = numpy.random.RandomState()
- def infer_relative_search_space(self, study, trial):
- # type: (Study, FrozenTrial) -> Dict[str, BaseDistribution]
+ def infer_relative_search_space(
+ self, study: Study, trial: FrozenTrial
+ ) -> Dict[str, BaseDistribution]:
return {}
- def sample_relative(self, study, trial, search_space):
- # type: (Study, FrozenTrial, Dict[str, BaseDistribution]) -> Dict[str, Any]
+ def sample_relative(
+ self, study: Study, trial: FrozenTrial, search_space: Dict[str, BaseDistribution]
+ ) -> Dict[str, Any]:
return {}
- def sample_independent(self, study, trial, param_name, param_distribution):
- # type: (Study, FrozenTrial, str, distributions.BaseDistribution) -> Any
+ def sample_independent(
+ self,
+ study: Study,
+ trial: FrozenTrial,
+ param_name: str,
+ param_distribution: distributions.BaseDistribution,
+ ) -> Any:
if isinstance(param_distribution, distributions.UniformDistribution):
return self._rng.uniform(param_distribution.low, param_distribution.high)
| {"golden_diff": "diff --git a/optuna/samplers/_random.py b/optuna/samplers/_random.py\n--- a/optuna/samplers/_random.py\n+++ b/optuna/samplers/_random.py\n@@ -1,17 +1,14 @@\n+from typing import Any\n+from typing import Dict\n+from typing import Optional\n+\n import numpy\n \n from optuna import distributions\n+from optuna.distributions import BaseDistribution\n from optuna.samplers import BaseSampler\n-from optuna import type_checking\n-\n-if type_checking.TYPE_CHECKING:\n- from typing import Any # NOQA\n- from typing import Dict # NOQA\n- from typing import Optional # NOQA\n-\n- from optuna.distributions import BaseDistribution # NOQA\n- from optuna.study import Study # NOQA\n- from optuna.trial import FrozenTrial # NOQA\n+from optuna.study import Study\n+from optuna.trial import FrozenTrial\n \n \n class RandomSampler(BaseSampler):\n@@ -38,8 +35,7 @@\n seed: Seed for random number generator.\n \"\"\"\n \n- def __init__(self, seed=None):\n- # type: (Optional[int]) -> None\n+ def __init__(self, seed: Optional[int] = None) -> None:\n \n self._rng = numpy.random.RandomState(seed)\n \n@@ -47,18 +43,25 @@\n \n self._rng = numpy.random.RandomState()\n \n- def infer_relative_search_space(self, study, trial):\n- # type: (Study, FrozenTrial) -> Dict[str, BaseDistribution]\n+ def infer_relative_search_space(\n+ self, study: Study, trial: FrozenTrial\n+ ) -> Dict[str, BaseDistribution]:\n \n return {}\n \n- def sample_relative(self, study, trial, search_space):\n- # type: (Study, FrozenTrial, Dict[str, BaseDistribution]) -> Dict[str, Any]\n+ def sample_relative(\n+ self, study: Study, trial: FrozenTrial, search_space: Dict[str, BaseDistribution]\n+ ) -> Dict[str, Any]:\n \n return {}\n \n- def sample_independent(self, study, trial, param_name, param_distribution):\n- # type: (Study, FrozenTrial, str, distributions.BaseDistribution) -> Any\n+ def sample_independent(\n+ self,\n+ study: Study,\n+ trial: FrozenTrial,\n+ param_name: str,\n+ param_distribution: distributions.BaseDistribution,\n+ ) -> Any:\n \n if isinstance(param_distribution, distributions.UniformDistribution):\n return self._rng.uniform(param_distribution.low, param_distribution.high)\n", "issue": "Use function annotation syntax for Type Hints.\nAfter dropping Python 2.7 support at #710, we can define type hints with function annotation syntax. \r\n~~Do you have a plan to update the coding style guideline?~~\r\nhttps://github.com/optuna/optuna/wiki/Coding-Style-Conventions\r\n\r\n## Progress\r\n\r\n- [x] `optuna/integration/sklearn.py` (#1735)\r\n- [x] `optuna/study.py` - assigned to harpy\r\n\r\n## Note to the questioner\r\n\r\nWe still cannot use variable annotation syntax introduced by [PEP 526](https://www.python.org/dev/peps/pep-0526/) because we supports Python 3.5.\n", "before_files": [{"content": "import numpy\n\nfrom optuna import distributions\nfrom optuna.samplers import BaseSampler\nfrom optuna import type_checking\n\nif type_checking.TYPE_CHECKING:\n from typing import Any # NOQA\n from typing import Dict # NOQA\n from typing import Optional # NOQA\n\n from optuna.distributions import BaseDistribution # NOQA\n from optuna.study import Study # NOQA\n from optuna.trial import FrozenTrial # NOQA\n\n\nclass RandomSampler(BaseSampler):\n \"\"\"Sampler using random sampling.\n\n This sampler is based on *independent sampling*.\n See also :class:`~optuna.samplers.BaseSampler` for more details of 'independent sampling'.\n\n Example:\n\n .. 
testcode::\n\n import optuna\n from optuna.samplers import RandomSampler\n\n def objective(trial):\n x = trial.suggest_uniform('x', -5, 5)\n return x**2\n\n study = optuna.create_study(sampler=RandomSampler())\n study.optimize(objective, n_trials=10)\n\n Args:\n seed: Seed for random number generator.\n \"\"\"\n\n def __init__(self, seed=None):\n # type: (Optional[int]) -> None\n\n self._rng = numpy.random.RandomState(seed)\n\n def reseed_rng(self) -> None:\n\n self._rng = numpy.random.RandomState()\n\n def infer_relative_search_space(self, study, trial):\n # type: (Study, FrozenTrial) -> Dict[str, BaseDistribution]\n\n return {}\n\n def sample_relative(self, study, trial, search_space):\n # type: (Study, FrozenTrial, Dict[str, BaseDistribution]) -> Dict[str, Any]\n\n return {}\n\n def sample_independent(self, study, trial, param_name, param_distribution):\n # type: (Study, FrozenTrial, str, distributions.BaseDistribution) -> Any\n\n if isinstance(param_distribution, distributions.UniformDistribution):\n return self._rng.uniform(param_distribution.low, param_distribution.high)\n elif isinstance(param_distribution, distributions.LogUniformDistribution):\n log_low = numpy.log(param_distribution.low)\n log_high = numpy.log(param_distribution.high)\n return float(numpy.exp(self._rng.uniform(log_low, log_high)))\n elif isinstance(param_distribution, distributions.DiscreteUniformDistribution):\n q = param_distribution.q\n r = param_distribution.high - param_distribution.low\n # [low, high] is shifted to [0, r] to align sampled values at regular intervals.\n low = 0 - 0.5 * q\n high = r + 0.5 * q\n s = self._rng.uniform(low, high)\n v = numpy.round(s / q) * q + param_distribution.low\n # v may slightly exceed range due to round-off errors.\n return float(min(max(v, param_distribution.low), param_distribution.high))\n elif isinstance(param_distribution, distributions.IntUniformDistribution):\n # [low, high] is shifted to [0, r] to align sampled values at regular intervals.\n r = (param_distribution.high - param_distribution.low) / param_distribution.step\n # numpy.random.randint includes low but excludes high.\n s = self._rng.randint(0, r + 1)\n v = s * param_distribution.step + param_distribution.low\n return int(v)\n elif isinstance(param_distribution, distributions.IntLogUniformDistribution):\n log_low = numpy.log(param_distribution.low - 0.5)\n log_high = numpy.log(param_distribution.high + 0.5)\n s = numpy.exp(self._rng.uniform(log_low, log_high))\n v = numpy.round(s)\n return int(min(max(v, param_distribution.low), param_distribution.high))\n elif isinstance(param_distribution, distributions.CategoricalDistribution):\n choices = param_distribution.choices\n index = self._rng.randint(0, len(choices))\n return choices[index]\n else:\n raise NotImplementedError\n", "path": "optuna/samplers/_random.py"}], "after_files": [{"content": "from typing import Any\nfrom typing import Dict\nfrom typing import Optional\n\nimport numpy\n\nfrom optuna import distributions\nfrom optuna.distributions import BaseDistribution\nfrom optuna.samplers import BaseSampler\nfrom optuna.study import Study\nfrom optuna.trial import FrozenTrial\n\n\nclass RandomSampler(BaseSampler):\n \"\"\"Sampler using random sampling.\n\n This sampler is based on *independent sampling*.\n See also :class:`~optuna.samplers.BaseSampler` for more details of 'independent sampling'.\n\n Example:\n\n .. 
testcode::\n\n import optuna\n from optuna.samplers import RandomSampler\n\n def objective(trial):\n x = trial.suggest_uniform('x', -5, 5)\n return x**2\n\n study = optuna.create_study(sampler=RandomSampler())\n study.optimize(objective, n_trials=10)\n\n Args:\n seed: Seed for random number generator.\n \"\"\"\n\n def __init__(self, seed: Optional[int] = None) -> None:\n\n self._rng = numpy.random.RandomState(seed)\n\n def reseed_rng(self) -> None:\n\n self._rng = numpy.random.RandomState()\n\n def infer_relative_search_space(\n self, study: Study, trial: FrozenTrial\n ) -> Dict[str, BaseDistribution]:\n\n return {}\n\n def sample_relative(\n self, study: Study, trial: FrozenTrial, search_space: Dict[str, BaseDistribution]\n ) -> Dict[str, Any]:\n\n return {}\n\n def sample_independent(\n self,\n study: Study,\n trial: FrozenTrial,\n param_name: str,\n param_distribution: distributions.BaseDistribution,\n ) -> Any:\n\n if isinstance(param_distribution, distributions.UniformDistribution):\n return self._rng.uniform(param_distribution.low, param_distribution.high)\n elif isinstance(param_distribution, distributions.LogUniformDistribution):\n log_low = numpy.log(param_distribution.low)\n log_high = numpy.log(param_distribution.high)\n return float(numpy.exp(self._rng.uniform(log_low, log_high)))\n elif isinstance(param_distribution, distributions.DiscreteUniformDistribution):\n q = param_distribution.q\n r = param_distribution.high - param_distribution.low\n # [low, high] is shifted to [0, r] to align sampled values at regular intervals.\n low = 0 - 0.5 * q\n high = r + 0.5 * q\n s = self._rng.uniform(low, high)\n v = numpy.round(s / q) * q + param_distribution.low\n # v may slightly exceed range due to round-off errors.\n return float(min(max(v, param_distribution.low), param_distribution.high))\n elif isinstance(param_distribution, distributions.IntUniformDistribution):\n # [low, high] is shifted to [0, r] to align sampled values at regular intervals.\n r = (param_distribution.high - param_distribution.low) / param_distribution.step\n # numpy.random.randint includes low but excludes high.\n s = self._rng.randint(0, r + 1)\n v = s * param_distribution.step + param_distribution.low\n return int(v)\n elif isinstance(param_distribution, distributions.IntLogUniformDistribution):\n log_low = numpy.log(param_distribution.low - 0.5)\n log_high = numpy.log(param_distribution.high + 0.5)\n s = numpy.exp(self._rng.uniform(log_low, log_high))\n v = numpy.round(s)\n return int(min(max(v, param_distribution.low), param_distribution.high))\n elif isinstance(param_distribution, distributions.CategoricalDistribution):\n choices = param_distribution.choices\n index = self._rng.randint(0, len(choices))\n return choices[index]\n else:\n raise NotImplementedError\n", "path": "optuna/samplers/_random.py"}]} | 1,447 | 583 |
gh_patches_debug_3298 | rasdani/github-patches | git_diff | pytorch__ignite-887 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
utils.convert_tensor considers `device = 0` to be no device
## 🐛 Bug description
In utils.convert_tensor, this line appears:
`return tensor.to(device=device, non_blocking=non_blocking) if device else tensor`
This means that for `device = 0` (as returned from `torch.cuda.current_device`) no conversion is applied, which can be very confusing. I might add a PR for that tomorrow, unless people tell me there's a reason to leave that line as it is.
For reproduction:
```python
import torch
from ignite.utils import convert_tensor
mytens = torch.zeros(2)
device = torch.cuda.current_device()
converted_tens = convert_tensor(mytens, device)
assert converted_tens.device == device
```
## Environment
- PyTorch Version (e.g., 1.4): 1.4
- Ignite Version (e.g., 0.3.0): 0.3
- OS (e.g., Linux): Windows 10
- How you installed Ignite (`conda`, `pip`, source): conda
- Python version: 3.7.6
--- END ISSUE ---
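The root cause is plain Python truthiness: the integer index `0` returned by `torch.cuda.current_device()` is falsy, so a bare `if device` treats the first GPU as "no device". A minimal illustration, independent of Ignite:

```python
device = 0                 # a valid CUDA device index
print(bool(device))        # False: `if device` skips conversion for index 0
print(device is not None)  # True: the `is not None` test keeps device 0
```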
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ignite/utils.py`
Content:
```
1 import collections.abc as collections
2 import logging
3 from typing import Union, Optional, Callable, Any, Type, Tuple
4
5 import torch
6
7 __all__ = ["convert_tensor", "apply_to_tensor", "apply_to_type", "to_onehot", "setup_logger"]
8
9
10 def convert_tensor(
11 input_: Union[torch.Tensor, collections.Sequence, collections.Mapping, str, bytes],
12 device: Optional[Union[str, torch.device]] = None,
13 non_blocking: bool = False,
14 ) -> Union[torch.Tensor, collections.Sequence, collections.Mapping, str, bytes]:
15 """Move tensors to relevant device."""
16
17 def _func(tensor: torch.Tensor) -> torch.Tensor:
18 return tensor.to(device=device, non_blocking=non_blocking) if device else tensor
19
20 return apply_to_tensor(input_, _func)
21
22
23 def apply_to_tensor(
24 input_: Union[torch.Tensor, collections.Sequence, collections.Mapping, str, bytes], func: Callable
25 ) -> Union[torch.Tensor, collections.Sequence, collections.Mapping, str, bytes]:
26 """Apply a function on a tensor or mapping, or sequence of tensors.
27 """
28 return apply_to_type(input_, torch.Tensor, func)
29
30
31 def apply_to_type(
32 input_: Union[Any, collections.Sequence, collections.Mapping, str, bytes],
33 input_type: Union[Type, Tuple[Type[Any], Any]],
34 func: Callable,
35 ) -> Union[Any, collections.Sequence, collections.Mapping, str, bytes]:
36 """Apply a function on a object of `input_type` or mapping, or sequence of objects of `input_type`.
37 """
38 if isinstance(input_, input_type):
39 return func(input_)
40 elif isinstance(input_, (str, bytes)):
41 return input_
42 elif isinstance(input_, collections.Mapping):
43 return type(input_)({k: apply_to_type(sample, input_type, func) for k, sample in input_.items()})
44 elif isinstance(input_, tuple) and hasattr(input_, "_fields"): # namedtuple
45 return type(input_)(*(apply_to_type(sample, input_type, func) for sample in input_))
46 elif isinstance(input_, collections.Sequence):
47 return type(input_)([apply_to_type(sample, input_type, func) for sample in input_])
48 else:
49 raise TypeError(("input must contain {}, dicts or lists; found {}".format(input_type, type(input_))))
50
51
52 def to_onehot(indices: torch.Tensor, num_classes: int) -> torch.Tensor:
53 """Convert a tensor of indices of any shape `(N, ...)` to a
54 tensor of one-hot indicators of shape `(N, num_classes, ...) and of type uint8. Output's device is equal to the
55 input's device`.
56 """
57 onehot = torch.zeros(indices.shape[0], num_classes, *indices.shape[1:], dtype=torch.uint8, device=indices.device)
58 return onehot.scatter_(1, indices.unsqueeze(1), 1)
59
60
61 def setup_logger(
62 name: str,
63 level: int = logging.INFO,
64 format: str = "%(asctime)s %(name)s %(levelname)s: %(message)s",
65 filepath: Optional[str] = None,
66 distributed_rank: int = 0,
67 ) -> logging.Logger:
68 """Setups logger: name, level, format etc.
69
70 Args:
71 name (str): new name for the logger.
72 level (int): logging level, e.g. CRITICAL, ERROR, WARNING, INFO, DEBUG
73 format (str): logging format. By default, `%(asctime)s %(name)s %(levelname)s: %(message)s`
74 filepath (str, optional): Optional logging file path. If not None, logs are written to the file.
75 distributed_rank (int, optional): Optional, rank in distributed configuration to avoid logger setup for workers.
76
77 Returns:
78 logging.Logger
79
80 For example, to improve logs readability when training with a trainer and evaluator:
81
82 .. code-block:: python
83
84 from ignite.utils import setup_logger
85
86 trainer = ...
87 evaluator = ...
88
89 trainer.logger = setup_logger("trainer")
90 evaluator.logger = setup_logger("evaluator")
91
92 trainer.run(data, max_epochs=10)
93
94 # Logs will look like
95 # 2020-01-21 12:46:07,356 trainer INFO: Engine run starting with max_epochs=5.
96 # 2020-01-21 12:46:07,358 trainer INFO: Epoch[1] Complete. Time taken: 00:5:23
97 # 2020-01-21 12:46:07,358 evaluator INFO: Engine run starting with max_epochs=1.
98 # 2020-01-21 12:46:07,358 evaluator INFO: Epoch[1] Complete. Time taken: 00:01:02
99 # ...
100
101 """
102 logger = logging.getLogger(name)
103
104 if distributed_rank > 0:
105 return logger
106
107 logger.setLevel(level)
108
109 # Remove previous handlers
110 if logger.hasHandlers():
111 for h in list(logger.handlers):
112 logger.removeHandler(h)
113
114 formatter = logging.Formatter(format)
115
116 ch = logging.StreamHandler()
117 ch.setLevel(level)
118 ch.setFormatter(formatter)
119 logger.addHandler(ch)
120
121 if filepath is not None:
122 fh = logging.FileHandler(filepath)
123 fh.setLevel(level)
124 fh.setFormatter(formatter)
125 logger.addHandler(fh)
126
127 return logger
128
```
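For reference, a tiny usage sketch of the function under discussion; CPU-only so it runs without CUDA, and it assumes `ignite` is installed:

```python
import torch
from ignite.utils import convert_tensor

tensor = convert_tensor(torch.zeros(2), device="cpu")
print(tensor.device)  # cpu ("cpu" is truthy, so even the buggy check converts here)
```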
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ignite/utils.py b/ignite/utils.py
--- a/ignite/utils.py
+++ b/ignite/utils.py
@@ -15,7 +15,7 @@
"""Move tensors to relevant device."""
def _func(tensor: torch.Tensor) -> torch.Tensor:
- return tensor.to(device=device, non_blocking=non_blocking) if device else tensor
+ return tensor.to(device=device, non_blocking=non_blocking) if device is not None else tensor
return apply_to_tensor(input_, _func)
| {"golden_diff": "diff --git a/ignite/utils.py b/ignite/utils.py\n--- a/ignite/utils.py\n+++ b/ignite/utils.py\n@@ -15,7 +15,7 @@\n \"\"\"Move tensors to relevant device.\"\"\"\n \n def _func(tensor: torch.Tensor) -> torch.Tensor:\n- return tensor.to(device=device, non_blocking=non_blocking) if device else tensor\n+ return tensor.to(device=device, non_blocking=non_blocking) if device is not None else tensor\n \n return apply_to_tensor(input_, _func)\n", "issue": "utils.convert_tensor considers `device = 0` to be no device\n## \ud83d\udc1b Bug description\r\nIn utils.convert_tensor, this line appears:\r\n`return tensor.to(device=device, non_blocking=non_blocking) if device else tensor`\r\n\r\nThis means that for `device = 0` (as returned from `torch.cuda.current_device`) no conversion is applied, which can be very confusing. I might add a PR for that tomorrow, unless people tell me there's a reason to leave that line as it is.\r\n\r\nFor reproduction:\r\n```python\r\nimport torch\r\nfrom ignite.utils import convert_tensor\r\n\r\nmytens = torch.zeros(2)\r\ndevice = torch.cuda.current_device()\r\nconverted_tens = convert_tensor(mytens, device)\r\nassert converted_tens.device == device\r\n```\r\n\r\n## Environment\r\n\r\n - PyTorch Version (e.g., 1.4): 1.4\r\n - Ignite Version (e.g., 0.3.0): 0.3\r\n - OS (e.g., Linux): Windows 10\r\n - How you installed Ignite (`conda`, `pip`, source): conda\r\n - Python version: 3.7.6\r\n\r\n\n", "before_files": [{"content": "import collections.abc as collections\nimport logging\nfrom typing import Union, Optional, Callable, Any, Type, Tuple\n\nimport torch\n\n__all__ = [\"convert_tensor\", \"apply_to_tensor\", \"apply_to_type\", \"to_onehot\", \"setup_logger\"]\n\n\ndef convert_tensor(\n input_: Union[torch.Tensor, collections.Sequence, collections.Mapping, str, bytes],\n device: Optional[Union[str, torch.device]] = None,\n non_blocking: bool = False,\n) -> Union[torch.Tensor, collections.Sequence, collections.Mapping, str, bytes]:\n \"\"\"Move tensors to relevant device.\"\"\"\n\n def _func(tensor: torch.Tensor) -> torch.Tensor:\n return tensor.to(device=device, non_blocking=non_blocking) if device else tensor\n\n return apply_to_tensor(input_, _func)\n\n\ndef apply_to_tensor(\n input_: Union[torch.Tensor, collections.Sequence, collections.Mapping, str, bytes], func: Callable\n) -> Union[torch.Tensor, collections.Sequence, collections.Mapping, str, bytes]:\n \"\"\"Apply a function on a tensor or mapping, or sequence of tensors.\n \"\"\"\n return apply_to_type(input_, torch.Tensor, func)\n\n\ndef apply_to_type(\n input_: Union[Any, collections.Sequence, collections.Mapping, str, bytes],\n input_type: Union[Type, Tuple[Type[Any], Any]],\n func: Callable,\n) -> Union[Any, collections.Sequence, collections.Mapping, str, bytes]:\n \"\"\"Apply a function on a object of `input_type` or mapping, or sequence of objects of `input_type`.\n \"\"\"\n if isinstance(input_, input_type):\n return func(input_)\n elif isinstance(input_, (str, bytes)):\n return input_\n elif isinstance(input_, collections.Mapping):\n return type(input_)({k: apply_to_type(sample, input_type, func) for k, sample in input_.items()})\n elif isinstance(input_, tuple) and hasattr(input_, \"_fields\"): # namedtuple\n return type(input_)(*(apply_to_type(sample, input_type, func) for sample in input_))\n elif isinstance(input_, collections.Sequence):\n return type(input_)([apply_to_type(sample, input_type, func) for sample in input_])\n else:\n raise TypeError((\"input must contain {}, dicts or 
lists; found {}\".format(input_type, type(input_))))\n\n\ndef to_onehot(indices: torch.Tensor, num_classes: int) -> torch.Tensor:\n \"\"\"Convert a tensor of indices of any shape `(N, ...)` to a\n tensor of one-hot indicators of shape `(N, num_classes, ...) and of type uint8. Output's device is equal to the\n input's device`.\n \"\"\"\n onehot = torch.zeros(indices.shape[0], num_classes, *indices.shape[1:], dtype=torch.uint8, device=indices.device)\n return onehot.scatter_(1, indices.unsqueeze(1), 1)\n\n\ndef setup_logger(\n name: str,\n level: int = logging.INFO,\n format: str = \"%(asctime)s %(name)s %(levelname)s: %(message)s\",\n filepath: Optional[str] = None,\n distributed_rank: int = 0,\n) -> logging.Logger:\n \"\"\"Setups logger: name, level, format etc.\n\n Args:\n name (str): new name for the logger.\n level (int): logging level, e.g. CRITICAL, ERROR, WARNING, INFO, DEBUG\n format (str): logging format. By default, `%(asctime)s %(name)s %(levelname)s: %(message)s`\n filepath (str, optional): Optional logging file path. If not None, logs are written to the file.\n distributed_rank (int, optional): Optional, rank in distributed configuration to avoid logger setup for workers.\n\n Returns:\n logging.Logger\n\n For example, to improve logs readability when training with a trainer and evaluator:\n\n .. code-block:: python\n\n from ignite.utils import setup_logger\n\n trainer = ...\n evaluator = ...\n\n trainer.logger = setup_logger(\"trainer\")\n evaluator.logger = setup_logger(\"evaluator\")\n\n trainer.run(data, max_epochs=10)\n\n # Logs will look like\n # 2020-01-21 12:46:07,356 trainer INFO: Engine run starting with max_epochs=5.\n # 2020-01-21 12:46:07,358 trainer INFO: Epoch[1] Complete. Time taken: 00:5:23\n # 2020-01-21 12:46:07,358 evaluator INFO: Engine run starting with max_epochs=1.\n # 2020-01-21 12:46:07,358 evaluator INFO: Epoch[1] Complete. 
Time taken: 00:01:02\n # ...\n\n \"\"\"\n logger = logging.getLogger(name)\n\n if distributed_rank > 0:\n return logger\n\n logger.setLevel(level)\n\n # Remove previous handlers\n if logger.hasHandlers():\n for h in list(logger.handlers):\n logger.removeHandler(h)\n\n formatter = logging.Formatter(format)\n\n ch = logging.StreamHandler()\n ch.setLevel(level)\n ch.setFormatter(formatter)\n logger.addHandler(ch)\n\n if filepath is not None:\n fh = logging.FileHandler(filepath)\n fh.setLevel(level)\n fh.setFormatter(formatter)\n logger.addHandler(fh)\n\n return logger\n", "path": "ignite/utils.py"}], "after_files": [{"content": "import collections.abc as collections\nimport logging\nfrom typing import Union, Optional, Callable, Any, Type, Tuple\n\nimport torch\n\n__all__ = [\"convert_tensor\", \"apply_to_tensor\", \"apply_to_type\", \"to_onehot\", \"setup_logger\"]\n\n\ndef convert_tensor(\n input_: Union[torch.Tensor, collections.Sequence, collections.Mapping, str, bytes],\n device: Optional[Union[str, torch.device]] = None,\n non_blocking: bool = False,\n) -> Union[torch.Tensor, collections.Sequence, collections.Mapping, str, bytes]:\n \"\"\"Move tensors to relevant device.\"\"\"\n\n def _func(tensor: torch.Tensor) -> torch.Tensor:\n return tensor.to(device=device, non_blocking=non_blocking) if device is not None else tensor\n\n return apply_to_tensor(input_, _func)\n\n\ndef apply_to_tensor(\n input_: Union[torch.Tensor, collections.Sequence, collections.Mapping, str, bytes], func: Callable\n) -> Union[torch.Tensor, collections.Sequence, collections.Mapping, str, bytes]:\n \"\"\"Apply a function on a tensor or mapping, or sequence of tensors.\n \"\"\"\n return apply_to_type(input_, torch.Tensor, func)\n\n\ndef apply_to_type(\n input_: Union[Any, collections.Sequence, collections.Mapping, str, bytes],\n input_type: Union[Type, Tuple[Type[Any], Any]],\n func: Callable,\n) -> Union[Any, collections.Sequence, collections.Mapping, str, bytes]:\n \"\"\"Apply a function on a object of `input_type` or mapping, or sequence of objects of `input_type`.\n \"\"\"\n if isinstance(input_, input_type):\n return func(input_)\n elif isinstance(input_, (str, bytes)):\n return input_\n elif isinstance(input_, collections.Mapping):\n return type(input_)({k: apply_to_type(sample, input_type, func) for k, sample in input_.items()})\n elif isinstance(input_, tuple) and hasattr(input_, \"_fields\"): # namedtuple\n return type(input_)(*(apply_to_type(sample, input_type, func) for sample in input_))\n elif isinstance(input_, collections.Sequence):\n return type(input_)([apply_to_type(sample, input_type, func) for sample in input_])\n else:\n raise TypeError((\"input must contain {}, dicts or lists; found {}\".format(input_type, type(input_))))\n\n\ndef to_onehot(indices: torch.Tensor, num_classes: int) -> torch.Tensor:\n \"\"\"Convert a tensor of indices of any shape `(N, ...)` to a\n tensor of one-hot indicators of shape `(N, num_classes, ...) and of type uint8. 
Output's device is equal to the\n input's device`.\n \"\"\"\n onehot = torch.zeros(indices.shape[0], num_classes, *indices.shape[1:], dtype=torch.uint8, device=indices.device)\n return onehot.scatter_(1, indices.unsqueeze(1), 1)\n\n\ndef setup_logger(\n name: str,\n level: int = logging.INFO,\n format: str = \"%(asctime)s %(name)s %(levelname)s: %(message)s\",\n filepath: Optional[str] = None,\n distributed_rank: int = 0,\n) -> logging.Logger:\n \"\"\"Setups logger: name, level, format etc.\n\n Args:\n name (str): new name for the logger.\n level (int): logging level, e.g. CRITICAL, ERROR, WARNING, INFO, DEBUG\n format (str): logging format. By default, `%(asctime)s %(name)s %(levelname)s: %(message)s`\n filepath (str, optional): Optional logging file path. If not None, logs are written to the file.\n distributed_rank (int, optional): Optional, rank in distributed configuration to avoid logger setup for workers.\n\n Returns:\n logging.Logger\n\n For example, to improve logs readability when training with a trainer and evaluator:\n\n .. code-block:: python\n\n from ignite.utils import setup_logger\n\n trainer = ...\n evaluator = ...\n\n trainer.logger = setup_logger(\"trainer\")\n evaluator.logger = setup_logger(\"evaluator\")\n\n trainer.run(data, max_epochs=10)\n\n # Logs will look like\n # 2020-01-21 12:46:07,356 trainer INFO: Engine run starting with max_epochs=5.\n # 2020-01-21 12:46:07,358 trainer INFO: Epoch[1] Complete. Time taken: 00:5:23\n # 2020-01-21 12:46:07,358 evaluator INFO: Engine run starting with max_epochs=1.\n # 2020-01-21 12:46:07,358 evaluator INFO: Epoch[1] Complete. Time taken: 00:01:02\n # ...\n\n \"\"\"\n logger = logging.getLogger(name)\n\n if distributed_rank > 0:\n return logger\n\n logger.setLevel(level)\n\n # Remove previous handlers\n if logger.hasHandlers():\n for h in list(logger.handlers):\n logger.removeHandler(h)\n\n formatter = logging.Formatter(format)\n\n ch = logging.StreamHandler()\n ch.setLevel(level)\n ch.setFormatter(formatter)\n logger.addHandler(ch)\n\n if filepath is not None:\n fh = logging.FileHandler(filepath)\n fh.setLevel(level)\n fh.setFormatter(formatter)\n logger.addHandler(fh)\n\n return logger\n", "path": "ignite/utils.py"}]} | 1,961 | 113 |
gh_patches_debug_37113 | rasdani/github-patches | git_diff | sublimelsp__LSP-472 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
LS always starts in first folder of workspace
LSP always starts a language server in the first project of your workspace, regardless of which one you're working on. For example, with the following workspace:

When I open any Rust files in `bserver`, RLS is still started in `LSP`, since it appears first in the list. This causes RLS to throw a warning:

and effectively breaks all useful functionality of the LSP plugin: nothing works, because RLS is staring at the wrong directory.
I'm still digging into why this is, but it looks like the issue is [an oversight with the branching right here](https://github.com/tomv564/LSP/blob/master/plugin/core/workspace.py#L16). I'll submit a PR shortly.
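
For reference, here is a minimal sketch of the selection logic I'd expect instead (names and the walk-up loop are illustrative, not a finished patch; `window` and `view` stand in for Sublime's `Window`/`View` API objects):

```python
import os

def project_path_for_view(window, view):
    """Pick the open folder that actually contains the active file,
    instead of blindly returning window.folders()[0]."""
    filename = view.file_name() if view else None
    if not filename:
        return None
    folders = [os.path.realpath(f) for f in window.folders()]
    path = os.path.realpath(filename)
    # Walk up the directory tree until we land on one of the open folders.
    while path not in folders:
        parent = os.path.dirname(path)
        if parent == path:  # reached the filesystem root without a match
            return None
        path = parent
    return path
```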
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `plugin/core/workspace.py`
Content:
```
1 import os
2 try:
3 from typing import List, Optional, Any
4 assert List and Optional and Any
5 except ImportError:
6 pass
7
8 from .logging import debug
9 # from .types import WindowLike
10
11
12 def get_project_path(window: 'Any') -> 'Optional[str]':
13 """
14 Returns the first project folder or the parent folder of the active view
15 """
16 if len(window.folders()):
17 folder_paths = window.folders()
18 return folder_paths[0]
19 else:
20 view = window.active_view()
21 if view:
22 filename = view.file_name()
23 if filename:
24 project_path = os.path.dirname(filename)
25 debug("Couldn't determine project directory since no folders are open!",
26 "Using", project_path, "as a fallback.")
27 return project_path
28 else:
29 debug("Couldn't determine project directory since no folders are open",
30 "and the current file isn't saved on the disk.")
31 return None
32 else:
33 debug("No view is active in current window")
34 return None # https://github.com/tomv564/LSP/issues/219
35
36
37 def get_common_parent(paths: 'List[str]') -> str:
38 """
39 Get the common parent directory of multiple paths.
40
41 Python 3.5+ includes os.path.commonpath which does this, however Sublime
42 currently embeds Python 3.3.
43 """
44 return os.path.commonprefix([path + '/' for path in paths]).rstrip('/')
45
46
47 def is_in_workspace(window: 'Any', file_path: str) -> bool:
48 workspace_path = get_project_path(window)
49 if workspace_path is None:
50 return False
51
52 common_dir = get_common_parent([workspace_path, file_path])
53 return workspace_path == common_dir
54
55
56 def enable_in_project(window, config_name: str) -> None:
57 project_data = window.project_data()
58 if isinstance(project_data, dict):
59 project_settings = project_data.setdefault('settings', dict())
60 project_lsp_settings = project_settings.setdefault('LSP', dict())
61 project_client_settings = project_lsp_settings.setdefault(config_name, dict())
62 project_client_settings['enabled'] = True
63 window.set_project_data(project_data)
64 else:
65 debug('non-dict returned in project_settings: ', project_data)
66
67
68 def disable_in_project(window, config_name: str) -> None:
69 project_data = window.project_data()
70 if isinstance(project_data, dict):
71 project_settings = project_data.setdefault('settings', dict())
72 project_lsp_settings = project_settings.setdefault('LSP', dict())
73 project_client_settings = project_lsp_settings.setdefault(config_name, dict())
74 project_client_settings['enabled'] = False
75 window.set_project_data(project_data)
76 else:
77 debug('non-dict returned in project_settings: ', project_data)
78
79
80 def get_project_config(window: 'Any') -> dict:
81 project_data = window.project_data() or dict()
82 if isinstance(project_data, dict):
83 project_settings = project_data.setdefault('settings', dict())
84 project_lsp_settings = project_settings.setdefault('LSP', dict())
85 return project_lsp_settings
86 else:
87 debug('non-dict returned in project_settings: ', project_data)
88 return dict()
89
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/plugin/core/workspace.py b/plugin/core/workspace.py
--- a/plugin/core/workspace.py
+++ b/plugin/core/workspace.py
@@ -1,37 +1,69 @@
import os
try:
- from typing import List, Optional, Any
- assert List and Optional and Any
+ from typing import List, Optional, Any, Iterable
+ assert List and Optional and Any and Iterable
except ImportError:
pass
from .logging import debug
-# from .types import WindowLike
+from .types import ViewLike
+
+
+def get_filename_from_view(view: ViewLike) -> 'Optional[str]':
+ if not view:
+ debug("No view is active in current window")
+ return None # https://github.com/tomv564/LSP/issues/219
+ filename = view.file_name()
+ if not filename:
+ debug("Couldn't determine project directory since no folders are open",
+ "and the current file isn't saved on the disk.")
+ return filename
+
+
+def get_directory_name(view: ViewLike) -> 'Optional[str]':
+ filename = get_filename_from_view(view)
+ if filename:
+ project_path = os.path.dirname(filename)
+ return project_path
+ return None
+
+
+def find_path_among_multi_folders(folders: 'Iterable[str]',
+ view: ViewLike) -> 'Optional[str]':
+ filename = get_filename_from_view(view)
+ if not filename:
+ return None
+ folders = [os.path.realpath(f) for f in folders]
+ file = view.file_name()
+ if not file:
+ return None
+ file = os.path.realpath(file)
+ while file not in folders:
+ file = os.path.dirname(file)
+ if os.path.dirname(file) == file:
+ # We're at the root of the filesystem.
+ file = None
+ break
+ debug('project path is', file)
+ return file
def get_project_path(window: 'Any') -> 'Optional[str]':
"""
- Returns the first project folder or the parent folder of the active view
+ Returns the project folder or the parent folder of the active view
"""
- if len(window.folders()):
+ if not window:
+ return None
+ num_folders = len(window.folders())
+ if num_folders == 0:
+ return get_directory_name(window.active_view())
+ elif num_folders == 1:
folder_paths = window.folders()
return folder_paths[0]
- else:
- view = window.active_view()
- if view:
- filename = view.file_name()
- if filename:
- project_path = os.path.dirname(filename)
- debug("Couldn't determine project directory since no folders are open!",
- "Using", project_path, "as a fallback.")
- return project_path
- else:
- debug("Couldn't determine project directory since no folders are open",
- "and the current file isn't saved on the disk.")
- return None
- else:
- debug("No view is active in current window")
- return None # https://github.com/tomv564/LSP/issues/219
+ else: # num_folders > 1
+ return find_path_among_multi_folders(
+ window.folders(),
+ window.active_view())
def get_common_parent(paths: 'List[str]') -> str:
| {"golden_diff": "diff --git a/plugin/core/workspace.py b/plugin/core/workspace.py\n--- a/plugin/core/workspace.py\n+++ b/plugin/core/workspace.py\n@@ -1,37 +1,69 @@\n import os\n try:\n- from typing import List, Optional, Any\n- assert List and Optional and Any\n+ from typing import List, Optional, Any, Iterable\n+ assert List and Optional and Any and Iterable\n except ImportError:\n pass\n \n from .logging import debug\n-# from .types import WindowLike\n+from .types import ViewLike\n+\n+\n+def get_filename_from_view(view: ViewLike) -> 'Optional[str]':\n+ if not view:\n+ debug(\"No view is active in current window\")\n+ return None # https://github.com/tomv564/LSP/issues/219\n+ filename = view.file_name()\n+ if not filename:\n+ debug(\"Couldn't determine project directory since no folders are open\",\n+ \"and the current file isn't saved on the disk.\")\n+ return filename\n+\n+\n+def get_directory_name(view: ViewLike) -> 'Optional[str]':\n+ filename = get_filename_from_view(view)\n+ if filename:\n+ project_path = os.path.dirname(filename)\n+ return project_path\n+ return None\n+\n+\n+def find_path_among_multi_folders(folders: 'Iterable[str]',\n+ view: ViewLike) -> 'Optional[str]':\n+ filename = get_filename_from_view(view)\n+ if not filename:\n+ return None\n+ folders = [os.path.realpath(f) for f in folders]\n+ file = view.file_name()\n+ if not file:\n+ return None\n+ file = os.path.realpath(file)\n+ while file not in folders:\n+ file = os.path.dirname(file)\n+ if os.path.dirname(file) == file:\n+ # We're at the root of the filesystem.\n+ file = None\n+ break\n+ debug('project path is', file)\n+ return file\n \n \n def get_project_path(window: 'Any') -> 'Optional[str]':\n \"\"\"\n- Returns the first project folder or the parent folder of the active view\n+ Returns the project folder or the parent folder of the active view\n \"\"\"\n- if len(window.folders()):\n+ if not window:\n+ return None\n+ num_folders = len(window.folders())\n+ if num_folders == 0:\n+ return get_directory_name(window.active_view())\n+ elif num_folders == 1:\n folder_paths = window.folders()\n return folder_paths[0]\n- else:\n- view = window.active_view()\n- if view:\n- filename = view.file_name()\n- if filename:\n- project_path = os.path.dirname(filename)\n- debug(\"Couldn't determine project directory since no folders are open!\",\n- \"Using\", project_path, \"as a fallback.\")\n- return project_path\n- else:\n- debug(\"Couldn't determine project directory since no folders are open\",\n- \"and the current file isn't saved on the disk.\")\n- return None\n- else:\n- debug(\"No view is active in current window\")\n- return None # https://github.com/tomv564/LSP/issues/219\n+ else: # num_folders > 1\n+ return find_path_among_multi_folders(\n+ window.folders(),\n+ window.active_view())\n \n \n def get_common_parent(paths: 'List[str]') -> str:\n", "issue": "LS always starts in first folder of workspace\nLSP always starts a language server in the first project of your workspace, regardless of which one you're working on. For example, with the following workspace:\r\n\r\n\r\n\r\nWhen I open any Rust files in `bserver`, RLS is still started in `LSP`, since it appears first in the list. 
This causes RLS to throw a warning:\r\n\r\n\r\n\r\nand effectively breaks all useful functionality of the LSP plugin--nothing works, because RLS is staring at the wrong directory.\r\n\r\nI'm still digging as to why this is, but it looks like the issue is [an oversight with branching right here](https://github.com/tomv564/LSP/blob/master/plugin/core/workspace.py#L16). I'll submit a PR shortly.\n", "before_files": [{"content": "import os\ntry:\n from typing import List, Optional, Any\n assert List and Optional and Any\nexcept ImportError:\n pass\n\nfrom .logging import debug\n# from .types import WindowLike\n\n\ndef get_project_path(window: 'Any') -> 'Optional[str]':\n \"\"\"\n Returns the first project folder or the parent folder of the active view\n \"\"\"\n if len(window.folders()):\n folder_paths = window.folders()\n return folder_paths[0]\n else:\n view = window.active_view()\n if view:\n filename = view.file_name()\n if filename:\n project_path = os.path.dirname(filename)\n debug(\"Couldn't determine project directory since no folders are open!\",\n \"Using\", project_path, \"as a fallback.\")\n return project_path\n else:\n debug(\"Couldn't determine project directory since no folders are open\",\n \"and the current file isn't saved on the disk.\")\n return None\n else:\n debug(\"No view is active in current window\")\n return None # https://github.com/tomv564/LSP/issues/219\n\n\ndef get_common_parent(paths: 'List[str]') -> str:\n \"\"\"\n Get the common parent directory of multiple paths.\n\n Python 3.5+ includes os.path.commonpath which does this, however Sublime\n currently embeds Python 3.3.\n \"\"\"\n return os.path.commonprefix([path + '/' for path in paths]).rstrip('/')\n\n\ndef is_in_workspace(window: 'Any', file_path: str) -> bool:\n workspace_path = get_project_path(window)\n if workspace_path is None:\n return False\n\n common_dir = get_common_parent([workspace_path, file_path])\n return workspace_path == common_dir\n\n\ndef enable_in_project(window, config_name: str) -> None:\n project_data = window.project_data()\n if isinstance(project_data, dict):\n project_settings = project_data.setdefault('settings', dict())\n project_lsp_settings = project_settings.setdefault('LSP', dict())\n project_client_settings = project_lsp_settings.setdefault(config_name, dict())\n project_client_settings['enabled'] = True\n window.set_project_data(project_data)\n else:\n debug('non-dict returned in project_settings: ', project_data)\n\n\ndef disable_in_project(window, config_name: str) -> None:\n project_data = window.project_data()\n if isinstance(project_data, dict):\n project_settings = project_data.setdefault('settings', dict())\n project_lsp_settings = project_settings.setdefault('LSP', dict())\n project_client_settings = project_lsp_settings.setdefault(config_name, dict())\n project_client_settings['enabled'] = False\n window.set_project_data(project_data)\n else:\n debug('non-dict returned in project_settings: ', project_data)\n\n\ndef get_project_config(window: 'Any') -> dict:\n project_data = window.project_data() or dict()\n if isinstance(project_data, dict):\n project_settings = project_data.setdefault('settings', dict())\n project_lsp_settings = project_settings.setdefault('LSP', dict())\n return project_lsp_settings\n else:\n debug('non-dict returned in project_settings: ', project_data)\n return dict()\n", "path": "plugin/core/workspace.py"}], "after_files": [{"content": "import os\ntry:\n from typing import List, Optional, Any, Iterable\n assert List and Optional and Any and 
Iterable\nexcept ImportError:\n pass\n\nfrom .logging import debug\nfrom .types import ViewLike\n\n\ndef get_filename_from_view(view: ViewLike) -> 'Optional[str]':\n if not view:\n debug(\"No view is active in current window\")\n return None # https://github.com/tomv564/LSP/issues/219\n filename = view.file_name()\n if not filename:\n debug(\"Couldn't determine project directory since no folders are open\",\n \"and the current file isn't saved on the disk.\")\n return filename\n\n\ndef get_directory_name(view: ViewLike) -> 'Optional[str]':\n filename = get_filename_from_view(view)\n if filename:\n project_path = os.path.dirname(filename)\n return project_path\n return None\n\n\ndef find_path_among_multi_folders(folders: 'Iterable[str]',\n view: ViewLike) -> 'Optional[str]':\n filename = get_filename_from_view(view)\n if not filename:\n return None\n folders = [os.path.realpath(f) for f in folders]\n file = view.file_name()\n if not file:\n return None\n file = os.path.realpath(file)\n while file not in folders:\n file = os.path.dirname(file)\n if os.path.dirname(file) == file:\n # We're at the root of the filesystem.\n file = None\n break\n debug('project path is', file)\n return file\n\n\ndef get_project_path(window: 'Any') -> 'Optional[str]':\n \"\"\"\n Returns the project folder or the parent folder of the active view\n \"\"\"\n if not window:\n return None\n num_folders = len(window.folders())\n if num_folders == 0:\n return get_directory_name(window.active_view())\n elif num_folders == 1:\n folder_paths = window.folders()\n return folder_paths[0]\n else: # num_folders > 1\n return find_path_among_multi_folders(\n window.folders(),\n window.active_view())\n\n\ndef get_common_parent(paths: 'List[str]') -> str:\n \"\"\"\n Get the common parent directory of multiple paths.\n\n Python 3.5+ includes os.path.commonpath which does this, however Sublime\n currently embeds Python 3.3.\n \"\"\"\n return os.path.commonprefix([path + '/' for path in paths]).rstrip('/')\n\n\ndef is_in_workspace(window: 'Any', file_path: str) -> bool:\n workspace_path = get_project_path(window)\n if workspace_path is None:\n return False\n\n common_dir = get_common_parent([workspace_path, file_path])\n return workspace_path == common_dir\n\n\ndef enable_in_project(window, config_name: str) -> None:\n project_data = window.project_data()\n if isinstance(project_data, dict):\n project_settings = project_data.setdefault('settings', dict())\n project_lsp_settings = project_settings.setdefault('LSP', dict())\n project_client_settings = project_lsp_settings.setdefault(config_name, dict())\n project_client_settings['enabled'] = True\n window.set_project_data(project_data)\n else:\n debug('non-dict returned in project_settings: ', project_data)\n\n\ndef disable_in_project(window, config_name: str) -> None:\n project_data = window.project_data()\n if isinstance(project_data, dict):\n project_settings = project_data.setdefault('settings', dict())\n project_lsp_settings = project_settings.setdefault('LSP', dict())\n project_client_settings = project_lsp_settings.setdefault(config_name, dict())\n project_client_settings['enabled'] = False\n window.set_project_data(project_data)\n else:\n debug('non-dict returned in project_settings: ', project_data)\n\n\ndef get_project_config(window: 'Any') -> dict:\n project_data = window.project_data() or dict()\n if isinstance(project_data, dict):\n project_settings = project_data.setdefault('settings', dict())\n project_lsp_settings = project_settings.setdefault('LSP', dict())\n return 
project_lsp_settings\n else:\n debug('non-dict returned in project_settings: ', project_data)\n return dict()\n", "path": "plugin/core/workspace.py"}]} | 1,430 | 764 |
gh_patches_debug_13955 | rasdani/github-patches | git_diff | saleor__saleor-3337 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
The fetch vat rates button should not be a GET method
### What I'm trying to achieve
Stop allowing plain GET requests to trigger the VAT rate fetch.
### Steps to reproduce the problem
1. Go to Configuration -> Taxes;
2. Observe that the "Fetch tax rates" button issues a plain GET request.
### What I expected to happen
The fetch to happen via a POST instead of a GET, which is safer against CSRF-style attacks.
### Describe a proposed solution
Replace the link-style button on the dashboard with a form submit button or a modal.
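
A minimal sketch of what that could look like on the view side (assuming Django's stock `require_POST` decorator; the existing decorators and body stay as they are):

```python
from django.views.decorators.http import require_POST

@staff_member_required
@require_POST  # GET requests now receive 405 Method Not Allowed
@permission_required('site.manage_settings')
def fetch_tax_rates(request):
    ...  # unchanged: call_command('get_vat_rates'), then redirect
```

The dashboard template would then wrap the button in a small POST form (with the CSRF token) instead of rendering a plain link.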
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `saleor/dashboard/taxes/views.py`
Content:
```
1 import logging
2
3 from django.conf import settings
4 from django.contrib import messages
5 from django.contrib.auth.decorators import permission_required
6 from django.core.exceptions import ImproperlyConfigured
7 from django.core.management import call_command
8 from django.shortcuts import get_object_or_404, redirect
9 from django.template.response import TemplateResponse
10 from django.utils.translation import pgettext_lazy
11 from django_countries.fields import Country
12 from django_prices_vatlayer.models import VAT
13
14 from ...core import TaxRateType
15 from ...core.utils import get_paginator_items
16 from ...core.utils.taxes import get_taxes_for_country
17 from ...dashboard.taxes.filters import TaxFilter
18 from ...dashboard.taxes.forms import TaxesConfigurationForm
19 from ...dashboard.views import staff_member_required
20
21 logger = logging.getLogger(__name__)
22
23
24 @staff_member_required
25 def tax_list(request):
26 taxes = VAT.objects.order_by('country_code')
27 tax_filter = TaxFilter(request.GET, queryset=taxes)
28 taxes = get_paginator_items(
29 tax_filter.qs, settings.DASHBOARD_PAGINATE_BY, request.GET.get('page'))
30 ctx = {
31 'taxes': taxes, 'filter_set': tax_filter,
32 'is_empty': not tax_filter.queryset.exists()}
33 return TemplateResponse(request, 'dashboard/taxes/list.html', ctx)
34
35
36 @staff_member_required
37 def tax_details(request, country_code):
38 tax = get_object_or_404(VAT, country_code=country_code)
39 tax_rates = get_taxes_for_country(Country(country_code))
40 translations = dict(TaxRateType.CHOICES)
41 tax_rates = [
42 (translations.get(rate_name, rate_name), tax['value'])
43 for rate_name, tax in tax_rates.items()]
44 ctx = {'tax': tax, 'tax_rates': sorted(tax_rates)}
45 return TemplateResponse(request, 'dashboard/taxes/details.html', ctx)
46
47
48 @staff_member_required
49 @permission_required('site.manage_settings')
50 def configure_taxes(request):
51 site_settings = request.site.settings
52 taxes_form = TaxesConfigurationForm(
53 request.POST or None, instance=site_settings)
54 if taxes_form.is_valid():
55 taxes_form.save()
56 msg = pgettext_lazy('Dashboard message', 'Updated taxes settings')
57 messages.success(request, msg)
58 return redirect('dashboard:taxes')
59 ctx = {'site': site_settings, 'taxes_form': taxes_form}
60 return TemplateResponse(request, 'dashboard/taxes/form.html', ctx)
61
62
63 @staff_member_required
64 @permission_required('site.manage_settings')
65 def fetch_tax_rates(request):
66 try:
67 call_command('get_vat_rates')
68 msg = pgettext_lazy(
69 'Dashboard message', 'Tax rates updated successfully')
70 messages.success(request, msg)
71 except ImproperlyConfigured as exc:
72 logger.exception(exc)
73 msg = pgettext_lazy(
74 'Dashboard message',
75 'Could not fetch tax rates. '
76 'Make sure you have supplied a valid API Access Key.<br/>'
77 'Check the server logs for more information about this error.')
78 messages.warning(request, msg)
79 return redirect('dashboard:taxes')
80
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/saleor/dashboard/taxes/views.py b/saleor/dashboard/taxes/views.py
--- a/saleor/dashboard/taxes/views.py
+++ b/saleor/dashboard/taxes/views.py
@@ -8,6 +8,7 @@
from django.shortcuts import get_object_or_404, redirect
from django.template.response import TemplateResponse
from django.utils.translation import pgettext_lazy
+from django.views.decorators.http import require_POST
from django_countries.fields import Country
from django_prices_vatlayer.models import VAT
@@ -61,6 +62,7 @@
@staff_member_required
+@require_POST
@permission_required('site.manage_settings')
def fetch_tax_rates(request):
try:
| {"golden_diff": "diff --git a/saleor/dashboard/taxes/views.py b/saleor/dashboard/taxes/views.py\n--- a/saleor/dashboard/taxes/views.py\n+++ b/saleor/dashboard/taxes/views.py\n@@ -8,6 +8,7 @@\n from django.shortcuts import get_object_or_404, redirect\n from django.template.response import TemplateResponse\n from django.utils.translation import pgettext_lazy\n+from django.views.decorators.http import require_POST\n from django_countries.fields import Country\n from django_prices_vatlayer.models import VAT\n \n@@ -61,6 +62,7 @@\n \n \n @staff_member_required\n+@require_POST\n @permission_required('site.manage_settings')\n def fetch_tax_rates(request):\n try:\n", "issue": "The fetch vat rates button should not be a GET method\n### What I'm trying to achieve\r\nNot to allow GET methods to fetch vat rates.\r\n\r\n### Steps to reproduce the problem\r\n1. Go to configuration -> Taxes ;\r\n2. The fetch tax rates button, is a GET button.\r\n\r\n### What I expected to happen\r\nGet a POST instead of a GET, which is safer against attacks.\r\n\r\n### Describe a proposed solution\r\nDrop the button link on the dashboard for a submit button or a modal.\r\n\n", "before_files": [{"content": "import logging\n\nfrom django.conf import settings\nfrom django.contrib import messages\nfrom django.contrib.auth.decorators import permission_required\nfrom django.core.exceptions import ImproperlyConfigured\nfrom django.core.management import call_command\nfrom django.shortcuts import get_object_or_404, redirect\nfrom django.template.response import TemplateResponse\nfrom django.utils.translation import pgettext_lazy\nfrom django_countries.fields import Country\nfrom django_prices_vatlayer.models import VAT\n\nfrom ...core import TaxRateType\nfrom ...core.utils import get_paginator_items\nfrom ...core.utils.taxes import get_taxes_for_country\nfrom ...dashboard.taxes.filters import TaxFilter\nfrom ...dashboard.taxes.forms import TaxesConfigurationForm\nfrom ...dashboard.views import staff_member_required\n\nlogger = logging.getLogger(__name__)\n\n\n@staff_member_required\ndef tax_list(request):\n taxes = VAT.objects.order_by('country_code')\n tax_filter = TaxFilter(request.GET, queryset=taxes)\n taxes = get_paginator_items(\n tax_filter.qs, settings.DASHBOARD_PAGINATE_BY, request.GET.get('page'))\n ctx = {\n 'taxes': taxes, 'filter_set': tax_filter,\n 'is_empty': not tax_filter.queryset.exists()}\n return TemplateResponse(request, 'dashboard/taxes/list.html', ctx)\n\n\n@staff_member_required\ndef tax_details(request, country_code):\n tax = get_object_or_404(VAT, country_code=country_code)\n tax_rates = get_taxes_for_country(Country(country_code))\n translations = dict(TaxRateType.CHOICES)\n tax_rates = [\n (translations.get(rate_name, rate_name), tax['value'])\n for rate_name, tax in tax_rates.items()]\n ctx = {'tax': tax, 'tax_rates': sorted(tax_rates)}\n return TemplateResponse(request, 'dashboard/taxes/details.html', ctx)\n\n\n@staff_member_required\n@permission_required('site.manage_settings')\ndef configure_taxes(request):\n site_settings = request.site.settings\n taxes_form = TaxesConfigurationForm(\n request.POST or None, instance=site_settings)\n if taxes_form.is_valid():\n taxes_form.save()\n msg = pgettext_lazy('Dashboard message', 'Updated taxes settings')\n messages.success(request, msg)\n return redirect('dashboard:taxes')\n ctx = {'site': site_settings, 'taxes_form': taxes_form}\n return TemplateResponse(request, 'dashboard/taxes/form.html', 
ctx)\n\n\n@staff_member_required\n@permission_required('site.manage_settings')\ndef fetch_tax_rates(request):\n try:\n call_command('get_vat_rates')\n msg = pgettext_lazy(\n 'Dashboard message', 'Tax rates updated successfully')\n messages.success(request, msg)\n except ImproperlyConfigured as exc:\n logger.exception(exc)\n msg = pgettext_lazy(\n 'Dashboard message',\n 'Could not fetch tax rates. '\n 'Make sure you have supplied a valid API Access Key.<br/>'\n 'Check the server logs for more information about this error.')\n messages.warning(request, msg)\n return redirect('dashboard:taxes')\n", "path": "saleor/dashboard/taxes/views.py"}], "after_files": [{"content": "import logging\n\nfrom django.conf import settings\nfrom django.contrib import messages\nfrom django.contrib.auth.decorators import permission_required\nfrom django.core.exceptions import ImproperlyConfigured\nfrom django.core.management import call_command\nfrom django.shortcuts import get_object_or_404, redirect\nfrom django.template.response import TemplateResponse\nfrom django.utils.translation import pgettext_lazy\nfrom django.views.decorators.http import require_POST\nfrom django_countries.fields import Country\nfrom django_prices_vatlayer.models import VAT\n\nfrom ...core import TaxRateType\nfrom ...core.utils import get_paginator_items\nfrom ...core.utils.taxes import get_taxes_for_country\nfrom ...dashboard.taxes.filters import TaxFilter\nfrom ...dashboard.taxes.forms import TaxesConfigurationForm\nfrom ...dashboard.views import staff_member_required\n\nlogger = logging.getLogger(__name__)\n\n\n@staff_member_required\ndef tax_list(request):\n taxes = VAT.objects.order_by('country_code')\n tax_filter = TaxFilter(request.GET, queryset=taxes)\n taxes = get_paginator_items(\n tax_filter.qs, settings.DASHBOARD_PAGINATE_BY, request.GET.get('page'))\n ctx = {\n 'taxes': taxes, 'filter_set': tax_filter,\n 'is_empty': not tax_filter.queryset.exists()}\n return TemplateResponse(request, 'dashboard/taxes/list.html', ctx)\n\n\n@staff_member_required\ndef tax_details(request, country_code):\n tax = get_object_or_404(VAT, country_code=country_code)\n tax_rates = get_taxes_for_country(Country(country_code))\n translations = dict(TaxRateType.CHOICES)\n tax_rates = [\n (translations.get(rate_name, rate_name), tax['value'])\n for rate_name, tax in tax_rates.items()]\n ctx = {'tax': tax, 'tax_rates': sorted(tax_rates)}\n return TemplateResponse(request, 'dashboard/taxes/details.html', ctx)\n\n\n@staff_member_required\n@permission_required('site.manage_settings')\ndef configure_taxes(request):\n site_settings = request.site.settings\n taxes_form = TaxesConfigurationForm(\n request.POST or None, instance=site_settings)\n if taxes_form.is_valid():\n taxes_form.save()\n msg = pgettext_lazy('Dashboard message', 'Updated taxes settings')\n messages.success(request, msg)\n return redirect('dashboard:taxes')\n ctx = {'site': site_settings, 'taxes_form': taxes_form}\n return TemplateResponse(request, 'dashboard/taxes/form.html', ctx)\n\n\n@staff_member_required\n@require_POST\n@permission_required('site.manage_settings')\ndef fetch_tax_rates(request):\n try:\n call_command('get_vat_rates')\n msg = pgettext_lazy(\n 'Dashboard message', 'Tax rates updated successfully')\n messages.success(request, msg)\n except ImproperlyConfigured as exc:\n logger.exception(exc)\n msg = pgettext_lazy(\n 'Dashboard message',\n 'Could not fetch tax rates. 
'\n 'Make sure you have supplied a valid API Access Key.<br/>'\n 'Check the server logs for more information about this error.')\n messages.warning(request, msg)\n return redirect('dashboard:taxes')\n", "path": "saleor/dashboard/taxes/views.py"}]} | 1,171 | 152 |
gh_patches_debug_1474 | rasdani/github-patches | git_diff | ray-project__ray-9429 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[rllib] MARWIL tuned cartpole example (and my own experiments) produce nan rewards only.
### What is the problem? + Reproduction
I have a custom example that produces offline data and picks it up with MARWIL for training. I observed that I get `nan` reward values for my example every time, so I went a step back and used your cartpole example:
https://github.com/ray-project/ray/blob/cd5a207d69cdaf05b47d956c18e89d928585eec7/rllib/tuned_examples/marwil/cartpole-marwil.yaml
I'm following the exact steps there, i.e. first run
```
./train.py --run=PPO --env=CartPole-v0 \
--stop='{"timesteps_total": 50000}' \
--config='{"output": "/tmp/out", "batch_mode": "complete_episodes"}'
```
followed by
```
rllib train -f cartpole-marwil.yaml
```
I did this both on my currently preferred stable version `0.8.5` and on the `0.9.0.dev0` wheel. The result is this:
```
== Status ==
Memory usage on this node: 19.4/32.0 GiB
Using FIFO scheduling algorithm.
Resources requested: 0/12 CPUs, 0/0 GPUs, 0.0/9.96 GiB heap, 0.0/3.42 GiB objects
Result logdir: /Users/maxpumperla/ray_results/cartpole-marwil
Number of trials: 2 (2 TERMINATED)
+--------------------------------+------------+-------+--------+--------+------------------+--------+----------+
| Trial name | status | loc | beta | iter | total time (s) | ts | reward |
|--------------------------------+------------+-------+--------+--------+------------------+--------+----------|
| MARWIL_CartPole-v0_7af06_00000 | TERMINATED | | 0 | 2206 | 58.5661 | 500007 | nan |
| MARWIL_CartPole-v0_7af06_00001 | TERMINATED | | 1 | 2248 | 58.6117 | 500286 | nan |
+--------------------------------+------------+-------+--------+--------+------------------+--------+----------+
```
Also, I've noticed that your MARWIL unit test is a pure smoke test and doesn't check reward values, but I didn't run that locally. Maybe it produces nan values as well.
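
One thing worth double-checking when reproducing, sketched below: that MARWIL is actually being pointed at the directory the PPO run wrote. RLlib's JSON output writer drops `*.json` episode files into the `output` directory, so an empty or mistyped `input` path would leave it with nothing to train on (paths here assume the commands above):

```python
import glob

# Sanity check before training: PPO's offline output should contain JSON files.
files = glob.glob("/tmp/out/*.json")
assert files, "no offline data found under /tmp/out"
print(f"found {len(files)} offline data files")
```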
In any case I'd appreciate any input here, as we'd love to use MARWIL for our "real" use case, in which we see the same behaviour.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `rllib/examples/custom_loss.py`
Content:
```
1 """Example of using custom_loss() with an imitation learning loss.
2
3 The default input file is too small to learn a good policy, but you can
4 generate new experiences for IL training as follows:
5
6 To generate experiences:
7 $ ./train.py --run=PG --config='{"output": "/tmp/cartpole"}' --env=CartPole-v0
8
9 To train on experiences with joint PG + IL loss:
10 $ python custom_loss.py --input-files=/tmp/cartpole
11 """
12
13 import argparse
14 from pathlib import Path
15 import os
16
17 import ray
18 from ray import tune
19 from ray.rllib.examples.models.custom_loss_model import CustomLossModel, \
20 TorchCustomLossModel
21 from ray.rllib.models import ModelCatalog
22 from ray.rllib.utils.framework import try_import_tf
23
24 tf1, tf, tfv = try_import_tf()
25
26 parser = argparse.ArgumentParser()
27 parser.add_argument("--torch", action="store_true")
28 parser.add_argument("--stop-iters", type=int, default=200)
29 parser.add_argument(
30 "--input-files",
31 type=str,
32 default=os.path.join(
33 os.path.dirname(os.path.abspath(__file__)),
34 "../tests/data/cartpole_small"))
35
36 if __name__ == "__main__":
37 ray.init()
38 args = parser.parse_args()
39
40 # Bazel makes it hard to find files specified in `args` (and `data`).
41 # Look for them here.
42 if not os.path.exists(args.input_files):
43 # This script runs in the ray/rllib/examples dir.
44 rllib_dir = Path(__file__).parent.parent
45 input_dir = rllib_dir.absolute().joinpath(args.input_files)
46 args.input_files = str(input_dir)
47
48 ModelCatalog.register_custom_model(
49 "custom_loss", TorchCustomLossModel if args.torch else CustomLossModel)
50
51 config = {
52 "env": "CartPole-v0",
53 "num_workers": 0,
54 "model": {
55 "custom_model": "custom_loss",
56 "custom_model_config": {
57 "input_files": args.input_files,
58 },
59 },
60 "framework": "torch" if args.torch else "tf",
61 }
62
63 stop = {
64 "training_iteration": args.stop_iters,
65 }
66
67 tune.run("PG", config=config, stop=stop)
68
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/rllib/examples/custom_loss.py b/rllib/examples/custom_loss.py
--- a/rllib/examples/custom_loss.py
+++ b/rllib/examples/custom_loss.py
@@ -31,7 +31,7 @@
type=str,
default=os.path.join(
os.path.dirname(os.path.abspath(__file__)),
- "../tests/data/cartpole_small"))
+ "../tests/data/cartpole/small"))
if __name__ == "__main__":
ray.init()
| {"golden_diff": "diff --git a/rllib/examples/custom_loss.py b/rllib/examples/custom_loss.py\n--- a/rllib/examples/custom_loss.py\n+++ b/rllib/examples/custom_loss.py\n@@ -31,7 +31,7 @@\n type=str,\n default=os.path.join(\n os.path.dirname(os.path.abspath(__file__)),\n- \"../tests/data/cartpole_small\"))\n+ \"../tests/data/cartpole/small\"))\n \n if __name__ == \"__main__\":\n ray.init()\n", "issue": "[rllib] MARWIL tuned cartpole example (and my own experiments) produce nan rewards only.\n<!--Please include [tune], [rllib], [autoscaler] etc. in the issue title if relevant-->\r\n\r\n### What is the problem? + Reproduction\r\n\r\nI have a custom example that produces offline data and picks it up with MARWIL for training. I observed that I get `nan` reward values for my example every time, so I went a step back and used your cartpole example:\r\n\r\nhttps://github.com/ray-project/ray/blob/cd5a207d69cdaf05b47d956c18e89d928585eec7/rllib/tuned_examples/marwil/cartpole-marwil.yaml\r\n\r\nI'm following the exact steps there, i.e. first run \r\n\r\n```\r\n./train.py --run=PPO --env=CartPole-v0 \\\r\n --stop='{\"timesteps_total\": 50000}' \\\r\n --config='{\"output\": \"/tmp/out\", \"batch_mode\": \"complete_episodes\"}'\r\n```\r\n\r\nfollowed by \r\n\r\n```\r\nrllib train -f cartpole-marwil.yaml\r\n```\r\n\r\nI did this both on my currently preferred stable version `0.8.5`, as well as on the `0.9.0.dev0` wheel. The result is this:\r\n\r\n```\r\n== Status ==\r\nMemory usage on this node: 19.4/32.0 GiB\r\nUsing FIFO scheduling algorithm.\r\nResources requested: 0/12 CPUs, 0/0 GPUs, 0.0/9.96 GiB heap, 0.0/3.42 GiB objects\r\nResult logdir: /Users/maxpumperla/ray_results/cartpole-marwil\r\nNumber of trials: 2 (2 TERMINATED)\r\n+--------------------------------+------------+-------+--------+--------+------------------+--------+----------+\r\n| Trial name | status | loc | beta | iter | total time (s) | ts | reward |\r\n|--------------------------------+------------+-------+--------+--------+------------------+--------+----------|\r\n| MARWIL_CartPole-v0_7af06_00000 | TERMINATED | | 0 | 2206 | 58.5661 | 500007 | nan |\r\n| MARWIL_CartPole-v0_7af06_00001 | TERMINATED | | 1 | 2248 | 58.6117 | 500286 | nan |\r\n+--------------------------------+------------+-------+--------+--------+------------------+--------+----------+\r\n```\r\n\r\nAlso, I've noticed that your MARWIL unit test is a pure smoke test and doesn't check reward values, but I didn't run that locally. 
Maybe it produces nan values as well.\r\n\r\nIn any case I'd appreciate any input here, as we'd love to use MARWIL for our \"real\" use case, in which we see the same behaviour.\n", "before_files": [{"content": "\"\"\"Example of using custom_loss() with an imitation learning loss.\n\nThe default input file is too small to learn a good policy, but you can\ngenerate new experiences for IL training as follows:\n\nTo generate experiences:\n$ ./train.py --run=PG --config='{\"output\": \"/tmp/cartpole\"}' --env=CartPole-v0\n\nTo train on experiences with joint PG + IL loss:\n$ python custom_loss.py --input-files=/tmp/cartpole\n\"\"\"\n\nimport argparse\nfrom pathlib import Path\nimport os\n\nimport ray\nfrom ray import tune\nfrom ray.rllib.examples.models.custom_loss_model import CustomLossModel, \\\n TorchCustomLossModel\nfrom ray.rllib.models import ModelCatalog\nfrom ray.rllib.utils.framework import try_import_tf\n\ntf1, tf, tfv = try_import_tf()\n\nparser = argparse.ArgumentParser()\nparser.add_argument(\"--torch\", action=\"store_true\")\nparser.add_argument(\"--stop-iters\", type=int, default=200)\nparser.add_argument(\n \"--input-files\",\n type=str,\n default=os.path.join(\n os.path.dirname(os.path.abspath(__file__)),\n \"../tests/data/cartpole_small\"))\n\nif __name__ == \"__main__\":\n ray.init()\n args = parser.parse_args()\n\n # Bazel makes it hard to find files specified in `args` (and `data`).\n # Look for them here.\n if not os.path.exists(args.input_files):\n # This script runs in the ray/rllib/examples dir.\n rllib_dir = Path(__file__).parent.parent\n input_dir = rllib_dir.absolute().joinpath(args.input_files)\n args.input_files = str(input_dir)\n\n ModelCatalog.register_custom_model(\n \"custom_loss\", TorchCustomLossModel if args.torch else CustomLossModel)\n\n config = {\n \"env\": \"CartPole-v0\",\n \"num_workers\": 0,\n \"model\": {\n \"custom_model\": \"custom_loss\",\n \"custom_model_config\": {\n \"input_files\": args.input_files,\n },\n },\n \"framework\": \"torch\" if args.torch else \"tf\",\n }\n\n stop = {\n \"training_iteration\": args.stop_iters,\n }\n\n tune.run(\"PG\", config=config, stop=stop)\n", "path": "rllib/examples/custom_loss.py"}], "after_files": [{"content": "\"\"\"Example of using custom_loss() with an imitation learning loss.\n\nThe default input file is too small to learn a good policy, but you can\ngenerate new experiences for IL training as follows:\n\nTo generate experiences:\n$ ./train.py --run=PG --config='{\"output\": \"/tmp/cartpole\"}' --env=CartPole-v0\n\nTo train on experiences with joint PG + IL loss:\n$ python custom_loss.py --input-files=/tmp/cartpole\n\"\"\"\n\nimport argparse\nfrom pathlib import Path\nimport os\n\nimport ray\nfrom ray import tune\nfrom ray.rllib.examples.models.custom_loss_model import CustomLossModel, \\\n TorchCustomLossModel\nfrom ray.rllib.models import ModelCatalog\nfrom ray.rllib.utils.framework import try_import_tf\n\ntf1, tf, tfv = try_import_tf()\n\nparser = argparse.ArgumentParser()\nparser.add_argument(\"--torch\", action=\"store_true\")\nparser.add_argument(\"--stop-iters\", type=int, default=200)\nparser.add_argument(\n \"--input-files\",\n type=str,\n default=os.path.join(\n os.path.dirname(os.path.abspath(__file__)),\n \"../tests/data/cartpole/small\"))\n\nif __name__ == \"__main__\":\n ray.init()\n args = parser.parse_args()\n\n # Bazel makes it hard to find files specified in `args` (and `data`).\n # Look for them here.\n if not os.path.exists(args.input_files):\n # This script runs in the 
ray/rllib/examples dir.\n rllib_dir = Path(__file__).parent.parent\n input_dir = rllib_dir.absolute().joinpath(args.input_files)\n args.input_files = str(input_dir)\n\n ModelCatalog.register_custom_model(\n \"custom_loss\", TorchCustomLossModel if args.torch else CustomLossModel)\n\n config = {\n \"env\": \"CartPole-v0\",\n \"num_workers\": 0,\n \"model\": {\n \"custom_model\": \"custom_loss\",\n \"custom_model_config\": {\n \"input_files\": args.input_files,\n },\n },\n \"framework\": \"torch\" if args.torch else \"tf\",\n }\n\n stop = {\n \"training_iteration\": args.stop_iters,\n }\n\n tune.run(\"PG\", config=config, stop=stop)\n", "path": "rllib/examples/custom_loss.py"}]} | 1,545 | 100 |
gh_patches_debug_57313 | rasdani/github-patches | git_diff | vllm-project__vllm-3129 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[v0.3.3] Release Tracker
**ETA**: Feb 29th - Mar 1st
## Major changes
* StarCoder2 support
* Performance optimization and LoRA support for Gemma
* Performance optimization for MoE kernel
* 2/3/8-bit GPTQ support
* [Experimental] AWS Inferentia2 support
## PRs to be merged before the release
- [x] #2330 #2223
- [ ] ~~#2761~~
- [x] #2819
- [x] #3087 #3099
- [x] #3089
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `vllm/__init__.py`
Content:
```
1 """vLLM: a high-throughput and memory-efficient inference engine for LLMs"""
2
3 from vllm.engine.arg_utils import AsyncEngineArgs, EngineArgs
4 from vllm.engine.async_llm_engine import AsyncLLMEngine
5 from vllm.engine.llm_engine import LLMEngine
6 from vllm.engine.ray_utils import initialize_cluster
7 from vllm.entrypoints.llm import LLM
8 from vllm.outputs import CompletionOutput, RequestOutput
9 from vllm.sampling_params import SamplingParams
10
11 __version__ = "0.3.2"
12
13 __all__ = [
14 "LLM",
15 "SamplingParams",
16 "RequestOutput",
17 "CompletionOutput",
18 "LLMEngine",
19 "EngineArgs",
20 "AsyncLLMEngine",
21 "AsyncEngineArgs",
22 "initialize_cluster",
23 ]
24
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/vllm/__init__.py b/vllm/__init__.py
--- a/vllm/__init__.py
+++ b/vllm/__init__.py
@@ -8,7 +8,7 @@
from vllm.outputs import CompletionOutput, RequestOutput
from vllm.sampling_params import SamplingParams
-__version__ = "0.3.2"
+__version__ = "0.3.3"
__all__ = [
"LLM",
| {"golden_diff": "diff --git a/vllm/__init__.py b/vllm/__init__.py\n--- a/vllm/__init__.py\n+++ b/vllm/__init__.py\n@@ -8,7 +8,7 @@\n from vllm.outputs import CompletionOutput, RequestOutput\n from vllm.sampling_params import SamplingParams\n \n-__version__ = \"0.3.2\"\n+__version__ = \"0.3.3\"\n \n __all__ = [\n \"LLM\",\n", "issue": "[v0.3.3] Release Tracker\n**ETA**: Feb 29th - Mar 1st\r\n\r\n## Major changes\r\n\r\n* StarCoder2 support\r\n* Performance optimization and LoRA support for Gemma\r\n* Performance optimization for MoE kernel\r\n* 2/3/8-bit GPTQ support\r\n* [Experimental] AWS Inferentia2 support\r\n\r\n## PRs to be merged before the release\r\n\r\n- [x] #2330 #2223\r\n- [ ] ~~#2761~~\r\n- [x] #2819 \r\n- [x] #3087 #3099\r\n- [x] #3089 \n", "before_files": [{"content": "\"\"\"vLLM: a high-throughput and memory-efficient inference engine for LLMs\"\"\"\n\nfrom vllm.engine.arg_utils import AsyncEngineArgs, EngineArgs\nfrom vllm.engine.async_llm_engine import AsyncLLMEngine\nfrom vllm.engine.llm_engine import LLMEngine\nfrom vllm.engine.ray_utils import initialize_cluster\nfrom vllm.entrypoints.llm import LLM\nfrom vllm.outputs import CompletionOutput, RequestOutput\nfrom vllm.sampling_params import SamplingParams\n\n__version__ = \"0.3.2\"\n\n__all__ = [\n \"LLM\",\n \"SamplingParams\",\n \"RequestOutput\",\n \"CompletionOutput\",\n \"LLMEngine\",\n \"EngineArgs\",\n \"AsyncLLMEngine\",\n \"AsyncEngineArgs\",\n \"initialize_cluster\",\n]\n", "path": "vllm/__init__.py"}], "after_files": [{"content": "\"\"\"vLLM: a high-throughput and memory-efficient inference engine for LLMs\"\"\"\n\nfrom vllm.engine.arg_utils import AsyncEngineArgs, EngineArgs\nfrom vllm.engine.async_llm_engine import AsyncLLMEngine\nfrom vllm.engine.llm_engine import LLMEngine\nfrom vllm.engine.ray_utils import initialize_cluster\nfrom vllm.entrypoints.llm import LLM\nfrom vllm.outputs import CompletionOutput, RequestOutput\nfrom vllm.sampling_params import SamplingParams\n\n__version__ = \"0.3.3\"\n\n__all__ = [\n \"LLM\",\n \"SamplingParams\",\n \"RequestOutput\",\n \"CompletionOutput\",\n \"LLMEngine\",\n \"EngineArgs\",\n \"AsyncLLMEngine\",\n \"AsyncEngineArgs\",\n \"initialize_cluster\",\n]\n", "path": "vllm/__init__.py"}]} | 626 | 108 |
gh_patches_debug_33139 | rasdani/github-patches | git_diff | open-telemetry__opentelemetry-python-contrib-2535 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Baggage span processor - key predicate
This issue is to track adding a way of selecting which baggage keys should be copied.
Feedback in the JS contrib PR was to allow a user-provided predicate function. This puts the responsibility on the user to ensure sensitive baggage keys are not copied while also not prescribing how that is determined.
- https://github.com/open-telemetry/opentelemetry-js-contrib/issues/2166
We had similar feedback in the .NET contrib project but thought it was more complicated than just using a set of prefixes, so we created an issue to continue the discussion. The plain processor that copies all baggage entries (like using `*` in your example) is likely to be accepted first.
- https://github.com/open-telemetry/opentelemetry-dotnet-contrib/issues/1695
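
Concretely, the predicate under discussion can be as small as a `Callable[[str], bool]`. A rough sketch of how it might be used (names are illustrative of the approach, not a settled API; `ALLOW_ALL_BAGGAGE_KEYS` is the copy-everything case and a prefix check is one possible user-supplied policy):

```python
from opentelemetry.processor.baggage import (
    ALLOW_ALL_BAGGAGE_KEYS,
    BaggageSpanProcessor,
)

# Copy every baggage entry onto spans (the "*" behaviour).
copy_all = BaggageSpanProcessor(ALLOW_ALL_BAGGAGE_KEYS)

# Or let the user decide, e.g. only keys under a known-safe namespace.
copy_some = BaggageSpanProcessor(lambda key: key.startswith("myapp."))
```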
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `processor/opentelemetry-processor-baggage/src/opentelemetry/processor/baggage/processor.py`
Content:
```
1 # Copyright The OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from typing import Optional
16
17 from opentelemetry.baggage import get_all as get_all_baggage
18 from opentelemetry.context import Context
19 from opentelemetry.sdk.trace.export import SpanProcessor
20 from opentelemetry.trace import Span
21
22
23 class BaggageSpanProcessor(SpanProcessor):
24 """
25 The BaggageSpanProcessor reads entries stored in Baggage
26 from the parent context and adds the baggage entries' keys and
27 values to the span as attributes on span start.
28
29 Add this span processor to a tracer provider.
30
31 Keys and values added to Baggage will appear on subsequent child
32 spans for a trace within this service *and* be propagated to external
33 services in accordance with any configured propagation formats
34 configured. If the external services also have a Baggage span
35 processor, the keys and values will appear in those child spans as
36 well.
37
38 ⚠ Warning ⚠️
39
40 Do not put sensitive information in Baggage.
41
42 To repeat: a consequence of adding data to Baggage is that the keys and
43 values will appear in all outgoing HTTP headers from the application.
44
45 """
46
47 def __init__(self) -> None:
48 pass
49
50 def on_start(
51 self, span: "Span", parent_context: Optional[Context] = None
52 ) -> None:
53 baggage = get_all_baggage(parent_context)
54 for key, value in baggage.items():
55 span.set_attribute(key, value)
56
```
Path: `processor/opentelemetry-processor-baggage/src/opentelemetry/processor/baggage/__init__.py`
Content:
```
1 # Copyright The OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 # pylint: disable=import-error
16
17 from .processor import BaggageSpanProcessor
18 from .version import __version__
19
20 __all__ = ["BaggageSpanProcessor", "__version__"]
21
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/processor/opentelemetry-processor-baggage/src/opentelemetry/processor/baggage/__init__.py b/processor/opentelemetry-processor-baggage/src/opentelemetry/processor/baggage/__init__.py
--- a/processor/opentelemetry-processor-baggage/src/opentelemetry/processor/baggage/__init__.py
+++ b/processor/opentelemetry-processor-baggage/src/opentelemetry/processor/baggage/__init__.py
@@ -14,7 +14,7 @@
# pylint: disable=import-error
-from .processor import BaggageSpanProcessor
+from .processor import ALLOW_ALL_BAGGAGE_KEYS, BaggageSpanProcessor
from .version import __version__
-__all__ = ["BaggageSpanProcessor", "__version__"]
+__all__ = ["ALLOW_ALL_BAGGAGE_KEYS", "BaggageSpanProcessor", "__version__"]
diff --git a/processor/opentelemetry-processor-baggage/src/opentelemetry/processor/baggage/processor.py b/processor/opentelemetry-processor-baggage/src/opentelemetry/processor/baggage/processor.py
--- a/processor/opentelemetry-processor-baggage/src/opentelemetry/processor/baggage/processor.py
+++ b/processor/opentelemetry-processor-baggage/src/opentelemetry/processor/baggage/processor.py
@@ -12,13 +12,19 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from typing import Optional
+from typing import Callable, Optional
from opentelemetry.baggage import get_all as get_all_baggage
from opentelemetry.context import Context
from opentelemetry.sdk.trace.export import SpanProcessor
from opentelemetry.trace import Span
+# A BaggageKeyPredicate is a function that takes a baggage key and returns a boolean
+BaggageKeyPredicateT = Callable[[str], bool]
+
+# A BaggageKeyPredicate that always returns True, allowing all baggage keys to be added to spans
+ALLOW_ALL_BAGGAGE_KEYS: BaggageKeyPredicateT = lambda _: True
+
class BaggageSpanProcessor(SpanProcessor):
"""
@@ -44,12 +50,13 @@
"""
- def __init__(self) -> None:
- pass
+ def __init__(self, baggage_key_predicate: BaggageKeyPredicateT) -> None:
+ self._baggage_key_predicate = baggage_key_predicate
def on_start(
self, span: "Span", parent_context: Optional[Context] = None
) -> None:
baggage = get_all_baggage(parent_context)
for key, value in baggage.items():
- span.set_attribute(key, value)
+ if self._baggage_key_predicate(key):
+ span.set_attribute(key, value)
| {"golden_diff": "diff --git a/processor/opentelemetry-processor-baggage/src/opentelemetry/processor/baggage/__init__.py b/processor/opentelemetry-processor-baggage/src/opentelemetry/processor/baggage/__init__.py\n--- a/processor/opentelemetry-processor-baggage/src/opentelemetry/processor/baggage/__init__.py\n+++ b/processor/opentelemetry-processor-baggage/src/opentelemetry/processor/baggage/__init__.py\n@@ -14,7 +14,7 @@\n \n # pylint: disable=import-error\n \n-from .processor import BaggageSpanProcessor\n+from .processor import ALLOW_ALL_BAGGAGE_KEYS, BaggageSpanProcessor\n from .version import __version__\n \n-__all__ = [\"BaggageSpanProcessor\", \"__version__\"]\n+__all__ = [\"ALLOW_ALL_BAGGAGE_KEYS\", \"BaggageSpanProcessor\", \"__version__\"]\ndiff --git a/processor/opentelemetry-processor-baggage/src/opentelemetry/processor/baggage/processor.py b/processor/opentelemetry-processor-baggage/src/opentelemetry/processor/baggage/processor.py\n--- a/processor/opentelemetry-processor-baggage/src/opentelemetry/processor/baggage/processor.py\n+++ b/processor/opentelemetry-processor-baggage/src/opentelemetry/processor/baggage/processor.py\n@@ -12,13 +12,19 @@\n # See the License for the specific language governing permissions and\n # limitations under the License.\n \n-from typing import Optional\n+from typing import Callable, Optional\n \n from opentelemetry.baggage import get_all as get_all_baggage\n from opentelemetry.context import Context\n from opentelemetry.sdk.trace.export import SpanProcessor\n from opentelemetry.trace import Span\n \n+# A BaggageKeyPredicate is a function that takes a baggage key and returns a boolean\n+BaggageKeyPredicateT = Callable[[str], bool]\n+\n+# A BaggageKeyPredicate that always returns True, allowing all baggage keys to be added to spans\n+ALLOW_ALL_BAGGAGE_KEYS: BaggageKeyPredicateT = lambda _: True\n+\n \n class BaggageSpanProcessor(SpanProcessor):\n \"\"\"\n@@ -44,12 +50,13 @@\n \n \"\"\"\n \n- def __init__(self) -> None:\n- pass\n+ def __init__(self, baggage_key_predicate: BaggageKeyPredicateT) -> None:\n+ self._baggage_key_predicate = baggage_key_predicate\n \n def on_start(\n self, span: \"Span\", parent_context: Optional[Context] = None\n ) -> None:\n baggage = get_all_baggage(parent_context)\n for key, value in baggage.items():\n- span.set_attribute(key, value)\n+ if self._baggage_key_predicate(key):\n+ span.set_attribute(key, value)\n", "issue": "Baggage span processor - key predicate\nThis issue is to track adding a method of selecting what baggage key entries should be copied.\r\n\r\nFeedback in the JS contrib PR was to allow a user-provided predicate function. This puts the responsibility on the user to ensure sensitive baggage keys are not copied while also not prescribing how that is determined.\r\n- https://github.com/open-telemetry/opentelemetry-js-contrib/issues/2166\r\n\r\n\r\nWe had a similar feedback in the .NET contrib project but thought it was more complicated than just using a set of prefixes so created an issue to continue the discussion. 
The plain processor that copies all baggage entries (like using `*` in your example) is likely to be accepted first.\r\n- https://github.com/open-telemetry/opentelemetry-dotnet-contrib/issues/1695\r\n\n", "before_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom typing import Optional\n\nfrom opentelemetry.baggage import get_all as get_all_baggage\nfrom opentelemetry.context import Context\nfrom opentelemetry.sdk.trace.export import SpanProcessor\nfrom opentelemetry.trace import Span\n\n\nclass BaggageSpanProcessor(SpanProcessor):\n \"\"\"\n The BaggageSpanProcessor reads entries stored in Baggage\n from the parent context and adds the baggage entries' keys and\n values to the span as attributes on span start.\n\n Add this span processor to a tracer provider.\n\n Keys and values added to Baggage will appear on subsequent child\n spans for a trace within this service *and* be propagated to external\n services in accordance with any configured propagation formats\n configured. If the external services also have a Baggage span\n processor, the keys and values will appear in those child spans as\n well.\n\n \u26a0 Warning \u26a0\ufe0f\n\n Do not put sensitive information in Baggage.\n\n To repeat: a consequence of adding data to Baggage is that the keys and\n values will appear in all outgoing HTTP headers from the application.\n\n \"\"\"\n\n def __init__(self) -> None:\n pass\n\n def on_start(\n self, span: \"Span\", parent_context: Optional[Context] = None\n ) -> None:\n baggage = get_all_baggage(parent_context)\n for key, value in baggage.items():\n span.set_attribute(key, value)\n", "path": "processor/opentelemetry-processor-baggage/src/opentelemetry/processor/baggage/processor.py"}, {"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# pylint: disable=import-error\n\nfrom .processor import BaggageSpanProcessor\nfrom .version import __version__\n\n__all__ = [\"BaggageSpanProcessor\", \"__version__\"]\n", "path": "processor/opentelemetry-processor-baggage/src/opentelemetry/processor/baggage/__init__.py"}], "after_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# 
distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom typing import Callable, Optional\n\nfrom opentelemetry.baggage import get_all as get_all_baggage\nfrom opentelemetry.context import Context\nfrom opentelemetry.sdk.trace.export import SpanProcessor\nfrom opentelemetry.trace import Span\n\n# A BaggageKeyPredicate is a function that takes a baggage key and returns a boolean\nBaggageKeyPredicateT = Callable[[str], bool]\n\n# A BaggageKeyPredicate that always returns True, allowing all baggage keys to be added to spans\nALLOW_ALL_BAGGAGE_KEYS: BaggageKeyPredicateT = lambda _: True\n\n\nclass BaggageSpanProcessor(SpanProcessor):\n \"\"\"\n The BaggageSpanProcessor reads entries stored in Baggage\n from the parent context and adds the baggage entries' keys and\n values to the span as attributes on span start.\n\n Add this span processor to a tracer provider.\n\n Keys and values added to Baggage will appear on subsequent child\n spans for a trace within this service *and* be propagated to external\n services in accordance with any configured propagation formats\n configured. If the external services also have a Baggage span\n processor, the keys and values will appear in those child spans as\n well.\n\n \u26a0 Warning \u26a0\ufe0f\n\n Do not put sensitive information in Baggage.\n\n To repeat: a consequence of adding data to Baggage is that the keys and\n values will appear in all outgoing HTTP headers from the application.\n\n \"\"\"\n\n def __init__(self, baggage_key_predicate: BaggageKeyPredicateT) -> None:\n self._baggage_key_predicate = baggage_key_predicate\n\n def on_start(\n self, span: \"Span\", parent_context: Optional[Context] = None\n ) -> None:\n baggage = get_all_baggage(parent_context)\n for key, value in baggage.items():\n if self._baggage_key_predicate(key):\n span.set_attribute(key, value)\n", "path": "processor/opentelemetry-processor-baggage/src/opentelemetry/processor/baggage/processor.py"}, {"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# pylint: disable=import-error\n\nfrom .processor import ALLOW_ALL_BAGGAGE_KEYS, BaggageSpanProcessor\nfrom .version import __version__\n\n__all__ = [\"ALLOW_ALL_BAGGAGE_KEYS\", \"BaggageSpanProcessor\", \"__version__\"]\n", "path": "processor/opentelemetry-processor-baggage/src/opentelemetry/processor/baggage/__init__.py"}]} | 1,219 | 617 |
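A minimal usage sketch for the patched processor above, relying only on the two names the golden diff exports (`BaggageSpanProcessor` and `ALLOW_ALL_BAGGAGE_KEYS`) plus the standard SDK tracer provider; the `myapp.public.` prefix in the second variant is purely illustrative:

```python
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.processor.baggage import ALLOW_ALL_BAGGAGE_KEYS, BaggageSpanProcessor

provider = TracerProvider()

# Permissive variant: copy every baggage entry onto spans.
provider.add_span_processor(BaggageSpanProcessor(ALLOW_ALL_BAGGAGE_KEYS))

# Restrictive variant: allow-list a key prefix instead, since baggage may carry
# sensitive data and also travels in outgoing HTTP headers.
# provider.add_span_processor(BaggageSpanProcessor(lambda k: k.startswith("myapp.public.")))
```

Passing the predicate at construction mirrors the JS-contrib feedback quoted in the issue: the caller, not the library, decides which keys are safe to copy.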
gh_patches_debug_19161 | rasdani/github-patches | git_diff | alltheplaces__alltheplaces-5810 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
dm - use national sites
Is it possible to use the national sites for dm stores instead of the German one? The format is `dm.[country code]` for all countries except for Bulgaria, Bosnia and Italy (which use `dm-drogeriemarkt.[country code]`) and Slovakia (`mojadm.sk`).
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `locations/spiders/dm.py`
Content:
```
1 import scrapy
2
3 from locations.categories import Categories, apply_category
4 from locations.dict_parser import DictParser
5 from locations.hours import DAYS, OpeningHours
6
7
8 class DmSpider(scrapy.Spider):
9 name = "dm"
10 item_attributes = {"brand": "dm", "brand_wikidata": "Q266572"}
11 allowed_domains = ["store-data-service.services.dmtech.com"]
12 start_urls = ["https://store-data-service.services.dmtech.com/stores/bbox/89.999,-179.999,-89.999,179.999"]
13
14 @staticmethod
15 def parse_hours(store_hours: [dict]) -> OpeningHours:
16 opening_hours = OpeningHours()
17
18 for store_day in store_hours:
19 for times in store_day["timeRanges"]:
20 open_time = times["opening"]
21 close_time = times["closing"]
22
23 opening_hours.add_range(DAYS[store_day["weekDay"] - 1], open_time, close_time)
24
25 return opening_hours
26
27 def parse(self, response, **kwargs):
28 for location in response.json()["stores"]:
29 location["address"]["street_address"] = location["address"].pop("street")
30 location["address"]["country"] = location["countryCode"]
31 location["name"] = location["address"].get("name")
32 item = DictParser.parse(location)
33 item["website"] = f'https://www.dm.de/store{location["storeUrlPath"]}'
34 item["extras"]["check_date"] = location["updateTimeStamp"]
35 item["opening_hours"] = self.parse_hours(location["openingHours"])
36
37 apply_category(Categories.SHOP_CHEMIST, item)
38
39 yield item
40
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/locations/spiders/dm.py b/locations/spiders/dm.py
--- a/locations/spiders/dm.py
+++ b/locations/spiders/dm.py
@@ -30,7 +30,14 @@
location["address"]["country"] = location["countryCode"]
location["name"] = location["address"].get("name")
item = DictParser.parse(location)
- item["website"] = f'https://www.dm.de/store{location["storeUrlPath"]}'
+ if location["countryCode"] in ["BG", "BA", "IT"]:
+ item[
+ "website"
+ ] = f'https://www.dm-drogeriemarkt.{location["countryCode"].lower()}/store{location["storeUrlPath"]}'
+ elif location["countryCode"] == "SK":
+ item["website"] = f'https://www.mojadm.sk/store{location["storeUrlPath"]}'
+ else:
+ item["website"] = f'https://www.dm.{location["countryCode"].lower()}/store{location["storeUrlPath"]}'
item["extras"]["check_date"] = location["updateTimeStamp"]
item["opening_hours"] = self.parse_hours(location["openingHours"])
| {"golden_diff": "diff --git a/locations/spiders/dm.py b/locations/spiders/dm.py\n--- a/locations/spiders/dm.py\n+++ b/locations/spiders/dm.py\n@@ -30,7 +30,14 @@\n location[\"address\"][\"country\"] = location[\"countryCode\"]\n location[\"name\"] = location[\"address\"].get(\"name\")\n item = DictParser.parse(location)\n- item[\"website\"] = f'https://www.dm.de/store{location[\"storeUrlPath\"]}'\n+ if location[\"countryCode\"] in [\"BG\", \"BA\", \"IT\"]:\n+ item[\n+ \"website\"\n+ ] = f'https://www.dm-drogeriemarkt.{location[\"countryCode\"].lower()}/store{location[\"storeUrlPath\"]}'\n+ elif location[\"countryCode\"] == \"SK\":\n+ item[\"website\"] = f'https://www.mojadm.sk/store{location[\"storeUrlPath\"]}'\n+ else:\n+ item[\"website\"] = f'https://www.dm.{location[\"countryCode\"].lower()}/store{location[\"storeUrlPath\"]}'\n item[\"extras\"][\"check_date\"] = location[\"updateTimeStamp\"]\n item[\"opening_hours\"] = self.parse_hours(location[\"openingHours\"])\n", "issue": "dm - use national sites\nIs it possible to use the national sites for dm stores instead of the German one? The format is `dm.[country code]` for all countries except for Bulgaria, Bosnia and Italy (which use `dm-drogeriemarkt.[country code]`) and Slovakia (`mojadm.sk`).\n", "before_files": [{"content": "import scrapy\n\nfrom locations.categories import Categories, apply_category\nfrom locations.dict_parser import DictParser\nfrom locations.hours import DAYS, OpeningHours\n\n\nclass DmSpider(scrapy.Spider):\n name = \"dm\"\n item_attributes = {\"brand\": \"dm\", \"brand_wikidata\": \"Q266572\"}\n allowed_domains = [\"store-data-service.services.dmtech.com\"]\n start_urls = [\"https://store-data-service.services.dmtech.com/stores/bbox/89.999,-179.999,-89.999,179.999\"]\n\n @staticmethod\n def parse_hours(store_hours: [dict]) -> OpeningHours:\n opening_hours = OpeningHours()\n\n for store_day in store_hours:\n for times in store_day[\"timeRanges\"]:\n open_time = times[\"opening\"]\n close_time = times[\"closing\"]\n\n opening_hours.add_range(DAYS[store_day[\"weekDay\"] - 1], open_time, close_time)\n\n return opening_hours\n\n def parse(self, response, **kwargs):\n for location in response.json()[\"stores\"]:\n location[\"address\"][\"street_address\"] = location[\"address\"].pop(\"street\")\n location[\"address\"][\"country\"] = location[\"countryCode\"]\n location[\"name\"] = location[\"address\"].get(\"name\")\n item = DictParser.parse(location)\n item[\"website\"] = f'https://www.dm.de/store{location[\"storeUrlPath\"]}'\n item[\"extras\"][\"check_date\"] = location[\"updateTimeStamp\"]\n item[\"opening_hours\"] = self.parse_hours(location[\"openingHours\"])\n\n apply_category(Categories.SHOP_CHEMIST, item)\n\n yield item\n", "path": "locations/spiders/dm.py"}], "after_files": [{"content": "import scrapy\n\nfrom locations.categories import Categories, apply_category\nfrom locations.dict_parser import DictParser\nfrom locations.hours import DAYS, OpeningHours\n\n\nclass DmSpider(scrapy.Spider):\n name = \"dm\"\n item_attributes = {\"brand\": \"dm\", \"brand_wikidata\": \"Q266572\"}\n allowed_domains = [\"store-data-service.services.dmtech.com\"]\n start_urls = [\"https://store-data-service.services.dmtech.com/stores/bbox/89.999,-179.999,-89.999,179.999\"]\n\n @staticmethod\n def parse_hours(store_hours: [dict]) -> OpeningHours:\n opening_hours = OpeningHours()\n\n for store_day in store_hours:\n for times in store_day[\"timeRanges\"]:\n open_time = times[\"opening\"]\n close_time = times[\"closing\"]\n\n 
opening_hours.add_range(DAYS[store_day[\"weekDay\"] - 1], open_time, close_time)\n\n return opening_hours\n\n def parse(self, response, **kwargs):\n for location in response.json()[\"stores\"]:\n location[\"address\"][\"street_address\"] = location[\"address\"].pop(\"street\")\n location[\"address\"][\"country\"] = location[\"countryCode\"]\n location[\"name\"] = location[\"address\"].get(\"name\")\n item = DictParser.parse(location)\n if location[\"countryCode\"] in [\"BG\", \"BA\", \"IT\"]:\n item[\n \"website\"\n ] = f'https://www.dm-drogeriemarkt.{location[\"countryCode\"].lower()}/store{location[\"storeUrlPath\"]}'\n elif location[\"countryCode\"] == \"SK\":\n item[\"website\"] = f'https://www.mojadm.sk/store{location[\"storeUrlPath\"]}'\n else:\n item[\"website\"] = f'https://www.dm.{location[\"countryCode\"].lower()}/store{location[\"storeUrlPath\"]}'\n item[\"extras\"][\"check_date\"] = location[\"updateTimeStamp\"]\n item[\"opening_hours\"] = self.parse_hours(location[\"openingHours\"])\n\n apply_category(Categories.SHOP_CHEMIST, item)\n\n yield item\n", "path": "locations/spiders/dm.py"}]} | 767 | 269 |
gh_patches_debug_7973 | rasdani/github-patches | git_diff | celery__celery-5870 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Continuous memory leak
There is a memory leak in the parent process of Celery's worker.
It is not in the child processes that execute tasks.
It happens suddenly every few days.
Unless you stop Celery, it consumes the server's memory within tens of hours.
This problem happens at least in Celery 4.1, and it also occurs in Celery 4.2.
Celery is running on Ubuntu 16 and the broker is RabbitMQ.

--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `celery/events/receiver.py`
Content:
```
1 """Event receiver implementation."""
2 from __future__ import absolute_import, unicode_literals
3
4 import time
5 from operator import itemgetter
6
7 from kombu import Queue
8 from kombu.connection import maybe_channel
9 from kombu.mixins import ConsumerMixin
10
11 from celery import uuid
12 from celery.app import app_or_default
13 from celery.utils.time import adjust_timestamp
14
15 from .event import get_exchange
16
17 __all__ = ('EventReceiver',)
18
19 CLIENT_CLOCK_SKEW = -1
20
21 _TZGETTER = itemgetter('utcoffset', 'timestamp')
22
23
24 class EventReceiver(ConsumerMixin):
25 """Capture events.
26
27 Arguments:
28 connection (kombu.Connection): Connection to the broker.
29 handlers (Mapping[Callable]): Event handlers.
30 This is a map of event type names and their handlers.
31 The special handler `"*"` captures all events that don't have a
32 handler.
33 """
34
35 app = None
36
37 def __init__(self, channel, handlers=None, routing_key='#',
38 node_id=None, app=None, queue_prefix=None,
39 accept=None, queue_ttl=None, queue_expires=None):
40 self.app = app_or_default(app or self.app)
41 self.channel = maybe_channel(channel)
42 self.handlers = {} if handlers is None else handlers
43 self.routing_key = routing_key
44 self.node_id = node_id or uuid()
45 self.queue_prefix = queue_prefix or self.app.conf.event_queue_prefix
46 self.exchange = get_exchange(
47 self.connection or self.app.connection_for_write(),
48 name=self.app.conf.event_exchange)
49 if queue_ttl is None:
50 queue_ttl = self.app.conf.event_queue_ttl
51 if queue_expires is None:
52 queue_expires = self.app.conf.event_queue_expires
53 self.queue = Queue(
54 '.'.join([self.queue_prefix, self.node_id]),
55 exchange=self.exchange,
56 routing_key=self.routing_key,
57 auto_delete=True, durable=False,
58 message_ttl=queue_ttl,
59 expires=queue_expires,
60 )
61 self.clock = self.app.clock
62 self.adjust_clock = self.clock.adjust
63 self.forward_clock = self.clock.forward
64 if accept is None:
65 accept = {self.app.conf.event_serializer, 'json'}
66 self.accept = accept
67
68 def process(self, type, event):
69 """Process event by dispatching to configured handler."""
70 handler = self.handlers.get(type) or self.handlers.get('*')
71 handler and handler(event)
72
73 def get_consumers(self, Consumer, channel):
74 return [Consumer(queues=[self.queue],
75 callbacks=[self._receive], no_ack=True,
76 accept=self.accept)]
77
78 def on_consume_ready(self, connection, channel, consumers,
79 wakeup=True, **kwargs):
80 if wakeup:
81 self.wakeup_workers(channel=channel)
82
83 def itercapture(self, limit=None, timeout=None, wakeup=True):
84 return self.consume(limit=limit, timeout=timeout, wakeup=wakeup)
85
86 def capture(self, limit=None, timeout=None, wakeup=True):
87 """Open up a consumer capturing events.
88
89 This has to run in the main process, and it will never stop
90 unless :attr:`EventDispatcher.should_stop` is set to True, or
91 forced via :exc:`KeyboardInterrupt` or :exc:`SystemExit`.
92 """
93 return list(self.consume(limit=limit, timeout=timeout, wakeup=wakeup))
94
95 def wakeup_workers(self, channel=None):
96 self.app.control.broadcast('heartbeat',
97 connection=self.connection,
98 channel=channel)
99
100 def event_from_message(self, body, localize=True,
101 now=time.time, tzfields=_TZGETTER,
102 adjust_timestamp=adjust_timestamp,
103 CLIENT_CLOCK_SKEW=CLIENT_CLOCK_SKEW):
104 type = body['type']
105 if type == 'task-sent':
106 # clients never sync so cannot use their clock value
107 _c = body['clock'] = (self.clock.value or 1) + CLIENT_CLOCK_SKEW
108 self.adjust_clock(_c)
109 else:
110 try:
111 clock = body['clock']
112 except KeyError:
113 body['clock'] = self.forward_clock()
114 else:
115 self.adjust_clock(clock)
116
117 if localize:
118 try:
119 offset, timestamp = tzfields(body)
120 except KeyError:
121 pass
122 else:
123 body['timestamp'] = adjust_timestamp(timestamp, offset)
124 body['local_received'] = now()
125 return type, body
126
127 def _receive(self, body, message, list=list, isinstance=isinstance):
128 if isinstance(body, list): # celery 4.0: List of events
129 process, from_message = self.process, self.event_from_message
130 [process(*from_message(event)) for event in body]
131 else:
132 self.process(*self.event_from_message(body))
133
134 @property
135 def connection(self):
136 return self.channel.connection.client if self.channel else None
137
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/celery/events/receiver.py b/celery/events/receiver.py
--- a/celery/events/receiver.py
+++ b/celery/events/receiver.py
@@ -90,7 +90,8 @@
unless :attr:`EventDispatcher.should_stop` is set to True, or
forced via :exc:`KeyboardInterrupt` or :exc:`SystemExit`.
"""
- return list(self.consume(limit=limit, timeout=timeout, wakeup=wakeup))
+ for _ in self.consume(limit=limit, timeout=timeout, wakeup=wakeup):
+ pass
def wakeup_workers(self, channel=None):
self.app.control.broadcast('heartbeat',
| {"golden_diff": "diff --git a/celery/events/receiver.py b/celery/events/receiver.py\n--- a/celery/events/receiver.py\n+++ b/celery/events/receiver.py\n@@ -90,7 +90,8 @@\n unless :attr:`EventDispatcher.should_stop` is set to True, or\n forced via :exc:`KeyboardInterrupt` or :exc:`SystemExit`.\n \"\"\"\n- return list(self.consume(limit=limit, timeout=timeout, wakeup=wakeup))\n+ for _ in self.consume(limit=limit, timeout=timeout, wakeup=wakeup):\n+ pass\n \n def wakeup_workers(self, channel=None):\n self.app.control.broadcast('heartbeat',\n", "issue": "Continuous memory leak\nThere is a memory leak in the parent process of Celery's worker.\nIt is not a child process executing a task.\nIt happens suddenly every few days.\nUnless you stop Celery, it consumes server memory in tens of hours.\n\nThis problem happens at least in Celery 4.1, and it also occurs in Celery 4.2.\nCelery is running on Ubuntu 16 and brokers use RabbitMQ.\n\n\n\n\n", "before_files": [{"content": "\"\"\"Event receiver implementation.\"\"\"\nfrom __future__ import absolute_import, unicode_literals\n\nimport time\nfrom operator import itemgetter\n\nfrom kombu import Queue\nfrom kombu.connection import maybe_channel\nfrom kombu.mixins import ConsumerMixin\n\nfrom celery import uuid\nfrom celery.app import app_or_default\nfrom celery.utils.time import adjust_timestamp\n\nfrom .event import get_exchange\n\n__all__ = ('EventReceiver',)\n\nCLIENT_CLOCK_SKEW = -1\n\n_TZGETTER = itemgetter('utcoffset', 'timestamp')\n\n\nclass EventReceiver(ConsumerMixin):\n \"\"\"Capture events.\n\n Arguments:\n connection (kombu.Connection): Connection to the broker.\n handlers (Mapping[Callable]): Event handlers.\n This is a map of event type names and their handlers.\n The special handler `\"*\"` captures all events that don't have a\n handler.\n \"\"\"\n\n app = None\n\n def __init__(self, channel, handlers=None, routing_key='#',\n node_id=None, app=None, queue_prefix=None,\n accept=None, queue_ttl=None, queue_expires=None):\n self.app = app_or_default(app or self.app)\n self.channel = maybe_channel(channel)\n self.handlers = {} if handlers is None else handlers\n self.routing_key = routing_key\n self.node_id = node_id or uuid()\n self.queue_prefix = queue_prefix or self.app.conf.event_queue_prefix\n self.exchange = get_exchange(\n self.connection or self.app.connection_for_write(),\n name=self.app.conf.event_exchange)\n if queue_ttl is None:\n queue_ttl = self.app.conf.event_queue_ttl\n if queue_expires is None:\n queue_expires = self.app.conf.event_queue_expires\n self.queue = Queue(\n '.'.join([self.queue_prefix, self.node_id]),\n exchange=self.exchange,\n routing_key=self.routing_key,\n auto_delete=True, durable=False,\n message_ttl=queue_ttl,\n expires=queue_expires,\n )\n self.clock = self.app.clock\n self.adjust_clock = self.clock.adjust\n self.forward_clock = self.clock.forward\n if accept is None:\n accept = {self.app.conf.event_serializer, 'json'}\n self.accept = accept\n\n def process(self, type, event):\n \"\"\"Process event by dispatching to configured handler.\"\"\"\n handler = self.handlers.get(type) or self.handlers.get('*')\n handler and handler(event)\n\n def get_consumers(self, Consumer, channel):\n return [Consumer(queues=[self.queue],\n callbacks=[self._receive], no_ack=True,\n accept=self.accept)]\n\n def on_consume_ready(self, connection, channel, consumers,\n wakeup=True, **kwargs):\n if wakeup:\n self.wakeup_workers(channel=channel)\n\n def itercapture(self, limit=None, timeout=None, wakeup=True):\n return 
self.consume(limit=limit, timeout=timeout, wakeup=wakeup)\n\n def capture(self, limit=None, timeout=None, wakeup=True):\n \"\"\"Open up a consumer capturing events.\n\n This has to run in the main process, and it will never stop\n unless :attr:`EventDispatcher.should_stop` is set to True, or\n forced via :exc:`KeyboardInterrupt` or :exc:`SystemExit`.\n \"\"\"\n return list(self.consume(limit=limit, timeout=timeout, wakeup=wakeup))\n\n def wakeup_workers(self, channel=None):\n self.app.control.broadcast('heartbeat',\n connection=self.connection,\n channel=channel)\n\n def event_from_message(self, body, localize=True,\n now=time.time, tzfields=_TZGETTER,\n adjust_timestamp=adjust_timestamp,\n CLIENT_CLOCK_SKEW=CLIENT_CLOCK_SKEW):\n type = body['type']\n if type == 'task-sent':\n # clients never sync so cannot use their clock value\n _c = body['clock'] = (self.clock.value or 1) + CLIENT_CLOCK_SKEW\n self.adjust_clock(_c)\n else:\n try:\n clock = body['clock']\n except KeyError:\n body['clock'] = self.forward_clock()\n else:\n self.adjust_clock(clock)\n\n if localize:\n try:\n offset, timestamp = tzfields(body)\n except KeyError:\n pass\n else:\n body['timestamp'] = adjust_timestamp(timestamp, offset)\n body['local_received'] = now()\n return type, body\n\n def _receive(self, body, message, list=list, isinstance=isinstance):\n if isinstance(body, list): # celery 4.0: List of events\n process, from_message = self.process, self.event_from_message\n [process(*from_message(event)) for event in body]\n else:\n self.process(*self.event_from_message(body))\n\n @property\n def connection(self):\n return self.channel.connection.client if self.channel else None\n", "path": "celery/events/receiver.py"}], "after_files": [{"content": "\"\"\"Event receiver implementation.\"\"\"\nfrom __future__ import absolute_import, unicode_literals\n\nimport time\nfrom operator import itemgetter\n\nfrom kombu import Queue\nfrom kombu.connection import maybe_channel\nfrom kombu.mixins import ConsumerMixin\n\nfrom celery import uuid\nfrom celery.app import app_or_default\nfrom celery.utils.time import adjust_timestamp\n\nfrom .event import get_exchange\n\n__all__ = ('EventReceiver',)\n\nCLIENT_CLOCK_SKEW = -1\n\n_TZGETTER = itemgetter('utcoffset', 'timestamp')\n\n\nclass EventReceiver(ConsumerMixin):\n \"\"\"Capture events.\n\n Arguments:\n connection (kombu.Connection): Connection to the broker.\n handlers (Mapping[Callable]): Event handlers.\n This is a map of event type names and their handlers.\n The special handler `\"*\"` captures all events that don't have a\n handler.\n \"\"\"\n\n app = None\n\n def __init__(self, channel, handlers=None, routing_key='#',\n node_id=None, app=None, queue_prefix=None,\n accept=None, queue_ttl=None, queue_expires=None):\n self.app = app_or_default(app or self.app)\n self.channel = maybe_channel(channel)\n self.handlers = {} if handlers is None else handlers\n self.routing_key = routing_key\n self.node_id = node_id or uuid()\n self.queue_prefix = queue_prefix or self.app.conf.event_queue_prefix\n self.exchange = get_exchange(\n self.connection or self.app.connection_for_write(),\n name=self.app.conf.event_exchange)\n if queue_ttl is None:\n queue_ttl = self.app.conf.event_queue_ttl\n if queue_expires is None:\n queue_expires = self.app.conf.event_queue_expires\n self.queue = Queue(\n '.'.join([self.queue_prefix, self.node_id]),\n exchange=self.exchange,\n routing_key=self.routing_key,\n auto_delete=True, durable=False,\n message_ttl=queue_ttl,\n expires=queue_expires,\n )\n self.clock = 
self.app.clock\n self.adjust_clock = self.clock.adjust\n self.forward_clock = self.clock.forward\n if accept is None:\n accept = {self.app.conf.event_serializer, 'json'}\n self.accept = accept\n\n def process(self, type, event):\n \"\"\"Process event by dispatching to configured handler.\"\"\"\n handler = self.handlers.get(type) or self.handlers.get('*')\n handler and handler(event)\n\n def get_consumers(self, Consumer, channel):\n return [Consumer(queues=[self.queue],\n callbacks=[self._receive], no_ack=True,\n accept=self.accept)]\n\n def on_consume_ready(self, connection, channel, consumers,\n wakeup=True, **kwargs):\n if wakeup:\n self.wakeup_workers(channel=channel)\n\n def itercapture(self, limit=None, timeout=None, wakeup=True):\n return self.consume(limit=limit, timeout=timeout, wakeup=wakeup)\n\n def capture(self, limit=None, timeout=None, wakeup=True):\n \"\"\"Open up a consumer capturing events.\n\n This has to run in the main process, and it will never stop\n unless :attr:`EventDispatcher.should_stop` is set to True, or\n forced via :exc:`KeyboardInterrupt` or :exc:`SystemExit`.\n \"\"\"\n for _ in self.consume(limit=limit, timeout=timeout, wakeup=wakeup):\n pass\n\n def wakeup_workers(self, channel=None):\n self.app.control.broadcast('heartbeat',\n connection=self.connection,\n channel=channel)\n\n def event_from_message(self, body, localize=True,\n now=time.time, tzfields=_TZGETTER,\n adjust_timestamp=adjust_timestamp,\n CLIENT_CLOCK_SKEW=CLIENT_CLOCK_SKEW):\n type = body['type']\n if type == 'task-sent':\n # clients never sync so cannot use their clock value\n _c = body['clock'] = (self.clock.value or 1) + CLIENT_CLOCK_SKEW\n self.adjust_clock(_c)\n else:\n try:\n clock = body['clock']\n except KeyError:\n body['clock'] = self.forward_clock()\n else:\n self.adjust_clock(clock)\n\n if localize:\n try:\n offset, timestamp = tzfields(body)\n except KeyError:\n pass\n else:\n body['timestamp'] = adjust_timestamp(timestamp, offset)\n body['local_received'] = now()\n return type, body\n\n def _receive(self, body, message, list=list, isinstance=isinstance):\n if isinstance(body, list): # celery 4.0: List of events\n process, from_message = self.process, self.event_from_message\n [process(*from_message(event)) for event in body]\n else:\n self.process(*self.event_from_message(body))\n\n @property\n def connection(self):\n return self.channel.connection.client if self.channel else None\n", "path": "celery/events/receiver.py"}]} | 1,748 | 145 |
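The cause behind the celery patch above: `capture()` delegates to kombu's `ConsumerMixin.consume()`, a generator that keeps yielding until `should_stop` is set or the process is interrupted, so wrapping it in `list()` builds a list that grows for as long as the worker runs. A minimal, Celery-independent sketch of the leaky versus patched consumption patterns (the event generator is a stand-in, not kombu's API):

```python
import itertools

def endless_events():
    """Stand-in for an unbounded consumer loop such as ConsumerMixin.consume()."""
    for i in itertools.count():
        yield {"seq": i}

events = endless_events()

# Leaky pattern (pre-patch): list() keeps a reference to every yielded item,
# so memory grows for the lifetime of the loop.
#     list(events)

# Patched pattern: drive the generator and discard each item; memory stays flat.
for _ in itertools.islice(events, 5):  # islice bounds the sketch so it terminates
    pass
```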
gh_patches_debug_20144 | rasdani/github-patches | git_diff | openfun__richie-1715 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
On the search page, the "more options" feature is broken on the "contributors" filter
## Bug Report
**Problematic behavior/code**
The "more options" feature on the "contributors" filter on the search page is broken.
**Expected Behavior**
When we click on "more options" on the "contributors" filter on the search page, we expect to see a longer list of contributors and to be able to type a search query that refines the list and finds a specific contributor by first or last name.
**Steps to Reproduce**
1. Go to the search page: https://www.fun-mooc.fr/en/courses/
2. Click "more options" on the "contributors" filter
**Environment**
- Richie version: 2.5.0
- Platform: docker
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/richie/apps/search/defaults.py`
Content:
```
1 """
2 Import custom settings and set up defaults for values the Search app needs
3 """
4 from django.conf import settings
5 from django.utils.functional import lazy
6 from django.utils.translation import gettext_lazy as _
7
8 # Elasticsearch
9 ES_CHUNK_SIZE = 500
10 ES_PAGE_SIZE = 10
11
12 # Use a lazy to enable easier testing by not defining the value at bootstrap time
13 ES_INDICES_PREFIX = lazy(
14 lambda: getattr(settings, "RICHIE_ES_INDICES_PREFIX", "richie")
15 )()
16
17 # Define which analyzer should be used for each language
18 QUERY_ANALYZERS = getattr(
19 settings, "RICHIE_QUERY_ANALYZERS", {"en": "english", "fr": "french"}
20 )
21
22 # Define the scoring boost (in ElasticSearch) related value names receive when using
23 # full-text search.
24 # For example, when a user searches for "Science" in full-text, it should match any
25 # course whose category contains "Science" or a related word, albeit with a lower
26 # score than courses that include it in their title or description.
27 # This lower score factor is the boost value we get or set here.
28 RELATED_CONTENT_BOOST = 0.05
29
30 FACET_SORTING_DEFAULT = "count"
31
32 FACET_COUNTS_DEFAULT_LIMIT = getattr(settings, "RICHIE_FACET_COUNTS_DEFAULT_LIMIT", 10)
33 FACET_COUNTS_MAX_LIMIT = getattr(settings, "RICHIE_FACET_COUNTS_MAX_LIMIT", 50)
34
35 ES_STATE_WEIGHTS = getattr(settings, "RICHIE_ES_STATE_WEIGHTS", None) or [
36 80, # ONGOING_OPEN
37 70, # FUTURE_OPEN
38 60, # ARCHIVED_OPEN
39 30, # FUTURE_NOT_YET_OPEN
40 6, # FUTURE_CLOSED
41 5, # ONGOING_CLOSED
42 1, # ARCHIVED_CLOSED
43 ]
44
45 FILTERS_CONFIGURATION = [
46 (
47 "richie.apps.search.filter_definitions.StaticChoicesFilterDefinition",
48 {
49 "fragment_map": {"new": [{"term": {"is_new": True}}]},
50 "human_name": _("New courses"),
51 "min_doc_count": 0,
52 "name": "new",
53 "position": 0,
54 "sorting": "conf",
55 "values": {"new": _("First session")},
56 },
57 ),
58 (
59 "richie.apps.search.filter_definitions.NestingWrapper",
60 {
61 "name": "course_runs",
62 "filters": [
63 (
64 "richie.apps.search.filter_definitions.AvailabilityFilterDefinition",
65 {
66 "human_name": _("Availability"),
67 "is_drilldown": True,
68 "min_doc_count": 0,
69 "name": "availability",
70 "position": 1,
71 "sorting": "conf",
72 },
73 ),
74 (
75 "richie.apps.search.filter_definitions.LanguagesFilterDefinition",
76 {
77 "human_name": _("Languages"),
78 # There are too many available languages to show them all, all the time.
79 # Eg. 200 languages, 190+ of which will have 0 matching courses.
80 "min_doc_count": 1,
81 "name": "languages",
82 "position": 5,
83 },
84 ),
85 ],
86 },
87 ),
88 (
89 "richie.apps.search.filter_definitions.IndexableHierarchicalFilterDefinition",
90 {
91 "human_name": _("Subjects"),
92 "is_autocompletable": True,
93 "is_searchable": True,
94 "min_doc_count": 0,
95 "name": "subjects",
96 "position": 2,
97 "reverse_id": "subjects",
98 "term": "categories",
99 },
100 ),
101 (
102 "richie.apps.search.filter_definitions.IndexableHierarchicalFilterDefinition",
103 {
104 "human_name": _("Levels"),
105 "is_autocompletable": True,
106 "is_searchable": True,
107 "min_doc_count": 0,
108 "name": "levels",
109 "position": 3,
110 "reverse_id": "levels",
111 "term": "categories",
112 },
113 ),
114 (
115 "richie.apps.search.filter_definitions.IndexableHierarchicalFilterDefinition",
116 {
117 "human_name": _("Organizations"),
118 "is_autocompletable": True,
119 "is_searchable": True,
120 "min_doc_count": 0,
121 "name": "organizations",
122 "position": 4,
123 "reverse_id": "organizations",
124 },
125 ),
126 (
127 "richie.apps.search.filter_definitions.IndexableFilterDefinition",
128 {
129 "human_name": _("Persons"),
130 "is_autocompletable": True,
131 "is_searchable": True,
132 "min_doc_count": 0,
133 "name": "persons",
134 "position": 5,
135 "reverse_id": "persons",
136 },
137 ),
138 ]
139
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/richie/apps/search/defaults.py b/src/richie/apps/search/defaults.py
--- a/src/richie/apps/search/defaults.py
+++ b/src/richie/apps/search/defaults.py
@@ -118,6 +118,8 @@
"is_autocompletable": True,
"is_searchable": True,
"min_doc_count": 0,
+ # Note: this is a special name that connects the filter to Organization objects
+ # in Richie as well as the corresponding indexer and API endpoint.
"name": "organizations",
"position": 4,
"reverse_id": "organizations",
@@ -130,6 +132,8 @@
"is_autocompletable": True,
"is_searchable": True,
"min_doc_count": 0,
+ # Note: this is a special name that connects the filter to Person objects
+ # in Richie as well as the corresponding indexer and API endpoint.
"name": "persons",
"position": 5,
"reverse_id": "persons",
| {"golden_diff": "diff --git a/src/richie/apps/search/defaults.py b/src/richie/apps/search/defaults.py\n--- a/src/richie/apps/search/defaults.py\n+++ b/src/richie/apps/search/defaults.py\n@@ -118,6 +118,8 @@\n \"is_autocompletable\": True,\n \"is_searchable\": True,\n \"min_doc_count\": 0,\n+ # Note: this is a special name that connects the filter to Organization objects\n+ # in Richie as well was the corresponding indexer and API endpoint.\n \"name\": \"organizations\",\n \"position\": 4,\n \"reverse_id\": \"organizations\",\n@@ -130,6 +132,8 @@\n \"is_autocompletable\": True,\n \"is_searchable\": True,\n \"min_doc_count\": 0,\n+ # Note: this is a special name that connects the filter to Person objects\n+ # in Richie as well was the corresponding indexer and API endpoint.\n \"name\": \"persons\",\n \"position\": 5,\n \"reverse_id\": \"persons\",\n", "issue": "On the search page, the \"more options\" feature is broken on the \"contributors\" filter\n## Bug Report\r\n\r\n**Problematic behavior/code**\r\nThe \"more options\" feature on the \"contributors\" filter on the search page is broken.\r\n\r\n**Expected Behavior**\r\nWhen we click on \"more options\" on the \"contributors\" filter on the search page, we expect to see a list of more contributors and be able to type a search request to refine the search and find a specific contributor by his/her first/lastname.\r\n\r\n**Steps to Reproduce**\r\n1. Go to the search page: https://www.fun-mooc.fr/en/courses/\r\n2. Click \"more options\" on the \"contributors\" filter\r\n\r\n**Environment**\r\n- Richie version: 2.5.0\r\n- Platform: docker\r\n\r\n\r\n\n", "before_files": [{"content": "\"\"\"\nImport custom settings and set up defaults for values the Search app needs\n\"\"\"\nfrom django.conf import settings\nfrom django.utils.functional import lazy\nfrom django.utils.translation import gettext_lazy as _\n\n# Elasticsearch\nES_CHUNK_SIZE = 500\nES_PAGE_SIZE = 10\n\n# Use a lazy to enable easier testing by not defining the value at bootstrap time\nES_INDICES_PREFIX = lazy(\n lambda: getattr(settings, \"RICHIE_ES_INDICES_PREFIX\", \"richie\")\n)()\n\n# Define which analyzer should be used for each language\nQUERY_ANALYZERS = getattr(\n settings, \"RICHIE_QUERY_ANALYZERS\", {\"en\": \"english\", \"fr\": \"french\"}\n)\n\n# Define the scoring boost (in ElasticSearch) related value names receive when using\n# full-text search.\n# For example, when a user searches for \"Science\" in full-text, it should match any\n# course whose category contains \"Science\" or a related word, albeit with a lower\n# score than courses that include it in their title or description.\n# This lower score factor is the boost value we get or set here.\nRELATED_CONTENT_BOOST = 0.05\n\nFACET_SORTING_DEFAULT = \"count\"\n\nFACET_COUNTS_DEFAULT_LIMIT = getattr(settings, \"RICHIE_FACET_COUNTS_DEFAULT_LIMIT\", 10)\nFACET_COUNTS_MAX_LIMIT = getattr(settings, \"RICHIE_FACET_COUNTS_MAX_LIMIT\", 50)\n\nES_STATE_WEIGHTS = getattr(settings, \"RICHIE_ES_STATE_WEIGHTS\", None) or [\n 80, # ONGOING_OPEN\n 70, # FUTURE_OPEN\n 60, # ARCHIVED_OPEN\n 30, # FUTURE_NOT_YET_OPEN\n 6, # FUTURE_CLOSED\n 5, # ONGOING_CLOSED\n 1, # ARCHIVED_CLOSED\n]\n\nFILTERS_CONFIGURATION = [\n (\n \"richie.apps.search.filter_definitions.StaticChoicesFilterDefinition\",\n {\n \"fragment_map\": {\"new\": [{\"term\": {\"is_new\": True}}]},\n \"human_name\": _(\"New courses\"),\n \"min_doc_count\": 0,\n \"name\": \"new\",\n \"position\": 0,\n \"sorting\": \"conf\",\n \"values\": {\"new\": _(\"First session\")},\n 
},\n ),\n (\n \"richie.apps.search.filter_definitions.NestingWrapper\",\n {\n \"name\": \"course_runs\",\n \"filters\": [\n (\n \"richie.apps.search.filter_definitions.AvailabilityFilterDefinition\",\n {\n \"human_name\": _(\"Availability\"),\n \"is_drilldown\": True,\n \"min_doc_count\": 0,\n \"name\": \"availability\",\n \"position\": 1,\n \"sorting\": \"conf\",\n },\n ),\n (\n \"richie.apps.search.filter_definitions.LanguagesFilterDefinition\",\n {\n \"human_name\": _(\"Languages\"),\n # There are too many available languages to show them all, all the time.\n # Eg. 200 languages, 190+ of which will have 0 matching courses.\n \"min_doc_count\": 1,\n \"name\": \"languages\",\n \"position\": 5,\n },\n ),\n ],\n },\n ),\n (\n \"richie.apps.search.filter_definitions.IndexableHierarchicalFilterDefinition\",\n {\n \"human_name\": _(\"Subjects\"),\n \"is_autocompletable\": True,\n \"is_searchable\": True,\n \"min_doc_count\": 0,\n \"name\": \"subjects\",\n \"position\": 2,\n \"reverse_id\": \"subjects\",\n \"term\": \"categories\",\n },\n ),\n (\n \"richie.apps.search.filter_definitions.IndexableHierarchicalFilterDefinition\",\n {\n \"human_name\": _(\"Levels\"),\n \"is_autocompletable\": True,\n \"is_searchable\": True,\n \"min_doc_count\": 0,\n \"name\": \"levels\",\n \"position\": 3,\n \"reverse_id\": \"levels\",\n \"term\": \"categories\",\n },\n ),\n (\n \"richie.apps.search.filter_definitions.IndexableHierarchicalFilterDefinition\",\n {\n \"human_name\": _(\"Organizations\"),\n \"is_autocompletable\": True,\n \"is_searchable\": True,\n \"min_doc_count\": 0,\n \"name\": \"organizations\",\n \"position\": 4,\n \"reverse_id\": \"organizations\",\n },\n ),\n (\n \"richie.apps.search.filter_definitions.IndexableFilterDefinition\",\n {\n \"human_name\": _(\"Persons\"),\n \"is_autocompletable\": True,\n \"is_searchable\": True,\n \"min_doc_count\": 0,\n \"name\": \"persons\",\n \"position\": 5,\n \"reverse_id\": \"persons\",\n },\n ),\n]\n", "path": "src/richie/apps/search/defaults.py"}], "after_files": [{"content": "\"\"\"\nImport custom settings and set up defaults for values the Search app needs\n\"\"\"\nfrom django.conf import settings\nfrom django.utils.functional import lazy\nfrom django.utils.translation import gettext_lazy as _\n\n# Elasticsearch\nES_CHUNK_SIZE = 500\nES_PAGE_SIZE = 10\n\n# Use a lazy to enable easier testing by not defining the value at bootstrap time\nES_INDICES_PREFIX = lazy(\n lambda: getattr(settings, \"RICHIE_ES_INDICES_PREFIX\", \"richie\")\n)()\n\n# Define which analyzer should be used for each language\nQUERY_ANALYZERS = getattr(\n settings, \"RICHIE_QUERY_ANALYZERS\", {\"en\": \"english\", \"fr\": \"french\"}\n)\n\n# Define the scoring boost (in ElasticSearch) related value names receive when using\n# full-text search.\n# For example, when a user searches for \"Science\" in full-text, it should match any\n# course whose category contains \"Science\" or a related word, albeit with a lower\n# score than courses that include it in their title or description.\n# This lower score factor is the boost value we get or set here.\nRELATED_CONTENT_BOOST = 0.05\n\nFACET_SORTING_DEFAULT = \"count\"\n\nFACET_COUNTS_DEFAULT_LIMIT = getattr(settings, \"RICHIE_FACET_COUNTS_DEFAULT_LIMIT\", 10)\nFACET_COUNTS_MAX_LIMIT = getattr(settings, \"RICHIE_FACET_COUNTS_MAX_LIMIT\", 50)\n\nES_STATE_WEIGHTS = getattr(settings, \"RICHIE_ES_STATE_WEIGHTS\", None) or [\n 80, # ONGOING_OPEN\n 70, # FUTURE_OPEN\n 60, # ARCHIVED_OPEN\n 30, # FUTURE_NOT_YET_OPEN\n 6, # FUTURE_CLOSED\n 5, # 
ONGOING_CLOSED\n 1, # ARCHIVED_CLOSED\n]\n\nFILTERS_CONFIGURATION = [\n (\n \"richie.apps.search.filter_definitions.StaticChoicesFilterDefinition\",\n {\n \"fragment_map\": {\"new\": [{\"term\": {\"is_new\": True}}]},\n \"human_name\": _(\"New courses\"),\n \"min_doc_count\": 0,\n \"name\": \"new\",\n \"position\": 0,\n \"sorting\": \"conf\",\n \"values\": {\"new\": _(\"First session\")},\n },\n ),\n (\n \"richie.apps.search.filter_definitions.NestingWrapper\",\n {\n \"name\": \"course_runs\",\n \"filters\": [\n (\n \"richie.apps.search.filter_definitions.AvailabilityFilterDefinition\",\n {\n \"human_name\": _(\"Availability\"),\n \"is_drilldown\": True,\n \"min_doc_count\": 0,\n \"name\": \"availability\",\n \"position\": 1,\n \"sorting\": \"conf\",\n },\n ),\n (\n \"richie.apps.search.filter_definitions.LanguagesFilterDefinition\",\n {\n \"human_name\": _(\"Languages\"),\n # There are too many available languages to show them all, all the time.\n # Eg. 200 languages, 190+ of which will have 0 matching courses.\n \"min_doc_count\": 1,\n \"name\": \"languages\",\n \"position\": 5,\n },\n ),\n ],\n },\n ),\n (\n \"richie.apps.search.filter_definitions.IndexableHierarchicalFilterDefinition\",\n {\n \"human_name\": _(\"Subjects\"),\n \"is_autocompletable\": True,\n \"is_searchable\": True,\n \"min_doc_count\": 0,\n \"name\": \"subjects\",\n \"position\": 2,\n \"reverse_id\": \"subjects\",\n \"term\": \"categories\",\n },\n ),\n (\n \"richie.apps.search.filter_definitions.IndexableHierarchicalFilterDefinition\",\n {\n \"human_name\": _(\"Levels\"),\n \"is_autocompletable\": True,\n \"is_searchable\": True,\n \"min_doc_count\": 0,\n \"name\": \"levels\",\n \"position\": 3,\n \"reverse_id\": \"levels\",\n \"term\": \"categories\",\n },\n ),\n (\n \"richie.apps.search.filter_definitions.IndexableHierarchicalFilterDefinition\",\n {\n \"human_name\": _(\"Organizations\"),\n \"is_autocompletable\": True,\n \"is_searchable\": True,\n \"min_doc_count\": 0,\n # Note: this is a special name that connects the filter to Organization objects\n # in Richie as well was the corresponding indexer and API endpoint.\n \"name\": \"organizations\",\n \"position\": 4,\n \"reverse_id\": \"organizations\",\n },\n ),\n (\n \"richie.apps.search.filter_definitions.IndexableFilterDefinition\",\n {\n \"human_name\": _(\"Persons\"),\n \"is_autocompletable\": True,\n \"is_searchable\": True,\n \"min_doc_count\": 0,\n # Note: this is a special name that connects the filter to Person objects\n # in Richie as well was the corresponding indexer and API endpoint.\n \"name\": \"persons\",\n \"position\": 5,\n \"reverse_id\": \"persons\",\n },\n ),\n]\n", "path": "src/richie/apps/search/defaults.py"}]} | 1,781 | 240 |
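The richie patch above adds only comments, but the contract they record, namely that the `name` of these filter definitions doubles as the binding to an indexer and its API endpoint (which is what the "more options" modal queries), can be expressed as a small hypothetical sanity check; the constant and function below are illustrative and not part of Richie:

```python
# Names that Richie's search app treats as special: they must match the
# corresponding indexer and API endpoint, or features such as the
# "more options" modal lose their autocomplete/search backend.
RESERVED_FILTER_NAMES = {"organizations", "persons"}

def reserved_names_intact(filters_configuration) -> bool:
    """Check that no reserved filter name was renamed away in the configuration."""
    configured = {kwargs.get("name") for _path, kwargs in filters_configuration}
    return RESERVED_FILTER_NAMES <= configured
```

A deployment could then call `assert reserved_names_intact(FILTERS_CONFIGURATION)` at startup to catch an accidental rename.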
gh_patches_debug_7419 | rasdani/github-patches | git_diff | bokeh__bokeh-6344 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Scale/Range incompatibility in examples/models/server/population.py
In master, there is a Scale/Range incompatibility in `examples/models/server/population.py`.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `examples/models/server/population.py`
Content:
```
1 from __future__ import print_function
2
3 from math import pi
4
5 from bokeh.client import push_session
6 from bokeh.document import Document
7 from bokeh.models.glyphs import Line, HBar
8 from bokeh.models import (Plot, ColumnDataSource, DataRange1d, FactorRange,
9 LinearAxis, CategoricalAxis, Grid, Legend, CategoricalScale)
10 from bokeh.sampledata.population import load_population
11 from bokeh.models.widgets import Select
12 from bokeh.models.layouts import WidgetBox, Column
13
14 document = Document()
15 session = push_session(document)
16
17 df = load_population()
18 revision = 2012
19
20 year, location = 2010, "World"
21
22 years = [str(x) for x in sorted(df.Year.unique())]
23 locations = sorted(df.Location.unique())
24 groups = [str(x) for x in df.AgeGrp.unique()]
25 groups.remove('80+') # remove oddball group
26
27 source_pyramid_m = ColumnDataSource(data=dict(value=[], group=[]))
28 source_pyramid_f = ColumnDataSource(data=dict(value=[], group=[]))
29
30 def pyramid():
31 xdr = DataRange1d()
32 ydr = FactorRange(factors=groups)
33 y_scale = CategoricalScale()
34
35 plot = Plot(x_range=xdr, y_range=ydr, y_scale=y_scale, plot_width=600, plot_height=500, toolbar_location=None)
36
37 xaxis = LinearAxis()
38 plot.add_layout(xaxis, 'below')
39 plot.add_layout(CategoricalAxis(), 'left')
40
41 plot.add_layout(Grid(dimension=0, ticker=xaxis.ticker))
42
43 m = HBar(left="value", right=0, y="group", height=1, fill_color="#3B8686")
44 mglyph = plot.add_glyph(source_pyramid_m, m)
45
46 f = HBar(left=0, right="value", y="group", height=1, fill_color="#CFF09E")
47 fglyph = plot.add_glyph(source_pyramid_f, f)
48
49 plot.add_layout(Legend(items=[("Male" , [mglyph]), ("Female" , [fglyph])]))
50
51 return plot
52
53 source_known = ColumnDataSource(data=dict(x=[], y=[]))
54 source_predicted = ColumnDataSource(data=dict(x=[], y=[]))
55
56 def population():
57 xdr = FactorRange(factors=years)
58 ydr = DataRange1d()
59 y_scale = CategoricalScale()
60
61 plot = Plot(x_range=xdr, y_range=ydr, y_scale=y_scale, plot_width=600, plot_height=150, toolbar_location=None)
62
63 plot.add_layout(CategoricalAxis(major_label_orientation=pi / 4), 'below')
64
65 known = Line(x="x", y="y", line_color="violet", line_width=2)
66 known_glyph = plot.add_glyph(source_known, known)
67
68 predicted = Line(x="x", y="y", line_color="violet", line_width=2, line_dash="dashed")
69 predicted_glyph = plot.add_glyph(source_predicted, predicted)
70
71 legend = Legend(location="bottom_right",
72 items=[("known", [known_glyph]), ("predicted", [predicted_glyph])])
73 plot.add_layout(legend)
74
75 return plot
76
77 def update_pyramid():
78 pyramid = df[(df.Location == location) & (df.Year == year)]
79
80 male = pyramid[pyramid.Sex == "Male"]
81 female = pyramid[pyramid.Sex == "Female"]
82
83 total = df.Value.sum()
84 male_percent = -male.Value / total
85 female_percent = female.Value / total
86
87 source_pyramid_m.data = dict(
88 group=[str(x) for x in male.AgeGrp.unique()],
89 value=male_percent,
90 )
91 source_pyramid_f.data = dict(
92 group=[str(x) for x in female.AgeGrp.unique()],
93 value=female_percent,
94 )
95
96 def update_population():
97 population = df[df.Location == location].groupby(df.Year).Value.sum()
98 aligned_revision = revision // 10 * 10
99
100 known = population[population.index <= aligned_revision]
101 predicted = population[population.index >= aligned_revision]
102
103 source_known.data = dict(x=known.index.map(str), y=known.values)
104 source_predicted.data = dict(x=predicted.index.map(str), y=predicted.values)
105
106 def update_data():
107 update_population()
108 update_pyramid()
109
110 def on_year_change(attr, old, new):
111 global year
112 year = int(new)
113 update_data()
114
115 def on_location_change(attr, old, new):
116 global location
117 location = new
118 update_data()
119
120 def create_layout():
121 year_select = Select(title="Year:", value="2010", options=years)
122 location_select = Select(title="Location:", value="World", options=locations)
123
124 year_select.on_change('value', on_year_change)
125 location_select.on_change('value', on_location_change)
126
127 controls = WidgetBox(children=[year_select, location_select], height=150, width=600)
128 layout = Column(children=[controls, pyramid(), population()])
129
130 return layout
131
132 layout = create_layout()
133
134 update_data()
135
136 document.add_root(layout)
137 session.show(layout)
138
139 if __name__ == "__main__":
140 document.validate()
141 print("\npress ctrl-C to exit")
142 session.loop_until_closed()
143
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/examples/models/server/population.py b/examples/models/server/population.py
--- a/examples/models/server/population.py
+++ b/examples/models/server/population.py
@@ -56,9 +56,9 @@
def population():
xdr = FactorRange(factors=years)
ydr = DataRange1d()
- y_scale = CategoricalScale()
+ x_scale = CategoricalScale()
- plot = Plot(x_range=xdr, y_range=ydr, y_scale=y_scale, plot_width=600, plot_height=150, toolbar_location=None)
+ plot = Plot(x_range=xdr, y_range=ydr, x_scale=x_scale, plot_width=600, plot_height=150, toolbar_location=None)
plot.add_layout(CategoricalAxis(major_label_orientation=pi / 4), 'below')
| {"golden_diff": "diff --git a/examples/models/server/population.py b/examples/models/server/population.py\n--- a/examples/models/server/population.py\n+++ b/examples/models/server/population.py\n@@ -56,9 +56,9 @@\n def population():\n xdr = FactorRange(factors=years)\n ydr = DataRange1d()\n- y_scale = CategoricalScale()\n+ x_scale = CategoricalScale()\n \n- plot = Plot(x_range=xdr, y_range=ydr, y_scale=y_scale, plot_width=600, plot_height=150, toolbar_location=None)\n+ plot = Plot(x_range=xdr, y_range=ydr, x_scale=x_scale, plot_width=600, plot_height=150, toolbar_location=None)\n \n plot.add_layout(CategoricalAxis(major_label_orientation=pi / 4), 'below')\n", "issue": "Scale/Range incompatibility in examples/models/server/population.py\nin master:\r\n\r\nScale/Range incompatibility in examples/models/server/population.py\n", "before_files": [{"content": "from __future__ import print_function\n\nfrom math import pi\n\nfrom bokeh.client import push_session\nfrom bokeh.document import Document\nfrom bokeh.models.glyphs import Line, HBar\nfrom bokeh.models import (Plot, ColumnDataSource, DataRange1d, FactorRange,\n LinearAxis, CategoricalAxis, Grid, Legend, CategoricalScale)\nfrom bokeh.sampledata.population import load_population\nfrom bokeh.models.widgets import Select\nfrom bokeh.models.layouts import WidgetBox, Column\n\ndocument = Document()\nsession = push_session(document)\n\ndf = load_population()\nrevision = 2012\n\nyear, location = 2010, \"World\"\n\nyears = [str(x) for x in sorted(df.Year.unique())]\nlocations = sorted(df.Location.unique())\ngroups = [str(x) for x in df.AgeGrp.unique()]\ngroups.remove('80+') # remove oddball group\n\nsource_pyramid_m = ColumnDataSource(data=dict(value=[], group=[]))\nsource_pyramid_f = ColumnDataSource(data=dict(value=[], group=[]))\n\ndef pyramid():\n xdr = DataRange1d()\n ydr = FactorRange(factors=groups)\n y_scale = CategoricalScale()\n\n plot = Plot(x_range=xdr, y_range=ydr, y_scale=y_scale, plot_width=600, plot_height=500, toolbar_location=None)\n\n xaxis = LinearAxis()\n plot.add_layout(xaxis, 'below')\n plot.add_layout(CategoricalAxis(), 'left')\n\n plot.add_layout(Grid(dimension=0, ticker=xaxis.ticker))\n\n m = HBar(left=\"value\", right=0, y=\"group\", height=1, fill_color=\"#3B8686\")\n mglyph = plot.add_glyph(source_pyramid_m, m)\n\n f = HBar(left=0, right=\"value\", y=\"group\", height=1, fill_color=\"#CFF09E\")\n fglyph = plot.add_glyph(source_pyramid_f, f)\n\n plot.add_layout(Legend(items=[(\"Male\" , [mglyph]), (\"Female\" , [fglyph])]))\n\n return plot\n\nsource_known = ColumnDataSource(data=dict(x=[], y=[]))\nsource_predicted = ColumnDataSource(data=dict(x=[], y=[]))\n\ndef population():\n xdr = FactorRange(factors=years)\n ydr = DataRange1d()\n y_scale = CategoricalScale()\n\n plot = Plot(x_range=xdr, y_range=ydr, y_scale=y_scale, plot_width=600, plot_height=150, toolbar_location=None)\n\n plot.add_layout(CategoricalAxis(major_label_orientation=pi / 4), 'below')\n\n known = Line(x=\"x\", y=\"y\", line_color=\"violet\", line_width=2)\n known_glyph = plot.add_glyph(source_known, known)\n\n predicted = Line(x=\"x\", y=\"y\", line_color=\"violet\", line_width=2, line_dash=\"dashed\")\n predicted_glyph = plot.add_glyph(source_predicted, predicted)\n\n legend = Legend(location=\"bottom_right\",\n items=[(\"known\", [known_glyph]), (\"predicted\", [predicted_glyph])])\n plot.add_layout(legend)\n\n return plot\n\ndef update_pyramid():\n pyramid = df[(df.Location == location) & (df.Year == year)]\n\n male = pyramid[pyramid.Sex == 
\"Male\"]\n female = pyramid[pyramid.Sex == \"Female\"]\n\n total = df.Value.sum()\n male_percent = -male.Value / total\n female_percent = female.Value / total\n\n source_pyramid_m.data = dict(\n group=[str(x) for x in male.AgeGrp.unique()],\n value=male_percent,\n )\n source_pyramid_f.data = dict(\n group=[str(x) for x in female.AgeGrp.unique()],\n value=female_percent,\n )\n\ndef update_population():\n population = df[df.Location == location].groupby(df.Year).Value.sum()\n aligned_revision = revision // 10 * 10\n\n known = population[population.index <= aligned_revision]\n predicted = population[population.index >= aligned_revision]\n\n source_known.data = dict(x=known.index.map(str), y=known.values)\n source_predicted.data = dict(x=predicted.index.map(str), y=predicted.values)\n\ndef update_data():\n update_population()\n update_pyramid()\n\ndef on_year_change(attr, old, new):\n global year\n year = int(new)\n update_data()\n\ndef on_location_change(attr, old, new):\n global location\n location = new\n update_data()\n\ndef create_layout():\n year_select = Select(title=\"Year:\", value=\"2010\", options=years)\n location_select = Select(title=\"Location:\", value=\"World\", options=locations)\n\n year_select.on_change('value', on_year_change)\n location_select.on_change('value', on_location_change)\n\n controls = WidgetBox(children=[year_select, location_select], height=150, width=600)\n layout = Column(children=[controls, pyramid(), population()])\n\n return layout\n\nlayout = create_layout()\n\nupdate_data()\n\ndocument.add_root(layout)\nsession.show(layout)\n\nif __name__ == \"__main__\":\n document.validate()\n print(\"\\npress ctrl-C to exit\")\n session.loop_until_closed()\n", "path": "examples/models/server/population.py"}], "after_files": [{"content": "from __future__ import print_function\n\nfrom math import pi\n\nfrom bokeh.client import push_session\nfrom bokeh.document import Document\nfrom bokeh.models.glyphs import Line, HBar\nfrom bokeh.models import (Plot, ColumnDataSource, DataRange1d, FactorRange,\n LinearAxis, CategoricalAxis, Grid, Legend, CategoricalScale)\nfrom bokeh.sampledata.population import load_population\nfrom bokeh.models.widgets import Select\nfrom bokeh.models.layouts import WidgetBox, Column\n\ndocument = Document()\nsession = push_session(document)\n\ndf = load_population()\nrevision = 2012\n\nyear, location = 2010, \"World\"\n\nyears = [str(x) for x in sorted(df.Year.unique())]\nlocations = sorted(df.Location.unique())\ngroups = [str(x) for x in df.AgeGrp.unique()]\ngroups.remove('80+') # remove oddball group\n\nsource_pyramid_m = ColumnDataSource(data=dict(value=[], group=[]))\nsource_pyramid_f = ColumnDataSource(data=dict(value=[], group=[]))\n\ndef pyramid():\n xdr = DataRange1d()\n ydr = FactorRange(factors=groups)\n y_scale = CategoricalScale()\n\n plot = Plot(x_range=xdr, y_range=ydr, y_scale=y_scale, plot_width=600, plot_height=500, toolbar_location=None)\n\n xaxis = LinearAxis()\n plot.add_layout(xaxis, 'below')\n plot.add_layout(CategoricalAxis(), 'left')\n\n plot.add_layout(Grid(dimension=0, ticker=xaxis.ticker))\n\n m = HBar(left=\"value\", right=0, y=\"group\", height=1, fill_color=\"#3B8686\")\n mglyph = plot.add_glyph(source_pyramid_m, m)\n\n f = HBar(left=0, right=\"value\", y=\"group\", height=1, fill_color=\"#CFF09E\")\n fglyph = plot.add_glyph(source_pyramid_f, f)\n\n plot.add_layout(Legend(items=[(\"Male\" , [mglyph]), (\"Female\" , [fglyph])]))\n\n return plot\n\nsource_known = ColumnDataSource(data=dict(x=[], 
y=[]))\nsource_predicted = ColumnDataSource(data=dict(x=[], y=[]))\n\ndef population():\n xdr = FactorRange(factors=years)\n ydr = DataRange1d()\n x_scale = CategoricalScale()\n\n plot = Plot(x_range=xdr, y_range=ydr, x_scale=x_scale, plot_width=600, plot_height=150, toolbar_location=None)\n\n plot.add_layout(CategoricalAxis(major_label_orientation=pi / 4), 'below')\n\n known = Line(x=\"x\", y=\"y\", line_color=\"violet\", line_width=2)\n known_glyph = plot.add_glyph(source_known, known)\n\n predicted = Line(x=\"x\", y=\"y\", line_color=\"violet\", line_width=2, line_dash=\"dashed\")\n predicted_glyph = plot.add_glyph(source_predicted, predicted)\n\n legend = Legend(location=\"bottom_right\",\n items=[(\"known\", [known_glyph]), (\"predicted\", [predicted_glyph])])\n plot.add_layout(legend)\n\n return plot\n\ndef update_pyramid():\n pyramid = df[(df.Location == location) & (df.Year == year)]\n\n male = pyramid[pyramid.Sex == \"Male\"]\n female = pyramid[pyramid.Sex == \"Female\"]\n\n total = df.Value.sum()\n male_percent = -male.Value / total\n female_percent = female.Value / total\n\n source_pyramid_m.data = dict(\n group=[str(x) for x in male.AgeGrp.unique()],\n value=male_percent,\n )\n source_pyramid_f.data = dict(\n group=[str(x) for x in female.AgeGrp.unique()],\n value=female_percent,\n )\n\ndef update_population():\n population = df[df.Location == location].groupby(df.Year).Value.sum()\n aligned_revision = revision // 10 * 10\n\n known = population[population.index <= aligned_revision]\n predicted = population[population.index >= aligned_revision]\n\n source_known.data = dict(x=known.index.map(str), y=known.values)\n source_predicted.data = dict(x=predicted.index.map(str), y=predicted.values)\n\ndef update_data():\n update_population()\n update_pyramid()\n\ndef on_year_change(attr, old, new):\n global year\n year = int(new)\n update_data()\n\ndef on_location_change(attr, old, new):\n global location\n location = new\n update_data()\n\ndef create_layout():\n year_select = Select(title=\"Year:\", value=\"2010\", options=years)\n location_select = Select(title=\"Location:\", value=\"World\", options=locations)\n\n year_select.on_change('value', on_year_change)\n location_select.on_change('value', on_location_change)\n\n controls = WidgetBox(children=[year_select, location_select], height=150, width=600)\n layout = Column(children=[controls, pyramid(), population()])\n\n return layout\n\nlayout = create_layout()\n\nupdate_data()\n\ndocument.add_root(layout)\nsession.show(layout)\n\nif __name__ == \"__main__\":\n document.validate()\n print(\"\\npress ctrl-C to exit\")\n session.loop_until_closed()\n", "path": "examples/models/server/population.py"}]} | 1,772 | 187 |
gh_patches_debug_762 | rasdani/github-patches | git_diff | kubeflow__pipelines-2610 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
kfp 0.1.35 tar.gz in pypi.org is missing diagnose_me directory
**What happened:**
The 0.1.35 release of kfp available on pypi.org (i.e. what is installed via `pip3 install kfp`) seems to be missing the `kfp/cli/diagnose_me` directory containing the diagnose_me modules required by the CLI. The release hosted on GitHub contains these files.
This is the tar.gz file hosted on pypi: https://files.pythonhosted.org/packages/e8/02/51dbeae211ddf1c931b2d1613db90856b7d94a53c1d9f704593dfa6253ae/kfp-0.1.35.tar.gz
If you try to install and run kfp 0.1.35 via pip it causes an error:
```
Traceback (most recent call last):
File "/Users/shenderson/venvs/kubeflow/bin/kfp", line 5, in <module>
from kfp.__main__ import main
File "/Users/shenderson/venvs/kubeflow/lib/python3.7/site-packages/kfp/__main__.py", line 15, in <module>
from .cli.cli import main
File "/Users/shenderson/venvs/kubeflow/lib/python3.7/site-packages/kfp/cli/cli.py", line 21, in <module>
from .diagnose_me_cli import diagnose_me
File "/Users/shenderson/venvs/kubeflow/lib/python3.7/site-packages/kfp/cli/diagnose_me_cli.py", line 6, in <module>
from .diagnose_me import dev_env
ModuleNotFoundError: No module named 'kfp.cli.diagnose_me'
```
**What did you expect to happen:**
All kfp modules including the diagnose_me package to be installed.
**What steps did you take:**
* Run `pip3 install --upgrade --force --no-cache-dir kfp`
* Run `kfp`
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sdk/python/setup.py`
Content:
```
1 # Copyright 2018 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import os
16 import re
17 from setuptools import setup
18
19 NAME = 'kfp'
20 #VERSION = .... Change the version in kfp/__init__.py
21
22 REQUIRES = [
23 'urllib3>=1.15,<1.25', #Fixing the version conflict with the "requests" package
24 'six >= 1.10',
25 'certifi',
26 'python-dateutil',
27 'PyYAML',
28 'google-cloud-storage>=1.13.0',
29 'kubernetes>=8.0.0, <=9.0.0',
30 'PyJWT>=1.6.4',
31 'cryptography>=2.4.2',
32 'google-auth>=1.6.1',
33 'requests_toolbelt>=0.8.0',
34 'cloudpickle==1.1.1',
35 'kfp-server-api >= 0.1.18, <= 0.1.25', #Update the upper version whenever a new version of the kfp-server-api package is released. Update the lower version when there is a breaking change in kfp-server-api.
36 'argo-models == 2.2.1a', #2.2.1a is equivalent to argo 2.2.1
37 'jsonschema >= 3.0.1',
38 'tabulate == 0.8.3',
39 'click == 7.0',
40 'Deprecated',
41 ]
42
43 def find_version(*file_path_parts):
44 here = os.path.abspath(os.path.dirname(__file__))
45 with open(os.path.join(here, *file_path_parts), 'r') as fp:
46 version_file_text = fp.read()
47
48 version_match = re.search(
49 r"^__version__ = ['\"]([^'\"]*)['\"]",
50 version_file_text,
51 re.M,
52 )
53 if version_match:
54 return version_match.group(1)
55
56 raise RuntimeError("Unable to find version string.")
57
58 setup(
59 name=NAME,
60 version=find_version("kfp", "__init__.py"),
61 description='KubeFlow Pipelines SDK',
62 author='google',
63 install_requires=REQUIRES,
64 packages=[
65 'kfp',
66 'kfp.cli',
67 'kfp.compiler',
68 'kfp.components',
69 'kfp.components.structures',
70 'kfp.components.structures.kubernetes',
71 'kfp.containers',
72 'kfp.dsl',
73 'kfp.notebook',
74 ],
75 classifiers=[
76 'Intended Audience :: Developers',
77 'Intended Audience :: Education',
78 'Intended Audience :: Science/Research',
79 'License :: OSI Approved :: Apache Software License',
80 'Programming Language :: Python :: 3',
81 'Programming Language :: Python :: 3.5',
82 'Programming Language :: Python :: 3.6',
83 'Programming Language :: Python :: 3.7',
84 'Topic :: Scientific/Engineering',
85 'Topic :: Scientific/Engineering :: Artificial Intelligence',
86 'Topic :: Software Development',
87 'Topic :: Software Development :: Libraries',
88 'Topic :: Software Development :: Libraries :: Python Modules',
89 ],
90 python_requires='>=3.5.3',
91 include_package_data=True,
92 entry_points={
93 'console_scripts': [
94 'dsl-compile = kfp.compiler.main:main', 'kfp=kfp.__main__:main'
95 ]
96 })
97
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/sdk/python/setup.py b/sdk/python/setup.py
--- a/sdk/python/setup.py
+++ b/sdk/python/setup.py
@@ -64,6 +64,7 @@
packages=[
'kfp',
'kfp.cli',
+ 'kfp.cli.diagnose_me',
'kfp.compiler',
'kfp.components',
'kfp.components.structures',
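This bug class is easy to hit: setuptools only ships the sub-packages explicitly listed in `packages`, so a newly added directory such as `kfp/cli/diagnose_me` silently disappears from the sdist and wheel. A hedged alternative sketch (not the merged fix) using the `find_namespace_packages` helper the file already imports, which discovers sub-packages automatically:

```python
# Sketch only: auto-discover every kfp sub-package instead of hand-maintaining
# the list, so a new directory like kfp/cli/diagnose_me cannot be forgotten.
from setuptools import setup, find_namespace_packages

setup(
    name="kfp",
    packages=find_namespace_packages(include=["kfp", "kfp.*"]),
    # ... all other arguments as in the original setup() call ...
)
```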
| {"golden_diff": "diff --git a/sdk/python/setup.py b/sdk/python/setup.py\n--- a/sdk/python/setup.py\n+++ b/sdk/python/setup.py\n@@ -64,6 +64,7 @@\n packages=[\n 'kfp',\n 'kfp.cli',\n+ 'kfp.cli.diagnose_me',\n 'kfp.compiler',\n 'kfp.components',\n 'kfp.components.structures',\n", "issue": "kfp 0.1.35 tar.gz in pypi.org is missing diagnose_me directory\n**What happened:**\r\nThe 0.1.35 release of kfp available on pypi.org (i.e. what is installed via `pip3 install kfp`) seems to be missing the `kfp/cli/diagnose_me` directory containing the diagnose_me modules required by the cli. The release hosted on github contains these files.\r\n\r\nThis is the tar.gz file hosted on pypi: https://files.pythonhosted.org/packages/e8/02/51dbeae211ddf1c931b2d1613db90856b7d94a53c1d9f704593dfa6253ae/kfp-0.1.35.tar.gz\r\n\r\nIf you try to install and run kfp 0.1.35 via pip it causes an error:\r\n```\r\nTraceback (most recent call last):\r\n File \"/Users/shenderson/venvs/kubeflow/bin/kfp\", line 5, in <module>\r\n from kfp.__main__ import main\r\n File \"/Users/shenderson/venvs/kubeflow/lib/python3.7/site-packages/kfp/__main__.py\", line 15, in <module>\r\n from .cli.cli import main\r\n File \"/Users/shenderson/venvs/kubeflow/lib/python3.7/site-packages/kfp/cli/cli.py\", line 21, in <module>\r\n from .diagnose_me_cli import diagnose_me\r\n File \"/Users/shenderson/venvs/kubeflow/lib/python3.7/site-packages/kfp/cli/diagnose_me_cli.py\", line 6, in <module>\r\n from .diagnose_me import dev_env\r\nModuleNotFoundError: No module named 'kfp.cli.diagnose_me'\r\n```\r\n\r\n**What did you expect to happen:**\r\nAll kfp modules including the diagnose_me package to be installed.\r\n\r\n**What steps did you take:**\r\n* Run `pip3 install --upgrade --force --no-cache-dir kfp`\r\n* Run `kfp`\r\n\r\n\r\n\n", "before_files": [{"content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\nimport re\nfrom setuptools import setup\n\nNAME = 'kfp'\n#VERSION = .... Change the version in kfp/__init__.py\n\nREQUIRES = [\n 'urllib3>=1.15,<1.25', #Fixing the version conflict with the \"requests\" package\n 'six >= 1.10',\n 'certifi',\n 'python-dateutil',\n 'PyYAML',\n 'google-cloud-storage>=1.13.0',\n 'kubernetes>=8.0.0, <=9.0.0',\n 'PyJWT>=1.6.4',\n 'cryptography>=2.4.2',\n 'google-auth>=1.6.1',\n 'requests_toolbelt>=0.8.0',\n 'cloudpickle==1.1.1',\n 'kfp-server-api >= 0.1.18, <= 0.1.25', #Update the upper version whenever a new version of the kfp-server-api package is released. 
Update the lower version when there is a breaking change in kfp-server-api.\n 'argo-models == 2.2.1a', #2.2.1a is equivalent to argo 2.2.1\n 'jsonschema >= 3.0.1',\n 'tabulate == 0.8.3',\n 'click == 7.0',\n 'Deprecated',\n]\n\ndef find_version(*file_path_parts):\n here = os.path.abspath(os.path.dirname(__file__))\n with open(os.path.join(here, *file_path_parts), 'r') as fp:\n version_file_text = fp.read()\n\n version_match = re.search(\n r\"^__version__ = ['\\\"]([^'\\\"]*)['\\\"]\",\n version_file_text,\n re.M,\n )\n if version_match:\n return version_match.group(1)\n\n raise RuntimeError(\"Unable to find version string.\")\n\nsetup(\n name=NAME,\n version=find_version(\"kfp\", \"__init__.py\"),\n description='KubeFlow Pipelines SDK',\n author='google',\n install_requires=REQUIRES,\n packages=[\n 'kfp',\n 'kfp.cli',\n 'kfp.compiler',\n 'kfp.components',\n 'kfp.components.structures',\n 'kfp.components.structures.kubernetes',\n 'kfp.containers',\n 'kfp.dsl',\n 'kfp.notebook',\n ],\n classifiers=[\n 'Intended Audience :: Developers',\n 'Intended Audience :: Education',\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Topic :: Scientific/Engineering',\n 'Topic :: Scientific/Engineering :: Artificial Intelligence',\n 'Topic :: Software Development',\n 'Topic :: Software Development :: Libraries',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n ],\n python_requires='>=3.5.3',\n include_package_data=True,\n entry_points={\n 'console_scripts': [\n 'dsl-compile = kfp.compiler.main:main', 'kfp=kfp.__main__:main'\n ]\n })\n", "path": "sdk/python/setup.py"}], "after_files": [{"content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\nimport re\nfrom setuptools import setup\n\nNAME = 'kfp'\n#VERSION = .... Change the version in kfp/__init__.py\n\nREQUIRES = [\n 'urllib3>=1.15,<1.25', #Fixing the version conflict with the \"requests\" package\n 'six >= 1.10',\n 'certifi',\n 'python-dateutil',\n 'PyYAML',\n 'google-cloud-storage>=1.13.0',\n 'kubernetes>=8.0.0, <=9.0.0',\n 'PyJWT>=1.6.4',\n 'cryptography>=2.4.2',\n 'google-auth>=1.6.1',\n 'requests_toolbelt>=0.8.0',\n 'cloudpickle==1.1.1',\n 'kfp-server-api >= 0.1.18, <= 0.1.25', #Update the upper version whenever a new version of the kfp-server-api package is released. 
Update the lower version when there is a breaking change in kfp-server-api.\n 'argo-models == 2.2.1a', #2.2.1a is equivalent to argo 2.2.1\n 'jsonschema >= 3.0.1',\n 'tabulate == 0.8.3',\n 'click == 7.0',\n 'Deprecated',\n]\n\ndef find_version(*file_path_parts):\n here = os.path.abspath(os.path.dirname(__file__))\n with open(os.path.join(here, *file_path_parts), 'r') as fp:\n version_file_text = fp.read()\n\n version_match = re.search(\n r\"^__version__ = ['\\\"]([^'\\\"]*)['\\\"]\",\n version_file_text,\n re.M,\n )\n if version_match:\n return version_match.group(1)\n\n raise RuntimeError(\"Unable to find version string.\")\n\nsetup(\n name=NAME,\n version=find_version(\"kfp\", \"__init__.py\"),\n description='KubeFlow Pipelines SDK',\n author='google',\n install_requires=REQUIRES,\n packages=[\n 'kfp',\n 'kfp.cli',\n 'kfp.cli.diagnose_me',\n 'kfp.compiler',\n 'kfp.components',\n 'kfp.components.structures',\n 'kfp.components.structures.kubernetes',\n 'kfp.containers',\n 'kfp.dsl',\n 'kfp.notebook',\n ],\n classifiers=[\n 'Intended Audience :: Developers',\n 'Intended Audience :: Education',\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Topic :: Scientific/Engineering',\n 'Topic :: Scientific/Engineering :: Artificial Intelligence',\n 'Topic :: Software Development',\n 'Topic :: Software Development :: Libraries',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n ],\n python_requires='>=3.5.3',\n include_package_data=True,\n entry_points={\n 'console_scripts': [\n 'dsl-compile = kfp.compiler.main:main', 'kfp=kfp.__main__:main'\n ]\n })\n", "path": "sdk/python/setup.py"}]} | 1,761 | 85 |
gh_patches_debug_41258 | rasdani/github-patches | git_diff | streamlink__streamlink-5774 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
plugins.artetv: error: Unable to validate response text: ValidationError(dict):
### Checklist
- [X] This is a [plugin issue](https://streamlink.github.io/plugins.html) and not [a different kind of issue](https://github.com/streamlink/streamlink/issues/new/choose)
- [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink)
- [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22)
- [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master)
### Streamlink version
streamlink 6.5.0
### Description
I fixed this issue by adding '**API_HLS_NG**' on line 51 of
`/usr/lib/python3.11/site-packages/streamlink/plugins/artetv.py`, like this:

![screenshot of the edited artetv.py](https://github.com/streamlink/streamlink/assets/19744191/b78f47ba-67b2-439b-b336-85bef7e4615a)
### Debug log
```text
error: Unable to validate response text: ValidationError(dict):
Unable to validate value of key 'data'
Context(dict):
Unable to validate value of key 'attributes'
Context(dict):
Unable to validate value of key 'streams'
Context(AnySchema):
ValidationError(AnySchema):
ValidationError(AnySchema):
ValidationError(dict):
Unable to validate value of key 'protocol'
Context(AnySchema):
ValidationError(equality):
'API_HLS_NG' does not equal 'HLS'
ValidationError(equality):
'API_HLS_NG' does not equal 'HLS_NG'
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/streamlink/plugins/artetv.py`
Content:
```
1 """
2 $description European public service channel promoting culture, including magazine shows, concerts and documentaries.
3 $url arte.tv
4 $type live, vod
5 $metadata title
6 """
7
8 import logging
9 import re
10 from operator import itemgetter
11
12 from streamlink.plugin import Plugin, pluginmatcher
13 from streamlink.plugin.api import validate
14 from streamlink.stream.hls import HLSStream
15
16
17 log = logging.getLogger(__name__)
18
19
20 @pluginmatcher(re.compile(r"""
21 https?://(?:\w+\.)?arte\.tv/(?:guide/)?
22 (?P<language>[a-z]{2})/
23 (?:
24 (?:videos/)?(?P<video_id>(?!RC-|videos)[^/]+?)/.+
25 |
26 (?:direct|live)
27 )
28 """, re.VERBOSE))
29 class ArteTV(Plugin):
30 API_URL = "https://api.arte.tv/api/player/v2/config/{0}/{1}"
31 API_TOKEN = "MzYyZDYyYmM1Y2Q3ZWRlZWFjMmIyZjZjNTRiMGY4MzY4NzBhOWQ5YjE4MGQ1NGFiODJmOTFlZDQwN2FkOTZjMQ"
32
33 def _get_streams(self):
34 language = self.match.group("language")
35 video_id = self.match.group("video_id")
36
37 json_url = self.API_URL.format(language, video_id or "LIVE")
38 headers = {
39 "Authorization": f"Bearer {self.API_TOKEN}",
40 }
41 streams, metadata = self.session.http.get(json_url, headers=headers, schema=validate.Schema(
42 validate.parse_json(),
43 {"data": {"attributes": {
44 "streams": validate.any(
45 [],
46 [
47 validate.all(
48 {
49 "url": validate.url(),
50 "slot": int,
51 "protocol": validate.any("HLS", "HLS_NG"),
52 },
53 validate.union_get("slot", "protocol", "url"),
54 ),
55 ],
56 ),
57 "metadata": {
58 "title": str,
59 "subtitle": validate.any(None, str),
60 },
61 }}},
62 validate.get(("data", "attributes")),
63 validate.union_get("streams", "metadata"),
64 ))
65
66 if not streams:
67 return
68
69 self.title = f"{metadata['title']} - {metadata['subtitle']}" if metadata["subtitle"] else metadata["title"]
70
71 for _slot, _protocol, url in sorted(streams, key=itemgetter(0)):
72 return HLSStream.parse_variant_playlist(self.session, url)
73
74
75 __plugin__ = ArteTV
76
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/streamlink/plugins/artetv.py b/src/streamlink/plugins/artetv.py
--- a/src/streamlink/plugins/artetv.py
+++ b/src/streamlink/plugins/artetv.py
@@ -2,6 +2,7 @@
$description European public service channel promoting culture, including magazine shows, concerts and documentaries.
$url arte.tv
$type live, vod
+$metadata id
$metadata title
"""
@@ -17,38 +18,41 @@
log = logging.getLogger(__name__)
-@pluginmatcher(re.compile(r"""
- https?://(?:\w+\.)?arte\.tv/(?:guide/)?
- (?P<language>[a-z]{2})/
- (?:
- (?:videos/)?(?P<video_id>(?!RC-|videos)[^/]+?)/.+
- |
- (?:direct|live)
- )
-""", re.VERBOSE))
+@pluginmatcher(
+ name="live",
+ pattern=re.compile(
+ r"https?://(?:\w+\.)?arte\.tv/(?P<language>[a-z]{2})/(?:direct|live)/?",
+ ),
+)
+@pluginmatcher(
+ name="vod",
+ pattern=re.compile(
+ r"https?://(?:\w+\.)?arte\.tv/(?:guide/)?(?P<language>[a-z]{2})/(?:videos/)?(?P<video_id>(?!RC-|videos)[^/]+?)/.+",
+ ),
+)
class ArteTV(Plugin):
- API_URL = "https://api.arte.tv/api/player/v2/config/{0}/{1}"
- API_TOKEN = "MzYyZDYyYmM1Y2Q3ZWRlZWFjMmIyZjZjNTRiMGY4MzY4NzBhOWQ5YjE4MGQ1NGFiODJmOTFlZDQwN2FkOTZjMQ"
+ API_URL = "https://api.arte.tv/api/player/v2/config/{language}/{id}"
def _get_streams(self):
- language = self.match.group("language")
- video_id = self.match.group("video_id")
+ self.id = self.match["video_id"] if self.matches["vod"] else "LIVE"
- json_url = self.API_URL.format(language, video_id or "LIVE")
- headers = {
- "Authorization": f"Bearer {self.API_TOKEN}",
- }
- streams, metadata = self.session.http.get(json_url, headers=headers, schema=validate.Schema(
+ json_url = self.API_URL.format(
+ language=self.match["language"],
+ id=self.id,
+ )
+ streams, metadata = self.session.http.get(json_url, schema=validate.Schema(
validate.parse_json(),
- {"data": {"attributes": {
+ {"data": {"attributes": dict}},
+ validate.get(("data", "attributes")),
+ {
"streams": validate.any(
[],
[
validate.all(
{
- "url": validate.url(),
"slot": int,
- "protocol": validate.any("HLS", "HLS_NG"),
+ "protocol": str,
+ "url": validate.url(),
},
validate.union_get("slot", "protocol", "url"),
),
@@ -58,17 +62,15 @@
"title": str,
"subtitle": validate.any(None, str),
},
- }}},
- validate.get(("data", "attributes")),
+ },
validate.union_get("streams", "metadata"),
))
- if not streams:
- return
-
self.title = f"{metadata['title']} - {metadata['subtitle']}" if metadata["subtitle"] else metadata["title"]
- for _slot, _protocol, url in sorted(streams, key=itemgetter(0)):
+ for _slot, protocol, url in sorted(streams, key=itemgetter(0)):
+ if "HLS" not in protocol:
+ continue
return HLSStream.parse_variant_playlist(self.session, url)
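The patch deliberately loosens `protocol` to a plain `str` in the validation schema and moves the decision to playback time, where any protocol containing `HLS` (e.g. `HLS`, `HLS_NG`, `API_HLS_NG`) is accepted. A standalone sketch of that filtering step, using hypothetical stream tuples:

```python
from operator import itemgetter

# Hypothetical (slot, protocol, url) tuples, shaped like the output of the
# schema's validate.union_get("slot", "protocol", "url").
streams = [
    (2, "API_HLS_NG", "https://example.invalid/b.m3u8"),
    (1, "HLS", "https://example.invalid/a.m3u8"),
    (3, "RTMP", "rtmp://example.invalid/c"),
]

for _slot, protocol, url in sorted(streams, key=itemgetter(0)):
    if "HLS" not in protocol:  # substring match tolerates new protocol names
        continue
    print("first playable HLS URL:", url)
    break
```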
| {"golden_diff": "diff --git a/src/streamlink/plugins/artetv.py b/src/streamlink/plugins/artetv.py\n--- a/src/streamlink/plugins/artetv.py\n+++ b/src/streamlink/plugins/artetv.py\n@@ -2,6 +2,7 @@\n $description European public service channel promoting culture, including magazine shows, concerts and documentaries.\n $url arte.tv\n $type live, vod\n+$metadata id\n $metadata title\n \"\"\"\n \n@@ -17,38 +18,41 @@\n log = logging.getLogger(__name__)\n \n \n-@pluginmatcher(re.compile(r\"\"\"\n- https?://(?:\\w+\\.)?arte\\.tv/(?:guide/)?\n- (?P<language>[a-z]{2})/\n- (?:\n- (?:videos/)?(?P<video_id>(?!RC-|videos)[^/]+?)/.+\n- |\n- (?:direct|live)\n- )\n-\"\"\", re.VERBOSE))\n+@pluginmatcher(\n+ name=\"live\",\n+ pattern=re.compile(\n+ r\"https?://(?:\\w+\\.)?arte\\.tv/(?P<language>[a-z]{2})/(?:direct|live)/?\",\n+ ),\n+)\n+@pluginmatcher(\n+ name=\"vod\",\n+ pattern=re.compile(\n+ r\"https?://(?:\\w+\\.)?arte\\.tv/(?:guide/)?(?P<language>[a-z]{2})/(?:videos/)?(?P<video_id>(?!RC-|videos)[^/]+?)/.+\",\n+ ),\n+)\n class ArteTV(Plugin):\n- API_URL = \"https://api.arte.tv/api/player/v2/config/{0}/{1}\"\n- API_TOKEN = \"MzYyZDYyYmM1Y2Q3ZWRlZWFjMmIyZjZjNTRiMGY4MzY4NzBhOWQ5YjE4MGQ1NGFiODJmOTFlZDQwN2FkOTZjMQ\"\n+ API_URL = \"https://api.arte.tv/api/player/v2/config/{language}/{id}\"\n \n def _get_streams(self):\n- language = self.match.group(\"language\")\n- video_id = self.match.group(\"video_id\")\n+ self.id = self.match[\"video_id\"] if self.matches[\"vod\"] else \"LIVE\"\n \n- json_url = self.API_URL.format(language, video_id or \"LIVE\")\n- headers = {\n- \"Authorization\": f\"Bearer {self.API_TOKEN}\",\n- }\n- streams, metadata = self.session.http.get(json_url, headers=headers, schema=validate.Schema(\n+ json_url = self.API_URL.format(\n+ language=self.match[\"language\"],\n+ id=self.id,\n+ )\n+ streams, metadata = self.session.http.get(json_url, schema=validate.Schema(\n validate.parse_json(),\n- {\"data\": {\"attributes\": {\n+ {\"data\": {\"attributes\": dict}},\n+ validate.get((\"data\", \"attributes\")),\n+ {\n \"streams\": validate.any(\n [],\n [\n validate.all(\n {\n- \"url\": validate.url(),\n \"slot\": int,\n- \"protocol\": validate.any(\"HLS\", \"HLS_NG\"),\n+ \"protocol\": str,\n+ \"url\": validate.url(),\n },\n validate.union_get(\"slot\", \"protocol\", \"url\"),\n ),\n@@ -58,17 +62,15 @@\n \"title\": str,\n \"subtitle\": validate.any(None, str),\n },\n- }}},\n- validate.get((\"data\", \"attributes\")),\n+ },\n validate.union_get(\"streams\", \"metadata\"),\n ))\n \n- if not streams:\n- return\n-\n self.title = f\"{metadata['title']} - {metadata['subtitle']}\" if metadata[\"subtitle\"] else metadata[\"title\"]\n \n- for _slot, _protocol, url in sorted(streams, key=itemgetter(0)):\n+ for _slot, protocol, url in sorted(streams, key=itemgetter(0)):\n+ if \"HLS\" not in protocol:\n+ continue\n return HLSStream.parse_variant_playlist(self.session, url)\n", "issue": "plugins.artetv: error: Unable to validate response text: ValidationError(dict):\n### Checklist\r\n\r\n- [X] This is a [plugin issue](https://streamlink.github.io/plugins.html) and not [a different kind of issue](https://github.com/streamlink/streamlink/issues/new/choose)\r\n- [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink)\r\n- [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22)\r\n- [X] [I have checked the commit log of the 
master branch](https://github.com/streamlink/streamlink/commits/master)\r\n\r\n### Streamlink version\r\nstreamlink 6.5.0\r\n\r\n### Description\r\n\r\nI fix this issue \r\n```\r\nby adding '**API_HLS_NG**' in line 51 of file \r\n`/usr/lib/python3.11/site-packages/streamlink/plugins/artetv.py`\r\nlike this :\r\n```\r\n\r\nlink/streamlink/assets/19744191/b78f47ba-67b2-439b-b336-85bef7e4615a)\r\n\r\n### Debug log\r\n\r\n```text\r\nerror: Unable to validate response text: ValidationError(dict):\r\n Unable to validate value of key 'data'\r\n Context(dict):\r\n Unable to validate value of key 'attributes'\r\n Context(dict):\r\n Unable to validate value of key 'streams'\r\n Context(AnySchema):\r\n ValidationError(AnySchema):\r\n ValidationError(AnySchema):\r\n ValidationError(dict):\r\n Unable to validate value of key 'protocol'\r\n Context(AnySchema):\r\n ValidationError(equality):\r\n 'API_HLS_NG' does not equal 'HLS'\r\n ValidationError(equality):\r\n 'API_HLS_NG' does not equal 'HLS_NG'\r\n```\r\n\n", "before_files": [{"content": "\"\"\"\n$description European public service channel promoting culture, including magazine shows, concerts and documentaries.\n$url arte.tv\n$type live, vod\n$metadata title\n\"\"\"\n\nimport logging\nimport re\nfrom operator import itemgetter\n\nfrom streamlink.plugin import Plugin, pluginmatcher\nfrom streamlink.plugin.api import validate\nfrom streamlink.stream.hls import HLSStream\n\n\nlog = logging.getLogger(__name__)\n\n\n@pluginmatcher(re.compile(r\"\"\"\n https?://(?:\\w+\\.)?arte\\.tv/(?:guide/)?\n (?P<language>[a-z]{2})/\n (?:\n (?:videos/)?(?P<video_id>(?!RC-|videos)[^/]+?)/.+\n |\n (?:direct|live)\n )\n\"\"\", re.VERBOSE))\nclass ArteTV(Plugin):\n API_URL = \"https://api.arte.tv/api/player/v2/config/{0}/{1}\"\n API_TOKEN = \"MzYyZDYyYmM1Y2Q3ZWRlZWFjMmIyZjZjNTRiMGY4MzY4NzBhOWQ5YjE4MGQ1NGFiODJmOTFlZDQwN2FkOTZjMQ\"\n\n def _get_streams(self):\n language = self.match.group(\"language\")\n video_id = self.match.group(\"video_id\")\n\n json_url = self.API_URL.format(language, video_id or \"LIVE\")\n headers = {\n \"Authorization\": f\"Bearer {self.API_TOKEN}\",\n }\n streams, metadata = self.session.http.get(json_url, headers=headers, schema=validate.Schema(\n validate.parse_json(),\n {\"data\": {\"attributes\": {\n \"streams\": validate.any(\n [],\n [\n validate.all(\n {\n \"url\": validate.url(),\n \"slot\": int,\n \"protocol\": validate.any(\"HLS\", \"HLS_NG\"),\n },\n validate.union_get(\"slot\", \"protocol\", \"url\"),\n ),\n ],\n ),\n \"metadata\": {\n \"title\": str,\n \"subtitle\": validate.any(None, str),\n },\n }}},\n validate.get((\"data\", \"attributes\")),\n validate.union_get(\"streams\", \"metadata\"),\n ))\n\n if not streams:\n return\n\n self.title = f\"{metadata['title']} - {metadata['subtitle']}\" if metadata[\"subtitle\"] else metadata[\"title\"]\n\n for _slot, _protocol, url in sorted(streams, key=itemgetter(0)):\n return HLSStream.parse_variant_playlist(self.session, url)\n\n\n__plugin__ = ArteTV\n", "path": "src/streamlink/plugins/artetv.py"}], "after_files": [{"content": "\"\"\"\n$description European public service channel promoting culture, including magazine shows, concerts and documentaries.\n$url arte.tv\n$type live, vod\n$metadata id\n$metadata title\n\"\"\"\n\nimport logging\nimport re\nfrom operator import itemgetter\n\nfrom streamlink.plugin import Plugin, pluginmatcher\nfrom streamlink.plugin.api import validate\nfrom streamlink.stream.hls import HLSStream\n\n\nlog = logging.getLogger(__name__)\n\n\n@pluginmatcher(\n 
name=\"live\",\n pattern=re.compile(\n r\"https?://(?:\\w+\\.)?arte\\.tv/(?P<language>[a-z]{2})/(?:direct|live)/?\",\n ),\n)\n@pluginmatcher(\n name=\"vod\",\n pattern=re.compile(\n r\"https?://(?:\\w+\\.)?arte\\.tv/(?:guide/)?(?P<language>[a-z]{2})/(?:videos/)?(?P<video_id>(?!RC-|videos)[^/]+?)/.+\",\n ),\n)\nclass ArteTV(Plugin):\n API_URL = \"https://api.arte.tv/api/player/v2/config/{language}/{id}\"\n\n def _get_streams(self):\n self.id = self.match[\"video_id\"] if self.matches[\"vod\"] else \"LIVE\"\n\n json_url = self.API_URL.format(\n language=self.match[\"language\"],\n id=self.id,\n )\n streams, metadata = self.session.http.get(json_url, schema=validate.Schema(\n validate.parse_json(),\n {\"data\": {\"attributes\": dict}},\n validate.get((\"data\", \"attributes\")),\n {\n \"streams\": validate.any(\n [],\n [\n validate.all(\n {\n \"slot\": int,\n \"protocol\": str,\n \"url\": validate.url(),\n },\n validate.union_get(\"slot\", \"protocol\", \"url\"),\n ),\n ],\n ),\n \"metadata\": {\n \"title\": str,\n \"subtitle\": validate.any(None, str),\n },\n },\n validate.union_get(\"streams\", \"metadata\"),\n ))\n\n self.title = f\"{metadata['title']} - {metadata['subtitle']}\" if metadata[\"subtitle\"] else metadata[\"title\"]\n\n for _slot, protocol, url in sorted(streams, key=itemgetter(0)):\n if \"HLS\" not in protocol:\n continue\n return HLSStream.parse_variant_playlist(self.session, url)\n\n\n__plugin__ = ArteTV\n", "path": "src/streamlink/plugins/artetv.py"}]} | 1,452 | 909 |
gh_patches_debug_20703 | rasdani/github-patches | git_diff | OCHA-DAP__hdx-ckan-1817 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Search: search doesn't appear to use the organization
Example: the MapAction org has two public datasets, but searching "mapaction" or MapAction returns 0 results.
Other org searches will return results, but this is probably because the name of the org is mentioned in other metadata.
To do:
1. confirm that search queries from the homepage or main search bar are not using organizations
2. if that is the source of the problem, add org to the search queries
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ckanext-hdx_search/ckanext/hdx_search/plugin.py`
Content:
```
1 import logging
2 import ckan.plugins as plugins
3 import ckan.plugins.toolkit as tk
4 import ckan.lib.plugins as lib_plugins
5
6
7 class HDXSearchPlugin(plugins.SingletonPlugin):
8 plugins.implements(plugins.IConfigurer, inherit=False)
9 plugins.implements(plugins.IRoutes, inherit=True)
10 plugins.implements(plugins.ITemplateHelpers, inherit=False)
11 plugins.implements(plugins.IPackageController, inherit=True)
12
13 def update_config(self, config):
14 tk.add_template_directory(config, 'templates')
15
16 def get_helpers(self):
17 return {}
18
19 def before_map(self, map):
20 map.connect('search', '/search',
21 controller='ckanext.hdx_search.controllers.search_controller:HDXSearchController', action='search')
22 map.connect('simple_search',
23 '/dataset', controller='ckanext.hdx_search.controllers.simple_search_controller:HDXSimpleSearchController', action='package_search')
24 return map
25
26 def after_map(self, map):
27 map.connect('search', '/search',
28 controller='ckanext.hdx_search.controllers.search_controller:HDXSearchController', action='search')
29 map.connect('simple_search',
30 '/dataset', controller='ckanext.hdx_search.controllers.simple_search_controller:HDXSimpleSearchController', action='package_search')
31 return map
32
33 def before_search(self, search_params):
34 if 'facet.field' in search_params and 'vocab_Topics' not in search_params['facet.field']:
35 search_params['facet.field'].append('vocab_Topics')
36
37 # If indicator flag is set, search only that type
38 if 'ext_indicator' in search_params['extras']:
39 if int(search_params['extras']['ext_indicator']) == 1:
40 search_params['fq'] = search_params['fq'] + ' +extras_indicator:1'
41 elif int(search_params['extras']['ext_indicator']) == 0:
42 search_params['fq'] = search_params[
43 'fq'] + ' -extras_indicator:1'
44 return search_params
45
46 def after_search(self, search_results, search_params):
47 return search_results
48
49 def before_view(self, pkg_dict):
50 return pkg_dict
51
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ckanext-hdx_search/ckanext/hdx_search/plugin.py b/ckanext-hdx_search/ckanext/hdx_search/plugin.py
--- a/ckanext-hdx_search/ckanext/hdx_search/plugin.py
+++ b/ckanext-hdx_search/ckanext/hdx_search/plugin.py
@@ -1,8 +1,13 @@
-import logging
+import logging, re
import ckan.plugins as plugins
import ckan.plugins.toolkit as tk
import ckan.lib.plugins as lib_plugins
+def convert_country(q):
+ for c in tk.get_action('group_list')({'user':'127.0.0.1'},{'all_fields': True}):
+ if re.findall(c['display_name'].lower(),q.lower()):
+ q += ' '+c['name']
+ return q
class HDXSearchPlugin(plugins.SingletonPlugin):
plugins.implements(plugins.IConfigurer, inherit=False)
@@ -31,6 +36,7 @@
return map
def before_search(self, search_params):
+ search_params['q'] = convert_country(search_params['q'])
if 'facet.field' in search_params and 'vocab_Topics' not in search_params['facet.field']:
search_params['facet.field'].append('vocab_Topics')
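The merged change expands the free-text query with matching CKAN *group* names via `group_list`. The same query-expansion idea applied to organizations would look roughly like the sketch below; `organization_list` is a standard CKAN action, but wiring it in here is an assumption for illustration, not part of the merged patch:

```python
import re

def expand_query_with_orgs(q, get_action):
    # Sketch: append the machine name of every organization whose display
    # name occurs in the query, so the Solr search can match on it.
    orgs = get_action('organization_list')({'user': '127.0.0.1'},
                                           {'all_fields': True})
    for org in orgs:
        if re.search(re.escape(org['display_name'].lower()), q.lower()):
            q += ' ' + org['name']  # e.g. "MapAction" -> "MapAction mapaction"
    return q
```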
| {"golden_diff": "diff --git a/ckanext-hdx_search/ckanext/hdx_search/plugin.py b/ckanext-hdx_search/ckanext/hdx_search/plugin.py\n--- a/ckanext-hdx_search/ckanext/hdx_search/plugin.py\n+++ b/ckanext-hdx_search/ckanext/hdx_search/plugin.py\n@@ -1,8 +1,13 @@\n-import logging\n+import logging, re\n import ckan.plugins as plugins\n import ckan.plugins.toolkit as tk\n import ckan.lib.plugins as lib_plugins\n \n+def convert_country(q):\n+ for c in tk.get_action('group_list')({'user':'127.0.0.1'},{'all_fields': True}):\n+ if re.findall(c['display_name'].lower(),q.lower()):\n+ q += ' '+c['name']\n+ return q\n \n class HDXSearchPlugin(plugins.SingletonPlugin):\n plugins.implements(plugins.IConfigurer, inherit=False)\n@@ -31,6 +36,7 @@\n return map\n \n def before_search(self, search_params):\n+ search_params['q'] = convert_country(search_params['q'])\n if 'facet.field' in search_params and 'vocab_Topics' not in search_params['facet.field']:\n search_params['facet.field'].append('vocab_Topics')\n", "issue": "Search: search doesn't appear to use the organization\nExample: the MapAction org has two public datasets, but searching \"mapaction\" or MapAction returns 0 results. \n\nOther org searches will return results, but this is probably because the name of the org is mentioned in other metadata. \n\nTo do: \n1. confirm that search queries from the homepage or main search bar are not using organizations\n2. if that is the source of the problem, add org to the search queries\n\n", "before_files": [{"content": "import logging\nimport ckan.plugins as plugins\nimport ckan.plugins.toolkit as tk\nimport ckan.lib.plugins as lib_plugins\n\n\nclass HDXSearchPlugin(plugins.SingletonPlugin):\n plugins.implements(plugins.IConfigurer, inherit=False)\n plugins.implements(plugins.IRoutes, inherit=True)\n plugins.implements(plugins.ITemplateHelpers, inherit=False)\n plugins.implements(plugins.IPackageController, inherit=True)\n\n def update_config(self, config):\n tk.add_template_directory(config, 'templates')\n\n def get_helpers(self):\n return {}\n\n def before_map(self, map):\n map.connect('search', '/search',\n controller='ckanext.hdx_search.controllers.search_controller:HDXSearchController', action='search')\n map.connect('simple_search',\n '/dataset', controller='ckanext.hdx_search.controllers.simple_search_controller:HDXSimpleSearchController', action='package_search')\n return map\n\n def after_map(self, map):\n map.connect('search', '/search',\n controller='ckanext.hdx_search.controllers.search_controller:HDXSearchController', action='search')\n map.connect('simple_search',\n '/dataset', controller='ckanext.hdx_search.controllers.simple_search_controller:HDXSimpleSearchController', action='package_search')\n return map\n\n def before_search(self, search_params):\n if 'facet.field' in search_params and 'vocab_Topics' not in search_params['facet.field']:\n search_params['facet.field'].append('vocab_Topics')\n\n # If indicator flag is set, search only that type\n if 'ext_indicator' in search_params['extras']:\n if int(search_params['extras']['ext_indicator']) == 1:\n search_params['fq'] = search_params['fq'] + ' +extras_indicator:1'\n elif int(search_params['extras']['ext_indicator']) == 0:\n search_params['fq'] = search_params[\n 'fq'] + ' -extras_indicator:1'\n return search_params\n\n def after_search(self, search_results, search_params):\n return search_results\n\n def before_view(self, pkg_dict):\n return pkg_dict\n", "path": "ckanext-hdx_search/ckanext/hdx_search/plugin.py"}], "after_files": [{"content": 
"import logging, re\nimport ckan.plugins as plugins\nimport ckan.plugins.toolkit as tk\nimport ckan.lib.plugins as lib_plugins\n\ndef convert_country(q):\n for c in tk.get_action('group_list')({'user':'127.0.0.1'},{'all_fields': True}):\n if re.findall(c['display_name'].lower(),q.lower()):\n q += ' '+c['name']\n return q\n\nclass HDXSearchPlugin(plugins.SingletonPlugin):\n plugins.implements(plugins.IConfigurer, inherit=False)\n plugins.implements(plugins.IRoutes, inherit=True)\n plugins.implements(plugins.ITemplateHelpers, inherit=False)\n plugins.implements(plugins.IPackageController, inherit=True)\n\n def update_config(self, config):\n tk.add_template_directory(config, 'templates')\n\n def get_helpers(self):\n return {}\n\n def before_map(self, map):\n map.connect('search', '/search',\n controller='ckanext.hdx_search.controllers.search_controller:HDXSearchController', action='search')\n map.connect('simple_search',\n '/dataset', controller='ckanext.hdx_search.controllers.simple_search_controller:HDXSimpleSearchController', action='package_search')\n return map\n\n def after_map(self, map):\n map.connect('search', '/search',\n controller='ckanext.hdx_search.controllers.search_controller:HDXSearchController', action='search')\n map.connect('simple_search',\n '/dataset', controller='ckanext.hdx_search.controllers.simple_search_controller:HDXSimpleSearchController', action='package_search')\n return map\n\n def before_search(self, search_params):\n search_params['q'] = convert_country(search_params['q'])\n if 'facet.field' in search_params and 'vocab_Topics' not in search_params['facet.field']:\n search_params['facet.field'].append('vocab_Topics')\n\n # If indicator flag is set, search only that type\n if 'ext_indicator' in search_params['extras']:\n if int(search_params['extras']['ext_indicator']) == 1:\n search_params['fq'] = search_params['fq'] + ' +extras_indicator:1'\n elif int(search_params['extras']['ext_indicator']) == 0:\n search_params['fq'] = search_params[\n 'fq'] + ' -extras_indicator:1'\n return search_params\n\n def after_search(self, search_results, search_params):\n return search_results\n\n def before_view(self, pkg_dict):\n return pkg_dict\n", "path": "ckanext-hdx_search/ckanext/hdx_search/plugin.py"}]} | 918 | 287 |
gh_patches_debug_11786 | rasdani/github-patches | git_diff | dbt-labs__dbt-core-2877 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[BigQuery] Allow more recent versions of google-api-core?
### Describe the feature
Currently dbt-bigquery has [an upper limit of 1.16 on `google-api-core`](https://github.com/fishtown-analytics/dbt/blob/v0.18.1b3/plugins/bigquery/setup.py#L53). That release is from Jan of this year.
Would it be possible to loosen that?
While it's clearly not dbt's responsibility for us to be able to install arbitrary packages, here's an example where we can't install `google-cloud-bigquery-datatransfer` because of this restriction:
```
[SolverProblemError]
Because no versions of google-cloud-bigquery-datatransfer match >2.0.0,<3.0.0
and google-cloud-bigquery-datatransfer (2.0.0) depends on google-api-core (>=1.22.2,<2.0.0dev), google-cloud-bigquery-datatransfer (>=2.0.0,<3.0.0) requires google-api-core (>=1.22.2,<2.0.0dev).
And because dbt-bigquery (0.18.0) depends on google-api-core (>=1.16.0,<1.17.0), google-cloud-bigquery-datatransfer (>=2.0.0,<3.0.0) is incompatible with dbt-bigquery (0.18.0).
And because dbt (0.18.0) depends on dbt-bigquery (0.18.0)
and no versions of dbt match >0.18.0,<0.19.0, google-cloud-bigquery-datatransfer (>=2.0.0,<3.0.0) is incompatible with dbt (>=0.18.0,<0.19.0).
So, because {repo} depends on both dbt (^0.18.0) and google-cloud-bigquery-datatransfer (^2.0.0), version solving failed.
```
Thanks as ever for the awesome product!
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `plugins/bigquery/setup.py`
Content:
```
1 #!/usr/bin/env python
2 import os
3 import sys
4
5 if sys.version_info < (3, 6):
6 print('Error: dbt does not support this version of Python.')
7 print('Please upgrade to Python 3.6 or higher.')
8 sys.exit(1)
9
10
11 from setuptools import setup
12 try:
13 from setuptools import find_namespace_packages
14 except ImportError:
15 # the user has a downlevel version of setuptools.
16 print('Error: dbt requires setuptools v40.1.0 or higher.')
17 print('Please upgrade setuptools with "pip install --upgrade setuptools" '
18 'and try again')
19 sys.exit(1)
20
21
22 package_name = "dbt-bigquery"
23 package_version = "0.19.0b1"
24 description = """The bigquery adapter plugin for dbt (data build tool)"""
25
26 this_directory = os.path.abspath(os.path.dirname(__file__))
27 with open(os.path.join(this_directory, 'README.md')) as f:
28 long_description = f.read()
29
30 setup(
31 name=package_name,
32 version=package_version,
33 description=description,
34 long_description=long_description,
35 long_description_content_type='text/markdown',
36 author="Fishtown Analytics",
37 author_email="[email protected]",
38 url="https://github.com/fishtown-analytics/dbt",
39 packages=find_namespace_packages(include=['dbt', 'dbt.*']),
40 package_data={
41 'dbt': [
42 'include/bigquery/dbt_project.yml',
43 'include/bigquery/sample_profiles.yml',
44 'include/bigquery/macros/*.sql',
45 'include/bigquery/macros/**/*.sql',
46 ]
47 },
48 install_requires=[
49 'dbt-core=={}'.format(package_version),
50 'protobuf>=3.6.0,<3.12',
51 'google-cloud-core>=1.3.0,<1.4',
52 'google-cloud-bigquery>=1.25.0,<1.26.0',
53 'google-api-core>=1.16.0,<1.17.0',
54 'googleapis-common-protos>=1.6.0,<1.7.0',
55 'six>=1.14.0',
56 ],
57 zip_safe=False,
58 classifiers=[
59 'Development Status :: 5 - Production/Stable',
60
61 'License :: OSI Approved :: Apache Software License',
62
63 'Operating System :: Microsoft :: Windows',
64 'Operating System :: MacOS :: MacOS X',
65 'Operating System :: POSIX :: Linux',
66
67 'Programming Language :: Python :: 3.6',
68 'Programming Language :: Python :: 3.7',
69 'Programming Language :: Python :: 3.8',
70 ],
71 python_requires=">=3.6.2",
72 )
73
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/plugins/bigquery/setup.py b/plugins/bigquery/setup.py
--- a/plugins/bigquery/setup.py
+++ b/plugins/bigquery/setup.py
@@ -47,11 +47,13 @@
},
install_requires=[
'dbt-core=={}'.format(package_version),
- 'protobuf>=3.6.0,<3.12',
- 'google-cloud-core>=1.3.0,<1.4',
- 'google-cloud-bigquery>=1.25.0,<1.26.0',
- 'google-api-core>=1.16.0,<1.17.0',
- 'googleapis-common-protos>=1.6.0,<1.7.0',
+ 'protobuf>=3.13.0,<4',
+ # These are more tightly pinned, as they have a track record of
+ # breaking changes in minor releases.
+ 'google-cloud-core>=1.3.0,<1.5',
+ 'google-cloud-bigquery>=1.25.0,<2.4',
+ 'google-api-core>=1.16.0,<1.24',
+ 'googleapis-common-protos>=1.6.0,<1.53',
'six>=1.14.0',
],
zip_safe=False,
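The resolver failure quoted in the issue is just an empty intersection of version ranges: `>=1.16.0,<1.17.0` (the old dbt pin) and `>=1.22.2,<2.0.0dev` (the datatransfer requirement) admit no common `google-api-core` release. That can be checked directly with the `packaging` library:

```python
from packaging.specifiers import SpecifierSet

dbt_pin = SpecifierSet(">=1.16.0,<1.17.0")           # dbt-bigquery before the patch
transfer_pin = SpecifierSet(">=1.22.2,<2.0.0.dev0")  # bigquery-datatransfer 2.0.0

candidates = ["1.16.0", "1.16.1", "1.22.2", "1.23.0"]
print([v for v in candidates if v in dbt_pin and v in transfer_pin])
# [] -> no version satisfies both ranges, so pip/poetry solving fails
```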
| {"golden_diff": "diff --git a/plugins/bigquery/setup.py b/plugins/bigquery/setup.py\n--- a/plugins/bigquery/setup.py\n+++ b/plugins/bigquery/setup.py\n@@ -47,11 +47,13 @@\n },\n install_requires=[\n 'dbt-core=={}'.format(package_version),\n- 'protobuf>=3.6.0,<3.12',\n- 'google-cloud-core>=1.3.0,<1.4',\n- 'google-cloud-bigquery>=1.25.0,<1.26.0',\n- 'google-api-core>=1.16.0,<1.17.0',\n- 'googleapis-common-protos>=1.6.0,<1.7.0',\n+ 'protobuf>=3.13.0,<4',\n+ # These are more tightly pinned, as they have a track record of\n+ # breaking changes in minor releases.\n+ 'google-cloud-core>=1.3.0,<1.5',\n+ 'google-cloud-bigquery>=1.25.0,<2.4',\n+ 'google-api-core>=1.16.0,<1.24',\n+ 'googleapis-common-protos>=1.6.0,<1.53',\n 'six>=1.14.0',\n ],\n zip_safe=False,\n", "issue": "[BigQuery] Allow more recent versions of google-api-core?\n### Describe the feature\r\n\r\nCurrently dbt-bigquery has [an upper limit of 1.16 on `google-api-core`](https://github.com/fishtown-analytics/dbt/blob/v0.18.1b3/plugins/bigquery/setup.py#L53). That release is from Jan of this year.\r\n\r\nWould it be possible to loosen that?\r\n\r\nWhile it's clearly not dbt's responsibility for us to be able to install arbitrary packages, here's an example where we can't instally `google-cloud-bigquery-datatransfer` because of this restriction:\r\n\r\n```\r\n[SolverProblemError]\r\nBecause no versions of google-cloud-bigquery-datatransfer match >2.0.0,<3.0.0\r\n and google-cloud-bigquery-datatransfer (2.0.0) depends on google-api-core (>=1.22.2,<2.0.0dev), google-cloud-bigquery-datatransfer (>=2.0.0,<3.0.0) requires google-api-core (>=1.22.2,<2.0.0dev).\r\nAnd because dbt-bigquery (0.18.0) depends on google-api-core (>=1.16.0,<1.17.0), google-cloud-bigquery-datatransfer (>=2.0.0,<3.0.0) is incompatible with dbt-bigquery (0.18.0).\r\nAnd because dbt (0.18.0) depends on dbt-bigquery (0.18.0)\r\n and no versions of dbt match >0.18.0,<0.19.0, google-cloud-bigquery-datatransfer (>=2.0.0,<3.0.0) is incompatible with dbt (>=0.18.0,<0.19.0).\r\nSo, because {repo} depends on both dbt (^0.18.0) and google-cloud-bigquery-datatransfer (^2.0.0), version solving failed.\r\n```\r\n\r\nThanks as ever for the awesome product!\n", "before_files": [{"content": "#!/usr/bin/env python\nimport os\nimport sys\n\nif sys.version_info < (3, 6):\n print('Error: dbt does not support this version of Python.')\n print('Please upgrade to Python 3.6 or higher.')\n sys.exit(1)\n\n\nfrom setuptools import setup\ntry:\n from setuptools import find_namespace_packages\nexcept ImportError:\n # the user has a downlevel version of setuptools.\n print('Error: dbt requires setuptools v40.1.0 or higher.')\n print('Please upgrade setuptools with \"pip install --upgrade setuptools\" '\n 'and try again')\n sys.exit(1)\n\n\npackage_name = \"dbt-bigquery\"\npackage_version = \"0.19.0b1\"\ndescription = \"\"\"The bigquery adapter plugin for dbt (data build tool)\"\"\"\n\nthis_directory = os.path.abspath(os.path.dirname(__file__))\nwith open(os.path.join(this_directory, 'README.md')) as f:\n long_description = f.read()\n\nsetup(\n name=package_name,\n version=package_version,\n description=description,\n long_description=long_description,\n long_description_content_type='text/markdown',\n author=\"Fishtown Analytics\",\n author_email=\"[email protected]\",\n url=\"https://github.com/fishtown-analytics/dbt\",\n packages=find_namespace_packages(include=['dbt', 'dbt.*']),\n package_data={\n 'dbt': [\n 'include/bigquery/dbt_project.yml',\n 'include/bigquery/sample_profiles.yml',\n 
'include/bigquery/macros/*.sql',\n 'include/bigquery/macros/**/*.sql',\n ]\n },\n install_requires=[\n 'dbt-core=={}'.format(package_version),\n 'protobuf>=3.6.0,<3.12',\n 'google-cloud-core>=1.3.0,<1.4',\n 'google-cloud-bigquery>=1.25.0,<1.26.0',\n 'google-api-core>=1.16.0,<1.17.0',\n 'googleapis-common-protos>=1.6.0,<1.7.0',\n 'six>=1.14.0',\n ],\n zip_safe=False,\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n\n 'License :: OSI Approved :: Apache Software License',\n\n 'Operating System :: Microsoft :: Windows',\n 'Operating System :: MacOS :: MacOS X',\n 'Operating System :: POSIX :: Linux',\n\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n ],\n python_requires=\">=3.6.2\",\n)\n", "path": "plugins/bigquery/setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\nimport os\nimport sys\n\nif sys.version_info < (3, 6):\n print('Error: dbt does not support this version of Python.')\n print('Please upgrade to Python 3.6 or higher.')\n sys.exit(1)\n\n\nfrom setuptools import setup\ntry:\n from setuptools import find_namespace_packages\nexcept ImportError:\n # the user has a downlevel version of setuptools.\n print('Error: dbt requires setuptools v40.1.0 or higher.')\n print('Please upgrade setuptools with \"pip install --upgrade setuptools\" '\n 'and try again')\n sys.exit(1)\n\n\npackage_name = \"dbt-bigquery\"\npackage_version = \"0.19.0b1\"\ndescription = \"\"\"The bigquery adapter plugin for dbt (data build tool)\"\"\"\n\nthis_directory = os.path.abspath(os.path.dirname(__file__))\nwith open(os.path.join(this_directory, 'README.md')) as f:\n long_description = f.read()\n\nsetup(\n name=package_name,\n version=package_version,\n description=description,\n long_description=long_description,\n long_description_content_type='text/markdown',\n author=\"Fishtown Analytics\",\n author_email=\"[email protected]\",\n url=\"https://github.com/fishtown-analytics/dbt\",\n packages=find_namespace_packages(include=['dbt', 'dbt.*']),\n package_data={\n 'dbt': [\n 'include/bigquery/dbt_project.yml',\n 'include/bigquery/sample_profiles.yml',\n 'include/bigquery/macros/*.sql',\n 'include/bigquery/macros/**/*.sql',\n ]\n },\n install_requires=[\n 'dbt-core=={}'.format(package_version),\n 'protobuf>=3.13.0,<4',\n # These are more tightly pinned, as they have a track record of\n # breaking changes in minor releases.\n 'google-cloud-core>=1.3.0,<1.5',\n 'google-cloud-bigquery>=1.25.0,<2.4',\n 'google-api-core>=1.16.0,<1.24',\n 'googleapis-common-protos>=1.6.0,<1.53',\n 'six>=1.14.0',\n ],\n zip_safe=False,\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n\n 'License :: OSI Approved :: Apache Software License',\n\n 'Operating System :: Microsoft :: Windows',\n 'Operating System :: MacOS :: MacOS X',\n 'Operating System :: POSIX :: Linux',\n\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n ],\n python_requires=\">=3.6.2\",\n)\n", "path": "plugins/bigquery/setup.py"}]} | 1,442 | 295 |
gh_patches_debug_10622 | rasdani/github-patches | git_diff | mdn__kuma-6143 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Require minimum length for "explanation" field in BCD signals
**Summary**
_What should be changed?_
A minimum length of 10 characters should be required for the "explanation" field in BCD signals
**Rationale**
_What problems would this solve?_
Less spam submissions
**Audience**
_Who would use this changed feature?_
BCD maintainers
**Proposal**
_What would users see and do? What would happen as a result?_
Users would be required to enter a meaningful explanation and hopefully refrain from submitting "fehfs", "test", and other garbage.
**Additional context**
_Is there anything else we should know?_
Was discussed in https://github.com/mdn/sprints/issues/2289
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kuma/api/v1/serializers.py`
Content:
```
1 from rest_framework import exceptions
2 from rest_framework import serializers
3
4 from kuma.wiki.models import BCSignal, Document
5
6
7 class BCSignalSerializer(serializers.Serializer):
8 feature = serializers.CharField(max_length=255)
9 browsers = serializers.CharField(max_length=255)
10 slug = serializers.CharField(max_length=255)
11 locale = serializers.CharField(max_length=7)
12 explanation = serializers.CharField(allow_blank=True, max_length=1000)
13 supporting_material = serializers.CharField(
14 allow_blank=True, required=False, max_length=1000
15 )
16
17 def create(self, validated_data):
18 slug = validated_data.pop("slug")
19 locale = validated_data.pop("locale")
20 document = Document.objects.filter(slug=slug, locale=locale).first()
21
22 if document:
23 return BCSignal.objects.create(document=document, **validated_data)
24 raise exceptions.ValidationError("Document not found")
25
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/kuma/api/v1/serializers.py b/kuma/api/v1/serializers.py
--- a/kuma/api/v1/serializers.py
+++ b/kuma/api/v1/serializers.py
@@ -9,7 +9,11 @@
browsers = serializers.CharField(max_length=255)
slug = serializers.CharField(max_length=255)
locale = serializers.CharField(max_length=7)
- explanation = serializers.CharField(allow_blank=True, max_length=1000)
+ explanation = serializers.CharField(
+ # Make sure these match the constants in bcd-signal.jsx
+ max_length=1000,
+ min_length=10,
+ )
supporting_material = serializers.CharField(
allow_blank=True, required=False, max_length=1000
)
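
A quick sanity check of the patched serializer — a minimal sketch assuming a configured Django/DRF environment, with invented payload values — shows DRF's stock `CharField` validation rejecting short spam strings before `create()` ever runs:

```python
# Minimal sketch; assumes Django settings are configured and the kuma
# package is importable. All field values below are invented.
from kuma.api.v1.serializers import BCSignalSerializer

serializer = BCSignalSerializer(data={
    "feature": "css.properties.gap",  # hypothetical feature identifier
    "browsers": "firefox",
    "slug": "Web/CSS/gap",
    "locale": "en-US",
    "explanation": "fehfs",  # 5 characters -> fails the new min_length=10
})
assert not serializer.is_valid()
# DRF's standard error: "Ensure this field has at least 10 characters."
assert "explanation" in serializer.errors
```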
| {"golden_diff": "diff --git a/kuma/api/v1/serializers.py b/kuma/api/v1/serializers.py\n--- a/kuma/api/v1/serializers.py\n+++ b/kuma/api/v1/serializers.py\n@@ -9,7 +9,11 @@\n browsers = serializers.CharField(max_length=255)\n slug = serializers.CharField(max_length=255)\n locale = serializers.CharField(max_length=7)\n- explanation = serializers.CharField(allow_blank=True, max_length=1000)\n+ explanation = serializers.CharField(\n+ # Make sure these match the constants in bcd-signal.jsx\n+ max_length=1000,\n+ min_length=10,\n+ )\n supporting_material = serializers.CharField(\n allow_blank=True, required=False, max_length=1000\n )\n", "issue": "Require minium length for \"explanation\" field in BCD signals\n**Summary**\r\n_What should be changed?_\r\nA minimum length of 10 characters should be required for the \"explanation\" field in BCD signals\r\n\r\n**Rationale**\r\n_What problems would this solve?_\r\nLess spam submissions\r\n\r\n**Audience**\r\n_Who would use this changed feature?_\r\nBCD maintainers\r\n\r\n**Proposal**\r\n_What would users see and do? What would happen as a result?_\r\nUsers would be required to enter a meaningful explanation and hopefully refrain from submitting \"fehfs\", \"test\", and other garbage.\r\n\r\n**Additional context**\r\n_Is there anything else we should know?_\r\nWas discussed in https://github.com/mdn/sprints/issues/2289\n", "before_files": [{"content": "from rest_framework import exceptions\nfrom rest_framework import serializers\n\nfrom kuma.wiki.models import BCSignal, Document\n\n\nclass BCSignalSerializer(serializers.Serializer):\n feature = serializers.CharField(max_length=255)\n browsers = serializers.CharField(max_length=255)\n slug = serializers.CharField(max_length=255)\n locale = serializers.CharField(max_length=7)\n explanation = serializers.CharField(allow_blank=True, max_length=1000)\n supporting_material = serializers.CharField(\n allow_blank=True, required=False, max_length=1000\n )\n\n def create(self, validated_data):\n slug = validated_data.pop(\"slug\")\n locale = validated_data.pop(\"locale\")\n document = Document.objects.filter(slug=slug, locale=locale).first()\n\n if document:\n return BCSignal.objects.create(document=document, **validated_data)\n raise exceptions.ValidationError(\"Document not found\")\n", "path": "kuma/api/v1/serializers.py"}], "after_files": [{"content": "from rest_framework import exceptions\nfrom rest_framework import serializers\n\nfrom kuma.wiki.models import BCSignal, Document\n\n\nclass BCSignalSerializer(serializers.Serializer):\n feature = serializers.CharField(max_length=255)\n browsers = serializers.CharField(max_length=255)\n slug = serializers.CharField(max_length=255)\n locale = serializers.CharField(max_length=7)\n explanation = serializers.CharField(\n # Make sure these match the constants in bcd-signal.jsx\n max_length=1000,\n min_length=10,\n )\n supporting_material = serializers.CharField(\n allow_blank=True, required=False, max_length=1000\n )\n\n def create(self, validated_data):\n slug = validated_data.pop(\"slug\")\n locale = validated_data.pop(\"locale\")\n document = Document.objects.filter(slug=slug, locale=locale).first()\n\n if document:\n return BCSignal.objects.create(document=document, **validated_data)\n raise exceptions.ValidationError(\"Document not found\")\n", "path": "kuma/api/v1/serializers.py"}]} | 658 | 181 |
gh_patches_debug_7894 | rasdani/github-patches | git_diff | vega__altair-390 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Pin vega version in requirements
To make sure things still work when ipyvega is updated (as it already has been)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 LONG_DESCRIPTION = """
2 Altair: A declarative statistical visualization library for Python.
3
4 http://altair-viz.github.io/
5
6 This package provides a Python API for building statistical visualizations
7 in a declarative manner. This API contains no actual visualization rendering
8 code, but instead emits JSON data structures following the `Vega-Lite`_
9 specification. For convenience, Altair can optionally use `ipyvega`_ to
10 seamlessly display client-side renderings in the Jupyter notebook.
11
12 .. image:: https://raw.githubusercontent.com/altair-viz/altair/master/images/cars.png
13
14 Please note that if you wish to use altair in the Jupyter Notebook, the
15 `ipyvega`_ notebook extension must be enabled as follows::
16
17 $ pip install altair
18 $ pip install --upgrade notebook
19 $ jupyter nbextension install --sys-prefix --py vega
20
21 See the `Altair Documentation`_ for tutorials, detailed installation
22 instructions, and examples.
23 See the `Altair Github Repository`_ for issues, bug reports, and contributions.
24
25 .. _Altair Github Repository: http://github.com/altair-viz/altair/
26 .. _Altair Documentation: http://altair-viz.github.io/
27 .. _Vega-Lite: https://github.com/vega/vega-lite
28 .. _ipyvega: https://github.com/vega/ipyvega
29 """
30
31 DESCRIPTION = "Altair: A declarative statistical visualization library for Python."
32 NAME = "altair"
33 PACKAGES = ['altair',
34 'altair.v1',
35 'altair.v1.tests',
36 'altair.v1.schema',
37 'altair.v1.schema._interface',
38 'altair.v1.schema._interface.tests',
39 'altair.v1.examples',
40 'altair.v1.examples.tests',
41 'altair.datasets',
42 'altair.datasets.tests',
43 'altair.expr',
44 'altair.expr.tests',
45 'altair.tests',
46 'altair.utils',
47 'altair.utils.tests',
48 ]
49 PACKAGE_DATA = {'altair': ['notebooks/*.ipynb',
50 'notebooks/*.html',
51 'notebooks/auto_examples/*.ipynb',
52 'v1/schema/*.json',
53 'v1/examples/*.json',
54 'v1/examples/json/*.json',
55 'datasets/*.json',
56 'expr/*.json']}
57 AUTHOR = "Brian E. Granger / Jake VanderPlas"
58 AUTHOR_EMAIL = "[email protected] / [email protected]"
59 URL = 'http://altair-viz.github.io'
60 DOWNLOAD_URL = 'http://github.com/altair-viz/altair/'
61 LICENSE = 'BSD 3-clause'
62 INSTALL_REQUIRES = ['traitlets>=4.3.1','ipython','pandas','vega>=0.4.4']
63
64
65 import io
66 import os
67 import re
68
69 try:
70 from setuptools import setup
71 except ImportError:
72 from distutils.core import setup
73
74
75 def read(path, encoding='utf-8'):
76 path = os.path.join(os.path.dirname(__file__), path)
77 with io.open(path, encoding=encoding) as fp:
78 return fp.read()
79
80
81 def version(path):
82 """Obtain the packge version from a python file e.g. pkg/__init__.py
83
84 See <https://packaging.python.org/en/latest/single_source_version.html>.
85 """
86 version_file = read(path)
87 version_match = re.search(r"""^__version__ = ['"]([^'"]*)['"]""",
88 version_file, re.M)
89 if version_match:
90 return version_match.group(1)
91 raise RuntimeError("Unable to find version string.")
92
93
94 VERSION = version('altair/__init__.py')
95
96
97 setup(name=NAME,
98 version=VERSION,
99 description=DESCRIPTION,
100 long_description=LONG_DESCRIPTION,
101 author=AUTHOR,
102 author_email=AUTHOR_EMAIL,
103 url=URL,
104 download_url=DOWNLOAD_URL,
105 license=LICENSE,
106 packages=PACKAGES,
107 package_data=PACKAGE_DATA,
108 install_requires=INSTALL_REQUIRES,
109 classifiers=[
110 'Development Status :: 4 - Beta',
111 'Environment :: Console',
112 'Intended Audience :: Science/Research',
113 'License :: OSI Approved :: BSD License',
114 'Natural Language :: English',
115 'Programming Language :: Python :: 2.7',
116 'Programming Language :: Python :: 3.4',
117 'Programming Language :: Python :: 3.5'],
118 )
119
```
Path: `altair/__init__.py`
Content:
```
1 __version__ = '1.3.0.dev0'
2
3 from .v1 import *
4
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/altair/__init__.py b/altair/__init__.py
--- a/altair/__init__.py
+++ b/altair/__init__.py
@@ -1,3 +1,3 @@
-__version__ = '1.3.0.dev0'
+__version__ = '1.2.1.dev0'
from .v1 import *
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -59,7 +59,7 @@
URL = 'http://altair-viz.github.io'
DOWNLOAD_URL = 'http://github.com/altair-viz/altair/'
LICENSE = 'BSD 3-clause'
-INSTALL_REQUIRES = ['traitlets>=4.3.1','ipython','pandas','vega>=0.4.4']
+INSTALL_REQUIRES = ['traitlets>=4.3.1','ipython','pandas','vega==0.4.4']
import io
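
The switch from `>=` to `==` makes pip refuse any other ipyvega release at resolution time. A hedged illustration of checking that constraint at runtime — not code from the altair repository, and it assumes `pkg_resources` (setuptools) is available:

```python
# Illustrative runtime guard for the pinned dependency; not part of altair.
import pkg_resources

dist = pkg_resources.get_distribution("vega")  # the ipyvega package
if dist.version != "0.4.4":
    raise RuntimeError(
        "altair 1.2.x expects vega==0.4.4, found vega==%s" % dist.version
    )
print("vega pin satisfied:", dist.version)
```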
| {"golden_diff": "diff --git a/altair/__init__.py b/altair/__init__.py\n--- a/altair/__init__.py\n+++ b/altair/__init__.py\n@@ -1,3 +1,3 @@\n-__version__ = '1.3.0.dev0'\n+__version__ = '1.2.1.dev0'\n \n from .v1 import *\ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -59,7 +59,7 @@\n URL = 'http://altair-viz.github.io'\n DOWNLOAD_URL = 'http://github.com/altair-viz/altair/'\n LICENSE = 'BSD 3-clause'\n-INSTALL_REQUIRES = ['traitlets>=4.3.1','ipython','pandas','vega>=0.4.4']\n+INSTALL_REQUIRES = ['traitlets>=4.3.1','ipython','pandas','vega==0.4.4']\n \n \n import io\n", "issue": "Pin vega version in requirements\nTo make sure things still work when ipyvega is updated (as it already has been)\n", "before_files": [{"content": "LONG_DESCRIPTION = \"\"\"\nAltair: A declarative statistical visualization library for Python.\n\nhttp://altair-viz.github.io/\n\nThis package provides a Python API for building statistical visualizations\nin a declarative manner. This API contains no actual visualization rendering\ncode, but instead emits JSON data structures following the `Vega-Lite`_\nspecification. For convenience, Altair can optionally use `ipyvega`_ to\nseamlessly display client-side renderings in the Jupyter notebook.\n\n.. image:: https://raw.githubusercontent.com/altair-viz/altair/master/images/cars.png\n\nPlease note that if you wish to use altair in the Jupyter Notebook, the\n`ipyvega`_ notebook extension must be enabled as follows::\n\n $ pip install altair\n $ pip install --upgrade notebook\n $ jupyter nbextension install --sys-prefix --py vega\n\nSee the `Altair Documentation`_ for tutorials, detailed installation\ninstructions, and examples.\nSee the `Altair Github Repository`_ for issues, bug reports, and contributions.\n\n.. _Altair Github Repository: http://github.com/altair-viz/altair/\n.. _Altair Documentation: http://altair-viz.github.io/\n.. _Vega-Lite: https://github.com/vega/vega-lite\n.. _ipyvega: https://github.com/vega/ipyvega\n\"\"\"\n\nDESCRIPTION = \"Altair: A declarative statistical visualization library for Python.\"\nNAME = \"altair\"\nPACKAGES = ['altair',\n 'altair.v1',\n 'altair.v1.tests',\n 'altair.v1.schema',\n 'altair.v1.schema._interface',\n 'altair.v1.schema._interface.tests',\n 'altair.v1.examples',\n 'altair.v1.examples.tests',\n 'altair.datasets',\n 'altair.datasets.tests',\n 'altair.expr',\n 'altair.expr.tests',\n 'altair.tests',\n 'altair.utils',\n 'altair.utils.tests',\n ]\nPACKAGE_DATA = {'altair': ['notebooks/*.ipynb',\n 'notebooks/*.html',\n 'notebooks/auto_examples/*.ipynb',\n 'v1/schema/*.json',\n 'v1/examples/*.json',\n 'v1/examples/json/*.json',\n 'datasets/*.json',\n 'expr/*.json']}\nAUTHOR = \"Brian E. Granger / Jake VanderPlas\"\nAUTHOR_EMAIL = \"[email protected] / [email protected]\"\nURL = 'http://altair-viz.github.io'\nDOWNLOAD_URL = 'http://github.com/altair-viz/altair/'\nLICENSE = 'BSD 3-clause'\nINSTALL_REQUIRES = ['traitlets>=4.3.1','ipython','pandas','vega>=0.4.4']\n\n\nimport io\nimport os\nimport re\n\ntry:\n from setuptools import setup\nexcept ImportError:\n from distutils.core import setup\n\n\ndef read(path, encoding='utf-8'):\n path = os.path.join(os.path.dirname(__file__), path)\n with io.open(path, encoding=encoding) as fp:\n return fp.read()\n\n\ndef version(path):\n \"\"\"Obtain the packge version from a python file e.g. 
pkg/__init__.py\n\n See <https://packaging.python.org/en/latest/single_source_version.html>.\n \"\"\"\n version_file = read(path)\n version_match = re.search(r\"\"\"^__version__ = ['\"]([^'\"]*)['\"]\"\"\",\n version_file, re.M)\n if version_match:\n return version_match.group(1)\n raise RuntimeError(\"Unable to find version string.\")\n\n\nVERSION = version('altair/__init__.py')\n\n\nsetup(name=NAME,\n version=VERSION,\n description=DESCRIPTION,\n long_description=LONG_DESCRIPTION,\n author=AUTHOR,\n author_email=AUTHOR_EMAIL,\n url=URL,\n download_url=DOWNLOAD_URL,\n license=LICENSE,\n packages=PACKAGES,\n package_data=PACKAGE_DATA,\n install_requires=INSTALL_REQUIRES,\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Environment :: Console',\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: BSD License',\n 'Natural Language :: English',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5'],\n )\n", "path": "setup.py"}, {"content": "__version__ = '1.3.0.dev0'\n\nfrom .v1 import *\n", "path": "altair/__init__.py"}], "after_files": [{"content": "LONG_DESCRIPTION = \"\"\"\nAltair: A declarative statistical visualization library for Python.\n\nhttp://altair-viz.github.io/\n\nThis package provides a Python API for building statistical visualizations\nin a declarative manner. This API contains no actual visualization rendering\ncode, but instead emits JSON data structures following the `Vega-Lite`_\nspecification. For convenience, Altair can optionally use `ipyvega`_ to\nseamlessly display client-side renderings in the Jupyter notebook.\n\n.. image:: https://raw.githubusercontent.com/altair-viz/altair/master/images/cars.png\n\nPlease note that if you wish to use altair in the Jupyter Notebook, the\n`ipyvega`_ notebook extension must be enabled as follows::\n\n $ pip install altair\n $ pip install --upgrade notebook\n $ jupyter nbextension install --sys-prefix --py vega\n\nSee the `Altair Documentation`_ for tutorials, detailed installation\ninstructions, and examples.\nSee the `Altair Github Repository`_ for issues, bug reports, and contributions.\n\n.. _Altair Github Repository: http://github.com/altair-viz/altair/\n.. _Altair Documentation: http://altair-viz.github.io/\n.. _Vega-Lite: https://github.com/vega/vega-lite\n.. _ipyvega: https://github.com/vega/ipyvega\n\"\"\"\n\nDESCRIPTION = \"Altair: A declarative statistical visualization library for Python.\"\nNAME = \"altair\"\nPACKAGES = ['altair',\n 'altair.v1',\n 'altair.v1.tests',\n 'altair.v1.schema',\n 'altair.v1.schema._interface',\n 'altair.v1.schema._interface.tests',\n 'altair.v1.examples',\n 'altair.v1.examples.tests',\n 'altair.datasets',\n 'altair.datasets.tests',\n 'altair.expr',\n 'altair.expr.tests',\n 'altair.tests',\n 'altair.utils',\n 'altair.utils.tests',\n ]\nPACKAGE_DATA = {'altair': ['notebooks/*.ipynb',\n 'notebooks/*.html',\n 'notebooks/auto_examples/*.ipynb',\n 'v1/schema/*.json',\n 'v1/examples/*.json',\n 'v1/examples/json/*.json',\n 'datasets/*.json',\n 'expr/*.json']}\nAUTHOR = \"Brian E. 
Granger / Jake VanderPlas\"\nAUTHOR_EMAIL = \"[email protected] / [email protected]\"\nURL = 'http://altair-viz.github.io'\nDOWNLOAD_URL = 'http://github.com/altair-viz/altair/'\nLICENSE = 'BSD 3-clause'\nINSTALL_REQUIRES = ['traitlets>=4.3.1','ipython','pandas','vega==0.4.4']\n\n\nimport io\nimport os\nimport re\n\ntry:\n from setuptools import setup\nexcept ImportError:\n from distutils.core import setup\n\n\ndef read(path, encoding='utf-8'):\n path = os.path.join(os.path.dirname(__file__), path)\n with io.open(path, encoding=encoding) as fp:\n return fp.read()\n\n\ndef version(path):\n \"\"\"Obtain the packge version from a python file e.g. pkg/__init__.py\n\n See <https://packaging.python.org/en/latest/single_source_version.html>.\n \"\"\"\n version_file = read(path)\n version_match = re.search(r\"\"\"^__version__ = ['\"]([^'\"]*)['\"]\"\"\",\n version_file, re.M)\n if version_match:\n return version_match.group(1)\n raise RuntimeError(\"Unable to find version string.\")\n\n\nVERSION = version('altair/__init__.py')\n\n\nsetup(name=NAME,\n version=VERSION,\n description=DESCRIPTION,\n long_description=LONG_DESCRIPTION,\n author=AUTHOR,\n author_email=AUTHOR_EMAIL,\n url=URL,\n download_url=DOWNLOAD_URL,\n license=LICENSE,\n packages=PACKAGES,\n package_data=PACKAGE_DATA,\n install_requires=INSTALL_REQUIRES,\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Environment :: Console',\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: BSD License',\n 'Natural Language :: English',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5'],\n )\n", "path": "setup.py"}, {"content": "__version__ = '1.2.1.dev0'\n\nfrom .v1 import *\n", "path": "altair/__init__.py"}]} | 1,544 | 226 |
gh_patches_debug_26558 | rasdani/github-patches | git_diff | jupyterhub__jupyterhub-4522 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Fix JUPYTERHUB_SINGLEUSER_APP after Notebook 7 release
### Bug description
With `notebook 6.5.4` it was possible to specify JUPYTERHUB_SINGLEUSER_APP='notebook' to run `Jupyter Notebook` instead of `JupyterLab`.
#### Expected behaviour
Jupyter Notebook is run in singleuser
#### Actual behaviour
`jupyterhub-singleuser` fails
### How to reproduce
Working image: `jupyter/base-notebook:notebook-6.5.4`
Failing image: `jupyter/base-notebook:notebook-7.0.0`
1. Run image: `docker run -it --rm jupyter/base-notebook:notebook-7.0.0 bash`
2. Run: `JUPYTERHUB_SINGLEUSER_APP='notebook' JUPYTERHUB_SERVICE_URL="127.0.0.1" jupyterhub-singleuser`
JupyterHub is not running inside the image, but I don't think that's the problem.
Output with Jupyter Notebook 7:
```
Traceback (most recent call last):
File "/opt/conda/bin/jupyterhub-singleuser", line 6, in <module>
from jupyterhub.singleuser import main
File "/opt/conda/lib/python3.11/site-packages/jupyterhub/singleuser/__init__.py", line 67, in <module>
from .app import SingleUserNotebookApp, main
File "/opt/conda/lib/python3.11/site-packages/jupyterhub/singleuser/app.py", line 31, in <module>
App = import_item(JUPYTERHUB_SINGLEUSER_APP)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/traitlets/utils/importstring.py", line 30, in import_item
module = __import__(package, fromlist=[obj])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ModuleNotFoundError: No module named 'notebook.notebookapp'
```
Output with Jupyter Notebook 6:
```
[I 2023-07-25 20:59:48.574 SingleUserNotebookApp mixins:547] Starting jupyterhub single-user server version 4.0.1
[I 2023-07-25 20:59:48.574 SingleUserNotebookApp mixins:561] Extending notebook.notebookapp.NotebookApp from notebook 6.5.4
[W 2023-07-25 20:59:48.578 SingleUserNotebookApp configurable:200] Config option `open_browser` not recognized by `SingleUserNotebookApp`. Did you mean `browser`?
JUPYTERHUB_API_TOKEN env is required to run jupyterhub-singleuser. Did you launch it manually?
```
### Your personal set up
- OS:
<!-- [e.g. ubuntu 20.04, macOS 11.0] -->
- Version(s):
<!-- e.g. jupyterhub --version, python --version --->
<details><summary>Full environment</summary>
<!-- For reproduction, it's useful to have the full environment. For example, the output of `pip freeze` or `conda list` --->
```
# paste output of `pip freeze` or `conda list` here
```
</details>
<details><summary>Configuration</summary>
<!--
For JupyterHub, especially include information such as what Spawner and Authenticator are being used.
Be careful not to share any sensitive information.
You can paste jupyterhub_config.py below.
To exclude lots of comments and empty lines from auto-generated jupyterhub_config.py, you can do:
grep -v '\(^#\|^[[:space:]]*$\)' jupyterhub_config.py
-->
```python
# jupyterhub_config.py
```
</details>
<details><summary>Logs</summary>
<!--
Errors are often logged by jupyterhub. How you get logs depends on your deployment.
With kubernetes it might be:
kubectl get pod # hub pod name starts with hub...
kubectl logs hub-...
# or for a single-user server
kubectl logs jupyter-username
Or the-littlest-jupyterhub:
journalctl -u jupyterhub
# or for a single-user server
journalctl -u jupyter-username
-->
```
# paste relevant logs here, if any
```
</details>
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `jupyterhub/singleuser/app.py`
Content:
```
1 """Make a single-user app based on the environment:
2
3 - $JUPYTERHUB_SINGLEUSER_APP, the base Application class, to be wrapped in JupyterHub authentication.
4 default: jupyter_server.serverapp.ServerApp
5
6 .. versionchanged:: 2.0
7
8 Default app changed to launch `jupyter labhub`.
9 Use JUPYTERHUB_SINGLEUSER_APP=notebook.notebookapp.NotebookApp for the legacy 'classic' notebook server.
10 """
11 import os
12
13 from traitlets import import_item
14
15 from .mixins import make_singleuser_app
16
17 JUPYTERHUB_SINGLEUSER_APP = os.environ.get("JUPYTERHUB_SINGLEUSER_APP", "")
18
19 # allow shortcut references
20 _app_shortcuts = {
21 "notebook": "notebook.notebookapp.NotebookApp",
22 "jupyter-server": "jupyter_server.serverapp.ServerApp",
23 "extension": "jupyter_server.serverapp.ServerApp",
24 }
25
26 JUPYTERHUB_SINGLEUSER_APP = _app_shortcuts.get(
27 JUPYTERHUB_SINGLEUSER_APP.replace("_", "-"), JUPYTERHUB_SINGLEUSER_APP
28 )
29
30 if JUPYTERHUB_SINGLEUSER_APP:
31 App = import_item(JUPYTERHUB_SINGLEUSER_APP)
32 else:
33 App = None
34 _import_error = None
35 for JUPYTERHUB_SINGLEUSER_APP in (
36 "jupyter_server.serverapp.ServerApp",
37 "notebook.notebookapp.NotebookApp",
38 ):
39 try:
40 App = import_item(JUPYTERHUB_SINGLEUSER_APP)
41 except ImportError as e:
42 if _import_error is None:
43 _import_error = e
44 continue
45 else:
46 break
47 if App is None:
48 raise _import_error
49
50
51 SingleUserNotebookApp = make_singleuser_app(App)
52
53
54 def main():
55 """Launch a jupyterhub single-user server"""
56 if not os.environ.get("JUPYTERHUB_SINGLEUSER_APP"):
57 # app not specified, launch jupyter-labhub by default,
58 # if jupyterlab is recent enough (3.1).
59 # This is a minimally extended ServerApp that does:
60 # 1. ensure lab extension is enabled, and
61 # 2. set default URL to `/lab`
62 import re
63
64 _version_pat = re.compile(r"(\d+)\.(\d+)")
65 try:
66 import jupyterlab
67 from jupyterlab.labhubapp import SingleUserLabApp
68
69 m = _version_pat.match(jupyterlab.__version__)
70 except Exception:
71 m = None
72
73 if m is not None:
74 version_tuple = tuple(int(v) for v in m.groups())
75 if version_tuple >= (3, 1):
76 return SingleUserLabApp.launch_instance()
77
78 return SingleUserNotebookApp.launch_instance()
79
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/jupyterhub/singleuser/app.py b/jupyterhub/singleuser/app.py
--- a/jupyterhub/singleuser/app.py
+++ b/jupyterhub/singleuser/app.py
@@ -6,7 +6,7 @@
.. versionchanged:: 2.0
Default app changed to launch `jupyter labhub`.
- Use JUPYTERHUB_SINGLEUSER_APP=notebook.notebookapp.NotebookApp for the legacy 'classic' notebook server.
+ Use JUPYTERHUB_SINGLEUSER_APP='notebook' for the legacy 'classic' notebook server (requires notebook<7).
"""
import os
@@ -27,7 +27,25 @@
JUPYTERHUB_SINGLEUSER_APP.replace("_", "-"), JUPYTERHUB_SINGLEUSER_APP
)
+
if JUPYTERHUB_SINGLEUSER_APP:
+ if JUPYTERHUB_SINGLEUSER_APP in {"notebook", _app_shortcuts["notebook"]}:
+ # better error for notebook v7, which uses jupyter-server
+ # when the legacy notebook server is requested
+ try:
+ from notebook import __version__
+ except ImportError:
+ # will raise later
+ pass
+ else:
+ # check if this failed because of notebook v7
+ _notebook_major_version = int(__version__.split(".", 1)[0])
+ if _notebook_major_version >= 7:
+ raise ImportError(
+ f"JUPYTERHUB_SINGLEUSER_APP={JUPYTERHUB_SINGLEUSER_APP} is not valid with notebook>=7 (have notebook=={__version__}).\n"
+ f"Leave $JUPYTERHUB_SINGLEUSER_APP unspecified (or use the default JUPYTERHUB_SINGLEUSER_APP=jupyter-server), "
+ 'and set `c.Spawner.default_url = "/tree"` to make notebook v7 the default UI.'
+ )
App = import_item(JUPYTERHUB_SINGLEUSER_APP)
else:
App = None
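
For readers who hit the new `ImportError`, the configuration it points to can be sketched as follows — assuming a standard `jupyterhub_config.py`; the setting is the one named in the patched error message:

```python
# jupyterhub_config.py — sketch of the notebook v7 setup the error recommends.
# Leave $JUPYTERHUB_SINGLEUSER_APP unset so the default jupyter-server app
# is used, and make the notebook v7 tree view the landing page:
c.Spawner.default_url = "/tree"
```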
| {"golden_diff": "diff --git a/jupyterhub/singleuser/app.py b/jupyterhub/singleuser/app.py\n--- a/jupyterhub/singleuser/app.py\n+++ b/jupyterhub/singleuser/app.py\n@@ -6,7 +6,7 @@\n .. versionchanged:: 2.0\n \n Default app changed to launch `jupyter labhub`.\n- Use JUPYTERHUB_SINGLEUSER_APP=notebook.notebookapp.NotebookApp for the legacy 'classic' notebook server.\n+ Use JUPYTERHUB_SINGLEUSER_APP='notebook' for the legacy 'classic' notebook server (requires notebook<7).\n \"\"\"\n import os\n \n@@ -27,7 +27,25 @@\n JUPYTERHUB_SINGLEUSER_APP.replace(\"_\", \"-\"), JUPYTERHUB_SINGLEUSER_APP\n )\n \n+\n if JUPYTERHUB_SINGLEUSER_APP:\n+ if JUPYTERHUB_SINGLEUSER_APP in {\"notebook\", _app_shortcuts[\"notebook\"]}:\n+ # better error for notebook v7, which uses jupyter-server\n+ # when the legacy notebook server is requested\n+ try:\n+ from notebook import __version__\n+ except ImportError:\n+ # will raise later\n+ pass\n+ else:\n+ # check if this failed because of notebook v7\n+ _notebook_major_version = int(__version__.split(\".\", 1)[0])\n+ if _notebook_major_version >= 7:\n+ raise ImportError(\n+ f\"JUPYTERHUB_SINGLEUSER_APP={JUPYTERHUB_SINGLEUSER_APP} is not valid with notebook>=7 (have notebook=={__version__}).\\n\"\n+ f\"Leave $JUPYTERHUB_SINGLEUSER_APP unspecified (or use the default JUPYTERHUB_SINGLEUSER_APP=jupyter-server), \"\n+ 'and set `c.Spawner.default_url = \"/tree\"` to make notebook v7 the default UI.'\n+ )\n App = import_item(JUPYTERHUB_SINGLEUSER_APP)\n else:\n App = None\n", "issue": "Fix JUPYTERHUB_SINGLEUSER_APP after Notebook 7 release\n### Bug description\r\n\r\nWith `notebook 6.5.4` it was possible to specify JUPYTERHUB_SINGLEUSER_APP='notebook' to run `Jupyter Notebook` instead of `JupyterLab`.\r\n\r\n#### Expected behaviour\r\n\r\nJupyter Notebook is run in singleuser\r\n\r\n#### Actual behaviour\r\n\r\n`jupyterhub-singleuser` fails\r\n\r\n### How to reproduce\r\n\r\nWorking image: `jupyter/base-notebook:notebook-6.5.4`\r\nFailing image: `jupyter/base-notebook:notebook-7.0.0`\r\n\r\n1. Run image: `docker run -it --rm jupyter/base-notebook:notebook-7.0.0 bash`\r\n2. 
Run: `JUPYTERHUB_SINGLEUSER_APP='notebook' JUPYTERHUB_SERVICE_URL=\"127.0.0.1\" jupyterhub-singleuser`\r\n\r\nJupyterHub is not running inside the image, but I don't think that's the problem.\r\n\r\nOutput with Jupyter Notebook 7:\r\n```\r\nTraceback (most recent call last):\r\n File \"/opt/conda/bin/jupyterhub-singleuser\", line 6, in <module>\r\n from jupyterhub.singleuser import main\r\n File \"/opt/conda/lib/python3.11/site-packages/jupyterhub/singleuser/__init__.py\", line 67, in <module>\r\n from .app import SingleUserNotebookApp, main\r\n File \"/opt/conda/lib/python3.11/site-packages/jupyterhub/singleuser/app.py\", line 31, in <module>\r\n App = import_item(JUPYTERHUB_SINGLEUSER_APP)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/opt/conda/lib/python3.11/site-packages/traitlets/utils/importstring.py\", line 30, in import_item\r\n module = __import__(package, fromlist=[obj])\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nModuleNotFoundError: No module named 'notebook.notebookapp'\r\n```\r\n\r\nOutput with Jupyter Notebook 6:\r\n```\r\n[I 2023-07-25 20:59:48.574 SingleUserNotebookApp mixins:547] Starting jupyterhub single-user server version 4.0.1\r\n[I 2023-07-25 20:59:48.574 SingleUserNotebookApp mixins:561] Extending notebook.notebookapp.NotebookApp from notebook 6.5.4\r\n[W 2023-07-25 20:59:48.578 SingleUserNotebookApp configurable:200] Config option `open_browser` not recognized by `SingleUserNotebookApp`. Did you mean `browser`?\r\nJUPYTERHUB_API_TOKEN env is required to run jupyterhub-singleuser. Did you launch it manually?\r\n```\r\n\r\n### Your personal set up\r\n\r\n\r\n\r\n - OS:\r\n <!-- [e.g. ubuntu 20.04, macOS 11.0] -->\r\n - Version(s):\r\n <!-- e.g. jupyterhub --version, python --version --->\r\n\r\n<details><summary>Full environment</summary>\r\n<!-- For reproduction, it's useful to have the full environment. For example, the output of `pip freeze` or `conda list` --->\r\n\r\n```\r\n# paste output of `pip freeze` or `conda list` here\r\n```\r\n</details>\r\n\r\n<details><summary>Configuration</summary>\r\n<!--\r\nFor JupyterHub, especially include information such as what Spawner and Authenticator are being used.\r\nBe careful not to share any sensitive information.\r\nYou can paste jupyterhub_config.py below.\r\nTo exclude lots of comments and empty lines from auto-generated jupyterhub_config.py, you can do:\r\n grep -v '\\(^#\\|^[[:space:]]*$\\)' jupyterhub_config.py\r\n-->\r\n\r\n```python\r\n# jupyterhub_config.py\r\n```\r\n</details>\r\n\r\n<details><summary>Logs</summary>\r\n<!--\r\nErrors are often logged by jupytehub. How you get logs depends on your deployment.\r\nWith kubernetes it might be:\r\n\r\n kubectl get pod # hub pod name starts with hub...\r\n kubectl logs hub-...\r\n # or for a single-user server\r\n kubectl logs jupyter-username\r\n\r\nOr the-littlest-jupyterhub:\r\n\r\n journalctl -u jupyterhub\r\n # or for a single-user server\r\n journalctl -u jupyter-username\r\n-->\r\n\r\n```\r\n# paste relevant logs here, if any\r\n```\r\n</details>\r\n\n", "before_files": [{"content": "\"\"\"Make a single-user app based on the environment:\n\n- $JUPYTERHUB_SINGLEUSER_APP, the base Application class, to be wrapped in JupyterHub authentication.\n default: jupyter_server.serverapp.ServerApp\n\n.. 
versionchanged:: 2.0\n\n Default app changed to launch `jupyter labhub`.\n Use JUPYTERHUB_SINGLEUSER_APP=notebook.notebookapp.NotebookApp for the legacy 'classic' notebook server.\n\"\"\"\nimport os\n\nfrom traitlets import import_item\n\nfrom .mixins import make_singleuser_app\n\nJUPYTERHUB_SINGLEUSER_APP = os.environ.get(\"JUPYTERHUB_SINGLEUSER_APP\", \"\")\n\n# allow shortcut references\n_app_shortcuts = {\n \"notebook\": \"notebook.notebookapp.NotebookApp\",\n \"jupyter-server\": \"jupyter_server.serverapp.ServerApp\",\n \"extension\": \"jupyter_server.serverapp.ServerApp\",\n}\n\nJUPYTERHUB_SINGLEUSER_APP = _app_shortcuts.get(\n JUPYTERHUB_SINGLEUSER_APP.replace(\"_\", \"-\"), JUPYTERHUB_SINGLEUSER_APP\n)\n\nif JUPYTERHUB_SINGLEUSER_APP:\n App = import_item(JUPYTERHUB_SINGLEUSER_APP)\nelse:\n App = None\n _import_error = None\n for JUPYTERHUB_SINGLEUSER_APP in (\n \"jupyter_server.serverapp.ServerApp\",\n \"notebook.notebookapp.NotebookApp\",\n ):\n try:\n App = import_item(JUPYTERHUB_SINGLEUSER_APP)\n except ImportError as e:\n if _import_error is None:\n _import_error = e\n continue\n else:\n break\n if App is None:\n raise _import_error\n\n\nSingleUserNotebookApp = make_singleuser_app(App)\n\n\ndef main():\n \"\"\"Launch a jupyterhub single-user server\"\"\"\n if not os.environ.get(\"JUPYTERHUB_SINGLEUSER_APP\"):\n # app not specified, launch jupyter-labhub by default,\n # if jupyterlab is recent enough (3.1).\n # This is a minimally extended ServerApp that does:\n # 1. ensure lab extension is enabled, and\n # 2. set default URL to `/lab`\n import re\n\n _version_pat = re.compile(r\"(\\d+)\\.(\\d+)\")\n try:\n import jupyterlab\n from jupyterlab.labhubapp import SingleUserLabApp\n\n m = _version_pat.match(jupyterlab.__version__)\n except Exception:\n m = None\n\n if m is not None:\n version_tuple = tuple(int(v) for v in m.groups())\n if version_tuple >= (3, 1):\n return SingleUserLabApp.launch_instance()\n\n return SingleUserNotebookApp.launch_instance()\n", "path": "jupyterhub/singleuser/app.py"}], "after_files": [{"content": "\"\"\"Make a single-user app based on the environment:\n\n- $JUPYTERHUB_SINGLEUSER_APP, the base Application class, to be wrapped in JupyterHub authentication.\n default: jupyter_server.serverapp.ServerApp\n\n.. 
versionchanged:: 2.0\n\n Default app changed to launch `jupyter labhub`.\n Use JUPYTERHUB_SINGLEUSER_APP='notebook' for the legacy 'classic' notebook server (requires notebook<7).\n\"\"\"\nimport os\n\nfrom traitlets import import_item\n\nfrom .mixins import make_singleuser_app\n\nJUPYTERHUB_SINGLEUSER_APP = os.environ.get(\"JUPYTERHUB_SINGLEUSER_APP\", \"\")\n\n# allow shortcut references\n_app_shortcuts = {\n \"notebook\": \"notebook.notebookapp.NotebookApp\",\n \"jupyter-server\": \"jupyter_server.serverapp.ServerApp\",\n \"extension\": \"jupyter_server.serverapp.ServerApp\",\n}\n\nJUPYTERHUB_SINGLEUSER_APP = _app_shortcuts.get(\n JUPYTERHUB_SINGLEUSER_APP.replace(\"_\", \"-\"), JUPYTERHUB_SINGLEUSER_APP\n)\n\n\nif JUPYTERHUB_SINGLEUSER_APP:\n if JUPYTERHUB_SINGLEUSER_APP in {\"notebook\", _app_shortcuts[\"notebook\"]}:\n # better error for notebook v7, which uses jupyter-server\n # when the legacy notebook server is requested\n try:\n from notebook import __version__\n except ImportError:\n # will raise later\n pass\n else:\n # check if this failed because of notebook v7\n _notebook_major_version = int(__version__.split(\".\", 1)[0])\n if _notebook_major_version >= 7:\n raise ImportError(\n f\"JUPYTERHUB_SINGLEUSER_APP={JUPYTERHUB_SINGLEUSER_APP} is not valid with notebook>=7 (have notebook=={__version__}).\\n\"\n f\"Leave $JUPYTERHUB_SINGLEUSER_APP unspecified (or use the default JUPYTERHUB_SINGLEUSER_APP=jupyter-server), \"\n 'and set `c.Spawner.default_url = \"/tree\"` to make notebook v7 the default UI.'\n )\n App = import_item(JUPYTERHUB_SINGLEUSER_APP)\nelse:\n App = None\n _import_error = None\n for JUPYTERHUB_SINGLEUSER_APP in (\n \"jupyter_server.serverapp.ServerApp\",\n \"notebook.notebookapp.NotebookApp\",\n ):\n try:\n App = import_item(JUPYTERHUB_SINGLEUSER_APP)\n except ImportError as e:\n if _import_error is None:\n _import_error = e\n continue\n else:\n break\n if App is None:\n raise _import_error\n\n\nSingleUserNotebookApp = make_singleuser_app(App)\n\n\ndef main():\n \"\"\"Launch a jupyterhub single-user server\"\"\"\n if not os.environ.get(\"JUPYTERHUB_SINGLEUSER_APP\"):\n # app not specified, launch jupyter-labhub by default,\n # if jupyterlab is recent enough (3.1).\n # This is a minimally extended ServerApp that does:\n # 1. ensure lab extension is enabled, and\n # 2. set default URL to `/lab`\n import re\n\n _version_pat = re.compile(r\"(\\d+)\\.(\\d+)\")\n try:\n import jupyterlab\n from jupyterlab.labhubapp import SingleUserLabApp\n\n m = _version_pat.match(jupyterlab.__version__)\n except Exception:\n m = None\n\n if m is not None:\n version_tuple = tuple(int(v) for v in m.groups())\n if version_tuple >= (3, 1):\n return SingleUserLabApp.launch_instance()\n\n return SingleUserNotebookApp.launch_instance()\n", "path": "jupyterhub/singleuser/app.py"}]} | 2,040 | 448 |
gh_patches_debug_14864 | rasdani/github-patches | git_diff | benoitc__gunicorn-1931 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Must explicitly define `setuptools` as a dependency
When running gunicorn in a hardened Python docker image (with most of the dependencies removed) `setuptools` might be missing.
For instance:
```
Traceback (most recent call last):
File "/app/manage-docker.binary.runfiles/__main__/server.py", line 1, in <module>
from gunicorn.app.base import BaseApplication
File "/app/manage-docker.binary.runfiles/pypi__gunicorn_19_7_1/gunicorn/app/base.py", line 12, in <module>
from gunicorn import util
File "/app/manage-docker.binary.runfiles/pypi__gunicorn_19_7_1/gunicorn/util.py", line 12, in <module>
import pkg_resources
ImportError: No module named pkg_resources
```
Can be fixed by defining `setuptools` as a direct dependency within the project's `requirements.txt` file; however, it could be fixed at the gunicorn codebase level by using `install_requires = ['setuptools']` in setup.py.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 # -*- coding: utf-8 -
2 #
3 # This file is part of gunicorn released under the MIT license.
4 # See the NOTICE for more information.
5
6 import os
7 import sys
8
9 from setuptools import setup, find_packages
10 from setuptools.command.test import test as TestCommand
11
12 from gunicorn import __version__
13
14
15 CLASSIFIERS = [
16 'Development Status :: 4 - Beta',
17 'Environment :: Other Environment',
18 'Intended Audience :: Developers',
19 'License :: OSI Approved :: MIT License',
20 'Operating System :: MacOS :: MacOS X',
21 'Operating System :: POSIX',
22 'Programming Language :: Python',
23 'Programming Language :: Python :: 3',
24 'Programming Language :: Python :: 3.4',
25 'Programming Language :: Python :: 3.5',
26 'Programming Language :: Python :: 3.6',
27 'Programming Language :: Python :: 3.7',
28 'Programming Language :: Python :: 3 :: Only',
29 'Topic :: Internet',
30 'Topic :: Utilities',
31 'Topic :: Software Development :: Libraries :: Python Modules',
32 'Topic :: Internet :: WWW/HTTP',
33 'Topic :: Internet :: WWW/HTTP :: WSGI',
34 'Topic :: Internet :: WWW/HTTP :: WSGI :: Server',
35 'Topic :: Internet :: WWW/HTTP :: Dynamic Content']
36
37 # read long description
38 with open(os.path.join(os.path.dirname(__file__), 'README.rst')) as f:
39 long_description = f.read()
40
41 # read dev requirements
42 fname = os.path.join(os.path.dirname(__file__), 'requirements_test.txt')
43 with open(fname) as f:
44 tests_require = [l.strip() for l in f.readlines()]
45
46 class PyTestCommand(TestCommand):
47 user_options = [
48 ("cov", None, "measure coverage")
49 ]
50
51 def initialize_options(self):
52 TestCommand.initialize_options(self)
53 self.cov = None
54
55 def finalize_options(self):
56 TestCommand.finalize_options(self)
57 self.test_args = ['tests']
58 if self.cov:
59 self.test_args += ['--cov', 'gunicorn']
60 self.test_suite = True
61
62 def run_tests(self):
63 import pytest
64 errno = pytest.main(self.test_args)
65 sys.exit(errno)
66
67
68 extra_require = {
69 'gevent': ['gevent>=0.13'],
70 'eventlet': ['eventlet>=0.9.7'],
71 'tornado': ['tornado>=0.2'],
72 'gthread': [],
73 }
74
75 setup(
76 name='gunicorn',
77 version=__version__,
78
79 description='WSGI HTTP Server for UNIX',
80 long_description=long_description,
81 author='Benoit Chesneau',
82 author_email='[email protected]',
83 license='MIT',
84 url='http://gunicorn.org',
85
86 python_requires='>=3.4',
87 classifiers=CLASSIFIERS,
88 zip_safe=False,
89 packages=find_packages(exclude=['examples', 'tests']),
90 include_package_data=True,
91
92 tests_require=tests_require,
93 cmdclass={'test': PyTestCommand},
94
95 entry_points="""
96 [console_scripts]
97 gunicorn=gunicorn.app.wsgiapp:run
98 gunicorn_paster=gunicorn.app.pasterapp:run
99
100 [paste.server_runner]
101 main=gunicorn.app.pasterapp:paste_server
102 """,
103 extras_require=extra_require,
104 )
105
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -65,6 +65,14 @@
sys.exit(errno)
+install_requires = [
+ # We depend on functioning pkg_resources.working_set.add_entry() and
+ # pkg_resources.load_entry_point(). These both work as of 3.0 which
+ # is the first version to support Python 3.4 which we require as a
+ # floor.
+ 'setuptools>=3.0',
+]
+
extra_require = {
'gevent': ['gevent>=0.13'],
'eventlet': ['eventlet>=0.9.7'],
@@ -84,6 +92,7 @@
url='http://gunicorn.org',
python_requires='>=3.4',
+ install_requires=install_requires,
classifiers=CLASSIFIERS,
zip_safe=False,
packages=find_packages(exclude=['examples', 'tests']),
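
The failure mode is easy to reproduce in isolation — a standalone sketch, assuming only a CPython interpreter from which setuptools has been removed:

```python
# Standalone repro of the issue: pkg_resources ships with setuptools,
# not with CPython itself, so a stripped-down image loses it.
try:
    import pkg_resources  # what gunicorn's util module imports at load time
except ImportError:
    raise SystemExit(
        "setuptools is missing; install it with: pip install 'setuptools>=3.0'"
    )
print("pkg_resources available via", pkg_resources.__file__)
```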
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -65,6 +65,14 @@\n sys.exit(errno)\n \n \n+install_requires = [\n+ # We depend on functioning pkg_resources.working_set.add_entry() and\n+ # pkg_resources.load_entry_point(). These both work as of 3.0 which\n+ # is the first version to support Python 3.4 which we require as a\n+ # floor.\n+ 'setuptools>=3.0',\n+]\n+\n extra_require = {\n 'gevent': ['gevent>=0.13'],\n 'eventlet': ['eventlet>=0.9.7'],\n@@ -84,6 +92,7 @@\n url='http://gunicorn.org',\n \n python_requires='>=3.4',\n+ install_requires=install_requires,\n classifiers=CLASSIFIERS,\n zip_safe=False,\n packages=find_packages(exclude=['examples', 'tests']),\n", "issue": "Must explicitly define `setuptools` as a dependency\nWhen running gunicorn in a hardened Python docker image (with most of the dependencies removed) `setuptools` might be missing.\r\n\r\nFor instance:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/app/manage-docker.binary.runfiles/__main__/server.py\", line 1, in <module>\r\n from gunicorn.app.base import BaseApplication\r\n File \"/app/manage-docker.binary.runfiles/pypi__gunicorn_19_7_1/gunicorn/app/base.py\", line 12, in <module>\r\n from gunicorn import util\r\n File \"/app/manage-docker.binary.runfiles/pypi__gunicorn_19_7_1/gunicorn/util.py\", line 12, in <module>\r\n import pkg_resources\r\nImportError: No module named pkg_resources\r\n```\r\n\r\nCan be fixed by defining `setuptools` as a direct dependency within the project' `requirements.txt` file, however, it could be fix at the gunicorn codebase level by using `install_requires = ['setuptools']` in setup.py. \n", "before_files": [{"content": "# -*- coding: utf-8 -\n#\n# This file is part of gunicorn released under the MIT license.\n# See the NOTICE for more information.\n\nimport os\nimport sys\n\nfrom setuptools import setup, find_packages\nfrom setuptools.command.test import test as TestCommand\n\nfrom gunicorn import __version__\n\n\nCLASSIFIERS = [\n 'Development Status :: 4 - Beta',\n 'Environment :: Other Environment',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: MIT License',\n 'Operating System :: MacOS :: MacOS X',\n 'Operating System :: POSIX',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3 :: Only',\n 'Topic :: Internet',\n 'Topic :: Utilities',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n 'Topic :: Internet :: WWW/HTTP',\n 'Topic :: Internet :: WWW/HTTP :: WSGI',\n 'Topic :: Internet :: WWW/HTTP :: WSGI :: Server',\n 'Topic :: Internet :: WWW/HTTP :: Dynamic Content']\n\n# read long description\nwith open(os.path.join(os.path.dirname(__file__), 'README.rst')) as f:\n long_description = f.read()\n\n# read dev requirements\nfname = os.path.join(os.path.dirname(__file__), 'requirements_test.txt')\nwith open(fname) as f:\n tests_require = [l.strip() for l in f.readlines()]\n\nclass PyTestCommand(TestCommand):\n user_options = [\n (\"cov\", None, \"measure coverage\")\n ]\n\n def initialize_options(self):\n TestCommand.initialize_options(self)\n self.cov = None\n\n def finalize_options(self):\n TestCommand.finalize_options(self)\n self.test_args = ['tests']\n if self.cov:\n self.test_args += ['--cov', 'gunicorn']\n self.test_suite = True\n\n def run_tests(self):\n import 
pytest\n errno = pytest.main(self.test_args)\n sys.exit(errno)\n\n\nextra_require = {\n 'gevent': ['gevent>=0.13'],\n 'eventlet': ['eventlet>=0.9.7'],\n 'tornado': ['tornado>=0.2'],\n 'gthread': [],\n}\n\nsetup(\n name='gunicorn',\n version=__version__,\n\n description='WSGI HTTP Server for UNIX',\n long_description=long_description,\n author='Benoit Chesneau',\n author_email='[email protected]',\n license='MIT',\n url='http://gunicorn.org',\n\n python_requires='>=3.4',\n classifiers=CLASSIFIERS,\n zip_safe=False,\n packages=find_packages(exclude=['examples', 'tests']),\n include_package_data=True,\n\n tests_require=tests_require,\n cmdclass={'test': PyTestCommand},\n\n entry_points=\"\"\"\n [console_scripts]\n gunicorn=gunicorn.app.wsgiapp:run\n gunicorn_paster=gunicorn.app.pasterapp:run\n\n [paste.server_runner]\n main=gunicorn.app.pasterapp:paste_server\n \"\"\",\n extras_require=extra_require,\n)\n", "path": "setup.py"}], "after_files": [{"content": "# -*- coding: utf-8 -\n#\n# This file is part of gunicorn released under the MIT license.\n# See the NOTICE for more information.\n\nimport os\nimport sys\n\nfrom setuptools import setup, find_packages\nfrom setuptools.command.test import test as TestCommand\n\nfrom gunicorn import __version__\n\n\nCLASSIFIERS = [\n 'Development Status :: 4 - Beta',\n 'Environment :: Other Environment',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: MIT License',\n 'Operating System :: MacOS :: MacOS X',\n 'Operating System :: POSIX',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3 :: Only',\n 'Topic :: Internet',\n 'Topic :: Utilities',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n 'Topic :: Internet :: WWW/HTTP',\n 'Topic :: Internet :: WWW/HTTP :: WSGI',\n 'Topic :: Internet :: WWW/HTTP :: WSGI :: Server',\n 'Topic :: Internet :: WWW/HTTP :: Dynamic Content']\n\n# read long description\nwith open(os.path.join(os.path.dirname(__file__), 'README.rst')) as f:\n long_description = f.read()\n\n# read dev requirements\nfname = os.path.join(os.path.dirname(__file__), 'requirements_test.txt')\nwith open(fname) as f:\n tests_require = [l.strip() for l in f.readlines()]\n\nclass PyTestCommand(TestCommand):\n user_options = [\n (\"cov\", None, \"measure coverage\")\n ]\n\n def initialize_options(self):\n TestCommand.initialize_options(self)\n self.cov = None\n\n def finalize_options(self):\n TestCommand.finalize_options(self)\n self.test_args = ['tests']\n if self.cov:\n self.test_args += ['--cov', 'gunicorn']\n self.test_suite = True\n\n def run_tests(self):\n import pytest\n errno = pytest.main(self.test_args)\n sys.exit(errno)\n\n\ninstall_requires = [\n # We depend on functioning pkg_resources.working_set.add_entry() and\n # pkg_resources.load_entry_point(). 
These both work as of 3.0 which\n # is the first version to support Python 3.4 which we require as a\n # floor.\n 'setuptools>=3.0',\n]\n\nextra_require = {\n 'gevent': ['gevent>=0.13'],\n 'eventlet': ['eventlet>=0.9.7'],\n 'tornado': ['tornado>=0.2'],\n 'gthread': [],\n}\n\nsetup(\n name='gunicorn',\n version=__version__,\n\n description='WSGI HTTP Server for UNIX',\n long_description=long_description,\n author='Benoit Chesneau',\n author_email='[email protected]',\n license='MIT',\n url='http://gunicorn.org',\n\n python_requires='>=3.4',\n install_requires=install_requires,\n classifiers=CLASSIFIERS,\n zip_safe=False,\n packages=find_packages(exclude=['examples', 'tests']),\n include_package_data=True,\n\n tests_require=tests_require,\n cmdclass={'test': PyTestCommand},\n\n entry_points=\"\"\"\n [console_scripts]\n gunicorn=gunicorn.app.wsgiapp:run\n gunicorn_paster=gunicorn.app.pasterapp:run\n\n [paste.server_runner]\n main=gunicorn.app.pasterapp:paste_server\n \"\"\",\n extras_require=extra_require,\n)\n", "path": "setup.py"}]} | 1,417 | 216 |
gh_patches_debug_4178 | rasdani/github-patches | git_diff | learningequality__kolibri-12049 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
'On my own' device - Merging a user is not working
## Observed behavior
Observed while integration testing the [v0.16.1-beta1](https://github.com/learningequality/kolibri/releases/tag/v0.16.1-beta1) release.
When I try to merge a user created through 'On my own', I am getting an "Invalid URL" error in the console. Note that creating a new account through the same flow is working correctly. This issue is caused by the changes made in https://github.com/learningequality/kolibri/pull/12028 and is not extant in [v0.16.1-beta0](https://github.com/learningequality/kolibri/releases/tag/v0.16.1-beta0).
https://github.com/learningequality/kolibri/assets/79847249/30daa3ca-918c-4c15-901b-c74c08b96466
## Expected behavior
Fully functional 'Merge accounts' user flow.
## Steps to reproduce the issue
1. Install [v0.16.1-beta1](https://github.com/learningequality/kolibri/releases/tag/v0.16.1-beta1).
2. Set up a full device as a server and another device by going through the 'On my own' setup flow.
3. Attempt to merge the user from the 'On my own' device to the server facility.
## Logs
[logs.zip](https://github.com/learningequality/kolibri/files/14850735/logs.zip)
## Usage Details
[v0.16.1-beta1](https://github.com/learningequality/kolibri/releases/tag/v0.16.1-beta1)
Windows 11, Ubuntu 22 - Chrome
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kolibri/plugins/user_profile/viewsets.py`
Content:
```
1 import requests
2 from django.contrib.auth import login
3 from django.core.exceptions import ValidationError as DjangoValidationError
4 from rest_framework.exceptions import ValidationError
5 from rest_framework.response import Response
6 from rest_framework.views import APIView
7
8 from .utils import TokenGenerator
9 from kolibri.core.auth.models import FacilityUser
10 from kolibri.core.utils.urls import reverse_remote
11 from kolibri.utils.urls import validator
12
13
14 class OnMyOwnSetupViewset(APIView):
15 """
16 Viewset to determine if the facility has been setup as an "On my own setup" facility.
17 """
18
19 def get(self, request, format=None):
20 if request.user.is_anonymous:
21 self.permission_denied(request)
22 user_facility = self.request.user.facility
23 return Response(
24 {
25 "on_my_own_setup": user_facility.on_my_own_setup,
26 }
27 )
28
29
30 class RemoteFacilityUserViewset(APIView):
31 def get(self, request):
32 baseurl = request.query_params.get("baseurl", "")
33 try:
34 validator(baseurl)
35 except DjangoValidationError as e:
36 raise ValidationError(detail=str(e))
37 username = request.query_params.get("username", None)
38 facility = request.query_params.get("facility", None)
39 if username is None or facility is None:
40 raise ValidationError(detail="Both username and facility are required")
41 url = reverse_remote(baseurl, "kolibri:core:publicsearchuser-list")
42 try:
43 response = requests.get(
44 url, params={"facility": facility, "search": username}
45 )
46 if response.status_code == 200:
47 return Response(response.json())
48 else:
49 return Response({})
50 except Exception as e:
51 raise ValidationError(detail=str(e))
52
53
54 class RemoteFacilityUserAuthenticatedViewset(APIView):
55 def post(self, request, *args, **kwargs):
56 baseurl = request.query_params.get("baseurl", "")
57 try:
58 validator(baseurl)
59 except DjangoValidationError as e:
60 raise ValidationError(detail=str(e))
61 username = request.data.get("username", None)
62 facility = request.data.get("facility", None)
63 password = request.data.get("password", None)
64 if username is None or facility is None:
65 raise ValidationError(detail="Both username and facility are required")
66 url = reverse_remote(baseurl, "kolibri:core:publicuser-list")
67 params = {"facility": facility, "search": username}
68
69 # adding facility so auth works when learners can login without password:
70 username = "username={}&facility={}".format(username, facility)
71
72 auth = requests.auth.HTTPBasicAuth(username, password)
73 try:
74 response = requests.get(url, params=params, verify=False, auth=auth)
75 if response.status_code == 200:
76 return Response(response.json())
77 else:
78 return Response({"error": response.json()["detail"]})
79 except Exception as e:
80 raise ValidationError(detail=str(e))
81
82
83 class LoginMergedUserViewset(APIView):
84 """
85 Viewset to login into kolibri using the merged user,
86 after the old user has been deleted
87 """
88
89 def post(self, request):
90 pk = request.data.get("pk", None)
91 token = request.data.get("token", None)
92 new_user = FacilityUser.objects.get(pk=pk)
93 if not TokenGenerator().check_token(new_user, token):
94 return Response({"error": "Unauthorized"}, status=401)
95 login(request, new_user)
96 return Response({"success": True})
97
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/kolibri/plugins/user_profile/viewsets.py b/kolibri/plugins/user_profile/viewsets.py
--- a/kolibri/plugins/user_profile/viewsets.py
+++ b/kolibri/plugins/user_profile/viewsets.py
@@ -53,7 +53,7 @@
class RemoteFacilityUserAuthenticatedViewset(APIView):
def post(self, request, *args, **kwargs):
- baseurl = request.query_params.get("baseurl", "")
+ baseurl = request.data.get("baseurl", "")
try:
validator(baseurl)
except DjangoValidationError as e:
| {"golden_diff": "diff --git a/kolibri/plugins/user_profile/viewsets.py b/kolibri/plugins/user_profile/viewsets.py\n--- a/kolibri/plugins/user_profile/viewsets.py\n+++ b/kolibri/plugins/user_profile/viewsets.py\n@@ -53,7 +53,7 @@\n \n class RemoteFacilityUserAuthenticatedViewset(APIView):\n def post(self, request, *args, **kwargs):\n- baseurl = request.query_params.get(\"baseurl\", \"\")\n+ baseurl = request.data.get(\"baseurl\", \"\")\n try:\n validator(baseurl)\n except DjangoValidationError as e:\n", "issue": "'On my own' device - Merging a user is not working\n## Observed behavior\r\nObserved while integration testing the [v0.16.1-beta1 ](https://github.com/learningequality/kolibri/releases/tag/v0.16.1-beta1) release.\r\nWhen I try to merge a user created through 'On my own' I am getting an \"Invalid URL\" error in the console. Note that creating a new account through the same flow is working correctly. This issue is caused by the changes made in https://github.com/learningequality/kolibri/pull/12028 and is not extant in [v0.16.1-beta0](https://github.com/learningequality/kolibri/releases/tag/v0.16.1-beta0).\r\n\r\nhttps://github.com/learningequality/kolibri/assets/79847249/30daa3ca-918c-4c15-901b-c74c08b96466\r\n\r\n## Expected behavior\r\n\r\nFully functional 'Merge accounts' user flow. \r\n\r\n## Steps to reproduce the issue\r\n\r\n1. Install [v0.16.1-beta1 ](https://github.com/learningequality/kolibri/releases/tag/v0.16.1-beta1).\r\n2. Setup a full device as a server and another device by going through the 'On my own' setup flow.\r\n3. Attempt to merge the user from the 'On my own' device' to the server facility.\r\n\r\n## Logs\r\n\r\n[logs.zip](https://github.com/learningequality/kolibri/files/14850735/logs.zip)\r\n\r\n## Usage Details\r\n[v0.16.1-beta1 ](https://github.com/learningequality/kolibri/releases/tag/v0.16.1-beta1)\r\nWindows 11, Ubuntu 22 - Chrome\n", "before_files": [{"content": "import requests\nfrom django.contrib.auth import login\nfrom django.core.exceptions import ValidationError as DjangoValidationError\nfrom rest_framework.exceptions import ValidationError\nfrom rest_framework.response import Response\nfrom rest_framework.views import APIView\n\nfrom .utils import TokenGenerator\nfrom kolibri.core.auth.models import FacilityUser\nfrom kolibri.core.utils.urls import reverse_remote\nfrom kolibri.utils.urls import validator\n\n\nclass OnMyOwnSetupViewset(APIView):\n \"\"\"\n Viewset to determine if the facility has been setup as an \"On my own setup\" facility.\n \"\"\"\n\n def get(self, request, format=None):\n if request.user.is_anonymous:\n self.permission_denied(request)\n user_facility = self.request.user.facility\n return Response(\n {\n \"on_my_own_setup\": user_facility.on_my_own_setup,\n }\n )\n\n\nclass RemoteFacilityUserViewset(APIView):\n def get(self, request):\n baseurl = request.query_params.get(\"baseurl\", \"\")\n try:\n validator(baseurl)\n except DjangoValidationError as e:\n raise ValidationError(detail=str(e))\n username = request.query_params.get(\"username\", None)\n facility = request.query_params.get(\"facility\", None)\n if username is None or facility is None:\n raise ValidationError(detail=\"Both username and facility are required\")\n url = reverse_remote(baseurl, \"kolibri:core:publicsearchuser-list\")\n try:\n response = requests.get(\n url, params={\"facility\": facility, \"search\": username}\n )\n if response.status_code == 200:\n return Response(response.json())\n else:\n return Response({})\n except Exception as e:\n raise 
ValidationError(detail=str(e))\n\n\nclass RemoteFacilityUserAuthenticatedViewset(APIView):\n def post(self, request, *args, **kwargs):\n baseurl = request.query_params.get(\"baseurl\", \"\")\n try:\n validator(baseurl)\n except DjangoValidationError as e:\n raise ValidationError(detail=str(e))\n username = request.data.get(\"username\", None)\n facility = request.data.get(\"facility\", None)\n password = request.data.get(\"password\", None)\n if username is None or facility is None:\n raise ValidationError(detail=\"Both username and facility are required\")\n url = reverse_remote(baseurl, \"kolibri:core:publicuser-list\")\n params = {\"facility\": facility, \"search\": username}\n\n # adding facility so auth works when learners can login without password:\n username = \"username={}&facility={}\".format(username, facility)\n\n auth = requests.auth.HTTPBasicAuth(username, password)\n try:\n response = requests.get(url, params=params, verify=False, auth=auth)\n if response.status_code == 200:\n return Response(response.json())\n else:\n return Response({\"error\": response.json()[\"detail\"]})\n except Exception as e:\n raise ValidationError(detail=str(e))\n\n\nclass LoginMergedUserViewset(APIView):\n \"\"\"\n Viewset to login into kolibri using the merged user,\n after the old user has been deleted\n \"\"\"\n\n def post(self, request):\n pk = request.data.get(\"pk\", None)\n token = request.data.get(\"token\", None)\n new_user = FacilityUser.objects.get(pk=pk)\n if not TokenGenerator().check_token(new_user, token):\n return Response({\"error\": \"Unauthorized\"}, status=401)\n login(request, new_user)\n return Response({\"success\": True})\n", "path": "kolibri/plugins/user_profile/viewsets.py"}], "after_files": [{"content": "import requests\nfrom django.contrib.auth import login\nfrom django.core.exceptions import ValidationError as DjangoValidationError\nfrom rest_framework.exceptions import ValidationError\nfrom rest_framework.response import Response\nfrom rest_framework.views import APIView\n\nfrom .utils import TokenGenerator\nfrom kolibri.core.auth.models import FacilityUser\nfrom kolibri.core.utils.urls import reverse_remote\nfrom kolibri.utils.urls import validator\n\n\nclass OnMyOwnSetupViewset(APIView):\n \"\"\"\n Viewset to determine if the facility has been setup as an \"On my own setup\" facility.\n \"\"\"\n\n def get(self, request, format=None):\n if request.user.is_anonymous:\n self.permission_denied(request)\n user_facility = self.request.user.facility\n return Response(\n {\n \"on_my_own_setup\": user_facility.on_my_own_setup,\n }\n )\n\n\nclass RemoteFacilityUserViewset(APIView):\n def get(self, request):\n baseurl = request.query_params.get(\"baseurl\", \"\")\n try:\n validator(baseurl)\n except DjangoValidationError as e:\n raise ValidationError(detail=str(e))\n username = request.query_params.get(\"username\", None)\n facility = request.query_params.get(\"facility\", None)\n if username is None or facility is None:\n raise ValidationError(detail=\"Both username and facility are required\")\n url = reverse_remote(baseurl, \"kolibri:core:publicsearchuser-list\")\n try:\n response = requests.get(\n url, params={\"facility\": facility, \"search\": username}\n )\n if response.status_code == 200:\n return Response(response.json())\n else:\n return Response({})\n except Exception as e:\n raise ValidationError(detail=str(e))\n\n\nclass RemoteFacilityUserAuthenticatedViewset(APIView):\n def post(self, request, *args, **kwargs):\n baseurl = request.data.get(\"baseurl\", \"\")\n 
try:\n validator(baseurl)\n except DjangoValidationError as e:\n raise ValidationError(detail=str(e))\n username = request.data.get(\"username\", None)\n facility = request.data.get(\"facility\", None)\n password = request.data.get(\"password\", None)\n if username is None or facility is None:\n raise ValidationError(detail=\"Both username and facility are required\")\n url = reverse_remote(baseurl, \"kolibri:core:publicuser-list\")\n params = {\"facility\": facility, \"search\": username}\n\n # adding facility so auth works when learners can login without password:\n username = \"username={}&facility={}\".format(username, facility)\n\n auth = requests.auth.HTTPBasicAuth(username, password)\n try:\n response = requests.get(url, params=params, verify=False, auth=auth)\n if response.status_code == 200:\n return Response(response.json())\n else:\n return Response({\"error\": response.json()[\"detail\"]})\n except Exception as e:\n raise ValidationError(detail=str(e))\n\n\nclass LoginMergedUserViewset(APIView):\n \"\"\"\n Viewset to login into kolibri using the merged user,\n after the old user has been deleted\n \"\"\"\n\n def post(self, request):\n pk = request.data.get(\"pk\", None)\n token = request.data.get(\"token\", None)\n new_user = FacilityUser.objects.get(pk=pk)\n if not TokenGenerator().check_token(new_user, token):\n return Response({\"error\": \"Unauthorized\"}, status=401)\n login(request, new_user)\n return Response({\"success\": True})\n", "path": "kolibri/plugins/user_profile/viewsets.py"}]} | 1,590 | 127 |
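A note on the record above: the one-line fix moves the `baseurl` lookup from the query string to the POST body, which is where the merge flow actually sends it. The sketch below illustrates the difference, assuming a configured Django settings module with Django REST Framework installed; the URL and payload are hypothetical.

```python
from rest_framework.parsers import JSONParser
from rest_framework.request import Request
from rest_framework.test import APIRequestFactory

factory = APIRequestFactory()
# The client posts baseurl in the JSON body, not as ?baseurl=... in the URL.
django_request = factory.post(
    "/remote_auth/", {"baseurl": "http://server:8080/"}, format="json"
)
request = Request(django_request, parsers=[JSONParser()])

request.query_params.get("baseurl", "")  # "" -> validator() rejects it as an invalid URL
request.data.get("baseurl", "")          # "http://server:8080/" -> validation passes
```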
gh_patches_debug_1219 | rasdani/github-patches | git_diff | pulp__pulpcore-4641 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
pulp_file version is set to 3.40.0.dev
**Version**
pulpcore 3.40.0
**Describe the bug**
Status API reports pulp_file version as 3.40.0.dev
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pulp_file/app/__init__.py`
Content:
```
1 from pulpcore.plugin import PulpPluginAppConfig
2
3
4 class PulpFilePluginAppConfig(PulpPluginAppConfig):
5 """
6 Entry point for pulp_file plugin.
7 """
8
9 name = "pulp_file.app"
10 label = "file"
11 version = "3.40.0.dev"
12 python_package_name = "pulp_file" # TODO Add python_module_name
13 domain_compatible = True
14
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pulp_file/app/__init__.py b/pulp_file/app/__init__.py
--- a/pulp_file/app/__init__.py
+++ b/pulp_file/app/__init__.py
@@ -8,6 +8,6 @@
name = "pulp_file.app"
label = "file"
- version = "3.40.0.dev"
+ version = "3.41.0.dev"
python_package_name = "pulp_file" # TODO Add python_module_name
domain_compatible = True
| {"golden_diff": "diff --git a/pulp_file/app/__init__.py b/pulp_file/app/__init__.py\n--- a/pulp_file/app/__init__.py\n+++ b/pulp_file/app/__init__.py\n@@ -8,6 +8,6 @@\n \n name = \"pulp_file.app\"\n label = \"file\"\n- version = \"3.40.0.dev\"\n+ version = \"3.41.0.dev\"\n python_package_name = \"pulp_file\" # TODO Add python_module_name\n domain_compatible = True\n", "issue": "pulp_file version is set to 3.40.0.dev \n**Version**\r\npulpcore 3.40.0\r\n\r\n**Describe the bug**\r\nStatus API reports pulp_file version as 3.40.0.dev\n", "before_files": [{"content": "from pulpcore.plugin import PulpPluginAppConfig\n\n\nclass PulpFilePluginAppConfig(PulpPluginAppConfig):\n \"\"\"\n Entry point for pulp_file plugin.\n \"\"\"\n\n name = \"pulp_file.app\"\n label = \"file\"\n version = \"3.40.0.dev\"\n python_package_name = \"pulp_file\" # TODO Add python_module_name\n domain_compatible = True\n", "path": "pulp_file/app/__init__.py"}], "after_files": [{"content": "from pulpcore.plugin import PulpPluginAppConfig\n\n\nclass PulpFilePluginAppConfig(PulpPluginAppConfig):\n \"\"\"\n Entry point for pulp_file plugin.\n \"\"\"\n\n name = \"pulp_file.app\"\n label = \"file\"\n version = \"3.41.0.dev\"\n python_package_name = \"pulp_file\" # TODO Add python_module_name\n domain_compatible = True\n", "path": "pulp_file/app/__init__.py"}]} | 423 | 121 |
gh_patches_debug_3577 | rasdani/github-patches | git_diff | python__mypy-2596 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`Tuple[()]` is occasionally converted to `Tuple[Any, ...]`
Most obvious when the `Tuple[()]` is passed through a Callable
```
from typing import *
Type = Callable[[Tuple[()]], int]
x = "foo" # type: Type
```
Results in:
```
Incompatible types in assignment (expression has type "str", variable has type Callable[[Tuple[Any, ...]], int])
```
As a side note, `Type = Tuple[()]` also appears to give a weird error.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mypy/exprtotype.py`
Content:
```
1 """Translate an Expression to a Type value."""
2
3 from mypy.nodes import (
4 Expression, NameExpr, MemberExpr, IndexExpr, TupleExpr,
5 ListExpr, StrExpr, BytesExpr, UnicodeExpr, EllipsisExpr
6 )
7 from mypy.parsetype import parse_str_as_type, TypeParseError
8 from mypy.types import Type, UnboundType, TypeList, EllipsisType
9
10
11 class TypeTranslationError(Exception):
12 """Exception raised when an expression is not valid as a type."""
13
14
15 def expr_to_unanalyzed_type(expr: Expression) -> Type:
16 """Translate an expression to the corresponding type.
17
18 The result is not semantically analyzed. It can be UnboundType or TypeList.
19 Raise TypeTranslationError if the expression cannot represent a type.
20 """
21 if isinstance(expr, NameExpr):
22 name = expr.name
23 return UnboundType(name, line=expr.line, column=expr.column)
24 elif isinstance(expr, MemberExpr):
25 fullname = get_member_expr_fullname(expr)
26 if fullname:
27 return UnboundType(fullname, line=expr.line, column=expr.column)
28 else:
29 raise TypeTranslationError()
30 elif isinstance(expr, IndexExpr):
31 base = expr_to_unanalyzed_type(expr.base)
32 if isinstance(base, UnboundType):
33 if base.args:
34 raise TypeTranslationError()
35 if isinstance(expr.index, TupleExpr):
36 args = expr.index.items
37 else:
38 args = [expr.index]
39 base.args = [expr_to_unanalyzed_type(arg) for arg in args]
40 return base
41 else:
42 raise TypeTranslationError()
43 elif isinstance(expr, ListExpr):
44 return TypeList([expr_to_unanalyzed_type(t) for t in expr.items],
45 line=expr.line, column=expr.column)
46 elif isinstance(expr, (StrExpr, BytesExpr, UnicodeExpr)):
47 # Parse string literal type.
48 try:
49 result = parse_str_as_type(expr.value, expr.line)
50 except TypeParseError:
51 raise TypeTranslationError()
52 return result
53 elif isinstance(expr, EllipsisExpr):
54 return EllipsisType(expr.line)
55 else:
56 raise TypeTranslationError()
57
58
59 def get_member_expr_fullname(expr: MemberExpr) -> str:
60 """Return the qualified name representation of a member expression.
61
62 Return a string of form foo.bar, foo.bar.baz, or similar, or None if the
63 argument cannot be represented in this form.
64 """
65 if isinstance(expr.expr, NameExpr):
66 initial = expr.expr.name
67 elif isinstance(expr.expr, MemberExpr):
68 initial = get_member_expr_fullname(expr.expr)
69 else:
70 return None
71 return '{}.{}'.format(initial, expr.name)
72
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mypy/exprtotype.py b/mypy/exprtotype.py
--- a/mypy/exprtotype.py
+++ b/mypy/exprtotype.py
@@ -37,6 +37,8 @@
else:
args = [expr.index]
base.args = [expr_to_unanalyzed_type(arg) for arg in args]
+ if not base.args:
+ base.empty_tuple_index = True
return base
else:
raise TypeTranslationError()
| {"golden_diff": "diff --git a/mypy/exprtotype.py b/mypy/exprtotype.py\n--- a/mypy/exprtotype.py\n+++ b/mypy/exprtotype.py\n@@ -37,6 +37,8 @@\n else:\n args = [expr.index]\n base.args = [expr_to_unanalyzed_type(arg) for arg in args]\n+ if not base.args:\n+ base.empty_tuple_index = True\n return base\n else:\n raise TypeTranslationError()\n", "issue": "`Tuple[()]` is occasionally converted to `Tuple[Any, ...]`\nMost obvious when the `Tuple[()]` is passed through a Callable\r\n```\r\nfrom typing import *\r\n\r\nType = Callable[[Tuple[()]], int]\r\nx = \"foo\" # type: Type\r\n```\r\nResults in:\r\n```\r\nIncompatible types in assignment (expression has type \"str\", variable has type Callable[[Tuple[Any, ...]], int])\r\n```\r\n\r\nAs a side note,\r\n```Type = Tuple[()]```\r\nAlso appears to give a weird error.\n", "before_files": [{"content": "\"\"\"Translate an Expression to a Type value.\"\"\"\n\nfrom mypy.nodes import (\n Expression, NameExpr, MemberExpr, IndexExpr, TupleExpr,\n ListExpr, StrExpr, BytesExpr, UnicodeExpr, EllipsisExpr\n)\nfrom mypy.parsetype import parse_str_as_type, TypeParseError\nfrom mypy.types import Type, UnboundType, TypeList, EllipsisType\n\n\nclass TypeTranslationError(Exception):\n \"\"\"Exception raised when an expression is not valid as a type.\"\"\"\n\n\ndef expr_to_unanalyzed_type(expr: Expression) -> Type:\n \"\"\"Translate an expression to the corresponding type.\n\n The result is not semantically analyzed. It can be UnboundType or TypeList.\n Raise TypeTranslationError if the expression cannot represent a type.\n \"\"\"\n if isinstance(expr, NameExpr):\n name = expr.name\n return UnboundType(name, line=expr.line, column=expr.column)\n elif isinstance(expr, MemberExpr):\n fullname = get_member_expr_fullname(expr)\n if fullname:\n return UnboundType(fullname, line=expr.line, column=expr.column)\n else:\n raise TypeTranslationError()\n elif isinstance(expr, IndexExpr):\n base = expr_to_unanalyzed_type(expr.base)\n if isinstance(base, UnboundType):\n if base.args:\n raise TypeTranslationError()\n if isinstance(expr.index, TupleExpr):\n args = expr.index.items\n else:\n args = [expr.index]\n base.args = [expr_to_unanalyzed_type(arg) for arg in args]\n return base\n else:\n raise TypeTranslationError()\n elif isinstance(expr, ListExpr):\n return TypeList([expr_to_unanalyzed_type(t) for t in expr.items],\n line=expr.line, column=expr.column)\n elif isinstance(expr, (StrExpr, BytesExpr, UnicodeExpr)):\n # Parse string literal type.\n try:\n result = parse_str_as_type(expr.value, expr.line)\n except TypeParseError:\n raise TypeTranslationError()\n return result\n elif isinstance(expr, EllipsisExpr):\n return EllipsisType(expr.line)\n else:\n raise TypeTranslationError()\n\n\ndef get_member_expr_fullname(expr: MemberExpr) -> str:\n \"\"\"Return the qualified name representation of a member expression.\n\n Return a string of form foo.bar, foo.bar.baz, or similar, or None if the\n argument cannot be represented in this form.\n \"\"\"\n if isinstance(expr.expr, NameExpr):\n initial = expr.expr.name\n elif isinstance(expr.expr, MemberExpr):\n initial = get_member_expr_fullname(expr.expr)\n else:\n return None\n return '{}.{}'.format(initial, expr.name)\n", "path": "mypy/exprtotype.py"}], "after_files": [{"content": "\"\"\"Translate an Expression to a Type value.\"\"\"\n\nfrom mypy.nodes import (\n Expression, NameExpr, MemberExpr, IndexExpr, TupleExpr,\n ListExpr, StrExpr, BytesExpr, UnicodeExpr, EllipsisExpr\n)\nfrom mypy.parsetype import parse_str_as_type, 
TypeParseError\nfrom mypy.types import Type, UnboundType, TypeList, EllipsisType\n\n\nclass TypeTranslationError(Exception):\n \"\"\"Exception raised when an expression is not valid as a type.\"\"\"\n\n\ndef expr_to_unanalyzed_type(expr: Expression) -> Type:\n \"\"\"Translate an expression to the corresponding type.\n\n The result is not semantically analyzed. It can be UnboundType or TypeList.\n Raise TypeTranslationError if the expression cannot represent a type.\n \"\"\"\n if isinstance(expr, NameExpr):\n name = expr.name\n return UnboundType(name, line=expr.line, column=expr.column)\n elif isinstance(expr, MemberExpr):\n fullname = get_member_expr_fullname(expr)\n if fullname:\n return UnboundType(fullname, line=expr.line, column=expr.column)\n else:\n raise TypeTranslationError()\n elif isinstance(expr, IndexExpr):\n base = expr_to_unanalyzed_type(expr.base)\n if isinstance(base, UnboundType):\n if base.args:\n raise TypeTranslationError()\n if isinstance(expr.index, TupleExpr):\n args = expr.index.items\n else:\n args = [expr.index]\n base.args = [expr_to_unanalyzed_type(arg) for arg in args]\n if not base.args:\n base.empty_tuple_index = True\n return base\n else:\n raise TypeTranslationError()\n elif isinstance(expr, ListExpr):\n return TypeList([expr_to_unanalyzed_type(t) for t in expr.items],\n line=expr.line, column=expr.column)\n elif isinstance(expr, (StrExpr, BytesExpr, UnicodeExpr)):\n # Parse string literal type.\n try:\n result = parse_str_as_type(expr.value, expr.line)\n except TypeParseError:\n raise TypeTranslationError()\n return result\n elif isinstance(expr, EllipsisExpr):\n return EllipsisType(expr.line)\n else:\n raise TypeTranslationError()\n\n\ndef get_member_expr_fullname(expr: MemberExpr) -> str:\n \"\"\"Return the qualified name representation of a member expression.\n\n Return a string of form foo.bar, foo.bar.baz, or similar, or None if the\n argument cannot be represented in this form.\n \"\"\"\n if isinstance(expr.expr, NameExpr):\n initial = expr.expr.name\n elif isinstance(expr.expr, MemberExpr):\n initial = get_member_expr_fullname(expr.expr)\n else:\n return None\n return '{}.{}'.format(initial, expr.name)\n", "path": "mypy/exprtotype.py"}]} | 1,085 | 109 |
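A note on the record above: the patch records an `empty_tuple_index` flag whenever a type is subscripted with zero arguments, letting later semantic analysis tell `Tuple[()]` apart from a bare `Tuple` instead of widening it to `Tuple[Any, ...]`. The issue's reproducer doubles as a regression check; saved as `repro.py` and run through `mypy repro.py`, the error should now mention `Tuple[()]`:

```python
from typing import Callable, Tuple

Type = Callable[[Tuple[()]], int]
x = "foo"  # type: Type  # expected: incompatible with Callable[[Tuple[()]], int]
```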
gh_patches_debug_56668 | rasdani/github-patches | git_diff | magenta__magenta-841 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
execfile() was removed from Python 3
https://github.com/tensorflow/magenta/blob/master/magenta/tools/pip/setup.py#L23
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `magenta/tools/pip/setup.py`
Content:
```
1 # Copyright 2016 Google Inc. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 """A setuptools based setup module for magenta."""
15
16 from setuptools import find_packages
17 from setuptools import setup
18
19 # Bit of a hack to parse the version string stored in version.py without
20 # executing __init__.py, which will end up requiring a bunch of dependencies to
21 # execute (e.g., tensorflow, pretty_midi, etc.).
22 # Makes the __version__ variable available.
23 execfile('magenta/version.py')
24
25
26 REQUIRED_PACKAGES = [
27 'IPython',
28 'Pillow >= 3.4.2',
29 'bokeh >= 0.12.0',
30 'futures',
31 'intervaltree >= 2.1.0',
32 'matplotlib >= 1.5.3',
33 'mido == 1.2.6',
34 'numpy >= 1.11.0',
35 'pandas >= 0.18.1',
36 'pretty_midi >= 0.2.6',
37 'python-rtmidi',
38 'scipy >= 0.18.1',
39 'tensorflow >= 1.1.0',
40 'wheel',
41 ]
42
43 CONSOLE_SCRIPTS = [
44 'magenta.interfaces.midi.magenta_midi',
45 'magenta.interfaces.midi.midi_clock',
46 'magenta.models.drums_rnn.drums_rnn_create_dataset',
47 'magenta.models.drums_rnn.drums_rnn_generate',
48 'magenta.models.drums_rnn.drums_rnn_train',
49 'magenta.models.image_stylization.image_stylization_create_dataset',
50 'magenta.models.image_stylization.image_stylization_evaluate',
51 'magenta.models.image_stylization.image_stylization_finetune',
52 'magenta.models.image_stylization.image_stylization_train',
53 'magenta.models.image_stylization.image_stylization_transform',
54 'magenta.models.improv_rnn.improv_rnn_create_dataset',
55 'magenta.models.improv_rnn.improv_rnn_generate',
56 'magenta.models.improv_rnn.improv_rnn_train',
57 'magenta.models.melody_rnn.melody_rnn_create_dataset',
58 'magenta.models.melody_rnn.melody_rnn_generate',
59 'magenta.models.melody_rnn.melody_rnn_train',
60 'magenta.models.nsynth.wavenet.nsynth_generate',
61 'magenta.models.nsynth.wavenet.nsynth_save_embeddings',
62 'magenta.models.performance_rnn.performance_rnn_create_dataset',
63 'magenta.models.performance_rnn.performance_rnn_generate',
64 'magenta.models.performance_rnn.performance_rnn_train',
65 'magenta.models.pianoroll_rnn_nade.pianoroll_rnn_nade_create_dataset',
66 'magenta.models.pianoroll_rnn_nade.pianoroll_rnn_nade_generate',
67 'magenta.models.pianoroll_rnn_nade.pianoroll_rnn_nade_train',
68 'magenta.models.polyphony_rnn.polyphony_rnn_create_dataset',
69 'magenta.models.polyphony_rnn.polyphony_rnn_generate',
70 'magenta.models.polyphony_rnn.polyphony_rnn_train',
71 'magenta.models.rl_tuner.rl_tuner_train',
72 'magenta.models.sketch_rnn.sketch_rnn_train',
73 'magenta.scripts.convert_dir_to_note_sequences',
74 ]
75
76 setup(
77 name='magenta',
78 version=__version__, # pylint: disable=undefined-variable
79 description='Use machine learning to create art and music',
80 long_description='',
81 url='https://magenta.tensorflow.org/',
82 author='Google Inc.',
83 author_email='[email protected]',
84 license='Apache 2',
85 # PyPI package information.
86 classifiers=[
87 'Development Status :: 4 - Beta',
88 'Intended Audience :: Developers',
89 'Intended Audience :: Education',
90 'Intended Audience :: Science/Research',
91 'License :: OSI Approved :: Apache Software License',
92 'Programming Language :: Python :: 2.7',
93 'Programming Language :: Python :: 3',
94 'Topic :: Scientific/Engineering :: Mathematics',
95 'Topic :: Software Development :: Libraries :: Python Modules',
96 'Topic :: Software Development :: Libraries',
97 ],
98 keywords='tensorflow machine learning magenta music art',
99
100 packages=find_packages(),
101 install_requires=REQUIRED_PACKAGES,
102 entry_points={
103 'console_scripts': ['%s = %s:console_entry_point' % (n, p) for n, p in
104 ((s.split('.')[-1], s) for s in CONSOLE_SCRIPTS)],
105 },
106
107 include_package_data=True,
108 package_data={
109 'magenta': ['models/image_stylization/evaluation_images/*.jpg'],
110 },
111 )
112
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/magenta/tools/pip/setup.py b/magenta/tools/pip/setup.py
--- a/magenta/tools/pip/setup.py
+++ b/magenta/tools/pip/setup.py
@@ -20,7 +20,8 @@
# executing __init__.py, which will end up requiring a bunch of dependencies to
# execute (e.g., tensorflow, pretty_midi, etc.).
# Makes the __version__ variable available.
-execfile('magenta/version.py')
+with open('magenta/version.py') as in_file:
+ exec(in_file.read())
REQUIRED_PACKAGES = [
| {"golden_diff": "diff --git a/magenta/tools/pip/setup.py b/magenta/tools/pip/setup.py\n--- a/magenta/tools/pip/setup.py\n+++ b/magenta/tools/pip/setup.py\n@@ -20,7 +20,8 @@\n # executing __init__.py, which will end up requiring a bunch of dependencies to\n # execute (e.g., tensorflow, pretty_midi, etc.).\n # Makes the __version__ variable available.\n-execfile('magenta/version.py')\n+with open('magenta/version.py') as in_file:\n+ exec(in_file.read())\n \n \n REQUIRED_PACKAGES = [\n", "issue": "execfile() was removed from Python 3\nhttps://github.com/tensorflow/magenta/blob/master/magenta/tools/pip/setup.py#L23\n", "before_files": [{"content": "# Copyright 2016 Google Inc. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"A setuptools based setup module for magenta.\"\"\"\n\nfrom setuptools import find_packages\nfrom setuptools import setup\n\n# Bit of a hack to parse the version string stored in version.py without\n# executing __init__.py, which will end up requiring a bunch of dependencies to\n# execute (e.g., tensorflow, pretty_midi, etc.).\n# Makes the __version__ variable available.\nexecfile('magenta/version.py')\n\n\nREQUIRED_PACKAGES = [\n 'IPython',\n 'Pillow >= 3.4.2',\n 'bokeh >= 0.12.0',\n 'futures',\n 'intervaltree >= 2.1.0',\n 'matplotlib >= 1.5.3',\n 'mido == 1.2.6',\n 'numpy >= 1.11.0',\n 'pandas >= 0.18.1',\n 'pretty_midi >= 0.2.6',\n 'python-rtmidi',\n 'scipy >= 0.18.1',\n 'tensorflow >= 1.1.0',\n 'wheel',\n]\n\nCONSOLE_SCRIPTS = [\n 'magenta.interfaces.midi.magenta_midi',\n 'magenta.interfaces.midi.midi_clock',\n 'magenta.models.drums_rnn.drums_rnn_create_dataset',\n 'magenta.models.drums_rnn.drums_rnn_generate',\n 'magenta.models.drums_rnn.drums_rnn_train',\n 'magenta.models.image_stylization.image_stylization_create_dataset',\n 'magenta.models.image_stylization.image_stylization_evaluate',\n 'magenta.models.image_stylization.image_stylization_finetune',\n 'magenta.models.image_stylization.image_stylization_train',\n 'magenta.models.image_stylization.image_stylization_transform',\n 'magenta.models.improv_rnn.improv_rnn_create_dataset',\n 'magenta.models.improv_rnn.improv_rnn_generate',\n 'magenta.models.improv_rnn.improv_rnn_train',\n 'magenta.models.melody_rnn.melody_rnn_create_dataset',\n 'magenta.models.melody_rnn.melody_rnn_generate',\n 'magenta.models.melody_rnn.melody_rnn_train',\n 'magenta.models.nsynth.wavenet.nsynth_generate',\n 'magenta.models.nsynth.wavenet.nsynth_save_embeddings',\n 'magenta.models.performance_rnn.performance_rnn_create_dataset',\n 'magenta.models.performance_rnn.performance_rnn_generate',\n 'magenta.models.performance_rnn.performance_rnn_train',\n 'magenta.models.pianoroll_rnn_nade.pianoroll_rnn_nade_create_dataset',\n 'magenta.models.pianoroll_rnn_nade.pianoroll_rnn_nade_generate',\n 'magenta.models.pianoroll_rnn_nade.pianoroll_rnn_nade_train',\n 'magenta.models.polyphony_rnn.polyphony_rnn_create_dataset',\n 'magenta.models.polyphony_rnn.polyphony_rnn_generate',\n 
'magenta.models.polyphony_rnn.polyphony_rnn_train',\n 'magenta.models.rl_tuner.rl_tuner_train',\n 'magenta.models.sketch_rnn.sketch_rnn_train',\n 'magenta.scripts.convert_dir_to_note_sequences',\n]\n\nsetup(\n name='magenta',\n version=__version__, # pylint: disable=undefined-variable\n description='Use machine learning to create art and music',\n long_description='',\n url='https://magenta.tensorflow.org/',\n author='Google Inc.',\n author_email='[email protected]',\n license='Apache 2',\n # PyPI package information.\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Intended Audience :: Developers',\n 'Intended Audience :: Education',\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Topic :: Scientific/Engineering :: Mathematics',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n 'Topic :: Software Development :: Libraries',\n ],\n keywords='tensorflow machine learning magenta music art',\n\n packages=find_packages(),\n install_requires=REQUIRED_PACKAGES,\n entry_points={\n 'console_scripts': ['%s = %s:console_entry_point' % (n, p) for n, p in\n ((s.split('.')[-1], s) for s in CONSOLE_SCRIPTS)],\n },\n\n include_package_data=True,\n package_data={\n 'magenta': ['models/image_stylization/evaluation_images/*.jpg'],\n },\n)\n", "path": "magenta/tools/pip/setup.py"}], "after_files": [{"content": "# Copyright 2016 Google Inc. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"A setuptools based setup module for magenta.\"\"\"\n\nfrom setuptools import find_packages\nfrom setuptools import setup\n\n# Bit of a hack to parse the version string stored in version.py without\n# executing __init__.py, which will end up requiring a bunch of dependencies to\n# execute (e.g., tensorflow, pretty_midi, etc.).\n# Makes the __version__ variable available.\nwith open('magenta/version.py') as in_file:\n exec(in_file.read())\n\n\nREQUIRED_PACKAGES = [\n 'IPython',\n 'Pillow >= 3.4.2',\n 'bokeh >= 0.12.0',\n 'futures',\n 'intervaltree >= 2.1.0',\n 'matplotlib >= 1.5.3',\n 'mido == 1.2.6',\n 'numpy >= 1.11.0',\n 'pandas >= 0.18.1',\n 'pretty_midi >= 0.2.6',\n 'python-rtmidi',\n 'scipy >= 0.18.1',\n 'tensorflow >= 1.1.0',\n 'wheel',\n]\n\nCONSOLE_SCRIPTS = [\n 'magenta.interfaces.midi.magenta_midi',\n 'magenta.interfaces.midi.midi_clock',\n 'magenta.models.drums_rnn.drums_rnn_create_dataset',\n 'magenta.models.drums_rnn.drums_rnn_generate',\n 'magenta.models.drums_rnn.drums_rnn_train',\n 'magenta.models.image_stylization.image_stylization_create_dataset',\n 'magenta.models.image_stylization.image_stylization_evaluate',\n 'magenta.models.image_stylization.image_stylization_finetune',\n 'magenta.models.image_stylization.image_stylization_train',\n 'magenta.models.image_stylization.image_stylization_transform',\n 'magenta.models.improv_rnn.improv_rnn_create_dataset',\n 'magenta.models.improv_rnn.improv_rnn_generate',\n 
'magenta.models.improv_rnn.improv_rnn_train',\n 'magenta.models.melody_rnn.melody_rnn_create_dataset',\n 'magenta.models.melody_rnn.melody_rnn_generate',\n 'magenta.models.melody_rnn.melody_rnn_train',\n 'magenta.models.nsynth.wavenet.nsynth_generate',\n 'magenta.models.nsynth.wavenet.nsynth_save_embeddings',\n 'magenta.models.performance_rnn.performance_rnn_create_dataset',\n 'magenta.models.performance_rnn.performance_rnn_generate',\n 'magenta.models.performance_rnn.performance_rnn_train',\n 'magenta.models.pianoroll_rnn_nade.pianoroll_rnn_nade_create_dataset',\n 'magenta.models.pianoroll_rnn_nade.pianoroll_rnn_nade_generate',\n 'magenta.models.pianoroll_rnn_nade.pianoroll_rnn_nade_train',\n 'magenta.models.polyphony_rnn.polyphony_rnn_create_dataset',\n 'magenta.models.polyphony_rnn.polyphony_rnn_generate',\n 'magenta.models.polyphony_rnn.polyphony_rnn_train',\n 'magenta.models.rl_tuner.rl_tuner_train',\n 'magenta.models.sketch_rnn.sketch_rnn_train',\n 'magenta.scripts.convert_dir_to_note_sequences',\n]\n\nsetup(\n name='magenta',\n version=__version__, # pylint: disable=undefined-variable\n description='Use machine learning to create art and music',\n long_description='',\n url='https://magenta.tensorflow.org/',\n author='Google Inc.',\n author_email='[email protected]',\n license='Apache 2',\n # PyPI package information.\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Intended Audience :: Developers',\n 'Intended Audience :: Education',\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Topic :: Scientific/Engineering :: Mathematics',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n 'Topic :: Software Development :: Libraries',\n ],\n keywords='tensorflow machine learning magenta music art',\n\n packages=find_packages(),\n install_requires=REQUIRED_PACKAGES,\n entry_points={\n 'console_scripts': ['%s = %s:console_entry_point' % (n, p) for n, p in\n ((s.split('.')[-1], s) for s in CONSOLE_SCRIPTS)],\n },\n\n include_package_data=True,\n package_data={\n 'magenta': ['models/image_stylization/evaluation_images/*.jpg'],\n },\n)\n", "path": "magenta/tools/pip/setup.py"}]} | 1,635 | 128 |
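For context on the record above: `execfile()` was removed in Python 3, and the patch swaps in the idiomatic replacement. The same pattern in isolation (assuming, as in the repository, that `magenta/version.py` defines `__version__`):

```python
# Python 3 replacement for execfile('magenta/version.py'):
with open('magenta/version.py') as in_file:
    exec(in_file.read())  # binds __version__ in the current namespace

print(__version__)
```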
gh_patches_debug_18735 | rasdani/github-patches | git_diff | openfun__marsha-1060 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Create XAPI statements for live video
## Feature Request
**Is your feature request related to a problem or unsupported use case? Please describe.**
When a video is a live stream, all the existing xAPI statements are sent exactly as for regular videos. Some events should not be sent, and some data cannot be computed.
**Describe the solution you'd like**
Change the activity-type to `http://id.tincanapi.com/activitytype/webinar`
Send statements for these events:
- initialized
- play
- pause
- interacted
Also, do not send the video length info; we can't have it. The completion threshold cannot be computed either.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/backend/marsha/core/xapi.py`
Content:
```
1 """XAPI module."""
2 import re
3 import uuid
4
5 from django.conf import settings
6 from django.utils import timezone
7 from django.utils.translation import to_locale
8
9 import requests
10
11
12 class XAPIStatement:
13 """Object to work on a XAPI Statement."""
14
15 statement = None
16
17 def __init__(self, video, statement, lti_user):
18 """Compute a valid xapi satement.
19
20 Parameters
21 ----------
22 video : Type[.models/videos]
23 The video object used in the xAPI statement
24
25 statement : dictionary
26 Statement containing base information to send to the LRS
27 An example of expected statement:
28 {
29 "verb": {
30 "id": "http://adlnet.gov/expapi/verbs/initialized",
31 "display": {
32 "en-US": "initialized"
33 }
34 },
35 "context": {
36 "extensions": {
37 "https://w3id.org/xapi/video/extensions/volume": 1,
38 "https://w3id.org/xapi/video/extensions/video-playback-size": "640x264",
39 }
40 }
41 }
42
43 lti_user : Type[lti.LTIUser]
44 Object representing data stored in the JWT Token and related to the user authenticated
45 with LTI
46
47 """
48 try:
49 user_id = lti_user.user.get("id")
50 except AttributeError:
51 user_id = lti_user.session_id
52
53 homepage = video.playlist.consumer_site.domain
54
55 if re.match(r"^http(s?):\/\/.*", homepage) is None:
56 homepage = f"http://{homepage}"
57
58 if "id" not in statement:
59 statement["id"] = str(uuid.uuid4())
60
61 statement["timestamp"] = timezone.now().isoformat()
62 statement["context"].update(
63 {"contextActivities": {"category": [{"id": "https://w3id.org/xapi/video"}]}}
64 )
65
66 statement["actor"] = {
67 "objectType": "Agent",
68 "account": {"name": user_id, "homePage": homepage},
69 }
70
71 statement["object"] = {
72 "definition": {
73 "type": "https://w3id.org/xapi/video/activity-type/video",
74 "name": {
75 to_locale(settings.LANGUAGE_CODE).replace("_", "-"): video.title
76 },
77 },
78 "id": "uuid://{id}".format(id=str(video.id)),
79 "objectType": "Activity",
80 }
81
82 object_extensions = {}
83 if lti_user.course.get("school_name") is not None:
84 object_extensions[
85 "https://w3id.org/xapi/acrossx/extensions/school"
86 ] = lti_user.course["school_name"]
87
88 if lti_user.course.get("course_name") is not None:
89 object_extensions[
90 "http://adlnet.gov/expapi/activities/course"
91 ] = lti_user.course["course_name"]
92
93 if lti_user.course.get("course_run") is not None:
94 object_extensions[
95 "http://adlnet.gov/expapi/activities/module"
96 ] = lti_user.course["course_run"]
97
98 if object_extensions:
99 statement["object"]["definition"]["extensions"] = object_extensions
100
101 self.statement = statement
102
103 def get_statement(self):
104 """Return the enriched statement."""
105 return self.statement
106
107
108 class XAPI:
109 """The XAPI object compute statements and send them to a LRS."""
110
111 def __init__(self, url, auth_token, xapi_version="1.0.3"):
112 """Initialize the XAPI module.
113
114 Parameters
115 ----------
116 url: string
117 The LRS endpoint to fetch
118
119 auth_token: string
120 The basic_auth token used to authenticate on the LRS
121
122 xapi_version: string
123 The xAPI version used.
124
125 """
126 self.url = url
127 self.auth_token = auth_token
128 self.xapi_version = xapi_version
129
130 def send(self, xapi_statement):
131 """Send the statement to a LRS.
132
133 Parameters
134 ----------
135 statement : Type[.XAPIStatement]
136
137 """
138 headers = {
139 "Authorization": self.auth_token,
140 "Content-Type": "application/json",
141 "X-Experience-API-Version": self.xapi_version,
142 }
143
144 response = requests.post(
145 self.url, json=xapi_statement.get_statement(), headers=headers
146 )
147
148 response.raise_for_status()
149
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/backend/marsha/core/xapi.py b/src/backend/marsha/core/xapi.py
--- a/src/backend/marsha/core/xapi.py
+++ b/src/backend/marsha/core/xapi.py
@@ -52,6 +52,12 @@
homepage = video.playlist.consumer_site.domain
+ activity_type = "https://w3id.org/xapi/video/activity-type/video"
+
+ # When the video is a live we change the activity to webinar
+ if video.live_state is not None:
+ activity_type = "http://id.tincanapi.com/activitytype/webinar"
+
if re.match(r"^http(s?):\/\/.*", homepage) is None:
homepage = f"http://{homepage}"
@@ -70,7 +76,7 @@
statement["object"] = {
"definition": {
- "type": "https://w3id.org/xapi/video/activity-type/video",
+ "type": activity_type,
"name": {
to_locale(settings.LANGUAGE_CODE).replace("_", "-"): video.title
},
| {"golden_diff": "diff --git a/src/backend/marsha/core/xapi.py b/src/backend/marsha/core/xapi.py\n--- a/src/backend/marsha/core/xapi.py\n+++ b/src/backend/marsha/core/xapi.py\n@@ -52,6 +52,12 @@\n \n homepage = video.playlist.consumer_site.domain\n \n+ activity_type = \"https://w3id.org/xapi/video/activity-type/video\"\n+\n+ # When the video is a live we change the activity to webinar\n+ if video.live_state is not None:\n+ activity_type = \"http://id.tincanapi.com/activitytype/webinar\"\n+\n if re.match(r\"^http(s?):\\/\\/.*\", homepage) is None:\n homepage = f\"http://{homepage}\"\n \n@@ -70,7 +76,7 @@\n \n statement[\"object\"] = {\n \"definition\": {\n- \"type\": \"https://w3id.org/xapi/video/activity-type/video\",\n+ \"type\": activity_type,\n \"name\": {\n to_locale(settings.LANGUAGE_CODE).replace(\"_\", \"-\"): video.title\n },\n", "issue": "Create XAPI statements for live video\n## Feature Request\r\n\r\n**Is your feature request related to a problem or unsupported use case? Please describe.**\r\n\r\nWhen a video is a live all the existing XAPI statement are sent like a regular videos. Some events should not be sent and some data can't be computed\r\n\r\n**Describe the solution you'd like**\r\n\r\nChange the activity-type to `http://id.tincanapi.com/activitytype/webinar`\r\nSend statement for those events : \r\n- initialized\r\n- play\r\n- pause\r\n- interacted\r\n\r\nAlso, do not send video length info, we can't have it. The completion threshold can not be computed too.\r\n\n", "before_files": [{"content": "\"\"\"XAPI module.\"\"\"\nimport re\nimport uuid\n\nfrom django.conf import settings\nfrom django.utils import timezone\nfrom django.utils.translation import to_locale\n\nimport requests\n\n\nclass XAPIStatement:\n \"\"\"Object to work on a XAPI Statement.\"\"\"\n\n statement = None\n\n def __init__(self, video, statement, lti_user):\n \"\"\"Compute a valid xapi satement.\n\n Parameters\n ----------\n video : Type[.models/videos]\n The video object used in the xAPI statement\n\n statement : dictionary\n Statement containing base information to send to the LRS\n An example of expected statement:\n {\n \"verb\": {\n \"id\": \"http://adlnet.gov/expapi/verbs/initialized\",\n \"display\": {\n \"en-US\": \"initialized\"\n }\n },\n \"context\": {\n \"extensions\": {\n \"https://w3id.org/xapi/video/extensions/volume\": 1,\n \"https://w3id.org/xapi/video/extensions/video-playback-size\": \"640x264\",\n }\n }\n }\n\n lti_user : Type[lti.LTIUser]\n Object representing data stored in the JWT Token and related to the user authenticated\n with LTI\n\n \"\"\"\n try:\n user_id = lti_user.user.get(\"id\")\n except AttributeError:\n user_id = lti_user.session_id\n\n homepage = video.playlist.consumer_site.domain\n\n if re.match(r\"^http(s?):\\/\\/.*\", homepage) is None:\n homepage = f\"http://{homepage}\"\n\n if \"id\" not in statement:\n statement[\"id\"] = str(uuid.uuid4())\n\n statement[\"timestamp\"] = timezone.now().isoformat()\n statement[\"context\"].update(\n {\"contextActivities\": {\"category\": [{\"id\": \"https://w3id.org/xapi/video\"}]}}\n )\n\n statement[\"actor\"] = {\n \"objectType\": \"Agent\",\n \"account\": {\"name\": user_id, \"homePage\": homepage},\n }\n\n statement[\"object\"] = {\n \"definition\": {\n \"type\": \"https://w3id.org/xapi/video/activity-type/video\",\n \"name\": {\n to_locale(settings.LANGUAGE_CODE).replace(\"_\", \"-\"): video.title\n },\n },\n \"id\": \"uuid://{id}\".format(id=str(video.id)),\n \"objectType\": \"Activity\",\n }\n\n object_extensions = {}\n 
if lti_user.course.get(\"school_name\") is not None:\n object_extensions[\n \"https://w3id.org/xapi/acrossx/extensions/school\"\n ] = lti_user.course[\"school_name\"]\n\n if lti_user.course.get(\"course_name\") is not None:\n object_extensions[\n \"http://adlnet.gov/expapi/activities/course\"\n ] = lti_user.course[\"course_name\"]\n\n if lti_user.course.get(\"course_run\") is not None:\n object_extensions[\n \"http://adlnet.gov/expapi/activities/module\"\n ] = lti_user.course[\"course_run\"]\n\n if object_extensions:\n statement[\"object\"][\"definition\"][\"extensions\"] = object_extensions\n\n self.statement = statement\n\n def get_statement(self):\n \"\"\"Return the enriched statement.\"\"\"\n return self.statement\n\n\nclass XAPI:\n \"\"\"The XAPI object compute statements and send them to a LRS.\"\"\"\n\n def __init__(self, url, auth_token, xapi_version=\"1.0.3\"):\n \"\"\"Initialize the XAPI module.\n\n Parameters\n ----------\n url: string\n The LRS endpoint to fetch\n\n auth_token: string\n The basic_auth token used to authenticate on the LRS\n\n xapi_version: string\n The xAPI version used.\n\n \"\"\"\n self.url = url\n self.auth_token = auth_token\n self.xapi_version = xapi_version\n\n def send(self, xapi_statement):\n \"\"\"Send the statement to a LRS.\n\n Parameters\n ----------\n statement : Type[.XAPIStatement]\n\n \"\"\"\n headers = {\n \"Authorization\": self.auth_token,\n \"Content-Type\": \"application/json\",\n \"X-Experience-API-Version\": self.xapi_version,\n }\n\n response = requests.post(\n self.url, json=xapi_statement.get_statement(), headers=headers\n )\n\n response.raise_for_status()\n", "path": "src/backend/marsha/core/xapi.py"}], "after_files": [{"content": "\"\"\"XAPI module.\"\"\"\nimport re\nimport uuid\n\nfrom django.conf import settings\nfrom django.utils import timezone\nfrom django.utils.translation import to_locale\n\nimport requests\n\n\nclass XAPIStatement:\n \"\"\"Object to work on a XAPI Statement.\"\"\"\n\n statement = None\n\n def __init__(self, video, statement, lti_user):\n \"\"\"Compute a valid xapi satement.\n\n Parameters\n ----------\n video : Type[.models/videos]\n The video object used in the xAPI statement\n\n statement : dictionary\n Statement containing base information to send to the LRS\n An example of expected statement:\n {\n \"verb\": {\n \"id\": \"http://adlnet.gov/expapi/verbs/initialized\",\n \"display\": {\n \"en-US\": \"initialized\"\n }\n },\n \"context\": {\n \"extensions\": {\n \"https://w3id.org/xapi/video/extensions/volume\": 1,\n \"https://w3id.org/xapi/video/extensions/video-playback-size\": \"640x264\",\n }\n }\n }\n\n lti_user : Type[lti.LTIUser]\n Object representing data stored in the JWT Token and related to the user authenticated\n with LTI\n\n \"\"\"\n try:\n user_id = lti_user.user.get(\"id\")\n except AttributeError:\n user_id = lti_user.session_id\n\n homepage = video.playlist.consumer_site.domain\n\n activity_type = \"https://w3id.org/xapi/video/activity-type/video\"\n\n # When the video is a live we change the activity to webinar\n if video.live_state is not None:\n activity_type = \"http://id.tincanapi.com/activitytype/webinar\"\n\n if re.match(r\"^http(s?):\\/\\/.*\", homepage) is None:\n homepage = f\"http://{homepage}\"\n\n if \"id\" not in statement:\n statement[\"id\"] = str(uuid.uuid4())\n\n statement[\"timestamp\"] = timezone.now().isoformat()\n statement[\"context\"].update(\n {\"contextActivities\": {\"category\": [{\"id\": \"https://w3id.org/xapi/video\"}]}}\n )\n\n statement[\"actor\"] = {\n 
\"objectType\": \"Agent\",\n \"account\": {\"name\": user_id, \"homePage\": homepage},\n }\n\n statement[\"object\"] = {\n \"definition\": {\n \"type\": activity_type,\n \"name\": {\n to_locale(settings.LANGUAGE_CODE).replace(\"_\", \"-\"): video.title\n },\n },\n \"id\": \"uuid://{id}\".format(id=str(video.id)),\n \"objectType\": \"Activity\",\n }\n\n object_extensions = {}\n if lti_user.course.get(\"school_name\") is not None:\n object_extensions[\n \"https://w3id.org/xapi/acrossx/extensions/school\"\n ] = lti_user.course[\"school_name\"]\n\n if lti_user.course.get(\"course_name\") is not None:\n object_extensions[\n \"http://adlnet.gov/expapi/activities/course\"\n ] = lti_user.course[\"course_name\"]\n\n if lti_user.course.get(\"course_run\") is not None:\n object_extensions[\n \"http://adlnet.gov/expapi/activities/module\"\n ] = lti_user.course[\"course_run\"]\n\n if object_extensions:\n statement[\"object\"][\"definition\"][\"extensions\"] = object_extensions\n\n self.statement = statement\n\n def get_statement(self):\n \"\"\"Return the enriched statement.\"\"\"\n return self.statement\n\n\nclass XAPI:\n \"\"\"The XAPI object compute statements and send them to a LRS.\"\"\"\n\n def __init__(self, url, auth_token, xapi_version=\"1.0.3\"):\n \"\"\"Initialize the XAPI module.\n\n Parameters\n ----------\n url: string\n The LRS endpoint to fetch\n\n auth_token: string\n The basic_auth token used to authenticate on the LRS\n\n xapi_version: string\n The xAPI version used.\n\n \"\"\"\n self.url = url\n self.auth_token = auth_token\n self.xapi_version = xapi_version\n\n def send(self, xapi_statement):\n \"\"\"Send the statement to a LRS.\n\n Parameters\n ----------\n statement : Type[.XAPIStatement]\n\n \"\"\"\n headers = {\n \"Authorization\": self.auth_token,\n \"Content-Type\": \"application/json\",\n \"X-Experience-API-Version\": self.xapi_version,\n }\n\n response = requests.post(\n self.url, json=xapi_statement.get_statement(), headers=headers\n )\n\n response.raise_for_status()\n", "path": "src/backend/marsha/core/xapi.py"}]} | 1,693 | 238 |
gh_patches_debug_34117 | rasdani/github-patches | git_diff | ESMCI__cime-1090 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
case.lt_archive
The lt_archive script has several problems that prevent it from working.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `utils/python/CIME/case_lt_archive.py`
Content:
```
1 from CIME.XML.standard_module_setup import *
2 from CIME.utils import expect, does_file_have_string, append_status
3 from CIME.XML.lt_archive import LTArchive
4
5 import time
6
7 logger = logging.getLogger(__name__)
8
9 ###############################################################################
10 def case_lt_archive(case):
11 ###############################################################################
12 caseroot = case.get_value("CASEROOT")
13
14 # max number of threads needed by scripts
15 os.environ["maxthrds"] = 1
16
17 # document start
18 append_status("lt_archive starting",caseroot=caseroot,sfile="CaseStatus")
19
20 # determine status of run and short term archiving
21 runComplete = does_file_have_string(os.path.join(caseroot, "CaseStatus"),
22 "run SUCCESSFUL")
23 staComplete = does_file_have_string(os.path.join(caseroot, "stArchiveStatus"),
24 "st_archive_complete")
25
26 # set up envrionment vars and call the lt_archive.sh script
27 if runComplete and staComplete:
28 os.environ["DOUT_S_ROOT"] = case.get_value("DOUT_S_ROOT")
29 os.environ["DOUT_L_MSROOT"] = case.get_value("DOUT_L_MSROOT")
30 os.environ["DOUT_L_HPSS_ACCNT"] = case.get_value("DOUT_L_HPSS_ACCNT")
31
32 lid = time.strftime("%y%m%d-%H%M%S")
33 lt_archive = LTArchive(case.get_value("MACH"))
34 lt_archive_args = lt_archive.get_lt_archive_args()
35 cmd = os.path.join(caseroot, "Tools/lt_archive.sh") \
36 + lt_archive_args + "ltArchiveStatus." + lid + " 2>&1"
37 run_cmd_no_fail(cmd, from_dir=caseroot)
38 else:
39 expect(False,
40 "lt_archive: run or st_archive is not yet complete or was not successful."
41 "Unable to perform long term archive...")
42
43 # document completion
44 append_status("lt_archive completed" ,caseroot=caseroot, sfile="CaseStatus")
45
46 return True
47
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/utils/python/CIME/case_lt_archive.py b/utils/python/CIME/case_lt_archive.py
--- a/utils/python/CIME/case_lt_archive.py
+++ b/utils/python/CIME/case_lt_archive.py
@@ -12,17 +12,16 @@
caseroot = case.get_value("CASEROOT")
# max number of threads needed by scripts
- os.environ["maxthrds"] = 1
+ os.environ["maxthrds"] = "1"
# document start
append_status("lt_archive starting",caseroot=caseroot,sfile="CaseStatus")
# determine status of run and short term archiving
runComplete = does_file_have_string(os.path.join(caseroot, "CaseStatus"),
- "run SUCCESSFUL")
- staComplete = does_file_have_string(os.path.join(caseroot, "stArchiveStatus"),
- "st_archive_complete")
-
+ "Run SUCCESSFUL")
+ staComplete = does_file_have_string(os.path.join(caseroot, "CaseStatus"),
+ "st_archiving completed")
# set up envrionment vars and call the lt_archive.sh script
if runComplete and staComplete:
os.environ["DOUT_S_ROOT"] = case.get_value("DOUT_S_ROOT")
@@ -32,10 +31,13 @@
lid = time.strftime("%y%m%d-%H%M%S")
lt_archive = LTArchive(case.get_value("MACH"))
lt_archive_args = lt_archive.get_lt_archive_args()
- cmd = os.path.join(caseroot, "Tools/lt_archive.sh") \
+ if lt_archive_args is None:
+ lt_archive_args = " "
+ cmd = os.path.join(caseroot, "Tools", "lt_archive.sh") \
+ lt_archive_args + "ltArchiveStatus." + lid + " 2>&1"
run_cmd_no_fail(cmd, from_dir=caseroot)
else:
+ logger.warn("runComplete %s staComplete %s"%(runComplete, staComplete))
expect(False,
"lt_archive: run or st_archive is not yet complete or was not successful."
"Unable to perform long term archive...")
| {"golden_diff": "diff --git a/utils/python/CIME/case_lt_archive.py b/utils/python/CIME/case_lt_archive.py\n--- a/utils/python/CIME/case_lt_archive.py\n+++ b/utils/python/CIME/case_lt_archive.py\n@@ -12,17 +12,16 @@\n caseroot = case.get_value(\"CASEROOT\")\n \n # max number of threads needed by scripts\n- os.environ[\"maxthrds\"] = 1\n+ os.environ[\"maxthrds\"] = \"1\"\n \n # document start\n append_status(\"lt_archive starting\",caseroot=caseroot,sfile=\"CaseStatus\")\n \n # determine status of run and short term archiving\n runComplete = does_file_have_string(os.path.join(caseroot, \"CaseStatus\"),\n- \"run SUCCESSFUL\")\n- staComplete = does_file_have_string(os.path.join(caseroot, \"stArchiveStatus\"),\n- \"st_archive_complete\")\n-\n+ \"Run SUCCESSFUL\")\n+ staComplete = does_file_have_string(os.path.join(caseroot, \"CaseStatus\"),\n+ \"st_archiving completed\")\n # set up envrionment vars and call the lt_archive.sh script\n if runComplete and staComplete:\n os.environ[\"DOUT_S_ROOT\"] = case.get_value(\"DOUT_S_ROOT\")\n@@ -32,10 +31,13 @@\n lid = time.strftime(\"%y%m%d-%H%M%S\")\n lt_archive = LTArchive(case.get_value(\"MACH\"))\n lt_archive_args = lt_archive.get_lt_archive_args()\n- cmd = os.path.join(caseroot, \"Tools/lt_archive.sh\") \\\n+ if lt_archive_args is None:\n+ lt_archive_args = \" \"\n+ cmd = os.path.join(caseroot, \"Tools\", \"lt_archive.sh\") \\\n + lt_archive_args + \"ltArchiveStatus.\" + lid + \" 2>&1\"\n run_cmd_no_fail(cmd, from_dir=caseroot)\n else:\n+ logger.warn(\"runComplete %s staComplete %s\"%(runComplete, staComplete))\n expect(False,\n \"lt_archive: run or st_archive is not yet complete or was not successful.\"\n \"Unable to perform long term archive...\")\n", "issue": "case.lt_archive\nlt_archive script has several problems preventing functionality. 
\n", "before_files": [{"content": "from CIME.XML.standard_module_setup import *\nfrom CIME.utils import expect, does_file_have_string, append_status\nfrom CIME.XML.lt_archive import LTArchive\n\nimport time\n\nlogger = logging.getLogger(__name__)\n\n###############################################################################\ndef case_lt_archive(case):\n###############################################################################\n caseroot = case.get_value(\"CASEROOT\")\n\n # max number of threads needed by scripts\n os.environ[\"maxthrds\"] = 1\n\n # document start\n append_status(\"lt_archive starting\",caseroot=caseroot,sfile=\"CaseStatus\")\n\n # determine status of run and short term archiving\n runComplete = does_file_have_string(os.path.join(caseroot, \"CaseStatus\"),\n \"run SUCCESSFUL\")\n staComplete = does_file_have_string(os.path.join(caseroot, \"stArchiveStatus\"),\n \"st_archive_complete\")\n\n # set up envrionment vars and call the lt_archive.sh script\n if runComplete and staComplete:\n os.environ[\"DOUT_S_ROOT\"] = case.get_value(\"DOUT_S_ROOT\")\n os.environ[\"DOUT_L_MSROOT\"] = case.get_value(\"DOUT_L_MSROOT\")\n os.environ[\"DOUT_L_HPSS_ACCNT\"] = case.get_value(\"DOUT_L_HPSS_ACCNT\")\n\n lid = time.strftime(\"%y%m%d-%H%M%S\")\n lt_archive = LTArchive(case.get_value(\"MACH\"))\n lt_archive_args = lt_archive.get_lt_archive_args()\n cmd = os.path.join(caseroot, \"Tools/lt_archive.sh\") \\\n + lt_archive_args + \"ltArchiveStatus.\" + lid + \" 2>&1\"\n run_cmd_no_fail(cmd, from_dir=caseroot)\n else:\n expect(False,\n \"lt_archive: run or st_archive is not yet complete or was not successful.\"\n \"Unable to perform long term archive...\")\n\n # document completion\n append_status(\"lt_archive completed\" ,caseroot=caseroot, sfile=\"CaseStatus\")\n\n return True\n", "path": "utils/python/CIME/case_lt_archive.py"}], "after_files": [{"content": "from CIME.XML.standard_module_setup import *\nfrom CIME.utils import expect, does_file_have_string, append_status\nfrom CIME.XML.lt_archive import LTArchive\n\nimport time\n\nlogger = logging.getLogger(__name__)\n\n###############################################################################\ndef case_lt_archive(case):\n###############################################################################\n caseroot = case.get_value(\"CASEROOT\")\n\n # max number of threads needed by scripts\n os.environ[\"maxthrds\"] = \"1\"\n\n # document start\n append_status(\"lt_archive starting\",caseroot=caseroot,sfile=\"CaseStatus\")\n\n # determine status of run and short term archiving\n runComplete = does_file_have_string(os.path.join(caseroot, \"CaseStatus\"),\n \"Run SUCCESSFUL\")\n staComplete = does_file_have_string(os.path.join(caseroot, \"CaseStatus\"),\n \"st_archiving completed\")\n # set up envrionment vars and call the lt_archive.sh script\n if runComplete and staComplete:\n os.environ[\"DOUT_S_ROOT\"] = case.get_value(\"DOUT_S_ROOT\")\n os.environ[\"DOUT_L_MSROOT\"] = case.get_value(\"DOUT_L_MSROOT\")\n os.environ[\"DOUT_L_HPSS_ACCNT\"] = case.get_value(\"DOUT_L_HPSS_ACCNT\")\n\n lid = time.strftime(\"%y%m%d-%H%M%S\")\n lt_archive = LTArchive(case.get_value(\"MACH\"))\n lt_archive_args = lt_archive.get_lt_archive_args()\n if lt_archive_args is None:\n lt_archive_args = \" \"\n cmd = os.path.join(caseroot, \"Tools\", \"lt_archive.sh\") \\\n + lt_archive_args + \"ltArchiveStatus.\" + lid + \" 2>&1\"\n run_cmd_no_fail(cmd, from_dir=caseroot)\n else:\n logger.warn(\"runComplete %s staComplete %s\"%(runComplete, staComplete))\n 
expect(False,\n \"lt_archive: run or st_archive is not yet complete or was not successful.\"\n \"Unable to perform long term archive...\")\n\n # document completion\n append_status(\"lt_archive completed\" ,caseroot=caseroot, sfile=\"CaseStatus\")\n\n return True\n", "path": "utils/python/CIME/case_lt_archive.py"}]} | 796 | 481 |
gh_patches_debug_37636 | rasdani/github-patches | git_diff | doccano__doccano-1222 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Enhancement request] Meaningful error on labels naming conflict
Feature description
---------
Try renaming a label to an existing name.
You get a 500 error.
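For reference, a hypothetical reproduction against a local instance (host and ids are placeholders; auth is omitted for brevity):

```python
import requests

# rename label 2 to a name another label in project 1 already uses
resp = requests.patch(
    "http://localhost:8000/v1/projects/1/labels/2",
    json={"text": "duplicate-name"},
)
print(resp.status_code)  # currently 500 (IntegrityError); a 400 with a message would be clearer
```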
Desired: a meaningful error.
Related: #601, #826.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `app/api/views/label.py`
Content:
```
1 import json
2
3 from django.db import IntegrityError, transaction
4 from django.shortcuts import get_object_or_404
5 from rest_framework import generics, status
6 from rest_framework.exceptions import ParseError
7 from rest_framework.parsers import MultiPartParser
8 from rest_framework.permissions import IsAuthenticated
9 from rest_framework.response import Response
10 from rest_framework.views import APIView
11
12 from ..models import Label, Project
13 from ..permissions import IsInProjectReadOnlyOrAdmin, IsProjectAdmin
14 from ..serializers import LabelSerializer
15
16
17 class LabelList(generics.ListCreateAPIView):
18 serializer_class = LabelSerializer
19 pagination_class = None
20 permission_classes = [IsAuthenticated & IsInProjectReadOnlyOrAdmin]
21
22 def get_queryset(self):
23 project = get_object_or_404(Project, pk=self.kwargs['project_id'])
24 return project.labels
25
26 def perform_create(self, serializer):
27 project = get_object_or_404(Project, pk=self.kwargs['project_id'])
28 serializer.save(project=project)
29
30
31 class LabelDetail(generics.RetrieveUpdateDestroyAPIView):
32 queryset = Label.objects.all()
33 serializer_class = LabelSerializer
34 lookup_url_kwarg = 'label_id'
35 permission_classes = [IsAuthenticated & IsInProjectReadOnlyOrAdmin]
36
37
38 class LabelUploadAPI(APIView):
39 parser_classes = (MultiPartParser,)
40 permission_classes = [IsAuthenticated & IsProjectAdmin]
41
42 @transaction.atomic
43 def post(self, request, *args, **kwargs):
44 if 'file' not in request.data:
45 raise ParseError('Empty content')
46 labels = json.load(request.data['file'])
47 project = get_object_or_404(Project, pk=kwargs['project_id'])
48 try:
49 for label in labels:
50 serializer = LabelSerializer(data=label)
51 serializer.is_valid(raise_exception=True)
52 serializer.save(project=project)
53 return Response(status=status.HTTP_201_CREATED)
54 except IntegrityError:
55 content = {'error': 'IntegrityError: you cannot create a label with same name or shortkey.'}
56 return Response(content, status=status.HTTP_400_BAD_REQUEST)
57
```
Path: `app/api/exceptions.py`
Content:
```
1 from rest_framework import status
2 from rest_framework.exceptions import (APIException, PermissionDenied,
3 ValidationError)
4
5
6 class FileParseException(APIException):
7 status_code = status.HTTP_400_BAD_REQUEST
8 default_detail = 'Invalid file format, line {}: {}'
9 default_code = 'invalid'
10
11 def __init__(self, line_num, line, code=None):
12 detail = self.default_detail.format(line_num, line)
13 super().__init__(detail, code)
14
15
16 class AutoLabelingException(APIException):
17 status_code = status.HTTP_400_BAD_REQUEST
18 default_detail = 'Auto labeling not allowed for the document with labels.'
19
20
21 class AutoLabeliingPermissionDenied(PermissionDenied):
22 default_detail = 'You do not have permission to perform auto labeling.' \
23 'Please ask the project administrators to add you.'
24
25
26 class URLConnectionError(ValidationError):
27 default_detail = 'Failed to establish a connection. Please check the URL or network.'
28
29
30 class AWSTokenError(ValidationError):
31 default_detail = 'The security token included in the request is invalid.'
32
33
34 class SampleDataException(ValidationError):
35 default_detail = 'The response is empty. Maybe the sample data is not appropriate.' \
36 'Please specify another sample data which returns at least one label.'
37
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/app/api/exceptions.py b/app/api/exceptions.py
--- a/app/api/exceptions.py
+++ b/app/api/exceptions.py
@@ -34,3 +34,8 @@
class SampleDataException(ValidationError):
default_detail = 'The response is empty. Maybe the sample data is not appropriate.' \
'Please specify another sample data which returns at least one label.'
+
+
+class LabelValidationError(APIException):
+ status_code = status.HTTP_400_BAD_REQUEST
+ default_detail = 'You cannot create a label with same name or shortcut key.'
diff --git a/app/api/views/label.py b/app/api/views/label.py
--- a/app/api/views/label.py
+++ b/app/api/views/label.py
@@ -9,6 +9,7 @@
from rest_framework.response import Response
from rest_framework.views import APIView
+from ..exceptions import LabelValidationError
from ..models import Label, Project
from ..permissions import IsInProjectReadOnlyOrAdmin, IsProjectAdmin
from ..serializers import LabelSerializer
@@ -27,6 +28,11 @@
project = get_object_or_404(Project, pk=self.kwargs['project_id'])
serializer.save(project=project)
+ def delete(self, request, *args, **kwargs):
+ delete_ids = request.data['ids']
+ Label.objects.filter(pk__in=delete_ids).delete()
+ return Response(status=status.HTTP_204_NO_CONTENT)
+
class LabelDetail(generics.RetrieveUpdateDestroyAPIView):
queryset = Label.objects.all()
@@ -43,14 +49,14 @@
def post(self, request, *args, **kwargs):
if 'file' not in request.data:
raise ParseError('Empty content')
- labels = json.load(request.data['file'])
project = get_object_or_404(Project, pk=kwargs['project_id'])
try:
- for label in labels:
- serializer = LabelSerializer(data=label)
- serializer.is_valid(raise_exception=True)
- serializer.save(project=project)
+ labels = json.load(request.data['file'])
+ serializer = LabelSerializer(data=labels, many=True)
+ serializer.is_valid(raise_exception=True)
+ serializer.save(project=project)
return Response(status=status.HTTP_201_CREATED)
+ except json.decoder.JSONDecodeError:
+ raise ParseError('The file format is invalid.')
except IntegrityError:
- content = {'error': 'IntegrityError: you cannot create a label with same name or shortkey.'}
- return Response(content, status=status.HTTP_400_BAD_REQUEST)
+ raise LabelValidationError
| {"golden_diff": "diff --git a/app/api/exceptions.py b/app/api/exceptions.py\n--- a/app/api/exceptions.py\n+++ b/app/api/exceptions.py\n@@ -34,3 +34,8 @@\n class SampleDataException(ValidationError):\n default_detail = 'The response is empty. Maybe the sample data is not appropriate.' \\\n 'Please specify another sample data which returns at least one label.'\n+\n+\n+class LabelValidationError(APIException):\n+ status_code = status.HTTP_400_BAD_REQUEST\n+ default_detail = 'You cannot create a label with same name or shortcut key.'\ndiff --git a/app/api/views/label.py b/app/api/views/label.py\n--- a/app/api/views/label.py\n+++ b/app/api/views/label.py\n@@ -9,6 +9,7 @@\n from rest_framework.response import Response\n from rest_framework.views import APIView\n \n+from ..exceptions import LabelValidationError\n from ..models import Label, Project\n from ..permissions import IsInProjectReadOnlyOrAdmin, IsProjectAdmin\n from ..serializers import LabelSerializer\n@@ -27,6 +28,11 @@\n project = get_object_or_404(Project, pk=self.kwargs['project_id'])\n serializer.save(project=project)\n \n+ def delete(self, request, *args, **kwargs):\n+ delete_ids = request.data['ids']\n+ Label.objects.filter(pk__in=delete_ids).delete()\n+ return Response(status=status.HTTP_204_NO_CONTENT)\n+\n \n class LabelDetail(generics.RetrieveUpdateDestroyAPIView):\n queryset = Label.objects.all()\n@@ -43,14 +49,14 @@\n def post(self, request, *args, **kwargs):\n if 'file' not in request.data:\n raise ParseError('Empty content')\n- labels = json.load(request.data['file'])\n project = get_object_or_404(Project, pk=kwargs['project_id'])\n try:\n- for label in labels:\n- serializer = LabelSerializer(data=label)\n- serializer.is_valid(raise_exception=True)\n- serializer.save(project=project)\n+ labels = json.load(request.data['file'])\n+ serializer = LabelSerializer(data=labels, many=True)\n+ serializer.is_valid(raise_exception=True)\n+ serializer.save(project=project)\n return Response(status=status.HTTP_201_CREATED)\n+ except json.decoder.JSONDecodeError:\n+ raise ParseError('The file format is invalid.')\n except IntegrityError:\n- content = {'error': 'IntegrityError: you cannot create a label with same name or shortkey.'}\n- return Response(content, status=status.HTTP_400_BAD_REQUEST)\n+ raise LabelValidationError\n", "issue": "[Enhancement request] Meaningful error on labels naming conflict\nFeature description\r\n---------\r\nTry rename a label to an existing name.\r\n\r\nYou get a 500 error.\r\n\r\nDesired: a meaningful error.\r\n\r\nRelated: #601, #826.\n", "before_files": [{"content": "import json\n\nfrom django.db import IntegrityError, transaction\nfrom django.shortcuts import get_object_or_404\nfrom rest_framework import generics, status\nfrom rest_framework.exceptions import ParseError\nfrom rest_framework.parsers import MultiPartParser\nfrom rest_framework.permissions import IsAuthenticated\nfrom rest_framework.response import Response\nfrom rest_framework.views import APIView\n\nfrom ..models import Label, Project\nfrom ..permissions import IsInProjectReadOnlyOrAdmin, IsProjectAdmin\nfrom ..serializers import LabelSerializer\n\n\nclass LabelList(generics.ListCreateAPIView):\n serializer_class = LabelSerializer\n pagination_class = None\n permission_classes = [IsAuthenticated & IsInProjectReadOnlyOrAdmin]\n\n def get_queryset(self):\n project = get_object_or_404(Project, pk=self.kwargs['project_id'])\n return project.labels\n\n def perform_create(self, serializer):\n project = get_object_or_404(Project, 
pk=self.kwargs['project_id'])\n serializer.save(project=project)\n\n\nclass LabelDetail(generics.RetrieveUpdateDestroyAPIView):\n queryset = Label.objects.all()\n serializer_class = LabelSerializer\n lookup_url_kwarg = 'label_id'\n permission_classes = [IsAuthenticated & IsInProjectReadOnlyOrAdmin]\n\n\nclass LabelUploadAPI(APIView):\n parser_classes = (MultiPartParser,)\n permission_classes = [IsAuthenticated & IsProjectAdmin]\n\n @transaction.atomic\n def post(self, request, *args, **kwargs):\n if 'file' not in request.data:\n raise ParseError('Empty content')\n labels = json.load(request.data['file'])\n project = get_object_or_404(Project, pk=kwargs['project_id'])\n try:\n for label in labels:\n serializer = LabelSerializer(data=label)\n serializer.is_valid(raise_exception=True)\n serializer.save(project=project)\n return Response(status=status.HTTP_201_CREATED)\n except IntegrityError:\n content = {'error': 'IntegrityError: you cannot create a label with same name or shortkey.'}\n return Response(content, status=status.HTTP_400_BAD_REQUEST)\n", "path": "app/api/views/label.py"}, {"content": "from rest_framework import status\nfrom rest_framework.exceptions import (APIException, PermissionDenied,\n ValidationError)\n\n\nclass FileParseException(APIException):\n status_code = status.HTTP_400_BAD_REQUEST\n default_detail = 'Invalid file format, line {}: {}'\n default_code = 'invalid'\n\n def __init__(self, line_num, line, code=None):\n detail = self.default_detail.format(line_num, line)\n super().__init__(detail, code)\n\n\nclass AutoLabelingException(APIException):\n status_code = status.HTTP_400_BAD_REQUEST\n default_detail = 'Auto labeling not allowed for the document with labels.'\n\n\nclass AutoLabeliingPermissionDenied(PermissionDenied):\n default_detail = 'You do not have permission to perform auto labeling.' \\\n 'Please ask the project administrators to add you.'\n\n\nclass URLConnectionError(ValidationError):\n default_detail = 'Failed to establish a connection. Please check the URL or network.'\n\n\nclass AWSTokenError(ValidationError):\n default_detail = 'The security token included in the request is invalid.'\n\n\nclass SampleDataException(ValidationError):\n default_detail = 'The response is empty. Maybe the sample data is not appropriate.' 
\\\n 'Please specify another sample data which returns at least one label.'\n", "path": "app/api/exceptions.py"}], "after_files": [{"content": "import json\n\nfrom django.db import IntegrityError, transaction\nfrom django.shortcuts import get_object_or_404\nfrom rest_framework import generics, status\nfrom rest_framework.exceptions import ParseError\nfrom rest_framework.parsers import MultiPartParser\nfrom rest_framework.permissions import IsAuthenticated\nfrom rest_framework.response import Response\nfrom rest_framework.views import APIView\n\nfrom ..exceptions import LabelValidationError\nfrom ..models import Label, Project\nfrom ..permissions import IsInProjectReadOnlyOrAdmin, IsProjectAdmin\nfrom ..serializers import LabelSerializer\n\n\nclass LabelList(generics.ListCreateAPIView):\n serializer_class = LabelSerializer\n pagination_class = None\n permission_classes = [IsAuthenticated & IsInProjectReadOnlyOrAdmin]\n\n def get_queryset(self):\n project = get_object_or_404(Project, pk=self.kwargs['project_id'])\n return project.labels\n\n def perform_create(self, serializer):\n project = get_object_or_404(Project, pk=self.kwargs['project_id'])\n serializer.save(project=project)\n\n def delete(self, request, *args, **kwargs):\n delete_ids = request.data['ids']\n Label.objects.filter(pk__in=delete_ids).delete()\n return Response(status=status.HTTP_204_NO_CONTENT)\n\n\nclass LabelDetail(generics.RetrieveUpdateDestroyAPIView):\n queryset = Label.objects.all()\n serializer_class = LabelSerializer\n lookup_url_kwarg = 'label_id'\n permission_classes = [IsAuthenticated & IsInProjectReadOnlyOrAdmin]\n\n\nclass LabelUploadAPI(APIView):\n parser_classes = (MultiPartParser,)\n permission_classes = [IsAuthenticated & IsProjectAdmin]\n\n @transaction.atomic\n def post(self, request, *args, **kwargs):\n if 'file' not in request.data:\n raise ParseError('Empty content')\n project = get_object_or_404(Project, pk=kwargs['project_id'])\n try:\n labels = json.load(request.data['file'])\n serializer = LabelSerializer(data=labels, many=True)\n serializer.is_valid(raise_exception=True)\n serializer.save(project=project)\n return Response(status=status.HTTP_201_CREATED)\n except json.decoder.JSONDecodeError:\n raise ParseError('The file format is invalid.')\n except IntegrityError:\n raise LabelValidationError\n", "path": "app/api/views/label.py"}, {"content": "from rest_framework import status\nfrom rest_framework.exceptions import (APIException, PermissionDenied,\n ValidationError)\n\n\nclass FileParseException(APIException):\n status_code = status.HTTP_400_BAD_REQUEST\n default_detail = 'Invalid file format, line {}: {}'\n default_code = 'invalid'\n\n def __init__(self, line_num, line, code=None):\n detail = self.default_detail.format(line_num, line)\n super().__init__(detail, code)\n\n\nclass AutoLabelingException(APIException):\n status_code = status.HTTP_400_BAD_REQUEST\n default_detail = 'Auto labeling not allowed for the document with labels.'\n\n\nclass AutoLabeliingPermissionDenied(PermissionDenied):\n default_detail = 'You do not have permission to perform auto labeling.' \\\n 'Please ask the project administrators to add you.'\n\n\nclass URLConnectionError(ValidationError):\n default_detail = 'Failed to establish a connection. Please check the URL or network.'\n\n\nclass AWSTokenError(ValidationError):\n default_detail = 'The security token included in the request is invalid.'\n\n\nclass SampleDataException(ValidationError):\n default_detail = 'The response is empty. 
Maybe the sample data is not appropriate.' \\\n 'Please specify another sample data which returns at least one label.'\n\n\nclass LabelValidationError(APIException):\n status_code = status.HTTP_400_BAD_REQUEST\n default_detail = 'You cannot create a label with same name or shortcut key.'\n", "path": "app/api/exceptions.py"}]} | 1,221 | 577 |
gh_patches_debug_37307 | rasdani/github-patches | git_diff | bridgecrewio__checkov-464 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Scanning IAM policy only takes First SID in json rather than looping through
**Describe the bug**
It seems that when more than one SID is specified in a JSON policy, the check does not loop through each statement; rather, it just looks at the first one and stops.
**To Reproduce**
Steps to reproduce the behavior:
1. Create policy with more than one SID
`{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "SqsAllow",
"Effect": "Allow",
"Action": [
"sqs:GetQueueAttributes",
"sqs:GetQueueUrl",
"sqs:ListDeadLetterSourceQueues",
"sqs:ListQueues",
"sqs:ReceiveMessage",
"sqs:SendMessage",
"sqs:SendMessageBatch"
],
"Resource": "*"
},
{
"Sid": "ALL",
"Effect": "Allow",
"Action": [ "*"
],
"Resource": ["*"]
},`
2. Run Checkov against policy
**Expected behavior**
I would expect the scan to check each statement within the policy rather than just the first one.
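In code terms, a sketch of that expectation (a hypothetical helper reusing the field names from the checks below, not the exact fix):

```python
from checkov.common.models.enums import CheckResult


def scan_statements(policy_block):
    # inspect every statement, not only policy_block['Statement'][0]
    for statement in policy_block.get('Statement', []):
        if statement.get('Effect', 'Allow') == 'Allow' and \
                '*' in statement.get('Action', []):
            return CheckResult.FAILED
    return CheckResult.PASSED
```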
**Desktop (please complete the following information):**
- OS: Mac
- Checkov Version: 1.0.442
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `checkov/terraform/checks/resource/aws/IAMStarActionPolicyDocument.py`
Content:
```
1 from checkov.common.models.enums import CheckResult, CheckCategories
2 from checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck
3 import json
4
5
6 class IAMStarActionPolicyDocument(BaseResourceCheck):
7
8 def __init__(self):
9 name = "Ensure no IAM policies documents allow \"*\" as a statement's actions"
10 id = "CKV_AWS_63"
11 supported_resources = ['aws_iam_role_policy', 'aws_iam_user_policy', 'aws_iam_group_policy', 'aws_iam_policy']
12 categories = [CheckCategories.IAM]
13 super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
14
15 def scan_resource_conf(self, conf):
16 if 'policy' in conf.keys():
17 try:
18 policy_block = json.loads(conf['policy'][0])
19 if 'Statement' in policy_block.keys():
20 if 'Action' in policy_block['Statement'][0] and \
21 policy_block['Statement'][0].get('Effect', ['Allow']) == 'Allow' and \
22 policy_block['Statement'][0]['Action'][0] == "*":
23 return CheckResult.FAILED
24 except: # nosec
25 pass
26 return CheckResult.PASSED
27
28
29 check = IAMStarActionPolicyDocument()
30
```
Path: `checkov/terraform/checks/resource/aws/IAMAdminPolicyDocument.py`
Content:
```
1 from checkov.common.models.enums import CheckResult, CheckCategories
2 from checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck
3 import json
4
5
6 class IAMAdminPolicyDocument(BaseResourceCheck):
7
8 def __init__(self):
9 name = "Ensure IAM policies that allow full \"*-*\" administrative privileges are not created"
10 id = "CKV_AWS_62"
11 supported_resources = ['aws_iam_role_policy', 'aws_iam_user_policy', 'aws_iam_group_policy', 'aws_iam_policy']
12 categories = [CheckCategories.IAM]
13 super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
14
15 def scan_resource_conf(self, conf):
16 if 'policy' in conf.keys():
17 try:
18 policy_block = json.loads(conf['policy'][0])
19 if 'Statement' in policy_block.keys():
20 if 'Action' in policy_block['Statement'][0] and \
21 policy_block['Statement'][0].get('Effect', ['Allow']) == 'Allow' and \
22 policy_block['Statement'][0]['Action'][0] == "*" and \
23 'Resource' in policy_block['Statement'][0] and \
24 policy_block['Statement'][0]['Resource'] == '*':
25 return CheckResult.FAILED
26 except: # nosec
27 pass
28 return CheckResult.PASSED
29
30
31 check = IAMAdminPolicyDocument()
32
```
Path: `checkov/terraform/checks/data/aws/StarActionPolicyDocument.py`
Content:
```
1 from checkov.terraform.checks.data.base_check import BaseDataCheck
2 from checkov.common.models.enums import CheckResult, CheckCategories
3
4
5 class StarActionPolicyDocument(BaseDataCheck):
6 def __init__(self):
7 name = "Ensure no IAM policies documents allow \"*\" as a statement's actions"
8 id = "CKV_AWS_49"
9 supported_data = ['aws_iam_policy_document']
10 categories = [CheckCategories.IAM]
11 super().__init__(name=name, id=id, categories=categories, supported_data=supported_data)
12
13 def scan_data_conf(self, conf):
14 """
15 validates iam policy document
16 https://learn.hashicorp.com/terraform/aws/iam-policy
17 :param conf: aws_kms_key configuration
18 :return: <CheckResult>
19 """
20 key = 'statement'
21 if key in conf.keys():
22 for statement in conf['statement']:
23 if 'actions' in statement and '*' in statement['actions'][0] and statement.get('effect', ['Allow'])[0] == 'Allow':
24 return CheckResult.FAILED
25 return CheckResult.PASSED
26
27
28 check = StarActionPolicyDocument()
29
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/checkov/terraform/checks/data/aws/StarActionPolicyDocument.py b/checkov/terraform/checks/data/aws/StarActionPolicyDocument.py
--- a/checkov/terraform/checks/data/aws/StarActionPolicyDocument.py
+++ b/checkov/terraform/checks/data/aws/StarActionPolicyDocument.py
@@ -19,7 +19,7 @@
"""
key = 'statement'
if key in conf.keys():
- for statement in conf['statement']:
+ for statement in conf[key]:
if 'actions' in statement and '*' in statement['actions'][0] and statement.get('effect', ['Allow'])[0] == 'Allow':
return CheckResult.FAILED
return CheckResult.PASSED
diff --git a/checkov/terraform/checks/resource/aws/IAMAdminPolicyDocument.py b/checkov/terraform/checks/resource/aws/IAMAdminPolicyDocument.py
--- a/checkov/terraform/checks/resource/aws/IAMAdminPolicyDocument.py
+++ b/checkov/terraform/checks/resource/aws/IAMAdminPolicyDocument.py
@@ -17,13 +17,13 @@
try:
policy_block = json.loads(conf['policy'][0])
if 'Statement' in policy_block.keys():
- if 'Action' in policy_block['Statement'][0] and \
- policy_block['Statement'][0].get('Effect', ['Allow']) == 'Allow' and \
- policy_block['Statement'][0]['Action'][0] == "*" and \
- 'Resource' in policy_block['Statement'][0] and \
- policy_block['Statement'][0]['Resource'] == '*':
+ for statement in policy_block['Statement']:
+ if 'Action' in statement and \
+ statement.get('Effect', ['Allow']) == 'Allow' and \
+ '*' in statement.get('Action', ['']) and \
+ '*' in statement.get('Resource', ['']):
return CheckResult.FAILED
- except: # nosec
+ except: # nosec
pass
return CheckResult.PASSED
diff --git a/checkov/terraform/checks/resource/aws/IAMStarActionPolicyDocument.py b/checkov/terraform/checks/resource/aws/IAMStarActionPolicyDocument.py
--- a/checkov/terraform/checks/resource/aws/IAMStarActionPolicyDocument.py
+++ b/checkov/terraform/checks/resource/aws/IAMStarActionPolicyDocument.py
@@ -17,9 +17,10 @@
try:
policy_block = json.loads(conf['policy'][0])
if 'Statement' in policy_block.keys():
- if 'Action' in policy_block['Statement'][0] and \
- policy_block['Statement'][0].get('Effect', ['Allow']) == 'Allow' and \
- policy_block['Statement'][0]['Action'][0] == "*":
+ for statement in policy_block['Statement']:
+ if 'Action' in statement and \
+ statement.get('Effect', ['Allow']) == 'Allow' and \
+ '*' in statement.get('Action', ['']):
return CheckResult.FAILED
except: # nosec
pass
| {"golden_diff": "diff --git a/checkov/terraform/checks/data/aws/StarActionPolicyDocument.py b/checkov/terraform/checks/data/aws/StarActionPolicyDocument.py\n--- a/checkov/terraform/checks/data/aws/StarActionPolicyDocument.py\n+++ b/checkov/terraform/checks/data/aws/StarActionPolicyDocument.py\n@@ -19,7 +19,7 @@\n \"\"\"\n key = 'statement'\n if key in conf.keys():\n- for statement in conf['statement']:\n+ for statement in conf[key]:\n if 'actions' in statement and '*' in statement['actions'][0] and statement.get('effect', ['Allow'])[0] == 'Allow':\n return CheckResult.FAILED\n return CheckResult.PASSED\ndiff --git a/checkov/terraform/checks/resource/aws/IAMAdminPolicyDocument.py b/checkov/terraform/checks/resource/aws/IAMAdminPolicyDocument.py\n--- a/checkov/terraform/checks/resource/aws/IAMAdminPolicyDocument.py\n+++ b/checkov/terraform/checks/resource/aws/IAMAdminPolicyDocument.py\n@@ -17,13 +17,13 @@\n try:\n policy_block = json.loads(conf['policy'][0])\n if 'Statement' in policy_block.keys():\n- if 'Action' in policy_block['Statement'][0] and \\\n- policy_block['Statement'][0].get('Effect', ['Allow']) == 'Allow' and \\\n- policy_block['Statement'][0]['Action'][0] == \"*\" and \\\n- 'Resource' in policy_block['Statement'][0] and \\\n- policy_block['Statement'][0]['Resource'] == '*':\n+ for statement in policy_block['Statement']:\n+ if 'Action' in statement and \\\n+ statement.get('Effect', ['Allow']) == 'Allow' and \\\n+ '*' in statement.get('Action', ['']) and \\\n+ '*' in statement.get('Resource', ['']):\n return CheckResult.FAILED\n- except: # nosec\n+ except: # nosec\n pass\n return CheckResult.PASSED\n \ndiff --git a/checkov/terraform/checks/resource/aws/IAMStarActionPolicyDocument.py b/checkov/terraform/checks/resource/aws/IAMStarActionPolicyDocument.py\n--- a/checkov/terraform/checks/resource/aws/IAMStarActionPolicyDocument.py\n+++ b/checkov/terraform/checks/resource/aws/IAMStarActionPolicyDocument.py\n@@ -17,9 +17,10 @@\n try:\n policy_block = json.loads(conf['policy'][0])\n if 'Statement' in policy_block.keys():\n- if 'Action' in policy_block['Statement'][0] and \\\n- policy_block['Statement'][0].get('Effect', ['Allow']) == 'Allow' and \\\n- policy_block['Statement'][0]['Action'][0] == \"*\":\n+ for statement in policy_block['Statement']:\n+ if 'Action' in statement and \\\n+ statement.get('Effect', ['Allow']) == 'Allow' and \\\n+ '*' in statement.get('Action', ['']):\n return CheckResult.FAILED\n except: # nosec\n pass\n", "issue": "Scanning IAM policy only takes First SID in json rather than looping through\n**Describe the bug**\r\nIt seems when specifying more than one SID in a json, the policies do not loop through each one rather it just looks at the first one and ends. \r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. Create policy with more than one SID\r\n`{\r\n \"Version\": \"2012-10-17\",\r\n \"Statement\": [\r\n {\r\n \"Sid\": \"SqsAllow\",\r\n \"Effect\": \"Allow\",\r\n \"Action\": [\r\n \"sqs:GetQueueAttributes\",\r\n \"sqs:GetQueueUrl\",\r\n \"sqs:ListDeadLetterSourceQueues\",\r\n \"sqs:ListQueues\",\r\n \"sqs:ReceiveMessage\",\r\n \"sqs:SendMessage\",\r\n \"sqs:SendMessageBatch\"\r\n ],\r\n \"Resource\": \"*\"\r\n },\r\n {\r\n \"Sid\": \"ALL\",\r\n \"Effect\": \"Allow\",\r\n \"Action\": [ \"*\"\r\n ],\r\n \"Resource\": [\"*\"]\r\n },`\r\n2. 
Run Checkov against policy\r\n\r\n\r\n**Expected behavior**\r\nI would expect the scan to check each json within the policy rather than the first one\r\n\r\n\r\n\r\n**Desktop (please complete the following information):**\r\n - OS: Mac\r\n - Checkov Version: 1.0.442\r\n\r\n\r\n\n", "before_files": [{"content": "from checkov.common.models.enums import CheckResult, CheckCategories\nfrom checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck\nimport json\n\n\nclass IAMStarActionPolicyDocument(BaseResourceCheck):\n\n def __init__(self):\n name = \"Ensure no IAM policies documents allow \\\"*\\\" as a statement's actions\"\n id = \"CKV_AWS_63\"\n supported_resources = ['aws_iam_role_policy', 'aws_iam_user_policy', 'aws_iam_group_policy', 'aws_iam_policy']\n categories = [CheckCategories.IAM]\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n\n def scan_resource_conf(self, conf):\n if 'policy' in conf.keys():\n try:\n policy_block = json.loads(conf['policy'][0])\n if 'Statement' in policy_block.keys():\n if 'Action' in policy_block['Statement'][0] and \\\n policy_block['Statement'][0].get('Effect', ['Allow']) == 'Allow' and \\\n policy_block['Statement'][0]['Action'][0] == \"*\":\n return CheckResult.FAILED\n except: # nosec\n pass\n return CheckResult.PASSED\n\n\ncheck = IAMStarActionPolicyDocument()\n", "path": "checkov/terraform/checks/resource/aws/IAMStarActionPolicyDocument.py"}, {"content": "from checkov.common.models.enums import CheckResult, CheckCategories\nfrom checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck\nimport json\n\n\nclass IAMAdminPolicyDocument(BaseResourceCheck):\n\n def __init__(self):\n name = \"Ensure IAM policies that allow full \\\"*-*\\\" administrative privileges are not created\"\n id = \"CKV_AWS_62\"\n supported_resources = ['aws_iam_role_policy', 'aws_iam_user_policy', 'aws_iam_group_policy', 'aws_iam_policy']\n categories = [CheckCategories.IAM]\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n\n def scan_resource_conf(self, conf):\n if 'policy' in conf.keys():\n try:\n policy_block = json.loads(conf['policy'][0])\n if 'Statement' in policy_block.keys():\n if 'Action' in policy_block['Statement'][0] and \\\n policy_block['Statement'][0].get('Effect', ['Allow']) == 'Allow' and \\\n policy_block['Statement'][0]['Action'][0] == \"*\" and \\\n 'Resource' in policy_block['Statement'][0] and \\\n policy_block['Statement'][0]['Resource'] == '*':\n return CheckResult.FAILED\n except: # nosec\n pass\n return CheckResult.PASSED\n\n\ncheck = IAMAdminPolicyDocument()\n", "path": "checkov/terraform/checks/resource/aws/IAMAdminPolicyDocument.py"}, {"content": "from checkov.terraform.checks.data.base_check import BaseDataCheck\nfrom checkov.common.models.enums import CheckResult, CheckCategories\n\n\nclass StarActionPolicyDocument(BaseDataCheck):\n def __init__(self):\n name = \"Ensure no IAM policies documents allow \\\"*\\\" as a statement's actions\"\n id = \"CKV_AWS_49\"\n supported_data = ['aws_iam_policy_document']\n categories = [CheckCategories.IAM]\n super().__init__(name=name, id=id, categories=categories, supported_data=supported_data)\n\n def scan_data_conf(self, conf):\n \"\"\"\n validates iam policy document\n https://learn.hashicorp.com/terraform/aws/iam-policy\n :param conf: aws_kms_key configuration\n :return: <CheckResult>\n \"\"\"\n key = 'statement'\n if key in conf.keys():\n for statement in 
conf['statement']:\n if 'actions' in statement and '*' in statement['actions'][0] and statement.get('effect', ['Allow'])[0] == 'Allow':\n return CheckResult.FAILED\n return CheckResult.PASSED\n\n\ncheck = StarActionPolicyDocument()\n", "path": "checkov/terraform/checks/data/aws/StarActionPolicyDocument.py"}], "after_files": [{"content": "from checkov.common.models.enums import CheckResult, CheckCategories\nfrom checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck\nimport json\n\n\nclass IAMStarActionPolicyDocument(BaseResourceCheck):\n\n def __init__(self):\n name = \"Ensure no IAM policies documents allow \\\"*\\\" as a statement's actions\"\n id = \"CKV_AWS_63\"\n supported_resources = ['aws_iam_role_policy', 'aws_iam_user_policy', 'aws_iam_group_policy', 'aws_iam_policy']\n categories = [CheckCategories.IAM]\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n\n def scan_resource_conf(self, conf):\n if 'policy' in conf.keys():\n try:\n policy_block = json.loads(conf['policy'][0])\n if 'Statement' in policy_block.keys():\n for statement in policy_block['Statement']:\n if 'Action' in statement and \\\n statement.get('Effect', ['Allow']) == 'Allow' and \\\n '*' in statement.get('Action', ['']):\n return CheckResult.FAILED\n except: # nosec\n pass\n return CheckResult.PASSED\n\n\ncheck = IAMStarActionPolicyDocument()\n", "path": "checkov/terraform/checks/resource/aws/IAMStarActionPolicyDocument.py"}, {"content": "from checkov.common.models.enums import CheckResult, CheckCategories\nfrom checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck\nimport json\n\n\nclass IAMAdminPolicyDocument(BaseResourceCheck):\n\n def __init__(self):\n name = \"Ensure IAM policies that allow full \\\"*-*\\\" administrative privileges are not created\"\n id = \"CKV_AWS_62\"\n supported_resources = ['aws_iam_role_policy', 'aws_iam_user_policy', 'aws_iam_group_policy', 'aws_iam_policy']\n categories = [CheckCategories.IAM]\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n\n def scan_resource_conf(self, conf):\n if 'policy' in conf.keys():\n try:\n policy_block = json.loads(conf['policy'][0])\n if 'Statement' in policy_block.keys():\n for statement in policy_block['Statement']:\n if 'Action' in statement and \\\n statement.get('Effect', ['Allow']) == 'Allow' and \\\n '*' in statement.get('Action', ['']) and \\\n '*' in statement.get('Resource', ['']):\n return CheckResult.FAILED\n except: # nosec\n pass\n return CheckResult.PASSED\n\n\ncheck = IAMAdminPolicyDocument()\n", "path": "checkov/terraform/checks/resource/aws/IAMAdminPolicyDocument.py"}, {"content": "from checkov.terraform.checks.data.base_check import BaseDataCheck\nfrom checkov.common.models.enums import CheckResult, CheckCategories\n\n\nclass StarActionPolicyDocument(BaseDataCheck):\n def __init__(self):\n name = \"Ensure no IAM policies documents allow \\\"*\\\" as a statement's actions\"\n id = \"CKV_AWS_49\"\n supported_data = ['aws_iam_policy_document']\n categories = [CheckCategories.IAM]\n super().__init__(name=name, id=id, categories=categories, supported_data=supported_data)\n\n def scan_data_conf(self, conf):\n \"\"\"\n validates iam policy document\n https://learn.hashicorp.com/terraform/aws/iam-policy\n :param conf: aws_kms_key configuration\n :return: <CheckResult>\n \"\"\"\n key = 'statement'\n if key in conf.keys():\n for statement in conf[key]:\n if 'actions' in statement and '*' in 
statement['actions'][0] and statement.get('effect', ['Allow'])[0] == 'Allow':\n return CheckResult.FAILED\n return CheckResult.PASSED\n\n\ncheck = StarActionPolicyDocument()\n", "path": "checkov/terraform/checks/data/aws/StarActionPolicyDocument.py"}]} | 1,585 | 675 |
gh_patches_debug_20479 | rasdani/github-patches | git_diff | sunpy__sunpy-3235 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
The ticks for the HGS overlay on map plots are white and invisible by default
Also, the HPC ticks are on all four axes.
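A minimal sketch to reproduce (assuming the bundled sample data; any map should do):

```python
import matplotlib.pyplot as plt

import sunpy.map
from sunpy.data.sample import AIA_171_IMAGE

smap = sunpy.map.Map(AIA_171_IMAGE)
ax = plt.subplot(projection=smap)
smap.plot(axes=ax)
smap.draw_grid(axes=ax)  # HGS overlay: the grid ticks come out white/invisible
plt.show()
```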
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sunpy/visualization/wcsaxes_compat.py`
Content:
```
1 """
2 This module provides functions to make WCSAxes work in SunPy.
3 """
4 import matplotlib.pyplot as plt
5
6 import astropy.units as u
7 from astropy.visualization import wcsaxes
8
9 # Force is put here to enable disabling all checks in this module.
10 # It should only be used by tests and other such hacks.
11 _FORCE_NO_WCSAXES = False
12
13 __all__ = ["is_wcsaxes", "gca_wcs", "get_world_transform",
14 "default_wcs_grid", "wcsaxes_heliographic_overlay"]
15
16
17 def is_wcsaxes(axes):
18 """
19 Tests a `matplotlib.axes.Axes` object to see if it is an instance of
20 `~astropy.visualization.wcsaxes.WCSAxes`.
21
22 Parameters
23 ----------
24 axes : `matplotlib.axes`
25 Axes to test.
26
27 Returns
28 -------
29 `bool`
30 Result of the test.
31 """
32 if not _FORCE_NO_WCSAXES:
33 return isinstance(axes, wcsaxes.WCSAxes)
34 else:
35 return False
36
37
38 def gca_wcs(wcs, fig=None, slices=None):
39 """
40 Get the current axes, and return a `~astropy.visualization.wcsaxes.WCSAxes`
41 if possible.
42
43 Parameters
44 ----------
45 wcs : `astropy.wcs.WCS`
46 A `~astropy.wcs.WCS` object used to create a new axes.
47 fig : `matplotlib.figure.Figure`
48 The figure in which to check for the axes.
49 slices : `tuple`
50 ``slices`` is passed to `~astropy.visualization.wcsaxes.WCSAxes` to describe
51 which two dimensions of the `~astropy.wcs.WCS` object are being plotted.
52 This slices the multidimensional wcs object in the way it needs to be sliced.
53
54 Returns
55 -------
56 `matplotlib.axes.Axes` or `~astropy.visualization.wcsaxes.WCSAxes`
57 The current axes, or a new one if created.
58 """
59 if not fig:
60 fig = plt.gcf()
61
62 if not len(fig.get_axes()):
63 if not _FORCE_NO_WCSAXES:
64 ax = plt.gca(projection=wcs, slices=slices)
65 else:
66 ax = plt.gca()
67 else:
68 ax = plt.gca()
69
70 return ax
71
72
73 def get_world_transform(axes):
74 """
75 Get the transformation to world coordinates.
76
77 If the axes is a `~astropy.visualization.wcsaxes.WCSAxes` instance this
78 returns the transform to the "world" coordinates, otherwise it returns
79 the transform to the matplotlib data coordinates, which are assumed to be in
80 world coordinates.
81
82 Parameters
83 ----------
84 axes : `~astropy.visualization.wcsaxes.WCSAxes` or `~matplotlib.axes.Axes`
85 The axes to get the transform from.
86
87 Returns
88 -------
89 `~matplotlib.transforms.CompositeGenericTransform`
90 The transformation object.
91 """
92 if is_wcsaxes(axes):
93 transform = axes.get_transform('world')
94 else:
95 transform = axes.transData
96
97 return transform
98
99
100 def default_wcs_grid(axes):
101 """
102 Apply some default `~astropy.visualization.wcsaxes.WCSAxes` grid
103 formatting.
104
105 Parameters
106 ----------
107 axes : `~astropy.visualization.wcsaxes.WCSAxes`
108 The `~astropy.visualization.wcsaxes.WCSAxes` object to draw the world
109 coordinate grid on.
110 """
111 axes.coords.grid(color='white', alpha=0.6, linestyle='dotted',
112 linewidth=0.5)
113
114
115 @u.quantity_input
116 def wcsaxes_heliographic_overlay(axes, grid_spacing: u.deg = 10*u.deg, **kwargs):
117 """
118 Create a heliographic overlay using
119 `~astropy.visualization.wcsaxes.WCSAxes`.
120
121 Will draw a grid and label the top axes.
122
123 Parameters
124 ----------
125 axes : `~astropy.visualization.wcsaxes.WCSAxes`
126 The `~astropy.visualization.wcsaxes.WCSAxes` object to create the HGS overlay on.
127 grid_spacing: `~astropy.units.Quantity`
128 Spacing for longitude and latitude grid in degrees.
129
130 Returns
131 -------
132 `~astropy.visualization.wcsaxes.WCSAxes`
133 The overlay object.
134
135 Notes
136 -----
137 Keywords are passed to `~astropy.visualization.wcsaxes.coordinates_map.CoordinatesMap.grid`.
138 """
139 # Unpack spacing
140 if isinstance(grid_spacing, u.Quantity) and grid_spacing.size == 1:
141 lon_space = lat_space = grid_spacing
142 elif grid_spacing.size == 2:
143 lon_space, lat_space = grid_spacing
144 else:
145 raise ValueError("grid_spacing must be a Quantity of length one or two.")
146
147 overlay = axes.get_coords_overlay('heliographic_stonyhurst')
148
149 lon = overlay[0]
150 lat = overlay[1]
151
152 lon.coord_wrap = 180
153 lon.set_major_formatter('dd')
154
155 lon.set_axislabel('Solar Longitude', minpad=0.8)
156 lat.set_axislabel('Solar Latitude', minpad=0.9)
157
158 lon.set_ticks_position('tr')
159 lat.set_ticks_position('tr')
160
161 grid_kw = {'color': 'white', 'zorder': 100, 'alpha': 0.5}
162 grid_kw.update(kwargs)
163
164 lon.set_ticks(spacing=lon_space, color=grid_kw['color'])
165 lat.set_ticks(spacing=lat_space, color=grid_kw['color'])
166
167 overlay.grid(**grid_kw)
168
169 if axes.title:
170 x, y = axes.title.get_position()
171 axes.title.set_position([x, y + 0.08])
172
173 return overlay
174
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/sunpy/visualization/wcsaxes_compat.py b/sunpy/visualization/wcsaxes_compat.py
--- a/sunpy/visualization/wcsaxes_compat.py
+++ b/sunpy/visualization/wcsaxes_compat.py
@@ -144,6 +144,12 @@
else:
raise ValueError("grid_spacing must be a Quantity of length one or two.")
+ # Set the native coordinates to be bottom and left only so they don't share
+ # axes with the overlay.
+ c1, c2 = axes.coords
+ c1.set_ticks_position('bl')
+ c2.set_ticks_position('bl')
+
overlay = axes.get_coords_overlay('heliographic_stonyhurst')
lon = overlay[0]
@@ -161,8 +167,10 @@
grid_kw = {'color': 'white', 'zorder': 100, 'alpha': 0.5}
grid_kw.update(kwargs)
- lon.set_ticks(spacing=lon_space, color=grid_kw['color'])
- lat.set_ticks(spacing=lat_space, color=grid_kw['color'])
+ # Don't plot white ticks by default (only if explicitly asked)
+ tick_color = grid_kw['color'] if 'color' in kwargs else 'k'
+ lon.set_ticks(spacing=lon_space, color=tick_color)
+ lat.set_ticks(spacing=lat_space, color=tick_color)
overlay.grid(**grid_kw)
| {"golden_diff": "diff --git a/sunpy/visualization/wcsaxes_compat.py b/sunpy/visualization/wcsaxes_compat.py\n--- a/sunpy/visualization/wcsaxes_compat.py\n+++ b/sunpy/visualization/wcsaxes_compat.py\n@@ -144,6 +144,12 @@\n else:\n raise ValueError(\"grid_spacing must be a Quantity of length one or two.\")\n \n+ # Set the native coordinates to be bottom and left only so they don't share\n+ # axes with the overlay.\n+ c1, c2 = axes.coords\n+ c1.set_ticks_position('bl')\n+ c2.set_ticks_position('bl')\n+\n overlay = axes.get_coords_overlay('heliographic_stonyhurst')\n \n lon = overlay[0]\n@@ -161,8 +167,10 @@\n grid_kw = {'color': 'white', 'zorder': 100, 'alpha': 0.5}\n grid_kw.update(kwargs)\n \n- lon.set_ticks(spacing=lon_space, color=grid_kw['color'])\n- lat.set_ticks(spacing=lat_space, color=grid_kw['color'])\n+ # Don't plot white ticks by default (only if explicitly asked)\n+ tick_color = grid_kw['color'] if 'color' in kwargs else 'k'\n+ lon.set_ticks(spacing=lon_space, color=tick_color)\n+ lat.set_ticks(spacing=lat_space, color=tick_color)\n \n overlay.grid(**grid_kw)\n", "issue": "The ticks for the HGS overlay on map plots are white and invisible by default\nAlso the HPC ticks are on all four axes.\n", "before_files": [{"content": "\"\"\"\nThis module provides functions to make WCSAxes work in SunPy.\n\"\"\"\nimport matplotlib.pyplot as plt\n\nimport astropy.units as u\nfrom astropy.visualization import wcsaxes\n\n# Force is put here to enable disabling all checks in this module.\n# It should only be used by tests and other such hacks.\n_FORCE_NO_WCSAXES = False\n\n__all__ = [\"is_wcsaxes\", \"gca_wcs\", \"get_world_transform\",\n \"default_wcs_grid\", \"wcsaxes_heliographic_overlay\"]\n\n\ndef is_wcsaxes(axes):\n \"\"\"\n Tests a `matplotlib.axes.Axes` object to see if it is an instance of\n `~astropy.visualization.wcsaxes.WCSAxes`.\n\n Parameters\n ----------\n axes : `matplotlib.axes`\n Axes to test.\n\n Returns\n -------\n `bool`\n Result of the test.\n \"\"\"\n if not _FORCE_NO_WCSAXES:\n return isinstance(axes, wcsaxes.WCSAxes)\n else:\n return False\n\n\ndef gca_wcs(wcs, fig=None, slices=None):\n \"\"\"\n Get the current axes, and return a `~astropy.visualization.wcsaxes.WCSAxes`\n if possible.\n\n Parameters\n ----------\n wcs : `astropy.wcs.WCS`\n A `~astropy.wcs.WCS` object used to create a new axes.\n fig : `matplotlib.figure.Figure`\n The figure in which to check for the axes.\n slices : `tuple`\n ``slices`` is passed to `~astropy.visualization.wcsaxes.WCSAxes` to describe\n which two dimensions of the `~astropy.wcs.WCS` object are being plotted.\n This slices the multidimensional wcs object in the way it needs to be sliced.\n\n Returns\n -------\n `matplotlib.axes.Axes` or `~astropy.visualization.wcsaxes.WCSAxes`\n The current axes, or a new one if created.\n \"\"\"\n if not fig:\n fig = plt.gcf()\n\n if not len(fig.get_axes()):\n if not _FORCE_NO_WCSAXES:\n ax = plt.gca(projection=wcs, slices=slices)\n else:\n ax = plt.gca()\n else:\n ax = plt.gca()\n\n return ax\n\n\ndef get_world_transform(axes):\n \"\"\"\n Get the transformation to world coordinates.\n\n If the axes is a `~astropy.visualization.wcsaxes.WCSAxes` instance this\n returns the transform to the \"world\" coordinates, otherwise it returns\n the transform to the matplotlib data coordinates, which are assumed to be in\n world coordinates.\n\n Parameters\n ----------\n axes : `~astropy.visualization.wcsaxes.WCSAxes` or `~matplotlib.axes.Axes`\n The axes to get the transform from.\n\n Returns\n -------\n 
`~matplotlib.transforms.CompositeGenericTransform`\n The transformation object.\n \"\"\"\n if is_wcsaxes(axes):\n transform = axes.get_transform('world')\n else:\n transform = axes.transData\n\n return transform\n\n\ndef default_wcs_grid(axes):\n \"\"\"\n Apply some default `~astropy.visualization.wcsaxes.WCSAxes` grid\n formatting.\n\n Parameters\n ----------\n axes : `~astropy.visualization.wcsaxes.WCSAxes`\n The `~astropy.visualization.wcsaxes.WCSAxes` object to draw the world\n coordinate grid on.\n \"\"\"\n axes.coords.grid(color='white', alpha=0.6, linestyle='dotted',\n linewidth=0.5)\n\n\[email protected]_input\ndef wcsaxes_heliographic_overlay(axes, grid_spacing: u.deg = 10*u.deg, **kwargs):\n \"\"\"\n Create a heliographic overlay using\n `~astropy.visualization.wcsaxes.WCSAxes`.\n\n Will draw a grid and label the top axes.\n\n Parameters\n ----------\n axes : `~astropy.visualization.wcsaxes.WCSAxes`\n The `~astropy.visualization.wcsaxes.WCSAxes` object to create the HGS overlay on.\n grid_spacing: `~astropy.units.Quantity`\n Spacing for longitude and latitude grid in degrees.\n\n Returns\n -------\n `~astropy.visualization.wcsaxes.WCSAxes`\n The overlay object.\n\n Notes\n -----\n Keywords are passed to `~astropy.visualization.wcsaxes.coordinates_map.CoordinatesMap.grid`.\n \"\"\"\n # Unpack spacing\n if isinstance(grid_spacing, u.Quantity) and grid_spacing.size == 1:\n lon_space = lat_space = grid_spacing\n elif grid_spacing.size == 2:\n lon_space, lat_space = grid_spacing\n else:\n raise ValueError(\"grid_spacing must be a Quantity of length one or two.\")\n\n overlay = axes.get_coords_overlay('heliographic_stonyhurst')\n\n lon = overlay[0]\n lat = overlay[1]\n\n lon.coord_wrap = 180\n lon.set_major_formatter('dd')\n\n lon.set_axislabel('Solar Longitude', minpad=0.8)\n lat.set_axislabel('Solar Latitude', minpad=0.9)\n\n lon.set_ticks_position('tr')\n lat.set_ticks_position('tr')\n\n grid_kw = {'color': 'white', 'zorder': 100, 'alpha': 0.5}\n grid_kw.update(kwargs)\n\n lon.set_ticks(spacing=lon_space, color=grid_kw['color'])\n lat.set_ticks(spacing=lat_space, color=grid_kw['color'])\n\n overlay.grid(**grid_kw)\n\n if axes.title:\n x, y = axes.title.get_position()\n axes.title.set_position([x, y + 0.08])\n\n return overlay\n", "path": "sunpy/visualization/wcsaxes_compat.py"}], "after_files": [{"content": "\"\"\"\nThis module provides functions to make WCSAxes work in SunPy.\n\"\"\"\nimport matplotlib.pyplot as plt\n\nimport astropy.units as u\nfrom astropy.visualization import wcsaxes\n\n# Force is put here to enable disabling all checks in this module.\n# It should only be used by tests and other such hacks.\n_FORCE_NO_WCSAXES = False\n\n__all__ = [\"is_wcsaxes\", \"gca_wcs\", \"get_world_transform\",\n \"default_wcs_grid\", \"wcsaxes_heliographic_overlay\"]\n\n\ndef is_wcsaxes(axes):\n \"\"\"\n Tests a `matplotlib.axes.Axes` object to see if it is an instance of\n `~astropy.visualization.wcsaxes.WCSAxes`.\n\n Parameters\n ----------\n axes : `matplotlib.axes`\n Axes to test.\n\n Returns\n -------\n `bool`\n Result of the test.\n \"\"\"\n if not _FORCE_NO_WCSAXES:\n return isinstance(axes, wcsaxes.WCSAxes)\n else:\n return False\n\n\ndef gca_wcs(wcs, fig=None, slices=None):\n \"\"\"\n Get the current axes, and return a `~astropy.visualization.wcsaxes.WCSAxes`\n if possible.\n\n Parameters\n ----------\n wcs : `astropy.wcs.WCS`\n A `~astropy.wcs.WCS` object used to create a new axes.\n fig : `matplotlib.figure.Figure`\n The figure in which to check for the axes.\n slices : 
`tuple`\n ``slices`` is passed to `~astropy.visualization.wcsaxes.WCSAxes` to describe\n which two dimensions of the `~astropy.wcs.WCS` object are being plotted.\n This slices the multidimensional wcs object in the way it needs to be sliced.\n\n Returns\n -------\n `matplotlib.axes.Axes` or `~astropy.visualization.wcsaxes.WCSAxes`\n The current axes, or a new one if created.\n \"\"\"\n if not fig:\n fig = plt.gcf()\n\n if not len(fig.get_axes()):\n if not _FORCE_NO_WCSAXES:\n ax = plt.gca(projection=wcs, slices=slices)\n else:\n ax = plt.gca()\n else:\n ax = plt.gca()\n\n return ax\n\n\ndef get_world_transform(axes):\n \"\"\"\n Get the transformation to world coordinates.\n\n If the axes is a `~astropy.visualization.wcsaxes.WCSAxes` instance this\n returns the transform to the \"world\" coordinates, otherwise it returns\n the transform to the matplotlib data coordinates, which are assumed to be in\n world coordinates.\n\n Parameters\n ----------\n axes : `~astropy.visualization.wcsaxes.WCSAxes` or `~matplotlib.axes.Axes`\n The axes to get the transform from.\n\n Returns\n -------\n `~matplotlib.transforms.CompositeGenericTransform`\n The transformation object.\n \"\"\"\n if is_wcsaxes(axes):\n transform = axes.get_transform('world')\n else:\n transform = axes.transData\n\n return transform\n\n\ndef default_wcs_grid(axes):\n \"\"\"\n Apply some default `~astropy.visualization.wcsaxes.WCSAxes` grid\n formatting.\n\n Parameters\n ----------\n axes : `~astropy.visualization.wcsaxes.WCSAxes`\n The `~astropy.visualization.wcsaxes.WCSAxes` object to draw the world\n coordinate grid on.\n \"\"\"\n axes.coords.grid(color='white', alpha=0.6, linestyle='dotted',\n linewidth=0.5)\n\n\[email protected]_input\ndef wcsaxes_heliographic_overlay(axes, grid_spacing: u.deg = 10*u.deg, **kwargs):\n \"\"\"\n Create a heliographic overlay using\n `~astropy.visualization.wcsaxes.WCSAxes`.\n\n Will draw a grid and label the top axes.\n\n Parameters\n ----------\n axes : `~astropy.visualization.wcsaxes.WCSAxes`\n The `~astropy.visualization.wcsaxes.WCSAxes` object to create the HGS overlay on.\n grid_spacing: `~astropy.units.Quantity`\n Spacing for longitude and latitude grid in degrees.\n\n Returns\n -------\n `~astropy.visualization.wcsaxes.WCSAxes`\n The overlay object.\n\n Notes\n -----\n Keywords are passed to `~astropy.visualization.wcsaxes.coordinates_map.CoordinatesMap.grid`.\n \"\"\"\n # Unpack spacing\n if isinstance(grid_spacing, u.Quantity) and grid_spacing.size == 1:\n lon_space = lat_space = grid_spacing\n elif grid_spacing.size == 2:\n lon_space, lat_space = grid_spacing\n else:\n raise ValueError(\"grid_spacing must be a Quantity of length one or two.\")\n\n # Set the native coordinates to be bottom and left only so they don't share\n # axes with the overlay.\n c1, c2 = axes.coords\n c1.set_ticks_position('bl')\n c2.set_ticks_position('bl')\n\n overlay = axes.get_coords_overlay('heliographic_stonyhurst')\n\n lon = overlay[0]\n lat = overlay[1]\n\n lon.coord_wrap = 180\n lon.set_major_formatter('dd')\n\n lon.set_axislabel('Solar Longitude', minpad=0.8)\n lat.set_axislabel('Solar Latitude', minpad=0.9)\n\n lon.set_ticks_position('tr')\n lat.set_ticks_position('tr')\n\n grid_kw = {'color': 'white', 'zorder': 100, 'alpha': 0.5}\n grid_kw.update(kwargs)\n\n # Don't plot white ticks by default (only if explicitly asked)\n tick_color = grid_kw['color'] if 'color' in kwargs else 'k'\n lon.set_ticks(spacing=lon_space, color=tick_color)\n lat.set_ticks(spacing=lat_space, color=tick_color)\n\n 
overlay.grid(**grid_kw)\n\n if axes.title:\n x, y = axes.title.get_position()\n axes.title.set_position([x, y + 0.08])\n\n return overlay\n", "path": "sunpy/visualization/wcsaxes_compat.py"}]} | 1,949 | 330 |
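The heart of this sunpy fix is the tick-colour fallback: white ticks are drawn only when the caller explicitly asked for them. A minimal sketch of that logic in plain Python (the helper name `pick_tick_color` is invented for illustration and is not part of sunpy):

```python
def pick_tick_color(kwargs: dict, grid_kw: dict) -> str:
    # Use the grid colour only when the caller passed one explicitly;
    # otherwise fall back to black so the ticks stay visible.
    return grid_kw["color"] if "color" in kwargs else "k"

grid_kw = {"color": "white", "zorder": 100, "alpha": 0.5}
print(pick_tick_color({}, grid_kw))                # 'k' -- default stays readable
grid_kw.update({"color": "red"})
print(pick_tick_color({"color": "red"}, grid_kw))  # 'red' -- explicit choice wins
```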
gh_patches_debug_12976 | rasdani/github-patches | git_diff | urllib3__urllib3-2042 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
urllib3 logo is unreadable in docs in dark mode
This is a recent Furo addition; you can see it in this pull request build: https://urllib3--2026.org.readthedocs.build/en/2026/index.html. Here's what I see (with Firefox on macOS with dark mode enabled):
<img width="237" alt="urllib3 logo in dark mode in docs" src="https://user-images.githubusercontent.com/42327/96408490-ad2c8300-11f4-11eb-8054-661fb38a6c23.png">
I'm not sure what the correct fix is here. The obvious one would be to force a white background. I guess we could also... add a dark mode urllib3 logo, by switching black letters to white?
(The rest of the content looks good, even if the contrast seems low to me.)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docs/conf.py`
Content:
```
1 import os
2 import sys
3 from datetime import date
4
5 # If extensions (or modules to document with autodoc) are in another directory,
6 # add these directories to sys.path here. If the directory is relative to the
7 # documentation root, use os.path.abspath to make it absolute, like shown here.
8
9 root_path = os.path.abspath(os.path.join(os.path.dirname(__file__), ".."))
10 sys.path.insert(0, root_path)
11
12 # Mock some expensive/platform-specific modules so build will work.
13 # (https://read-the-docs.readthedocs.io/en/latest/faq.html#\
14 # i-get-import-errors-on-libraries-that-depend-on-c-modules)
15 from unittest import mock
16
17
18 class MockModule(mock.Mock):
19 @classmethod
20 def __getattr__(cls, name):
21 return MockModule()
22
23
24 MOCK_MODULES = ("ntlm",)
25
26 sys.modules.update((mod_name, MockModule()) for mod_name in MOCK_MODULES)
27
28
29 import urllib3
30
31 # -- General configuration -----------------------------------------------------
32
33
34 # Add any Sphinx extension module names here, as strings. They can be extensions
35 # coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
36 extensions = [
37 "sphinx.ext.autodoc",
38 "sphinx.ext.doctest",
39 "sphinx.ext.intersphinx",
40 ]
41
42 # Test code blocks only when explicitly specified
43 doctest_test_doctest_blocks = ""
44
45 # Add any paths that contain templates here, relative to this directory.
46 templates_path = ["_templates"]
47
48 # The suffix of source filenames.
49 source_suffix = ".rst"
50
51 # The master toctree document.
52 master_doc = "index"
53
54 # General information about the project.
55 project = "urllib3"
56 copyright = f"{date.today().year}, Andrey Petrov"
57
58 # The short X.Y version.
59 version = urllib3.__version__
60 # The full version, including alpha/beta/rc tags.
61 release = version
62
63 # List of patterns, relative to source directory, that match files and
64 # directories to ignore when looking for source files.
65 exclude_patterns = ["_build"]
66
67 # The name of the Pygments (syntax highlighting) style to use.
68 pygments_style = "friendly"
69
70 # The theme to use for HTML and HTML Help pages. See the documentation for
71 # a list of builtin themes.
72 html_theme = "furo"
73 html_favicon = "images/favicon.png"
74 html_logo = "images/banner.svg"
75
76 html_theme_options = {
77 "announcement": """
78 <a style=\"text-decoration: none; color: white;\"
79 href=\"https://opencollective.com/urllib3\">
80 <img src=\"/en/latest/_static/favicon.png\"/> Sponsor urllib3 v2.0 on Open Collective
81 </a>
82 """,
83 "sidebar_hide_name": True,
84 }
85
86 intersphinx_mapping = {"python": ("https://docs.python.org/3", None)}
87
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -73,8 +73,8 @@
# a list of builtin themes.
html_theme = "furo"
html_favicon = "images/favicon.png"
-html_logo = "images/banner.svg"
+html_static_path = ["_static"]
html_theme_options = {
"announcement": """
<a style=\"text-decoration: none; color: white;\"
@@ -83,6 +83,8 @@
</a>
""",
"sidebar_hide_name": True,
+ "light_logo": "banner.svg",
+ "dark_logo": "dark-logo.svg",
}
intersphinx_mapping = {"python": ("https://docs.python.org/3", None)}
| {"golden_diff": "diff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -73,8 +73,8 @@\n # a list of builtin themes.\n html_theme = \"furo\"\n html_favicon = \"images/favicon.png\"\n-html_logo = \"images/banner.svg\"\n \n+html_static_path = [\"_static\"]\n html_theme_options = {\n \"announcement\": \"\"\"\n <a style=\\\"text-decoration: none; color: white;\\\" \n@@ -83,6 +83,8 @@\n </a>\n \"\"\",\n \"sidebar_hide_name\": True,\n+ \"light_logo\": \"banner.svg\",\n+ \"dark_logo\": \"dark-logo.svg\",\n }\n \n intersphinx_mapping = {\"python\": (\"https://docs.python.org/3\", None)}\n", "issue": "urllib3 logo is unreadable in docs in dark mode\nThis is a recent Furo addition, you can see it in this pull request build: https://urllib3--2026.org.readthedocs.build/en/2026/index.html. Here's what I see (with Firefox on macOS with dark mode enabled):\r\n\r\n<img width=\"237\" alt=\"urllib3 logo in dark mode in docs\" src=\"https://user-images.githubusercontent.com/42327/96408490-ad2c8300-11f4-11eb-8054-661fb38a6c23.png\">\r\n\r\nI'm not sure what the correct fix is here. The obvious one would be to force a white background. I guess we could also... add a dark mode urllib3 logo, by switching black letters to white?\r\n\r\n(The rest of the content looks good, even if the contrast seems low to me.)\n", "before_files": [{"content": "import os\nimport sys\nfrom datetime import date\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n\nroot_path = os.path.abspath(os.path.join(os.path.dirname(__file__), \"..\"))\nsys.path.insert(0, root_path)\n\n# Mock some expensive/platform-specific modules so build will work.\n# (https://read-the-docs.readthedocs.io/en/latest/faq.html#\\\n# i-get-import-errors-on-libraries-that-depend-on-c-modules)\nfrom unittest import mock\n\n\nclass MockModule(mock.Mock):\n @classmethod\n def __getattr__(cls, name):\n return MockModule()\n\n\nMOCK_MODULES = (\"ntlm\",)\n\nsys.modules.update((mod_name, MockModule()) for mod_name in MOCK_MODULES)\n\n\nimport urllib3\n\n# -- General configuration -----------------------------------------------------\n\n\n# Add any Sphinx extension module names here, as strings. They can be extensions\n# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.\nextensions = [\n \"sphinx.ext.autodoc\",\n \"sphinx.ext.doctest\",\n \"sphinx.ext.intersphinx\",\n]\n\n# Test code blocks only when explicitly specified\ndoctest_test_doctest_blocks = \"\"\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = [\"_templates\"]\n\n# The suffix of source filenames.\nsource_suffix = \".rst\"\n\n# The master toctree document.\nmaster_doc = \"index\"\n\n# General information about the project.\nproject = \"urllib3\"\ncopyright = f\"{date.today().year}, Andrey Petrov\"\n\n# The short X.Y version.\nversion = urllib3.__version__\n# The full version, including alpha/beta/rc tags.\nrelease = version\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\nexclude_patterns = [\"_build\"]\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = \"friendly\"\n\n# The theme to use for HTML and HTML Help pages. 
See the documentation for\n# a list of builtin themes.\nhtml_theme = \"furo\"\nhtml_favicon = \"images/favicon.png\"\nhtml_logo = \"images/banner.svg\"\n\nhtml_theme_options = {\n \"announcement\": \"\"\"\n <a style=\\\"text-decoration: none; color: white;\\\" \n href=\\\"https://opencollective.com/urllib3\\\">\n <img src=\\\"/en/latest/_static/favicon.png\\\"/> Sponsor urllib3 v2.0 on Open Collective\n </a>\n \"\"\",\n \"sidebar_hide_name\": True,\n}\n\nintersphinx_mapping = {\"python\": (\"https://docs.python.org/3\", None)}\n", "path": "docs/conf.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\nimport os\nimport sys\nfrom datetime import date\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n\nroot_path = os.path.abspath(os.path.join(os.path.dirname(__file__), \"..\"))\nsys.path.insert(0, root_path)\n\n# Mock some expensive/platform-specific modules so build will work.\n# (https://read-the-docs.readthedocs.io/en/latest/faq.html#\\\n# i-get-import-errors-on-libraries-that-depend-on-c-modules)\nimport mock\n\n\nclass MockModule(mock.Mock):\n @classmethod\n def __getattr__(cls, name):\n return MockModule()\n\n\nMOCK_MODULES = (\"ntlm\",)\n\nsys.modules.update((mod_name, MockModule()) for mod_name in MOCK_MODULES)\n\n\nimport urllib3\n\n# -- General configuration -----------------------------------------------------\n\n\n# Add any Sphinx extension module names here, as strings. They can be extensions\n# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.\nextensions = [\n \"sphinx.ext.autodoc\",\n \"sphinx.ext.doctest\",\n \"sphinx.ext.intersphinx\",\n]\n\n# Test code blocks only when explicitly specified\ndoctest_test_doctest_blocks = \"\"\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = [\"_templates\"]\n\n# The suffix of source filenames.\nsource_suffix = \".rst\"\n\n# The master toctree document.\nmaster_doc = \"index\"\n\n# General information about the project.\nproject = \"urllib3\"\ncopyright = \"{year}, Andrey Petrov\".format(year=date.today().year)\n\n# The short X.Y version.\nversion = urllib3.__version__\n# The full version, including alpha/beta/rc tags.\nrelease = version\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\nexclude_patterns = [\"_build\"]\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = \"friendly\"\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\nhtml_theme = \"furo\"\nhtml_favicon = \"images/favicon.png\"\n\nhtml_static_path = [\"_static\"]\nhtml_theme_options = {\n \"announcement\": \"\"\"\n <a style=\\\"text-decoration: none; color: white;\\\" \n href=\\\"https://opencollective.com/urllib3\\\">\n <img src=\\\"/en/latest/_static/favicon.png\\\"/> Sponsor urllib3 v2.0 on Open Collective\n </a>\n \"\"\",\n \"sidebar_hide_name\": True,\n \"light_logo\": \"banner.svg\",\n \"dark_logo\": \"dark-logo.svg\",\n}\n\nintersphinx_mapping = {\"python\": (\"https://docs.python.org/3\", None)}\n", "path": "docs/conf.py"}]} | 1,242 | 171 |
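Condensed, the configuration this patch arrives at is small: Furo's `light_logo`/`dark_logo` options (both values taken from the diff above) select a logo per colour scheme, with both files served from `_static/`:

```python
# docs/conf.py (relevant excerpt, per the diff above)
html_static_path = ["_static"]
html_theme_options = {
    "sidebar_hide_name": True,
    "light_logo": "banner.svg",    # used on light backgrounds
    "dark_logo": "dark-logo.svg",  # used in dark mode
}
```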
gh_patches_debug_31822 | rasdani/github-patches | git_diff | TencentBlueKing__bk-user-805 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Startup command: make gunicorn print a stack trace before exiting
When gunicorn exits abruptly, it can be debugged with the approach described at https://stackoverflow.com/questions/57167240/is-it-possible-to-get-a-stack-trace-when-a-gunicorn-worker-hits-a-timeout: print the stack trace before the worker exits, and use it to infer where the problematic code is.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/login/wsgi.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 """
3 Tencent is pleased to support the open source community by making 蓝鲸智云PaaS平台社区版 (BlueKing PaaS
4 Community Edition) available.
5 Copyright (C) 2017-2018 THL A29 Limited, a Tencent company. All rights reserved.
6 Licensed under the MIT License (the "License"); you may not use this file except in compliance with the License.
7 You may obtain a copy of the License at http://opensource.org/licenses/MIT
8 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9 an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10 specific language governing permissions and limitations under the License.
11 """
12
13 import os
14
15 from dj_static import Cling
16 from django.core.wsgi import get_wsgi_application
17
18 os.environ.setdefault("DJANGO_SETTINGS_MODULE", "bklogin.config.prod")
19
20 application = Cling(get_wsgi_application())
21
```
Path: `src/saas/wsgi.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 """
3 TencentBlueKing is pleased to support the open source community by making 蓝鲸智云-用户管理(Bk-User) available.
4 Copyright (C) 2017-2021 THL A29 Limited, a Tencent company. All rights reserved.
5 Licensed under the MIT License (the "License"); you may not use this file except in compliance with the License.
6 You may obtain a copy of the License at http://opensource.org/licenses/MIT
7 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
8 an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
9 specific language governing permissions and limitations under the License.
10 """
11 import os
12
13 from django.core.wsgi import get_wsgi_application
14
15 os.environ.setdefault("DJANGO_SETTINGS_MODULE", "bkuser_shell.config.overlays.prod")
16
17 application = get_wsgi_application()
18
```
Path: `src/api/wsgi.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 """
3 TencentBlueKing is pleased to support the open source community by making 蓝鲸智云-用户管理(Bk-User) available.
4 Copyright (C) 2017-2021 THL A29 Limited, a Tencent company. All rights reserved.
5 Licensed under the MIT License (the "License"); you may not use this file except in compliance with the License.
6 You may obtain a copy of the License at http://opensource.org/licenses/MIT
7 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
8 an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
9 specific language governing permissions and limitations under the License.
10 """
11 import os
12
13 from django.core.wsgi import get_wsgi_application
14
15 os.environ.setdefault("DJANGO_SETTINGS_MODULE", "bkuser_core.config.overlays.prod")
16
17 application = get_wsgi_application()
18
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/api/wsgi.py b/src/api/wsgi.py
--- a/src/api/wsgi.py
+++ b/src/api/wsgi.py
@@ -8,10 +8,13 @@
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
"""
+import faulthandler
import os
from django.core.wsgi import get_wsgi_application
+faulthandler.enable()
+
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "bkuser_core.config.overlays.prod")
application = get_wsgi_application()
diff --git a/src/login/wsgi.py b/src/login/wsgi.py
--- a/src/login/wsgi.py
+++ b/src/login/wsgi.py
@@ -10,11 +10,14 @@
specific language governing permissions and limitations under the License.
"""
+import faulthandler
import os
from dj_static import Cling
from django.core.wsgi import get_wsgi_application
+faulthandler.enable()
+
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "bklogin.config.prod")
application = Cling(get_wsgi_application())
diff --git a/src/saas/wsgi.py b/src/saas/wsgi.py
--- a/src/saas/wsgi.py
+++ b/src/saas/wsgi.py
@@ -8,10 +8,13 @@
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
"""
+import faulthandler
import os
from django.core.wsgi import get_wsgi_application
+faulthandler.enable()
+
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "bkuser_shell.config.overlays.prod")
application = get_wsgi_application()
| {"golden_diff": "diff --git a/src/api/wsgi.py b/src/api/wsgi.py\n--- a/src/api/wsgi.py\n+++ b/src/api/wsgi.py\n@@ -8,10 +8,13 @@\n an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the\n specific language governing permissions and limitations under the License.\n \"\"\"\n+import faulthandler\n import os\n \n from django.core.wsgi import get_wsgi_application\n \n+faulthandler.enable()\n+\n os.environ.setdefault(\"DJANGO_SETTINGS_MODULE\", \"bkuser_core.config.overlays.prod\")\n \n application = get_wsgi_application()\ndiff --git a/src/login/wsgi.py b/src/login/wsgi.py\n--- a/src/login/wsgi.py\n+++ b/src/login/wsgi.py\n@@ -10,11 +10,14 @@\n specific language governing permissions and limitations under the License.\n \"\"\"\n \n+import faulthandler\n import os\n \n from dj_static import Cling\n from django.core.wsgi import get_wsgi_application\n \n+faulthandler.enable()\n+\n os.environ.setdefault(\"DJANGO_SETTINGS_MODULE\", \"bklogin.config.prod\")\n \n application = Cling(get_wsgi_application())\ndiff --git a/src/saas/wsgi.py b/src/saas/wsgi.py\n--- a/src/saas/wsgi.py\n+++ b/src/saas/wsgi.py\n@@ -8,10 +8,13 @@\n an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the\n specific language governing permissions and limitations under the License.\n \"\"\"\n+import faulthandler\n import os\n \n from django.core.wsgi import get_wsgi_application\n \n+faulthandler.enable()\n+\n os.environ.setdefault(\"DJANGO_SETTINGS_MODULE\", \"bkuser_shell.config.overlays.prod\")\n \n application = get_wsgi_application()\n", "issue": "\u542f\u52a8\u547d\u4ee4: gunicorn \u652f\u6301\u9000\u51fa\u524d\u6253\u5370\u5806\u6808\n\u9047\u5230gunicorn \u76f4\u63a5\u9000\u51fa\u7684\u60c5\u51b5, \u53ef\u4ee5\u4f7f\u7528 https://stackoverflow.com/questions/57167240/is-it-possible-to-get-a-stack-trace-when-a-gunicorn-worker-hits-a-timeout \u65b9\u5f0f\u8c03\u8bd5, \u6253\u5370\u9000\u51fa\u524d\u5806\u6808, \u4ece\u800c\u63a8\u65ad\u95ee\u9898\u4ee3\u7801\u4f4d\u7f6e\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\nTencent is pleased to support the open source community by making \u84dd\u9cb8\u667a\u4e91PaaS\u5e73\u53f0\u793e\u533a\u7248 (BlueKing PaaS\nCommunity Edition) available.\nCopyright (C) 2017-2018 THL A29 Limited, a Tencent company. All rights reserved.\nLicensed under the MIT License (the \"License\"); you may not use this file except in compliance with the License.\nYou may obtain a copy of the License at http://opensource.org/licenses/MIT\nUnless required by applicable law or agreed to in writing, software distributed under the License is distributed on\nan \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the\nspecific language governing permissions and limitations under the License.\n\"\"\"\n\nimport os\n\nfrom dj_static import Cling\nfrom django.core.wsgi import get_wsgi_application\n\nos.environ.setdefault(\"DJANGO_SETTINGS_MODULE\", \"bklogin.config.prod\")\n\napplication = Cling(get_wsgi_application())\n", "path": "src/login/wsgi.py"}, {"content": "# -*- coding: utf-8 -*-\n\"\"\"\nTencentBlueKing is pleased to support the open source community by making \u84dd\u9cb8\u667a\u4e91-\u7528\u6237\u7ba1\u7406(Bk-User) available.\nCopyright (C) 2017-2021 THL A29 Limited, a Tencent company. 
All rights reserved.\nLicensed under the MIT License (the \"License\"); you may not use this file except in compliance with the License.\nYou may obtain a copy of the License at http://opensource.org/licenses/MIT\nUnless required by applicable law or agreed to in writing, software distributed under the License is distributed on\nan \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the\nspecific language governing permissions and limitations under the License.\n\"\"\"\nimport os\n\nfrom django.core.wsgi import get_wsgi_application\n\nos.environ.setdefault(\"DJANGO_SETTINGS_MODULE\", \"bkuser_shell.config.overlays.prod\")\n\napplication = get_wsgi_application()\n", "path": "src/saas/wsgi.py"}, {"content": "# -*- coding: utf-8 -*-\n\"\"\"\nTencentBlueKing is pleased to support the open source community by making \u84dd\u9cb8\u667a\u4e91-\u7528\u6237\u7ba1\u7406(Bk-User) available.\nCopyright (C) 2017-2021 THL A29 Limited, a Tencent company. All rights reserved.\nLicensed under the MIT License (the \"License\"); you may not use this file except in compliance with the License.\nYou may obtain a copy of the License at http://opensource.org/licenses/MIT\nUnless required by applicable law or agreed to in writing, software distributed under the License is distributed on\nan \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the\nspecific language governing permissions and limitations under the License.\n\"\"\"\nimport os\n\nfrom django.core.wsgi import get_wsgi_application\n\nos.environ.setdefault(\"DJANGO_SETTINGS_MODULE\", \"bkuser_core.config.overlays.prod\")\n\napplication = get_wsgi_application()\n", "path": "src/api/wsgi.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\nTencent is pleased to support the open source community by making \u84dd\u9cb8\u667a\u4e91PaaS\u5e73\u53f0\u793e\u533a\u7248 (BlueKing PaaS\nCommunity Edition) available.\nCopyright (C) 2017-2018 THL A29 Limited, a Tencent company. All rights reserved.\nLicensed under the MIT License (the \"License\"); you may not use this file except in compliance with the License.\nYou may obtain a copy of the License at http://opensource.org/licenses/MIT\nUnless required by applicable law or agreed to in writing, software distributed under the License is distributed on\nan \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the\nspecific language governing permissions and limitations under the License.\n\"\"\"\n\nimport faulthandler\nimport os\n\nfrom dj_static import Cling\nfrom django.core.wsgi import get_wsgi_application\n\nfaulthandler.enable()\n\nos.environ.setdefault(\"DJANGO_SETTINGS_MODULE\", \"bklogin.config.prod\")\n\napplication = Cling(get_wsgi_application())\n", "path": "src/login/wsgi.py"}, {"content": "# -*- coding: utf-8 -*-\n\"\"\"\nTencentBlueKing is pleased to support the open source community by making \u84dd\u9cb8\u667a\u4e91-\u7528\u6237\u7ba1\u7406(Bk-User) available.\nCopyright (C) 2017-2021 THL A29 Limited, a Tencent company. 
All rights reserved.\nLicensed under the MIT License (the \"License\"); you may not use this file except in compliance with the License.\nYou may obtain a copy of the License at http://opensource.org/licenses/MIT\nUnless required by applicable law or agreed to in writing, software distributed under the License is distributed on\nan \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the\nspecific language governing permissions and limitations under the License.\n\"\"\"\nimport faulthandler\nimport os\n\nfrom django.core.wsgi import get_wsgi_application\n\nfaulthandler.enable()\n\nos.environ.setdefault(\"DJANGO_SETTINGS_MODULE\", \"bkuser_shell.config.overlays.prod\")\n\napplication = get_wsgi_application()\n", "path": "src/saas/wsgi.py"}, {"content": "# -*- coding: utf-8 -*-\n\"\"\"\nTencentBlueKing is pleased to support the open source community by making \u84dd\u9cb8\u667a\u4e91-\u7528\u6237\u7ba1\u7406(Bk-User) available.\nCopyright (C) 2017-2021 THL A29 Limited, a Tencent company. All rights reserved.\nLicensed under the MIT License (the \"License\"); you may not use this file except in compliance with the License.\nYou may obtain a copy of the License at http://opensource.org/licenses/MIT\nUnless required by applicable law or agreed to in writing, software distributed under the License is distributed on\nan \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the\nspecific language governing permissions and limitations under the License.\n\"\"\"\nimport faulthandler\nimport os\n\nfrom django.core.wsgi import get_wsgi_application\n\nfaulthandler.enable()\n\nos.environ.setdefault(\"DJANGO_SETTINGS_MODULE\", \"bkuser_core.config.overlays.prod\")\n\napplication = get_wsgi_application()\n", "path": "src/api/wsgi.py"}]} | 1,089 | 399 |
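The fix boils down to enabling the standard-library `faulthandler` module at WSGI import time. A minimal sketch of what that buys you (gunicorn aborts timed-out workers with SIGABRT, which `faulthandler.enable()` intercepts to dump the Python stack):

```python
import faulthandler
import sys

# Install handlers for fatal signals (SIGSEGV, SIGFPE, SIGABRT, SIGBUS,
# SIGILL) so the Python traceback is written to stderr before the
# process dies -- enough to locate the code a worker was stuck in.
faulthandler.enable(file=sys.stderr, all_threads=True)
```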
gh_patches_debug_17087 | rasdani/github-patches | git_diff | ivy-llc__ivy-17675 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
median
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ivy/functional/frontends/paddle/tensor/stat.py`
Content:
```
1 # global
2 import ivy
3 from ivy.func_wrapper import with_unsupported_dtypes
4 from ivy.functional.frontends.paddle.func_wrapper import (
5 to_ivy_arrays_and_back,
6 )
7
8
9 @with_unsupported_dtypes({"2.5.0 and below": ("float16", "bfloat16")}, "paddle")
10 @to_ivy_arrays_and_back
11 def mean(input, axis=None, keepdim=False, out=None):
12 ret = ivy.mean(input, axis=axis, keepdims=keepdim, out=out)
13 ret = ivy.expand_dims(ret, axis=-1) if ret.ndim == 0 else ret
14 return ret
15
16
17 @with_unsupported_dtypes({"2.5.0 and below": ("complex", "int8")}, "paddle")
18 @to_ivy_arrays_and_back
19 def numel(x, name=None):
20 prod = ivy.prod(x.size, dtype=ivy.int64)
21 try:
22 length = len(x)
23 except (ValueError, TypeError):
24 length = 1 # if 0 dimensional tensor with 1 element
25 return ivy.array([prod if prod > 0 else ivy.array(length, dtype=ivy.int64)])
26
27
28 @with_unsupported_dtypes({"2.4.2 and below": ("float16", "bfloat16")}, "paddle")
29 @to_ivy_arrays_and_back
30 def nanquantile(a, q, axis=None, keepdims=False, interpolation="linear", out=None):
31 return ivy.nanquantile(
32 a, q, axis=axis, keepdims=keepdims, interpolation=interpolation, out=out
33 )
34
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ivy/functional/frontends/paddle/tensor/stat.py b/ivy/functional/frontends/paddle/tensor/stat.py
--- a/ivy/functional/frontends/paddle/tensor/stat.py
+++ b/ivy/functional/frontends/paddle/tensor/stat.py
@@ -1,6 +1,6 @@
# global
import ivy
-from ivy.func_wrapper import with_unsupported_dtypes
+from ivy.func_wrapper import with_unsupported_dtypes, with_supported_dtypes
from ivy.functional.frontends.paddle.func_wrapper import (
to_ivy_arrays_and_back,
)
@@ -31,3 +31,17 @@
return ivy.nanquantile(
a, q, axis=axis, keepdims=keepdims, interpolation=interpolation, out=out
)
+
+
+@with_supported_dtypes(
+ {"2.5.0 and below": ("bool", "float16", "float32", "float64", "int32", "int64")},
+ "paddle",
+)
+@to_ivy_arrays_and_back
+def median(x, axis=None, keepdim=False, name=None):
+ x = (
+ ivy.astype(x, ivy.float64)
+ if ivy.dtype(x) == "float64"
+ else ivy.astype(x, ivy.float32)
+ )
+ return ivy.median(x, axis=axis, keepdims=keepdim)
| {"golden_diff": "diff --git a/ivy/functional/frontends/paddle/tensor/stat.py b/ivy/functional/frontends/paddle/tensor/stat.py\n--- a/ivy/functional/frontends/paddle/tensor/stat.py\n+++ b/ivy/functional/frontends/paddle/tensor/stat.py\n@@ -1,6 +1,6 @@\n # global\n import ivy\n-from ivy.func_wrapper import with_unsupported_dtypes\n+from ivy.func_wrapper import with_unsupported_dtypes, with_supported_dtypes\n from ivy.functional.frontends.paddle.func_wrapper import (\n to_ivy_arrays_and_back,\n )\n@@ -31,3 +31,17 @@\n return ivy.nanquantile(\n a, q, axis=axis, keepdims=keepdims, interpolation=interpolation, out=out\n )\n+\n+\n+@with_supported_dtypes(\n+ {\"2.5.0 and below\": (\"bool\", \"float16\", \"float32\", \"float64\", \"int32\", \"int64\")},\n+ \"paddle\",\n+)\n+@to_ivy_arrays_and_back\n+def median(x, axis=None, keepdim=False, name=None):\n+ x = (\n+ ivy.astype(x, ivy.float64)\n+ if ivy.dtype(x) == \"float64\"\n+ else ivy.astype(x, ivy.float32)\n+ )\n+ return ivy.median(x, axis=axis, keepdims=keepdim)\n", "issue": "median\n\n", "before_files": [{"content": "# global\nimport ivy\nfrom ivy.func_wrapper import with_unsupported_dtypes\nfrom ivy.functional.frontends.paddle.func_wrapper import (\n to_ivy_arrays_and_back,\n)\n\n\n@with_unsupported_dtypes({\"2.5.0 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef mean(input, axis=None, keepdim=False, out=None):\n ret = ivy.mean(input, axis=axis, keepdims=keepdim, out=out)\n ret = ivy.expand_dims(ret, axis=-1) if ret.ndim == 0 else ret\n return ret\n\n\n@with_unsupported_dtypes({\"2.5.0 and below\": (\"complex\", \"int8\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef numel(x, name=None):\n prod = ivy.prod(x.size, dtype=ivy.int64)\n try:\n length = len(x)\n except (ValueError, TypeError):\n length = 1 # if 0 dimensional tensor with 1 element\n return ivy.array([prod if prod > 0 else ivy.array(length, dtype=ivy.int64)])\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef nanquantile(a, q, axis=None, keepdims=False, interpolation=\"linear\", out=None):\n return ivy.nanquantile(\n a, q, axis=axis, keepdims=keepdims, interpolation=interpolation, out=out\n )\n", "path": "ivy/functional/frontends/paddle/tensor/stat.py"}], "after_files": [{"content": "# global\nimport ivy\nfrom ivy.func_wrapper import with_unsupported_dtypes, with_supported_dtypes\nfrom ivy.functional.frontends.paddle.func_wrapper import (\n to_ivy_arrays_and_back,\n)\n\n\n@with_unsupported_dtypes({\"2.5.0 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef mean(input, axis=None, keepdim=False, out=None):\n ret = ivy.mean(input, axis=axis, keepdims=keepdim, out=out)\n ret = ivy.expand_dims(ret, axis=-1) if ret.ndim == 0 else ret\n return ret\n\n\n@with_unsupported_dtypes({\"2.5.0 and below\": (\"complex\", \"int8\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef numel(x, name=None):\n prod = ivy.prod(x.size, dtype=ivy.int64)\n try:\n length = len(x)\n except (ValueError, TypeError):\n length = 1 # if 0 dimensional tensor with 1 element\n return ivy.array([prod if prod > 0 else ivy.array(length, dtype=ivy.int64)])\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef nanquantile(a, q, axis=None, keepdims=False, interpolation=\"linear\", out=None):\n return ivy.nanquantile(\n a, q, axis=axis, keepdims=keepdims, interpolation=interpolation, out=out\n )\n\n\n@with_supported_dtypes(\n 
{\"2.5.0 and below\": (\"bool\", \"float16\", \"float32\", \"float64\", \"int32\", \"int64\")},\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef median(x, axis=None, keepdim=False, name=None):\n x = (\n ivy.astype(x, ivy.float64)\n if ivy.dtype(x) == \"float64\"\n else ivy.astype(x, ivy.float32)\n )\n return ivy.median(x, axis=axis, keepdims=keepdim)\n", "path": "ivy/functional/frontends/paddle/tensor/stat.py"}]} | 683 | 321 |
gh_patches_debug_2019 | rasdani/github-patches | git_diff | litestar-org__litestar-1005 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bug: openapi render for multiple tags isn't consistent
**Describe the bug**
When the OpenAPI schema renders tags from both a controller and a route, the order is not deterministic. This may not be a bug, but it surprised me, so I thought I'd raise it.
I'm unsure if I'm doing something crazy, but for a project we check in the generated JSON OpenAPI schema so we can browse the API live in GitLab. I've recently added a tag to both a controller and a route in it. But because the order of the tags isn't consistent, they are going to keep flip-flopping, as we have a pre-commit hook that regenerates the JSON to make sure it's up to date. I hope that ramble makes sense...
**To Reproduce**
```python
from typing import Dict
from starlite import Starlite, Controller, get
class TestController(Controller):
tags = ["a"]
@get("/", tags=["b"])
def hello_world(self) -> Dict[str, str]:
"""Handler function that returns a greeting dictionary."""
return {"hello": "world"}
app = Starlite(route_handlers=[TestController])
print(app.openapi_schema.paths["/"].get.tags)
```
If you run that multiple times, you will see you get either:
```python
['a', 'b']
```
or
```python
['b', 'a']
```
**Additional context**
I believe the problem is [here](https://github.com/starlite-api/starlite/blob/835749112e8364c1516f45973c924774aca22ca9/starlite/openapi/path_item.py#L59), as it forces construction of a new set. Sorting the tags before returning would be viable, as there shouldn't be _too many_ of them and it's a one-time cost, I believe?
But as I said, it may not be a problem you care about as I could be doing something silly.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `starlite/openapi/path_item.py`
Content:
```
1 from inspect import cleandoc
2 from typing import TYPE_CHECKING, Dict, List, Optional, Tuple, cast
3
4 from pydantic_openapi_schema.v3_1_0.operation import Operation
5 from pydantic_openapi_schema.v3_1_0.path_item import PathItem
6
7 from starlite.openapi.parameters import create_parameter_for_handler
8 from starlite.openapi.request_body import create_request_body
9 from starlite.openapi.responses import create_responses
10 from starlite.utils.helpers import unwrap_partial
11
12 if TYPE_CHECKING:
13 from pydantic import BaseModel
14 from pydantic_openapi_schema.v3_1_0 import SecurityRequirement
15
16 from starlite.handlers import HTTPRouteHandler
17 from starlite.plugins.base import PluginProtocol
18 from starlite.routes import HTTPRoute
19
20
21 def get_description_for_handler(route_handler: "HTTPRouteHandler", use_handler_docstrings: bool) -> Optional[str]:
22 """Produce the operation description for a route handler, either by using the description value if provided,
23
24 or the docstring - if config is enabled.
25
26 Args:
27 route_handler: A route handler instance.
28 use_handler_docstrings: If `True` and `route_handler.description` is `None` returns docstring of wrapped
29 handler function.
30
31 Returns:
32 An optional description string
33 """
34 handler_description = route_handler.description
35 if handler_description is None and use_handler_docstrings:
36 fn = unwrap_partial(route_handler.fn.value)
37 return cleandoc(fn.__doc__) if fn.__doc__ else None
38 return handler_description
39
40
41 def extract_layered_values(
42 route_handler: "HTTPRouteHandler",
43 ) -> Tuple[Optional[List[str]], Optional[List[Dict[str, List[str]]]]]:
44 """Extract the tags and security values from the route handler layers.
45
46 Args:
47 route_handler: A Route Handler instance.
48
49 Returns:
50 A tuple of optional lists.
51 """
52 tags: List[str] = []
53 security: List["SecurityRequirement"] = []
54 for layer in route_handler.ownership_layers:
55 if layer.tags:
56 tags.extend(layer.tags)
57 if layer.security:
58 security.extend(layer.security)
59 return list(set(tags)) if tags else None, security or None
60
61
62 def create_path_item(
63 route: "HTTPRoute", create_examples: bool, plugins: List["PluginProtocol"], use_handler_docstrings: bool
64 ) -> PathItem:
65 """Create a PathItem model for the given route parsing all http_methods into Operation Models."""
66 path_item = PathItem()
67 for http_method, handler_tuple in route.route_handler_map.items():
68 route_handler, _ = handler_tuple
69 if route_handler.include_in_schema:
70 handler_fields = cast("BaseModel", route_handler.signature_model).__fields__
71 parameters = (
72 create_parameter_for_handler(
73 route_handler=route_handler,
74 handler_fields=handler_fields,
75 path_parameters=route.path_parameters,
76 generate_examples=create_examples,
77 )
78 or None
79 )
80 raises_validation_error = bool("data" in handler_fields or path_item.parameters or parameters)
81 handler_name = unwrap_partial(route_handler.handler_name).replace("_", " ").title()
82 request_body = None
83 if "data" in handler_fields:
84 request_body = create_request_body(
85 field=handler_fields["data"], generate_examples=create_examples, plugins=plugins
86 )
87
88 tags, security = extract_layered_values(route_handler)
89 operation = Operation(
90 operationId=route_handler.operation_id or handler_name,
91 tags=tags,
92 summary=route_handler.summary,
93 description=get_description_for_handler(route_handler, use_handler_docstrings),
94 deprecated=route_handler.deprecated,
95 responses=create_responses(
96 route_handler=route_handler,
97 raises_validation_error=raises_validation_error,
98 generate_examples=create_examples,
99 plugins=plugins,
100 ),
101 requestBody=request_body,
102 parameters=parameters, # type: ignore[arg-type]
103 security=security,
104 )
105 setattr(path_item, http_method.lower(), operation)
106 return path_item
107
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/starlite/openapi/path_item.py b/starlite/openapi/path_item.py
--- a/starlite/openapi/path_item.py
+++ b/starlite/openapi/path_item.py
@@ -56,7 +56,7 @@
tags.extend(layer.tags)
if layer.security:
security.extend(layer.security)
- return list(set(tags)) if tags else None, security or None
+ return sorted(set(tags)) if tags else None, security or None
def create_path_item(
| {"golden_diff": "diff --git a/starlite/openapi/path_item.py b/starlite/openapi/path_item.py\n--- a/starlite/openapi/path_item.py\n+++ b/starlite/openapi/path_item.py\n@@ -56,7 +56,7 @@\n tags.extend(layer.tags)\n if layer.security:\n security.extend(layer.security)\n- return list(set(tags)) if tags else None, security or None\n+ return sorted(set(tags)) if tags else None, security or None\n \n \n def create_path_item(\n", "issue": "Bug: openapi render for multiple tags isn't consistent\n**Describe the bug**\r\nWhen the openapi renders tags from both a controller and a route it is not deterministic. This may not be a bug? But it surprised me so thought I'd raise it.\r\n\r\nI'm unsure if I'm doing something crazy but for a project, we check in the generated json openapi schema so we can browse the API live in gitlab. I've recently added a tag to both a controller and a route in it. But because the order of the tags isn't consistent they are going to keep flip flopping as we have a pre-commit that generates the json to make sure it's up to date. I hope that ramble makes sense...\r\n\r\n**To Reproduce**\r\n```python\r\nfrom typing import Dict \r\n \r\nfrom starlite import Starlite, Controller, get \r\n \r\nclass TestController(Controller): \r\n tags = [\"a\"] \r\n \r\n @get(\"/\", tags=[\"b\"]) \r\n def hello_world(self) -> Dict[str, str]: \r\n \"\"\"Handler function that returns a greeting dictionary.\"\"\"\r\n return {\"hello\": \"world\"} \r\n \r\n \r\napp = Starlite(route_handlers=[TestController]) \r\nprint(app.openapi_schema.paths[\"/\"].get.tags) \r\n```\r\nIf you run that multiple times, you will see you get either:\r\n```python\r\n['a', 'b']\r\n```\r\nor\r\n```python\r\n['b', 'a']\r\n```\r\n\r\n**Additional context**\r\nI believe the problem is [here](https://github.com/starlite-api/starlite/blob/835749112e8364c1516f45973c924774aca22ca9/starlite/openapi/path_item.py#L59) as it forces construction of a new set. 
Sorting them before returning would be viable as there shouldn't be _too many_ tags and it's a one time thing I believe?\r\n\r\nBut as I said, it may not be a problem you care about as I could be doing something silly.\r\n\n", "before_files": [{"content": "from inspect import cleandoc\nfrom typing import TYPE_CHECKING, Dict, List, Optional, Tuple, cast\n\nfrom pydantic_openapi_schema.v3_1_0.operation import Operation\nfrom pydantic_openapi_schema.v3_1_0.path_item import PathItem\n\nfrom starlite.openapi.parameters import create_parameter_for_handler\nfrom starlite.openapi.request_body import create_request_body\nfrom starlite.openapi.responses import create_responses\nfrom starlite.utils.helpers import unwrap_partial\n\nif TYPE_CHECKING:\n from pydantic import BaseModel\n from pydantic_openapi_schema.v3_1_0 import SecurityRequirement\n\n from starlite.handlers import HTTPRouteHandler\n from starlite.plugins.base import PluginProtocol\n from starlite.routes import HTTPRoute\n\n\ndef get_description_for_handler(route_handler: \"HTTPRouteHandler\", use_handler_docstrings: bool) -> Optional[str]:\n \"\"\"Produce the operation description for a route handler, either by using the description value if provided,\n\n or the docstring - if config is enabled.\n\n Args:\n route_handler: A route handler instance.\n use_handler_docstrings: If `True` and `route_handler.description` is `None` returns docstring of wrapped\n handler function.\n\n Returns:\n An optional description string\n \"\"\"\n handler_description = route_handler.description\n if handler_description is None and use_handler_docstrings:\n fn = unwrap_partial(route_handler.fn.value)\n return cleandoc(fn.__doc__) if fn.__doc__ else None\n return handler_description\n\n\ndef extract_layered_values(\n route_handler: \"HTTPRouteHandler\",\n) -> Tuple[Optional[List[str]], Optional[List[Dict[str, List[str]]]]]:\n \"\"\"Extract the tags and security values from the route handler layers.\n\n Args:\n route_handler: A Route Handler instance.\n\n Returns:\n A tuple of optional lists.\n \"\"\"\n tags: List[str] = []\n security: List[\"SecurityRequirement\"] = []\n for layer in route_handler.ownership_layers:\n if layer.tags:\n tags.extend(layer.tags)\n if layer.security:\n security.extend(layer.security)\n return list(set(tags)) if tags else None, security or None\n\n\ndef create_path_item(\n route: \"HTTPRoute\", create_examples: bool, plugins: List[\"PluginProtocol\"], use_handler_docstrings: bool\n) -> PathItem:\n \"\"\"Create a PathItem model for the given route parsing all http_methods into Operation Models.\"\"\"\n path_item = PathItem()\n for http_method, handler_tuple in route.route_handler_map.items():\n route_handler, _ = handler_tuple\n if route_handler.include_in_schema:\n handler_fields = cast(\"BaseModel\", route_handler.signature_model).__fields__\n parameters = (\n create_parameter_for_handler(\n route_handler=route_handler,\n handler_fields=handler_fields,\n path_parameters=route.path_parameters,\n generate_examples=create_examples,\n )\n or None\n )\n raises_validation_error = bool(\"data\" in handler_fields or path_item.parameters or parameters)\n handler_name = unwrap_partial(route_handler.handler_name).replace(\"_\", \" \").title()\n request_body = None\n if \"data\" in handler_fields:\n request_body = create_request_body(\n field=handler_fields[\"data\"], generate_examples=create_examples, plugins=plugins\n )\n\n tags, security = extract_layered_values(route_handler)\n operation = Operation(\n 
operationId=route_handler.operation_id or handler_name,\n tags=tags,\n summary=route_handler.summary,\n description=get_description_for_handler(route_handler, use_handler_docstrings),\n deprecated=route_handler.deprecated,\n responses=create_responses(\n route_handler=route_handler,\n raises_validation_error=raises_validation_error,\n generate_examples=create_examples,\n plugins=plugins,\n ),\n requestBody=request_body,\n parameters=parameters, # type: ignore[arg-type]\n security=security,\n )\n setattr(path_item, http_method.lower(), operation)\n return path_item\n", "path": "starlite/openapi/path_item.py"}], "after_files": [{"content": "from inspect import cleandoc\nfrom typing import TYPE_CHECKING, Dict, List, Optional, Tuple, cast\n\nfrom pydantic_openapi_schema.v3_1_0.operation import Operation\nfrom pydantic_openapi_schema.v3_1_0.path_item import PathItem\n\nfrom starlite.openapi.parameters import create_parameter_for_handler\nfrom starlite.openapi.request_body import create_request_body\nfrom starlite.openapi.responses import create_responses\nfrom starlite.utils.helpers import unwrap_partial\n\nif TYPE_CHECKING:\n from pydantic import BaseModel\n from pydantic_openapi_schema.v3_1_0 import SecurityRequirement\n\n from starlite.handlers import HTTPRouteHandler\n from starlite.plugins.base import PluginProtocol\n from starlite.routes import HTTPRoute\n\n\ndef get_description_for_handler(route_handler: \"HTTPRouteHandler\", use_handler_docstrings: bool) -> Optional[str]:\n \"\"\"Produce the operation description for a route handler, either by using the description value if provided,\n\n or the docstring - if config is enabled.\n\n Args:\n route_handler: A route handler instance.\n use_handler_docstrings: If `True` and `route_handler.description` is `None` returns docstring of wrapped\n handler function.\n\n Returns:\n An optional description string\n \"\"\"\n handler_description = route_handler.description\n if handler_description is None and use_handler_docstrings:\n fn = unwrap_partial(route_handler.fn.value)\n return cleandoc(fn.__doc__) if fn.__doc__ else None\n return handler_description\n\n\ndef extract_layered_values(\n route_handler: \"HTTPRouteHandler\",\n) -> Tuple[Optional[List[str]], Optional[List[Dict[str, List[str]]]]]:\n \"\"\"Extract the tags and security values from the route handler layers.\n\n Args:\n route_handler: A Route Handler instance.\n\n Returns:\n A tuple of optional lists.\n \"\"\"\n tags: List[str] = []\n security: List[\"SecurityRequirement\"] = []\n for layer in route_handler.ownership_layers:\n if layer.tags:\n tags.extend(layer.tags)\n if layer.security:\n security.extend(layer.security)\n return sorted(set(tags)) if tags else None, security or None\n\n\ndef create_path_item(\n route: \"HTTPRoute\", create_examples: bool, plugins: List[\"PluginProtocol\"], use_handler_docstrings: bool\n) -> PathItem:\n \"\"\"Create a PathItem model for the given route parsing all http_methods into Operation Models.\"\"\"\n path_item = PathItem()\n for http_method, handler_tuple in route.route_handler_map.items():\n route_handler, _ = handler_tuple\n if route_handler.include_in_schema:\n handler_fields = cast(\"BaseModel\", route_handler.signature_model).__fields__\n parameters = (\n create_parameter_for_handler(\n route_handler=route_handler,\n handler_fields=handler_fields,\n path_parameters=route.path_parameters,\n generate_examples=create_examples,\n )\n or None\n )\n raises_validation_error = bool(\"data\" in handler_fields or path_item.parameters or parameters)\n 
handler_name = unwrap_partial(route_handler.handler_name).replace(\"_\", \" \").title()\n request_body = None\n if \"data\" in handler_fields:\n request_body = create_request_body(\n field=handler_fields[\"data\"], generate_examples=create_examples, plugins=plugins\n )\n\n tags, security = extract_layered_values(route_handler)\n operation = Operation(\n operationId=route_handler.operation_id or handler_name,\n tags=tags,\n summary=route_handler.summary,\n description=get_description_for_handler(route_handler, use_handler_docstrings),\n deprecated=route_handler.deprecated,\n responses=create_responses(\n route_handler=route_handler,\n raises_validation_error=raises_validation_error,\n generate_examples=create_examples,\n plugins=plugins,\n ),\n requestBody=request_body,\n parameters=parameters, # type: ignore[arg-type]\n security=security,\n )\n setattr(path_item, http_method.lower(), operation)\n return path_item\n", "path": "starlite/openapi/path_item.py"}]} | 1,762 | 107 |
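The one-line change is worth spelling out: iteration order of a `set` of strings depends on Python's per-process hash randomization, so `list(set(tags))` can differ between runs, while `sorted(set(tags))` still deduplicates but is stable. A minimal demonstration:

```python
tags = ["a", "b"]  # one tag from the controller, one from the route

# Order of a set of short strings varies with PYTHONHASHSEED,
# so this may be ['a', 'b'] on one run and ['b', 'a'] on the next:
nondeterministic = list(set(tags))

# Sorting after deduplication pins the schema output:
deterministic = sorted(set(tags))
assert deterministic == ["a", "b"]
```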
gh_patches_debug_3584 | rasdani/github-patches | git_diff | vas3k__vas3k.club-220 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Only the part of the id before the hyphen is highlighted when you are @-tagged
https://vas3k.club/post/2295/#comment-8177cee9-5bef-49bf-bade-44deea61e5d5
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `common/regexp.py`
Content:
```
1 import re
2
3 USERNAME_RE = re.compile(r"(?:\s|\n|^)@([A-Za-z0-9_]{3,})")
4 IMAGE_RE = re.compile(r"(http(s?):)([/|.|\w|\s|-])*\.(?:jpg|jpeg|gif|png)")
5 VIDEO_RE = re.compile(r"(http(s?):)([/|.|\w|\s|-])*\.(?:mov|mp4)")
6 YOUTUBE_RE = re.compile(
7 r"http(?:s?):\/\/(?:www\.)?youtu(?:be\.com\/watch\?v=|\.be\/)([\w\-\_]*)(&(amp;)?[\w\?=]*)?"
8 )
9 TWITTER_RE = re.compile(r"(https?:\/\/twitter.com\/[a-zA-Z0-9_]+\/status\/[\d]+)")
10 FAVICON_RE = re.compile(r"(http(s?):)([/|.|\w|\s|-])*\.(?:jpg|jpeg|gif|png|ico)")
11
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/common/regexp.py b/common/regexp.py
--- a/common/regexp.py
+++ b/common/regexp.py
@@ -1,6 +1,6 @@
import re
-USERNAME_RE = re.compile(r"(?:\s|\n|^)@([A-Za-z0-9_]{3,})")
+USERNAME_RE = re.compile(r"(?:\s|\n|^)@([A-Za-z0-9_-]{3,})")
IMAGE_RE = re.compile(r"(http(s?):)([/|.|\w|\s|-])*\.(?:jpg|jpeg|gif|png)")
VIDEO_RE = re.compile(r"(http(s?):)([/|.|\w|\s|-])*\.(?:mov|mp4)")
YOUTUBE_RE = re.compile(
| {"golden_diff": "diff --git a/common/regexp.py b/common/regexp.py\n--- a/common/regexp.py\n+++ b/common/regexp.py\n@@ -1,6 +1,6 @@\n import re\n \n-USERNAME_RE = re.compile(r\"(?:\\s|\\n|^)@([A-Za-z0-9_]{3,})\")\n+USERNAME_RE = re.compile(r\"(?:\\s|\\n|^)@([A-Za-z0-9_-]{3,})\")\n IMAGE_RE = re.compile(r\"(http(s?):)([/|.|\\w|\\s|-])*\\.(?:jpg|jpeg|gif|png)\")\n VIDEO_RE = re.compile(r\"(http(s?):)([/|.|\\w|\\s|-])*\\.(?:mov|mp4)\")\n YOUTUBE_RE = re.compile(\n", "issue": "\u0422\u043e\u043b\u044c\u043a\u043e \u0447\u0430\u0441\u0442\u044c id \u0434\u043e \u0434\u0435\u0444\u0438\u0441\u0430 \u0432\u044b\u0434\u0435\u043b\u0435\u043d\u0430 \u043a\u043e\u0433\u0434\u0430 \u0442\u0435\u0431\u044f @\u0442\u044d\u0433\u043d\u0443\u043b\u0438\n\r\nhttps://vas3k.club/post/2295/#comment-8177cee9-5bef-49bf-bade-44deea61e5d5\r\n\r\n\r\n\n", "before_files": [{"content": "import re\n\nUSERNAME_RE = re.compile(r\"(?:\\s|\\n|^)@([A-Za-z0-9_]{3,})\")\nIMAGE_RE = re.compile(r\"(http(s?):)([/|.|\\w|\\s|-])*\\.(?:jpg|jpeg|gif|png)\")\nVIDEO_RE = re.compile(r\"(http(s?):)([/|.|\\w|\\s|-])*\\.(?:mov|mp4)\")\nYOUTUBE_RE = re.compile(\n r\"http(?:s?):\\/\\/(?:www\\.)?youtu(?:be\\.com\\/watch\\?v=|\\.be\\/)([\\w\\-\\_]*)(&(amp;)?\u200c\u200b[\\w\\?\u200c\u200b=]*)?\"\n)\nTWITTER_RE = re.compile(r\"(https?:\\/\\/twitter.com\\/[a-zA-Z0-9_]+\\/status\\/[\\d]+)\")\nFAVICON_RE = re.compile(r\"(http(s?):)([/|.|\\w|\\s|-])*\\.(?:jpg|jpeg|gif|png|ico)\")\n", "path": "common/regexp.py"}], "after_files": [{"content": "import re\n\nUSERNAME_RE = re.compile(r\"(?:\\s|\\n|^)@([A-Za-z0-9_-]{3,})\")\nIMAGE_RE = re.compile(r\"(http(s?):)([/|.|\\w|\\s|-])*\\.(?:jpg|jpeg|gif|png)\")\nVIDEO_RE = re.compile(r\"(http(s?):)([/|.|\\w|\\s|-])*\\.(?:mov|mp4)\")\nYOUTUBE_RE = re.compile(\n r\"http(?:s?):\\/\\/(?:www\\.)?youtu(?:be\\.com\\/watch\\?v=|\\.be\\/)([\\w\\-\\_]*)(&(amp;)?\u200c\u200b[\\w\\?\u200c\u200b=]*)?\"\n)\nTWITTER_RE = re.compile(r\"(https?:\\/\\/twitter.com\\/[a-zA-Z0-9_]+\\/status\\/[\\d]+)\")\nFAVICON_RE = re.compile(r\"(http(s?):)([/|.|\\w|\\s|-])*\\.(?:jpg|jpeg|gif|png|ico)\")\n", "path": "common/regexp.py"}]} | 607 | 164 |
gh_patches_debug_10581 | rasdani/github-patches | git_diff | pytorch__rl-598 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[BUG] timeit profiling class does not correctly track how many times a function has been called.
## Describe the bug
In the file **pytorch/rl/torchrl/_utils.py**, in the class **timeit**, the code starting at line 31 is the following:
```
count = self._REG[self.name][1]
self._REG[self.name][0] = (self._REG[self.name][0] * count + t) / (count + 1)
self._REG[self.name][1] = self._REG[self.name][1] + t
self._REG[self.name][2] = count + 1
```
I understand `self._REG[self.name][2]` tracks the number of times a function has been called.
In that case, shouldn't the first line be changed to
```
count = self._REG[self.name][2]
```
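For clarity, here is a self-contained sketch of the corrected bookkeeping (my own illustration of the proposed change, not a tested patch; `record` stands in for `__exit__`):
```python
# _REG[name] holds [running_mean, total_time, call_count].
_REG = {}

def record(name, t):
    val = _REG.setdefault(name, [0.0, 0.0, 0])
    count = val[2]                               # calls seen so far
    val[0] = (val[0] * count + t) / (count + 1)  # update the running mean
    val[1] += t                                  # accumulate total time
    val[2] = count + 1                           # bump the call counter

for t in (1.0, 2.0, 3.0):
    record("fn", t)
print(_REG["fn"])  # [2.0, 6.0, 3]: mean, total and count are now consistent
```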
## Checklist
- [X] I have checked that there is no similar issue in the repo (**required**)
- [ ] I have read the [documentation](https://github.com/pytorch/rl/tree/main/docs/) (**required**)
- [ ] I have provided a minimal working example to reproduce the bug (**required**)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `torchrl/_utils.py`
Content:
```
1 import collections
2 import math
3 import time
4
5 import numpy as np
6
7
8 class timeit:
9 """A dirty but easy to use decorator for profiling code."""
10
11 _REG = {}
12
13 def __init__(self, name):
14 self.name = name
15
16 def __call__(self, fn):
17 def decorated_fn(*args, **kwargs):
18 with self:
19 out = fn(*args, **kwargs)
20 return out
21
22 return decorated_fn
23
24 def __enter__(self):
25 self.t0 = time.time()
26
27 def __exit__(self, exc_type, exc_val, exc_tb):
28 t = time.time() - self.t0
29 self._REG.setdefault(self.name, [0.0, 0.0, 0])
30
31 count = self._REG[self.name][1]
32 self._REG[self.name][0] = (self._REG[self.name][0] * count + t) / (count + 1)
33 self._REG[self.name][1] = self._REG[self.name][1] + t
34 self._REG[self.name][2] = count + 1
35
36 @staticmethod
37 def print(prefix=None):
38 keys = list(timeit._REG)
39 keys.sort()
40 for name in keys:
41 strings = []
42 if prefix:
43 strings.append(prefix)
44 strings.append(
45 f"{name} took {timeit._REG[name][0] * 1000:4.4} msec (total = {timeit._REG[name][1]} sec)"
46 )
47 print(" -- ".join(strings))
48
49 @staticmethod
50 def erase():
51 for k in timeit._REG:
52 timeit._REG[k] = [0.0, 0.0, 0]
53
54
55 def _check_for_faulty_process(processes):
56 terminate = False
57 for p in processes:
58 if not p.is_alive():
59 terminate = True
60 for _p in processes:
61 if _p.is_alive():
62 _p.terminate()
63 if terminate:
64 break
65 if terminate:
66 raise RuntimeError(
67 "At least one process failed. Check for more infos in the log."
68 )
69
70
71 def seed_generator(seed):
72 """A seed generator function.
73
74 Given a seeding integer, generates a deterministic next seed to be used in a
75 seeding sequence.
76
77 Args:
78 seed (int): initial seed.
79
80 Returns: Next seed of the chain.
81
82 """
83 max_seed_val = (
84 2 ** 32 - 1
85 ) # https://discuss.pytorch.org/t/what-is-the-max-seed-you-can-set-up/145688
86 rng = np.random.default_rng(seed)
87 seed = int.from_bytes(rng.bytes(8), "big")
88 return seed % max_seed_val
89
90
91 class KeyDependentDefaultDict(collections.defaultdict):
92 """A key-dependent default dict.
93
94 Examples:
95 >>> my_dict = KeyDependentDefaultDict(lambda key: "foo_" + key)
96 >>> print(my_dict["bar"])
97 foo_bar
98 """
99
100 def __init__(self, fun):
101 self.fun = fun
102 super().__init__()
103
104 def __missing__(self, key):
105 value = self.fun(key)
106 self[key] = value
107 return value
108
109
110 def prod(sequence):
111 """General prod function, that generalised usage across math and np.
112
113 Created for multiple python versions compatibility).
114
115 """
116 if hasattr(math, "prod"):
117 return math.prod(sequence)
118 else:
119 return int(np.prod(sequence))
120
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/torchrl/_utils.py b/torchrl/_utils.py
--- a/torchrl/_utils.py
+++ b/torchrl/_utils.py
@@ -26,12 +26,13 @@
def __exit__(self, exc_type, exc_val, exc_tb):
t = time.time() - self.t0
- self._REG.setdefault(self.name, [0.0, 0.0, 0])
+ val = self._REG.setdefault(self.name, [0.0, 0.0, 0])
- count = self._REG[self.name][1]
- self._REG[self.name][0] = (self._REG[self.name][0] * count + t) / (count + 1)
- self._REG[self.name][1] = self._REG[self.name][1] + t
- self._REG[self.name][2] = count + 1
+ count = val[2]
+ N = count + 1
+ val[0] = val[0] * (count / N) + t / N
+ val[1] += t
+ val[2] = N
@staticmethod
def print(prefix=None):
| {"golden_diff": "diff --git a/torchrl/_utils.py b/torchrl/_utils.py\n--- a/torchrl/_utils.py\n+++ b/torchrl/_utils.py\n@@ -26,12 +26,13 @@\n \n def __exit__(self, exc_type, exc_val, exc_tb):\n t = time.time() - self.t0\n- self._REG.setdefault(self.name, [0.0, 0.0, 0])\n+ val = self._REG.setdefault(self.name, [0.0, 0.0, 0])\n \n- count = self._REG[self.name][1]\n- self._REG[self.name][0] = (self._REG[self.name][0] * count + t) / (count + 1)\n- self._REG[self.name][1] = self._REG[self.name][1] + t\n- self._REG[self.name][2] = count + 1\n+ count = val[2]\n+ N = count + 1\n+ val[0] = val[0] * (count / N) + t / N\n+ val[1] += t\n+ val[2] = N\n \n @staticmethod\n def print(prefix=None):\n", "issue": "[BUG] timeit profiling class does not correctly track how many times a function has been called.\n## Describe the bug\r\n\r\nIn file **pytorch/rl/torchrl/_utils.py**, in the class **timeit**, starting at line 31, the code is the following\r\n\r\n```\r\ncount = self._REG[self.name][1]\r\nself._REG[self.name][0] = (self._REG[self.name][0] * count + t) / (count + 1)\r\nself._REG[self.name][1] = self._REG[self.name][1] + t\r\nself._REG[self.name][2] = count + 1\r\n```\r\n\r\nI understand self._REG[self.name][2] tracks the number of times a function has been called.\r\nIn that case, shouldn't the first line be changed to \r\n\r\n```\r\ncount = self._REG[self.name][2]\r\n```\r\n\r\n## Checklist\r\n\r\n- [X] I have checked that there is no similar issue in the repo (**required**)\r\n- [ ] I have read the [documentation](https://github.com/pytorch/rl/tree/main/docs/) (**required**)\r\n- [ ] I have provided a minimal working example to reproduce the bug (**required**)\r\n\n", "before_files": [{"content": "import collections\nimport math\nimport time\n\nimport numpy as np\n\n\nclass timeit:\n \"\"\"A dirty but easy to use decorator for profiling code.\"\"\"\n\n _REG = {}\n\n def __init__(self, name):\n self.name = name\n\n def __call__(self, fn):\n def decorated_fn(*args, **kwargs):\n with self:\n out = fn(*args, **kwargs)\n return out\n\n return decorated_fn\n\n def __enter__(self):\n self.t0 = time.time()\n\n def __exit__(self, exc_type, exc_val, exc_tb):\n t = time.time() - self.t0\n self._REG.setdefault(self.name, [0.0, 0.0, 0])\n\n count = self._REG[self.name][1]\n self._REG[self.name][0] = (self._REG[self.name][0] * count + t) / (count + 1)\n self._REG[self.name][1] = self._REG[self.name][1] + t\n self._REG[self.name][2] = count + 1\n\n @staticmethod\n def print(prefix=None):\n keys = list(timeit._REG)\n keys.sort()\n for name in keys:\n strings = []\n if prefix:\n strings.append(prefix)\n strings.append(\n f\"{name} took {timeit._REG[name][0] * 1000:4.4} msec (total = {timeit._REG[name][1]} sec)\"\n )\n print(\" -- \".join(strings))\n\n @staticmethod\n def erase():\n for k in timeit._REG:\n timeit._REG[k] = [0.0, 0.0, 0]\n\n\ndef _check_for_faulty_process(processes):\n terminate = False\n for p in processes:\n if not p.is_alive():\n terminate = True\n for _p in processes:\n if _p.is_alive():\n _p.terminate()\n if terminate:\n break\n if terminate:\n raise RuntimeError(\n \"At least one process failed. 
Check for more infos in the log.\"\n )\n\n\ndef seed_generator(seed):\n \"\"\"A seed generator function.\n\n Given a seeding integer, generates a deterministic next seed to be used in a\n seeding sequence.\n\n Args:\n seed (int): initial seed.\n\n Returns: Next seed of the chain.\n\n \"\"\"\n max_seed_val = (\n 2 ** 32 - 1\n ) # https://discuss.pytorch.org/t/what-is-the-max-seed-you-can-set-up/145688\n rng = np.random.default_rng(seed)\n seed = int.from_bytes(rng.bytes(8), \"big\")\n return seed % max_seed_val\n\n\nclass KeyDependentDefaultDict(collections.defaultdict):\n \"\"\"A key-dependent default dict.\n\n Examples:\n >>> my_dict = KeyDependentDefaultDict(lambda key: \"foo_\" + key)\n >>> print(my_dict[\"bar\"])\n foo_bar\n \"\"\"\n\n def __init__(self, fun):\n self.fun = fun\n super().__init__()\n\n def __missing__(self, key):\n value = self.fun(key)\n self[key] = value\n return value\n\n\ndef prod(sequence):\n \"\"\"General prod function, that generalised usage across math and np.\n\n Created for multiple python versions compatibility).\n\n \"\"\"\n if hasattr(math, \"prod\"):\n return math.prod(sequence)\n else:\n return int(np.prod(sequence))\n", "path": "torchrl/_utils.py"}], "after_files": [{"content": "import collections\nimport math\nimport time\n\nimport numpy as np\n\n\nclass timeit:\n \"\"\"A dirty but easy to use decorator for profiling code.\"\"\"\n\n _REG = {}\n\n def __init__(self, name):\n self.name = name\n\n def __call__(self, fn):\n def decorated_fn(*args, **kwargs):\n with self:\n out = fn(*args, **kwargs)\n return out\n\n return decorated_fn\n\n def __enter__(self):\n self.t0 = time.time()\n\n def __exit__(self, exc_type, exc_val, exc_tb):\n t = time.time() - self.t0\n val = self._REG.setdefault(self.name, [0.0, 0.0, 0])\n\n count = val[2]\n N = count + 1\n val[0] = val[0] * (count / N) + t / N\n val[1] += t\n val[2] = N\n\n @staticmethod\n def print(prefix=None):\n keys = list(timeit._REG)\n keys.sort()\n for name in keys:\n strings = []\n if prefix:\n strings.append(prefix)\n strings.append(\n f\"{name} took {timeit._REG[name][0] * 1000:4.4} msec (total = {timeit._REG[name][1]} sec)\"\n )\n print(\" -- \".join(strings))\n\n @staticmethod\n def erase():\n for k in timeit._REG:\n timeit._REG[k] = [0.0, 0.0, 0]\n\n\ndef _check_for_faulty_process(processes):\n terminate = False\n for p in processes:\n if not p.is_alive():\n terminate = True\n for _p in processes:\n if _p.is_alive():\n _p.terminate()\n if terminate:\n break\n if terminate:\n raise RuntimeError(\n \"At least one process failed. 
Check for more infos in the log.\"\n )\n\n\ndef seed_generator(seed):\n \"\"\"A seed generator function.\n\n Given a seeding integer, generates a deterministic next seed to be used in a\n seeding sequence.\n\n Args:\n seed (int): initial seed.\n\n Returns: Next seed of the chain.\n\n \"\"\"\n max_seed_val = (\n 2 ** 32 - 1\n ) # https://discuss.pytorch.org/t/what-is-the-max-seed-you-can-set-up/145688\n rng = np.random.default_rng(seed)\n seed = int.from_bytes(rng.bytes(8), \"big\")\n return seed % max_seed_val\n\n\nclass KeyDependentDefaultDict(collections.defaultdict):\n \"\"\"A key-dependent default dict.\n\n Examples:\n >>> my_dict = KeyDependentDefaultDict(lambda key: \"foo_\" + key)\n >>> print(my_dict[\"bar\"])\n foo_bar\n \"\"\"\n\n def __init__(self, fun):\n self.fun = fun\n super().__init__()\n\n def __missing__(self, key):\n value = self.fun(key)\n self[key] = value\n return value\n\n\ndef prod(sequence):\n \"\"\"General prod function, that generalised usage across math and np.\n\n Created for multiple python versions compatibility).\n\n \"\"\"\n if hasattr(math, \"prod\"):\n return math.prod(sequence)\n else:\n return int(np.prod(sequence))\n", "path": "torchrl/_utils.py"}]} | 1,538 | 274 |
gh_patches_debug_39579 | rasdani/github-patches | git_diff | vyperlang__vyper-2071 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Infinite loop from multidimensional array in calldata
### Version Information
* vyper Version (output of `vyper --version`): latest master
* OS: linux
* Python Version: `3.8.2`
### What's your issue about?
Using a multidimensional array in the function inputs, where the length of the second or any higher dimension is 6 or more, causes an infinite loop when the function is called.
For example, each of the following methods will compile but attempting to call them fails with out of gas:
```python
@public
def foo(a: uint256[1][6]):
pass
@public
def bar(a: uint256[1][1][6]):
pass
```
For comparison, these methods execute as expected:
```python
@public
def foo(a: uint256[6][1]):
pass
@public
def bar(a: uint256[100][5][5][5]):
pass
```
#### Some observations:
* The length of the first array element has no effect.
* The data type has no effect.
* The location of the array within calldata, and total number of arguments, has no effect.
* The number of dimensions, and which dimension exceeds a length of 5, have no effect.
### How can it be fixed?
Unsure at this time.
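One unverified hypothesis: every list-checking loop that `make_arg_clamper` generates uses the same scratch memory slot (`offset = 288`) as its loop counter. For `uint256[1][6]` the outer loop's body is itself a loop for the inner `uint256[1]`, so each outer iteration re-zeroes the shared counter and the outer exit condition is never reached. A rough Python re-enactment of that counter sharing (the bounds mirror `subtype_size * 32 * (typ.count - 1)` from the code below):
```python
mem = {288: 0}                   # the single shared scratch slot
outer_mem_to = 32 * (6 - 1)      # 160: bound for the 6-element outer dimension
inner_mem_to = 32 * (1 - 1)      # 0: bound for the inner uint256[1] loop

steps = 0
mem[288] = 0                     # outer loop init
while True:                      # outer _check_list_loop
    mem[288] = 0                 # inner loop init clobbers the outer counter
    while True:                  # inner loop: body runs once, then tests
        mem[288] += 32
        if not mem[288] < inner_mem_to:
            break
    mem[288] += 32               # outer increment: counter is now 64
    if not mem[288] < outer_mem_to:
        break                    # 64 < 160 stays true, so this never fires
    steps += 1
    if steps > 3:                # cap the demo so it terminates
        print(f"stuck: counter cycles 0 -> 32 -> 64, never reaching {outer_mem_to}")
        break
```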
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `vyper/parser/arg_clamps.py`
Content:
```
1 import functools
2 import uuid
3
4 from vyper.parser.lll_node import LLLnode
5 from vyper.types.types import (
6 ByteArrayLike,
7 ListType,
8 get_size_of_type,
9 is_base_type,
10 )
11 from vyper.utils import MemoryPositions
12
13
14 def _mk_calldatacopy_copier(pos, sz, mempos):
15 return ["calldatacopy", mempos, ["add", 4, pos], sz]
16
17
18 def _mk_codecopy_copier(pos, sz, mempos):
19 return ["codecopy", mempos, ["add", "~codelen", pos], sz]
20
21
22 def make_arg_clamper(datapos, mempos, typ, is_init=False):
23 """
24 Clamps argument to type limits.
25 """
26
27 if not is_init:
28 data_decl = ["calldataload", ["add", 4, datapos]]
29 copier = functools.partial(_mk_calldatacopy_copier, mempos=mempos)
30 else:
31 data_decl = ["codeload", ["add", "~codelen", datapos]]
32 copier = functools.partial(_mk_codecopy_copier, mempos=mempos)
33 # Numbers: make sure they're in range
34 if is_base_type(typ, "int128"):
35 return LLLnode.from_list(
36 [
37 "clamp",
38 ["mload", MemoryPositions.MINNUM],
39 data_decl,
40 ["mload", MemoryPositions.MAXNUM],
41 ],
42 typ=typ,
43 annotation="checking int128 input",
44 )
45 # Booleans: make sure they're zero or one
46 elif is_base_type(typ, "bool"):
47 return LLLnode.from_list(
48 ["uclamplt", data_decl, 2], typ=typ, annotation="checking bool input",
49 )
50 # Addresses: make sure they're in range
51 elif is_base_type(typ, "address"):
52 return LLLnode.from_list(
53 ["uclamplt", data_decl, ["mload", MemoryPositions.ADDRSIZE]],
54 typ=typ,
55 annotation="checking address input",
56 )
57 # Bytes: make sure they have the right size
58 elif isinstance(typ, ByteArrayLike):
59 return LLLnode.from_list(
60 [
61 "seq",
62 copier(data_decl, 32 + typ.maxlen),
63 ["assert", ["le", ["calldataload", ["add", 4, data_decl]], typ.maxlen]],
64 ],
65 typ=None,
66 annotation="checking bytearray input",
67 )
68 # Lists: recurse
69 elif isinstance(typ, ListType):
70 if typ.count > 5 or (type(datapos) is list and type(mempos) is list):
71 subtype_size = get_size_of_type(typ.subtype)
72 i_incr = subtype_size * 32
73
74 mem_to = subtype_size * 32 * (typ.count - 1)
75 loop_label = f"_check_list_loop_{str(uuid.uuid4())}"
76
77 offset = 288
78 o = [
79 ["mstore", offset, 0], # init loop
80 ["label", loop_label],
81 make_arg_clamper(
82 ["add", datapos, ["mload", offset]],
83 ["add", mempos, ["mload", offset]],
84 typ.subtype,
85 is_init,
86 ),
87 ["mstore", offset, ["add", ["mload", offset], i_incr]],
88 ["if", ["lt", ["mload", offset], mem_to], ["goto", loop_label]],
89 ]
90 else:
91 o = []
92 for i in range(typ.count):
93 offset = get_size_of_type(typ.subtype) * 32 * i
94 o.append(make_arg_clamper(datapos + offset, mempos + offset, typ.subtype, is_init))
95 return LLLnode.from_list(["seq"] + o, typ=None, annotation="checking list input")
96 # Otherwise don't make any checks
97 else:
98 return LLLnode.from_list("pass")
99
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/vyper/parser/arg_clamps.py b/vyper/parser/arg_clamps.py
--- a/vyper/parser/arg_clamps.py
+++ b/vyper/parser/arg_clamps.py
@@ -22,6 +22,22 @@
def make_arg_clamper(datapos, mempos, typ, is_init=False):
"""
Clamps argument to type limits.
+
+ Arguments
+ ---------
+ datapos : int | LLLnode
+ Calldata offset of the value being clamped
+ mempos : int | LLLnode
+ Memory offset that the value is stored at during clamping
+ typ : vyper.types.types.BaseType
+ Type of the value
+ is_init : bool, optional
+ Boolean indicating if we are generating init bytecode
+
+ Returns
+ -------
+ LLLnode
+ Arg clamper LLL
"""
if not is_init:
@@ -68,31 +84,45 @@
# Lists: recurse
elif isinstance(typ, ListType):
if typ.count > 5 or (type(datapos) is list and type(mempos) is list):
- subtype_size = get_size_of_type(typ.subtype)
- i_incr = subtype_size * 32
+ # find ultimate base type
+ subtype = typ.subtype
+ while hasattr(subtype, "subtype"):
+ subtype = subtype.subtype
+
+ # make arg clamper for the base type
+ offset = MemoryPositions.FREE_LOOP_INDEX
+ clamper = make_arg_clamper(
+ ["add", datapos, ["mload", offset]],
+ ["add", mempos, ["mload", offset]],
+ subtype,
+ is_init,
+ )
+ if clamper.value == "pass":
+ # no point looping if the base type doesn't require clamping
+ return clamper
+
+ # loop the entire array at once, even if it's multidimensional
+ type_size = get_size_of_type(typ)
+ i_incr = get_size_of_type(subtype) * 32
- mem_to = subtype_size * 32 * (typ.count - 1)
+ mem_to = type_size * 32
loop_label = f"_check_list_loop_{str(uuid.uuid4())}"
- offset = 288
- o = [
+ lll_node = [
["mstore", offset, 0], # init loop
["label", loop_label],
- make_arg_clamper(
- ["add", datapos, ["mload", offset]],
- ["add", mempos, ["mload", offset]],
- typ.subtype,
- is_init,
- ),
+ clamper,
["mstore", offset, ["add", ["mload", offset], i_incr]],
["if", ["lt", ["mload", offset], mem_to], ["goto", loop_label]],
]
else:
- o = []
+ lll_node = []
for i in range(typ.count):
offset = get_size_of_type(typ.subtype) * 32 * i
- o.append(make_arg_clamper(datapos + offset, mempos + offset, typ.subtype, is_init))
- return LLLnode.from_list(["seq"] + o, typ=None, annotation="checking list input")
+ lll_node.append(
+ make_arg_clamper(datapos + offset, mempos + offset, typ.subtype, is_init)
+ )
+ return LLLnode.from_list(["seq"] + lll_node, typ=None, annotation="checking list input")
# Otherwise don't make any checks
else:
return LLLnode.from_list("pass")
| {"golden_diff": "diff --git a/vyper/parser/arg_clamps.py b/vyper/parser/arg_clamps.py\n--- a/vyper/parser/arg_clamps.py\n+++ b/vyper/parser/arg_clamps.py\n@@ -22,6 +22,22 @@\n def make_arg_clamper(datapos, mempos, typ, is_init=False):\n \"\"\"\n Clamps argument to type limits.\n+\n+ Arguments\n+ ---------\n+ datapos : int | LLLnode\n+ Calldata offset of the value being clamped\n+ mempos : int | LLLnode\n+ Memory offset that the value is stored at during clamping\n+ typ : vyper.types.types.BaseType\n+ Type of the value\n+ is_init : bool, optional\n+ Boolean indicating if we are generating init bytecode\n+\n+ Returns\n+ -------\n+ LLLnode\n+ Arg clamper LLL\n \"\"\"\n \n if not is_init:\n@@ -68,31 +84,45 @@\n # Lists: recurse\n elif isinstance(typ, ListType):\n if typ.count > 5 or (type(datapos) is list and type(mempos) is list):\n- subtype_size = get_size_of_type(typ.subtype)\n- i_incr = subtype_size * 32\n+ # find ultimate base type\n+ subtype = typ.subtype\n+ while hasattr(subtype, \"subtype\"):\n+ subtype = subtype.subtype\n+\n+ # make arg clamper for the base type\n+ offset = MemoryPositions.FREE_LOOP_INDEX\n+ clamper = make_arg_clamper(\n+ [\"add\", datapos, [\"mload\", offset]],\n+ [\"add\", mempos, [\"mload\", offset]],\n+ subtype,\n+ is_init,\n+ )\n+ if clamper.value == \"pass\":\n+ # no point looping if the base type doesn't require clamping\n+ return clamper\n+\n+ # loop the entire array at once, even if it's multidimensional\n+ type_size = get_size_of_type(typ)\n+ i_incr = get_size_of_type(subtype) * 32\n \n- mem_to = subtype_size * 32 * (typ.count - 1)\n+ mem_to = type_size * 32\n loop_label = f\"_check_list_loop_{str(uuid.uuid4())}\"\n \n- offset = 288\n- o = [\n+ lll_node = [\n [\"mstore\", offset, 0], # init loop\n [\"label\", loop_label],\n- make_arg_clamper(\n- [\"add\", datapos, [\"mload\", offset]],\n- [\"add\", mempos, [\"mload\", offset]],\n- typ.subtype,\n- is_init,\n- ),\n+ clamper,\n [\"mstore\", offset, [\"add\", [\"mload\", offset], i_incr]],\n [\"if\", [\"lt\", [\"mload\", offset], mem_to], [\"goto\", loop_label]],\n ]\n else:\n- o = []\n+ lll_node = []\n for i in range(typ.count):\n offset = get_size_of_type(typ.subtype) * 32 * i\n- o.append(make_arg_clamper(datapos + offset, mempos + offset, typ.subtype, is_init))\n- return LLLnode.from_list([\"seq\"] + o, typ=None, annotation=\"checking list input\")\n+ lll_node.append(\n+ make_arg_clamper(datapos + offset, mempos + offset, typ.subtype, is_init)\n+ )\n+ return LLLnode.from_list([\"seq\"] + lll_node, typ=None, annotation=\"checking list input\")\n # Otherwise don't make any checks\n else:\n return LLLnode.from_list(\"pass\")\n", "issue": "Infinite loop from multidimensional array in calldata\n### Version Information\r\n\r\n* vyper Version (output of `vyper --version`): latest master\r\n* OS: linux\r\n* Python Version: `3.8.2`\r\n\r\n### What's your issue about?\r\n\r\nUsing a multidimensional array in the function inputs, where the length of the >=2nd dimension is 6 or more, causes an infinite loop when calling the function.\r\n\r\nFor example, each of the following methods will compile but attempting to call them fails with out of gas:\r\n\r\n```python\r\n@public\r\ndef foo(a: uint256[1][6]):\r\n pass\r\n\r\n@public\r\ndef bar(a: uint256[1][1][6]):\r\n pass\r\n```\r\n\r\nFor comparison, these methods execute as expected:\r\n\r\n```python\r\n@public\r\ndef foo(a: uint256[6][1]):\r\n pass\r\n\r\n@public\r\ndef bar(a: uint256[100][5][5][5]):\r\n pass\r\n```\r\n\r\n#### Some observations:\r\n\r\n* The length of 
the first array element has no effect.\r\n* The data type has no effect.\r\n* The location of the array within calldata, and total number of arguments, has no effect.\r\n* The number of dimensions, or dimension that exceeds a length of 5, has no effect.\r\n\r\n### How can it be fixed?\r\nUnsure at this time.\n", "before_files": [{"content": "import functools\nimport uuid\n\nfrom vyper.parser.lll_node import LLLnode\nfrom vyper.types.types import (\n ByteArrayLike,\n ListType,\n get_size_of_type,\n is_base_type,\n)\nfrom vyper.utils import MemoryPositions\n\n\ndef _mk_calldatacopy_copier(pos, sz, mempos):\n return [\"calldatacopy\", mempos, [\"add\", 4, pos], sz]\n\n\ndef _mk_codecopy_copier(pos, sz, mempos):\n return [\"codecopy\", mempos, [\"add\", \"~codelen\", pos], sz]\n\n\ndef make_arg_clamper(datapos, mempos, typ, is_init=False):\n \"\"\"\n Clamps argument to type limits.\n \"\"\"\n\n if not is_init:\n data_decl = [\"calldataload\", [\"add\", 4, datapos]]\n copier = functools.partial(_mk_calldatacopy_copier, mempos=mempos)\n else:\n data_decl = [\"codeload\", [\"add\", \"~codelen\", datapos]]\n copier = functools.partial(_mk_codecopy_copier, mempos=mempos)\n # Numbers: make sure they're in range\n if is_base_type(typ, \"int128\"):\n return LLLnode.from_list(\n [\n \"clamp\",\n [\"mload\", MemoryPositions.MINNUM],\n data_decl,\n [\"mload\", MemoryPositions.MAXNUM],\n ],\n typ=typ,\n annotation=\"checking int128 input\",\n )\n # Booleans: make sure they're zero or one\n elif is_base_type(typ, \"bool\"):\n return LLLnode.from_list(\n [\"uclamplt\", data_decl, 2], typ=typ, annotation=\"checking bool input\",\n )\n # Addresses: make sure they're in range\n elif is_base_type(typ, \"address\"):\n return LLLnode.from_list(\n [\"uclamplt\", data_decl, [\"mload\", MemoryPositions.ADDRSIZE]],\n typ=typ,\n annotation=\"checking address input\",\n )\n # Bytes: make sure they have the right size\n elif isinstance(typ, ByteArrayLike):\n return LLLnode.from_list(\n [\n \"seq\",\n copier(data_decl, 32 + typ.maxlen),\n [\"assert\", [\"le\", [\"calldataload\", [\"add\", 4, data_decl]], typ.maxlen]],\n ],\n typ=None,\n annotation=\"checking bytearray input\",\n )\n # Lists: recurse\n elif isinstance(typ, ListType):\n if typ.count > 5 or (type(datapos) is list and type(mempos) is list):\n subtype_size = get_size_of_type(typ.subtype)\n i_incr = subtype_size * 32\n\n mem_to = subtype_size * 32 * (typ.count - 1)\n loop_label = f\"_check_list_loop_{str(uuid.uuid4())}\"\n\n offset = 288\n o = [\n [\"mstore\", offset, 0], # init loop\n [\"label\", loop_label],\n make_arg_clamper(\n [\"add\", datapos, [\"mload\", offset]],\n [\"add\", mempos, [\"mload\", offset]],\n typ.subtype,\n is_init,\n ),\n [\"mstore\", offset, [\"add\", [\"mload\", offset], i_incr]],\n [\"if\", [\"lt\", [\"mload\", offset], mem_to], [\"goto\", loop_label]],\n ]\n else:\n o = []\n for i in range(typ.count):\n offset = get_size_of_type(typ.subtype) * 32 * i\n o.append(make_arg_clamper(datapos + offset, mempos + offset, typ.subtype, is_init))\n return LLLnode.from_list([\"seq\"] + o, typ=None, annotation=\"checking list input\")\n # Otherwise don't make any checks\n else:\n return LLLnode.from_list(\"pass\")\n", "path": "vyper/parser/arg_clamps.py"}], "after_files": [{"content": "import functools\nimport uuid\n\nfrom vyper.parser.lll_node import LLLnode\nfrom vyper.types.types import (\n ByteArrayLike,\n ListType,\n get_size_of_type,\n is_base_type,\n)\nfrom vyper.utils import MemoryPositions\n\n\ndef _mk_calldatacopy_copier(pos, sz, 
mempos):\n return [\"calldatacopy\", mempos, [\"add\", 4, pos], sz]\n\n\ndef _mk_codecopy_copier(pos, sz, mempos):\n return [\"codecopy\", mempos, [\"add\", \"~codelen\", pos], sz]\n\n\ndef make_arg_clamper(datapos, mempos, typ, is_init=False):\n \"\"\"\n Clamps argument to type limits.\n\n Arguments\n ---------\n datapos : int | LLLnode\n Calldata offset of the value being clamped\n mempos : int | LLLnode\n Memory offset that the value is stored at during clamping\n typ : vyper.types.types.BaseType\n Type of the value\n is_init : bool, optional\n Boolean indicating if we are generating init bytecode\n\n Returns\n -------\n LLLnode\n Arg clamper LLL\n \"\"\"\n\n if not is_init:\n data_decl = [\"calldataload\", [\"add\", 4, datapos]]\n copier = functools.partial(_mk_calldatacopy_copier, mempos=mempos)\n else:\n data_decl = [\"codeload\", [\"add\", \"~codelen\", datapos]]\n copier = functools.partial(_mk_codecopy_copier, mempos=mempos)\n # Numbers: make sure they're in range\n if is_base_type(typ, \"int128\"):\n return LLLnode.from_list(\n [\n \"clamp\",\n [\"mload\", MemoryPositions.MINNUM],\n data_decl,\n [\"mload\", MemoryPositions.MAXNUM],\n ],\n typ=typ,\n annotation=\"checking int128 input\",\n )\n # Booleans: make sure they're zero or one\n elif is_base_type(typ, \"bool\"):\n return LLLnode.from_list(\n [\"uclamplt\", data_decl, 2], typ=typ, annotation=\"checking bool input\",\n )\n # Addresses: make sure they're in range\n elif is_base_type(typ, \"address\"):\n return LLLnode.from_list(\n [\"uclamplt\", data_decl, [\"mload\", MemoryPositions.ADDRSIZE]],\n typ=typ,\n annotation=\"checking address input\",\n )\n # Bytes: make sure they have the right size\n elif isinstance(typ, ByteArrayLike):\n return LLLnode.from_list(\n [\n \"seq\",\n copier(data_decl, 32 + typ.maxlen),\n [\"assert\", [\"le\", [\"calldataload\", [\"add\", 4, data_decl]], typ.maxlen]],\n ],\n typ=None,\n annotation=\"checking bytearray input\",\n )\n # Lists: recurse\n elif isinstance(typ, ListType):\n if typ.count > 5 or (type(datapos) is list and type(mempos) is list):\n # find ultimate base type\n subtype = typ.subtype\n while hasattr(subtype, \"subtype\"):\n subtype = subtype.subtype\n\n # make arg clamper for the base type\n offset = MemoryPositions.FREE_LOOP_INDEX\n clamper = make_arg_clamper(\n [\"add\", datapos, [\"mload\", offset]],\n [\"add\", mempos, [\"mload\", offset]],\n subtype,\n is_init,\n )\n if clamper.value == \"pass\":\n # no point looping if the base type doesn't require clamping\n return clamper\n\n # loop the entire array at once, even if it's multidimensional\n type_size = get_size_of_type(typ)\n i_incr = get_size_of_type(subtype) * 32\n\n mem_to = type_size * 32\n loop_label = f\"_check_list_loop_{str(uuid.uuid4())}\"\n\n lll_node = [\n [\"mstore\", offset, 0], # init loop\n [\"label\", loop_label],\n clamper,\n [\"mstore\", offset, [\"add\", [\"mload\", offset], i_incr]],\n [\"if\", [\"lt\", [\"mload\", offset], mem_to], [\"goto\", loop_label]],\n ]\n else:\n lll_node = []\n for i in range(typ.count):\n offset = get_size_of_type(typ.subtype) * 32 * i\n lll_node.append(\n make_arg_clamper(datapos + offset, mempos + offset, typ.subtype, is_init)\n )\n return LLLnode.from_list([\"seq\"] + lll_node, typ=None, annotation=\"checking list input\")\n # Otherwise don't make any checks\n else:\n return LLLnode.from_list(\"pass\")\n", "path": "vyper/parser/arg_clamps.py"}]} | 1,605 | 837 |
gh_patches_debug_1907 | rasdani/github-patches | git_diff | google__flax-628 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
After update from 0.2.0: AttributeError: module 'jax.core' has no attribute 'eval_context'
After updating from flax 0.2.0 to flax 0.2.2 I get the above error message. Downgrading to 0.2.0 solves this, so the error source is narrowed down. I'm working with the now-deprecated `flax.nn` package, in case backward compatibility might be the reason for this issue.
The issue is encountered in a custom RNN when using the `init_by_shape` function in conjunction with `jax.lax.scan`.
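A quick environment probe (my own snippet; it assumes only that the traceback points at a missing `jax.core` attribute):
```python
import jax
import jax.core

print("jax", jax.__version__)
print("has eval_context:", hasattr(jax.core, "eval_context"))
```
If this prints `False`, upgrading jax is the likely workaround; the fix that eventually landed simply raises the minimum pin to `jax>=0.1.77`.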
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 # Copyright 2020 The Flax Authors.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """setup.py for Flax."""
16
17 import os
18 from setuptools import find_packages
19 from setuptools import setup
20
21 here = os.path.abspath(os.path.dirname(__file__))
22 try:
23 README = open(os.path.join(here, "README.md"), encoding='utf-8').read()
24 except IOError:
25 README = ""
26
27 install_requires = [
28 "numpy>=1.12",
29 "jax>=0.1.59",
30 "matplotlib", # only needed for tensorboard export
31 "dataclasses;python_version<'3.7'", # will only install on py3.6
32 "msgpack",
33 ]
34
35 tests_require = [
36 "atari-py",
37 "clu", # All examples.
38 "gym",
39 "jaxlib",
40 "ml-collections",
41 "opencv-python",
42 "pytest",
43 "pytest-cov",
44 "pytest-xdist==1.34.0", # upgrading to 2.0 broke tests, need to investigate
45 "sentencepiece", # WMT example.
46 "svn",
47 "tensorflow",
48 "tensorflow_text", # WMT example.
49 "tensorflow_datasets",
50 ]
51
52 __version__ = None
53
54 with open('flax/version.py') as f:
55 exec(f.read(), globals())
56
57 setup(
58 name="flax",
59 version=__version__,
60 description="Flax: A neural network library for JAX designed for flexibility",
61 long_description="\n\n".join([README]),
62 long_description_content_type='text/markdown',
63 classifiers=[
64 "Development Status :: 3 - Alpha",
65 "Intended Audience :: Developers",
66 "Intended Audience :: Science/Research",
67 "License :: OSI Approved :: Apache Software License",
68 "Programming Language :: Python :: 3.7",
69 "Topic :: Scientific/Engineering :: Artificial Intelligence",
70 ],
71 keywords="",
72 author="Flax team",
73 author_email="[email protected]",
74 url="https://github.com/google/flax",
75 packages=find_packages(),
76 include_package_data=False,
77 zip_safe=False,
78 install_requires=install_requires,
79 extras_require={
80 "testing": tests_require,
81 },
82 )
83
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -26,7 +26,7 @@
install_requires = [
"numpy>=1.12",
- "jax>=0.1.59",
+ "jax>=0.1.77",
"matplotlib", # only needed for tensorboard export
"dataclasses;python_version<'3.7'", # will only install on py3.6
"msgpack",
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -26,7 +26,7 @@\n \n install_requires = [\n \"numpy>=1.12\",\n- \"jax>=0.1.59\",\n+ \"jax>=0.1.77\",\n \"matplotlib\", # only needed for tensorboard export\n \"dataclasses;python_version<'3.7'\", # will only install on py3.6\n \"msgpack\",\n", "issue": "After update from 0.2.0: AttributeError: module 'jax.core' has no attribute 'eval_context'\nAfter updating from flax 0.2.0 to flax 0.2.2 I get the above error message. Downgrading to 0.2.0 solves this, so the error source is located. I'm working with the now deprecated flax.nn package if backward-compatibility might be the reason for this issue.\r\nThe Issue is encountered in a custom RNN, when using the init_by_shape function in conjunction with jax.lax.scan.\n", "before_files": [{"content": "# Copyright 2020 The Flax Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"setup.py for Flax.\"\"\"\n\nimport os\nfrom setuptools import find_packages\nfrom setuptools import setup\n\nhere = os.path.abspath(os.path.dirname(__file__))\ntry:\n README = open(os.path.join(here, \"README.md\"), encoding='utf-8').read()\nexcept IOError:\n README = \"\"\n\ninstall_requires = [\n \"numpy>=1.12\",\n \"jax>=0.1.59\",\n \"matplotlib\", # only needed for tensorboard export\n \"dataclasses;python_version<'3.7'\", # will only install on py3.6\n \"msgpack\",\n]\n\ntests_require = [\n \"atari-py\",\n \"clu\", # All examples.\n \"gym\",\n \"jaxlib\",\n \"ml-collections\",\n \"opencv-python\",\n \"pytest\",\n \"pytest-cov\",\n \"pytest-xdist==1.34.0\", # upgrading to 2.0 broke tests, need to investigate\n \"sentencepiece\", # WMT example.\n \"svn\",\n \"tensorflow\",\n \"tensorflow_text\", # WMT example.\n \"tensorflow_datasets\",\n]\n\n__version__ = None\n\nwith open('flax/version.py') as f:\n exec(f.read(), globals())\n\nsetup(\n name=\"flax\",\n version=__version__,\n description=\"Flax: A neural network library for JAX designed for flexibility\",\n long_description=\"\\n\\n\".join([README]),\n long_description_content_type='text/markdown',\n classifiers=[\n \"Development Status :: 3 - Alpha\",\n \"Intended Audience :: Developers\",\n \"Intended Audience :: Science/Research\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Programming Language :: Python :: 3.7\",\n \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\n ],\n keywords=\"\",\n author=\"Flax team\",\n author_email=\"[email protected]\",\n url=\"https://github.com/google/flax\",\n packages=find_packages(),\n include_package_data=False,\n zip_safe=False,\n install_requires=install_requires,\n extras_require={\n \"testing\": tests_require,\n },\n )\n", "path": "setup.py"}], "after_files": [{"content": "# Copyright 2020 The Flax Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless 
required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"setup.py for Flax.\"\"\"\n\nimport os\nfrom setuptools import find_packages\nfrom setuptools import setup\n\nhere = os.path.abspath(os.path.dirname(__file__))\ntry:\n README = open(os.path.join(here, \"README.md\"), encoding='utf-8').read()\nexcept IOError:\n README = \"\"\n\ninstall_requires = [\n \"numpy>=1.12\",\n \"jax>=0.1.77\",\n \"matplotlib\", # only needed for tensorboard export\n \"dataclasses;python_version<'3.7'\", # will only install on py3.6\n \"msgpack\",\n]\n\ntests_require = [\n \"atari-py\",\n \"clu\", # All examples.\n \"gym\",\n \"jaxlib\",\n \"ml-collections\",\n \"opencv-python\",\n \"pytest\",\n \"pytest-cov\",\n \"pytest-xdist==1.34.0\", # upgrading to 2.0 broke tests, need to investigate\n \"sentencepiece\", # WMT example.\n \"svn\",\n \"tensorflow\",\n \"tensorflow_text\", # WMT example.\n \"tensorflow_datasets\",\n]\n\n__version__ = None\n\nwith open('flax/version.py') as f:\n exec(f.read(), globals())\n\nsetup(\n name=\"flax\",\n version=__version__,\n description=\"Flax: A neural network library for JAX designed for flexibility\",\n long_description=\"\\n\\n\".join([README]),\n long_description_content_type='text/markdown',\n classifiers=[\n \"Development Status :: 3 - Alpha\",\n \"Intended Audience :: Developers\",\n \"Intended Audience :: Science/Research\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Programming Language :: Python :: 3.7\",\n \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\n ],\n keywords=\"\",\n author=\"Flax team\",\n author_email=\"[email protected]\",\n url=\"https://github.com/google/flax\",\n packages=find_packages(),\n include_package_data=False,\n zip_safe=False,\n install_requires=install_requires,\n extras_require={\n \"testing\": tests_require,\n },\n )\n", "path": "setup.py"}]} | 1,129 | 110 |
gh_patches_debug_5356 | rasdani/github-patches | git_diff | getsentry__sentry-python-1093 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`threading.setDaemon` has been deprecated in favor of setting the `daemon` attribute directly in Python 3.10.
Ref: https://github.com/python/cpython/pull/25174
https://github.com/getsentry/sentry-python/blob/927903e3b354a42e427d91129c399d64d480a6b9/sentry_sdk/worker.py#L69
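The replacement is mechanical; a minimal sketch of it (not the project's committed patch):
```python
import threading

t = threading.Thread(target=print, args=("worker",))
# t.setDaemon(True)  # emits a DeprecationWarning on Python 3.10+
t.daemon = True      # the supported spelling on all Python 3 versions
t.start()
t.join()
```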
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sentry_sdk/worker.py`
Content:
```
1 import os
2 import threading
3
4 from time import sleep, time
5 from sentry_sdk._compat import check_thread_support
6 from sentry_sdk._queue import Queue, Full
7 from sentry_sdk.utils import logger
8 from sentry_sdk.consts import DEFAULT_QUEUE_SIZE
9
10 from sentry_sdk._types import MYPY
11
12 if MYPY:
13 from typing import Any
14 from typing import Optional
15 from typing import Callable
16
17
18 _TERMINATOR = object()
19
20
21 class BackgroundWorker(object):
22 def __init__(self, queue_size=DEFAULT_QUEUE_SIZE):
23 # type: (int) -> None
24 check_thread_support()
25 self._queue = Queue(queue_size) # type: Queue
26 self._lock = threading.Lock()
27 self._thread = None # type: Optional[threading.Thread]
28 self._thread_for_pid = None # type: Optional[int]
29
30 @property
31 def is_alive(self):
32 # type: () -> bool
33 if self._thread_for_pid != os.getpid():
34 return False
35 if not self._thread:
36 return False
37 return self._thread.is_alive()
38
39 def _ensure_thread(self):
40 # type: () -> None
41 if not self.is_alive:
42 self.start()
43
44 def _timed_queue_join(self, timeout):
45 # type: (float) -> bool
46 deadline = time() + timeout
47 queue = self._queue
48
49 queue.all_tasks_done.acquire()
50
51 try:
52 while queue.unfinished_tasks:
53 delay = deadline - time()
54 if delay <= 0:
55 return False
56 queue.all_tasks_done.wait(timeout=delay)
57
58 return True
59 finally:
60 queue.all_tasks_done.release()
61
62 def start(self):
63 # type: () -> None
64 with self._lock:
65 if not self.is_alive:
66 self._thread = threading.Thread(
67 target=self._target, name="raven-sentry.BackgroundWorker"
68 )
69 self._thread.setDaemon(True)
70 self._thread.start()
71 self._thread_for_pid = os.getpid()
72
73 def kill(self):
74 # type: () -> None
75 """
76 Kill worker thread. Returns immediately. Not useful for
77 waiting on shutdown for events, use `flush` for that.
78 """
79 logger.debug("background worker got kill request")
80 with self._lock:
81 if self._thread:
82 try:
83 self._queue.put_nowait(_TERMINATOR)
84 except Full:
85 logger.debug("background worker queue full, kill failed")
86
87 self._thread = None
88 self._thread_for_pid = None
89
90 def flush(self, timeout, callback=None):
91 # type: (float, Optional[Any]) -> None
92 logger.debug("background worker got flush request")
93 with self._lock:
94 if self.is_alive and timeout > 0.0:
95 self._wait_flush(timeout, callback)
96 logger.debug("background worker flushed")
97
98 def _wait_flush(self, timeout, callback):
99 # type: (float, Optional[Any]) -> None
100 initial_timeout = min(0.1, timeout)
101 if not self._timed_queue_join(initial_timeout):
102 pending = self._queue.qsize() + 1
103 logger.debug("%d event(s) pending on flush", pending)
104 if callback is not None:
105 callback(pending, timeout)
106
107 if not self._timed_queue_join(timeout - initial_timeout):
108 pending = self._queue.qsize() + 1
109 logger.error("flush timed out, dropped %s events", pending)
110
111 def submit(self, callback):
112 # type: (Callable[[], None]) -> bool
113 self._ensure_thread()
114 try:
115 self._queue.put_nowait(callback)
116 return True
117 except Full:
118 return False
119
120 def _target(self):
121 # type: () -> None
122 while True:
123 callback = self._queue.get()
124 try:
125 if callback is _TERMINATOR:
126 break
127 try:
128 callback()
129 except Exception:
130 logger.error("Failed processing job", exc_info=True)
131 finally:
132 self._queue.task_done()
133 sleep(0)
134
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/sentry_sdk/worker.py b/sentry_sdk/worker.py
--- a/sentry_sdk/worker.py
+++ b/sentry_sdk/worker.py
@@ -66,7 +66,7 @@
self._thread = threading.Thread(
target=self._target, name="raven-sentry.BackgroundWorker"
)
- self._thread.setDaemon(True)
+ self._thread.daemon = True
self._thread.start()
self._thread_for_pid = os.getpid()
| {"golden_diff": "diff --git a/sentry_sdk/worker.py b/sentry_sdk/worker.py\n--- a/sentry_sdk/worker.py\n+++ b/sentry_sdk/worker.py\n@@ -66,7 +66,7 @@\n self._thread = threading.Thread(\n target=self._target, name=\"raven-sentry.BackgroundWorker\"\n )\n- self._thread.setDaemon(True)\n+ self._thread.daemon = True\n self._thread.start()\n self._thread_for_pid = os.getpid()\n", "issue": "threading.setDaemon has been deprecated in favor of setting daemon attribute directly in Python 3.10\nRef : https://github.com/python/cpython/pull/25174\r\n\r\nhttps://github.com/getsentry/sentry-python/blob/927903e3b354a42e427d91129c399d64d480a6b9/sentry_sdk/worker.py#L69\n", "before_files": [{"content": "import os\nimport threading\n\nfrom time import sleep, time\nfrom sentry_sdk._compat import check_thread_support\nfrom sentry_sdk._queue import Queue, Full\nfrom sentry_sdk.utils import logger\nfrom sentry_sdk.consts import DEFAULT_QUEUE_SIZE\n\nfrom sentry_sdk._types import MYPY\n\nif MYPY:\n from typing import Any\n from typing import Optional\n from typing import Callable\n\n\n_TERMINATOR = object()\n\n\nclass BackgroundWorker(object):\n def __init__(self, queue_size=DEFAULT_QUEUE_SIZE):\n # type: (int) -> None\n check_thread_support()\n self._queue = Queue(queue_size) # type: Queue\n self._lock = threading.Lock()\n self._thread = None # type: Optional[threading.Thread]\n self._thread_for_pid = None # type: Optional[int]\n\n @property\n def is_alive(self):\n # type: () -> bool\n if self._thread_for_pid != os.getpid():\n return False\n if not self._thread:\n return False\n return self._thread.is_alive()\n\n def _ensure_thread(self):\n # type: () -> None\n if not self.is_alive:\n self.start()\n\n def _timed_queue_join(self, timeout):\n # type: (float) -> bool\n deadline = time() + timeout\n queue = self._queue\n\n queue.all_tasks_done.acquire()\n\n try:\n while queue.unfinished_tasks:\n delay = deadline - time()\n if delay <= 0:\n return False\n queue.all_tasks_done.wait(timeout=delay)\n\n return True\n finally:\n queue.all_tasks_done.release()\n\n def start(self):\n # type: () -> None\n with self._lock:\n if not self.is_alive:\n self._thread = threading.Thread(\n target=self._target, name=\"raven-sentry.BackgroundWorker\"\n )\n self._thread.setDaemon(True)\n self._thread.start()\n self._thread_for_pid = os.getpid()\n\n def kill(self):\n # type: () -> None\n \"\"\"\n Kill worker thread. Returns immediately. 
Not useful for\n waiting on shutdown for events, use `flush` for that.\n \"\"\"\n logger.debug(\"background worker got kill request\")\n with self._lock:\n if self._thread:\n try:\n self._queue.put_nowait(_TERMINATOR)\n except Full:\n logger.debug(\"background worker queue full, kill failed\")\n\n self._thread = None\n self._thread_for_pid = None\n\n def flush(self, timeout, callback=None):\n # type: (float, Optional[Any]) -> None\n logger.debug(\"background worker got flush request\")\n with self._lock:\n if self.is_alive and timeout > 0.0:\n self._wait_flush(timeout, callback)\n logger.debug(\"background worker flushed\")\n\n def _wait_flush(self, timeout, callback):\n # type: (float, Optional[Any]) -> None\n initial_timeout = min(0.1, timeout)\n if not self._timed_queue_join(initial_timeout):\n pending = self._queue.qsize() + 1\n logger.debug(\"%d event(s) pending on flush\", pending)\n if callback is not None:\n callback(pending, timeout)\n\n if not self._timed_queue_join(timeout - initial_timeout):\n pending = self._queue.qsize() + 1\n logger.error(\"flush timed out, dropped %s events\", pending)\n\n def submit(self, callback):\n # type: (Callable[[], None]) -> bool\n self._ensure_thread()\n try:\n self._queue.put_nowait(callback)\n return True\n except Full:\n return False\n\n def _target(self):\n # type: () -> None\n while True:\n callback = self._queue.get()\n try:\n if callback is _TERMINATOR:\n break\n try:\n callback()\n except Exception:\n logger.error(\"Failed processing job\", exc_info=True)\n finally:\n self._queue.task_done()\n sleep(0)\n", "path": "sentry_sdk/worker.py"}], "after_files": [{"content": "import os\nimport threading\n\nfrom time import sleep, time\nfrom sentry_sdk._compat import check_thread_support\nfrom sentry_sdk._queue import Queue, Full\nfrom sentry_sdk.utils import logger\nfrom sentry_sdk.consts import DEFAULT_QUEUE_SIZE\n\nfrom sentry_sdk._types import MYPY\n\nif MYPY:\n from typing import Any\n from typing import Optional\n from typing import Callable\n\n\n_TERMINATOR = object()\n\n\nclass BackgroundWorker(object):\n def __init__(self, queue_size=DEFAULT_QUEUE_SIZE):\n # type: (int) -> None\n check_thread_support()\n self._queue = Queue(queue_size) # type: Queue\n self._lock = threading.Lock()\n self._thread = None # type: Optional[threading.Thread]\n self._thread_for_pid = None # type: Optional[int]\n\n @property\n def is_alive(self):\n # type: () -> bool\n if self._thread_for_pid != os.getpid():\n return False\n if not self._thread:\n return False\n return self._thread.is_alive()\n\n def _ensure_thread(self):\n # type: () -> None\n if not self.is_alive:\n self.start()\n\n def _timed_queue_join(self, timeout):\n # type: (float) -> bool\n deadline = time() + timeout\n queue = self._queue\n\n queue.all_tasks_done.acquire()\n\n try:\n while queue.unfinished_tasks:\n delay = deadline - time()\n if delay <= 0:\n return False\n queue.all_tasks_done.wait(timeout=delay)\n\n return True\n finally:\n queue.all_tasks_done.release()\n\n def start(self):\n # type: () -> None\n with self._lock:\n if not self.is_alive:\n self._thread = threading.Thread(\n target=self._target, name=\"raven-sentry.BackgroundWorker\"\n )\n self._thread.daemon = True\n self._thread.start()\n self._thread_for_pid = os.getpid()\n\n def kill(self):\n # type: () -> None\n \"\"\"\n Kill worker thread. Returns immediately. 
Not useful for\n waiting on shutdown for events, use `flush` for that.\n \"\"\"\n logger.debug(\"background worker got kill request\")\n with self._lock:\n if self._thread:\n try:\n self._queue.put_nowait(_TERMINATOR)\n except Full:\n logger.debug(\"background worker queue full, kill failed\")\n\n self._thread = None\n self._thread_for_pid = None\n\n def flush(self, timeout, callback=None):\n # type: (float, Optional[Any]) -> None\n logger.debug(\"background worker got flush request\")\n with self._lock:\n if self.is_alive and timeout > 0.0:\n self._wait_flush(timeout, callback)\n logger.debug(\"background worker flushed\")\n\n def _wait_flush(self, timeout, callback):\n # type: (float, Optional[Any]) -> None\n initial_timeout = min(0.1, timeout)\n if not self._timed_queue_join(initial_timeout):\n pending = self._queue.qsize() + 1\n logger.debug(\"%d event(s) pending on flush\", pending)\n if callback is not None:\n callback(pending, timeout)\n\n if not self._timed_queue_join(timeout - initial_timeout):\n pending = self._queue.qsize() + 1\n logger.error(\"flush timed out, dropped %s events\", pending)\n\n def submit(self, callback):\n # type: (Callable[[], None]) -> None\n self._ensure_thread()\n try:\n self._queue.put_nowait(callback)\n except Full:\n self.on_full_queue(callback)\n\n def on_full_queue(self, callback):\n # type: (Optional[Any]) -> None\n logger.error(\"background worker queue full, dropping event\")\n\n def _target(self):\n # type: () -> None\n while True:\n callback = self._queue.get()\n try:\n if callback is _TERMINATOR:\n break\n try:\n callback()\n except Exception:\n logger.error(\"Failed processing job\", exc_info=True)\n finally:\n self._queue.task_done()\n sleep(0)\n", "path": "sentry_sdk/worker.py"}]} | 1,550 | 109 |
gh_patches_debug_13158 | rasdani/github-patches | git_diff | bridgecrewio__checkov-2850 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
CKV_AWS_144 false negative after updating to AWS Provider 4.0
**Describe the issue**
After updating our AWS Provider to ~>4.0, we started getting a failure on `CKV_AWS_144` in our bucket module, despite having a properly configured `aws_s3_bucket_replication_configuration` block.
**Examples**
Sample code:
```hcl
provider "aws" {
alias = "aws-primary"
region = "us-east-1"
}
provider "aws" {
alias = "aws-dr"
region = "us-west-2"
}
resource "aws_s3_bucket" "test_bucket" {
bucket = var.bucket_name
}
resource "aws_s3_bucket" "test_dr_bucket" {
provider = aws.aws-dr
bucket = "${var.bucket_name}-dr"
}
resource "aws_s3_bucket_versioning" "test_bucket_versioning" {
bucket = aws_s3_bucket.test_bucket.id
versioning_configuration {
status = "Enabled"
}
}
resource "aws_s3_bucket_versioning" "test_dr_bucket_versioning" {
provider = aws.aws-dr
bucket = aws_s3_bucket.test_dr_bucket.id
versioning_configuration {
status = "Enabled"
}
}
resource "aws_iam_role" "dr_replication" {
name_prefix = "replication"
description = "Allow S3 to assume the role for replication"
assume_role_policy = <<POLICY
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "s3ReplicationAssume",
"Effect": "Allow",
"Principal": {
"Service": "s3.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
POLICY
}
resource "aws_iam_policy" "dr_replication" {
name_prefix = "replication"
description = "Allows reading for replication."
policy = <<POLICY
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"s3:GetReplicationConfiguration",
"s3:ListBucket"
],
"Effect": "Allow",
"Resource": [
"${aws_s3_bucket.test_bucket.arn}"
]
},
{
"Action": [
"s3:GetObjectVersion",
"s3:GetObjectVersionForReplication",
"s3:GetObjectVersionAcl"
],
"Effect": "Allow",
"Resource": [
"${aws_s3_bucket.test_bucket.arn}/*"
]
},
{
"Action": [
"s3:ReplicateObject",
"s3:ReplicateTags",
"s3:ObjectOwnerOverrideToBucketOwner"
],
"Effect": "Allow",
"Resource": "${aws_s3_bucket.test_dr_bucket.arn}/*"
}
]
}
POLICY
}
resource "aws_iam_policy_attachment" "dr_replication" {
name = "replication"
roles = [aws_iam_role.dr_replication.name]
policy_arn = aws_iam_policy.dr_replication.arn
}
resource "aws_s3_bucket_replication_configuration" "dr_bucket_replication" {
# Must have bucket versioning enabled first
depends_on = [
aws_s3_bucket_versioning.test_bucket_versioning,
aws_s3_bucket_versioning.test_dr_bucket_versioning,
]
role = aws_iam_role.dr_replication.arn
bucket = aws_s3_bucket.test_bucket.id
rule {
id = "entire_bucket"
status = "Enabled"
destination {
bucket = aws_s3_bucket.test_dr_bucket.arn
storage_class = "DEEP_ARCHIVE"
}
}
}
```
Expected: `CKV_AWS_144` will pass.
Actual:
```
Check: CKV_AWS_144: "Ensure that S3 bucket has cross-region replication enabled"
FAILED for resource: aws_s3_bucket.test_bucket
File: /dr_test.tf:11-13
Guide: https://docs.bridgecrew.io/docs/ensure-that-s3-bucket-has-cross-region-replication-enabled
11 | resource "aws_s3_bucket" "test_bucket" {
12 | bucket = var.bucket_name
13 | }
```
**Desktop (please complete the following information):**
- OS: MacOS 10.14.6
- Checkov Version 2.0.1074
**Additional context**
On the surface, this looks related to https://github.com/bridgecrewio/checkov/issues/2399 and https://github.com/bridgecrewio/checkov/pull/2724, but for the `CKV_AWS_144` rule.
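My reading of the mechanics (not confirmed by the maintainers): the check inspects the key `replication_configuration/[0]/role` inside the `aws_s3_bucket` resource itself, but with provider 4.x replication lives in the separate `aws_s3_bucket_replication_configuration` resource, so the inspected key is never present. A toy model of that lookup (illustrative values, not checkov's real engine):
```python
bucket_v3 = {
    "bucket": ["my-bucket"],
    "replication_configuration": [{"role": ["arn:aws:iam::123456789012:role/replication"]}],
}
bucket_v4 = {"bucket": ["my-bucket"]}  # provider 4.x: replication is its own resource

def has_replication_role(conf):
    blocks = conf.get("replication_configuration") or []
    return bool(blocks and blocks[0].get("role"))

print(has_replication_role(bucket_v3))  # True  -> check passes
print(has_replication_role(bucket_v4))  # False -> CKV_AWS_144 reports a failure
```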
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `checkov/terraform/checks/resource/aws/S3BucketReplicationConfiguration.py`
Content:
```
1 from checkov.common.models.consts import ANY_VALUE
2 from checkov.terraform.checks.resource.base_resource_value_check import BaseResourceValueCheck
3 from checkov.common.models.enums import CheckCategories
4
5
6 class S3BucketReplicationConfiguration(BaseResourceValueCheck):
7 def __init__(self):
8 name = "Ensure that S3 bucket has cross-region replication enabled"
9 id = "CKV_AWS_144"
10 supported_resources = ['aws_s3_bucket']
11 categories = [CheckCategories.GENERAL_SECURITY]
12 super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
13
14 def get_inspected_key(self):
15 return "replication_configuration/[0]/role"
16
17 def get_expected_value(self):
18 return ANY_VALUE
19
20
21 check = S3BucketReplicationConfiguration()
22
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/checkov/terraform/checks/resource/aws/S3BucketReplicationConfiguration.py b/checkov/terraform/checks/resource/aws/S3BucketReplicationConfiguration.py
deleted file mode 100644
--- a/checkov/terraform/checks/resource/aws/S3BucketReplicationConfiguration.py
+++ /dev/null
@@ -1,21 +0,0 @@
-from checkov.common.models.consts import ANY_VALUE
-from checkov.terraform.checks.resource.base_resource_value_check import BaseResourceValueCheck
-from checkov.common.models.enums import CheckCategories
-
-
-class S3BucketReplicationConfiguration(BaseResourceValueCheck):
- def __init__(self):
- name = "Ensure that S3 bucket has cross-region replication enabled"
- id = "CKV_AWS_144"
- supported_resources = ['aws_s3_bucket']
- categories = [CheckCategories.GENERAL_SECURITY]
- super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
-
- def get_inspected_key(self):
- return "replication_configuration/[0]/role"
-
- def get_expected_value(self):
- return ANY_VALUE
-
-
-check = S3BucketReplicationConfiguration()
| {"golden_diff": "diff --git a/checkov/terraform/checks/resource/aws/S3BucketReplicationConfiguration.py b/checkov/terraform/checks/resource/aws/S3BucketReplicationConfiguration.py\ndeleted file mode 100644\n--- a/checkov/terraform/checks/resource/aws/S3BucketReplicationConfiguration.py\n+++ /dev/null\n@@ -1,21 +0,0 @@\n-from checkov.common.models.consts import ANY_VALUE\n-from checkov.terraform.checks.resource.base_resource_value_check import BaseResourceValueCheck\n-from checkov.common.models.enums import CheckCategories\n-\n-\n-class S3BucketReplicationConfiguration(BaseResourceValueCheck):\n- def __init__(self):\n- name = \"Ensure that S3 bucket has cross-region replication enabled\"\n- id = \"CKV_AWS_144\"\n- supported_resources = ['aws_s3_bucket']\n- categories = [CheckCategories.GENERAL_SECURITY]\n- super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n-\n- def get_inspected_key(self):\n- return \"replication_configuration/[0]/role\"\n-\n- def get_expected_value(self):\n- return ANY_VALUE\n-\n-\n-check = S3BucketReplicationConfiguration()\n", "issue": "CKV_AWS_144 false negative after updating to AWS Provider 4.0 \n**Describe the issue**\r\nAfter updating our AWS Provider to ~>4.0, we started getting a failure on `CKV_AWS_144` in our bucket module, despite having a properly configured `aws_s3_bucket_lifecycle_configuration` block.\r\n\r\n**Examples**\r\nSample code:\r\n\r\n```hcl\r\nprovider \"aws\" {\r\n alias = \"aws-primary\"\r\n region = \"us-east-1\"\r\n}\r\n\r\nprovider \"aws\" {\r\n alias = \"aws-dr\"\r\n region = \"us-west-2\"\r\n}\r\n\r\nresource \"aws_s3_bucket\" \"test_bucket\" {\r\n bucket = var.bucket_name\r\n}\r\n\r\nresource \"aws_s3_bucket\" \"test_dr_bucket\" {\r\n provider = aws.aws-dr\r\n bucket = \"${var.bucket_name}-dr\"\r\n}\r\n\r\nresource \"aws_s3_bucket_versioning\" \"test_bucket_versioning\" {\r\n bucket = aws_s3_bucket.test_bucket.id\r\n versioning_configuration {\r\n status = \"Enabled\"\r\n }\r\n}\r\n\r\nresource \"aws_s3_bucket_versioning\" \"test_dr_bucket_versioning\" {\r\n provider = aws.aws-dr\r\n bucket = aws_s3_bucket.test_dr_bucket.id\r\n versioning_configuration {\r\n status = \"Enabled\"\r\n }\r\n}\r\n\r\nresource \"aws_iam_role\" \"dr_replication\" {\r\n name_prefix = \"replication\"\r\n description = \"Allow S3 to assume the role for replication\"\r\n\r\n assume_role_policy = <<POLICY\r\n{\r\n \"Version\": \"2012-10-17\",\r\n \"Statement\": [\r\n {\r\n \"Sid\": \"s3ReplicationAssume\",\r\n \"Effect\": \"Allow\",\r\n \"Principal\": {\r\n \"Service\": \"s3.amazonaws.com\"\r\n },\r\n \"Action\": \"sts:AssumeRole\"\r\n }\r\n ]\r\n}\r\nPOLICY\r\n}\r\n\r\nresource \"aws_iam_policy\" \"dr_replication\" {\r\n name_prefix = \"replication\"\r\n description = \"Allows reading for replication.\"\r\n\r\n policy = <<POLICY\r\n{\r\n \"Version\": \"2012-10-17\",\r\n \"Statement\": [\r\n {\r\n \"Action\": [\r\n \"s3:GetReplicationConfiguration\",\r\n \"s3:ListBucket\"\r\n ],\r\n \"Effect\": \"Allow\",\r\n \"Resource\": [\r\n \"${aws_s3_bucket.test_bucket.arn}\"\r\n ]\r\n },\r\n {\r\n \"Action\": [\r\n \"s3:GetObjectVersion\",\r\n \"s3:GetObjectVersionForReplication\",\r\n \"s3:GetObjectVersionAcl\"\r\n ],\r\n \"Effect\": \"Allow\",\r\n \"Resource\": [\r\n \"${aws_s3_bucket.test_bucket.arn}/*\"\r\n ]\r\n },\r\n {\r\n \"Action\": [\r\n \"s3:ReplicateObject\",\r\n \"s3:ReplicateTags\",\r\n \"s3:ObjectOwnerOverrideToBucketOwner\"\r\n ],\r\n \"Effect\": \"Allow\",\r\n \"Resource\": 
\"${aws_s3_bucket.test_dr_bucket.arn}/*\"\r\n }\r\n ]\r\n}\r\nPOLICY\r\n}\r\n\r\nresource \"aws_iam_policy_attachment\" \"dr_replication\" {\r\n name = \"replication\"\r\n roles = [aws_iam_role.dr_replication.name]\r\n policy_arn = aws_iam_policy.dr_replication.arn\r\n}\r\n\r\nresource \"aws_s3_bucket_replication_configuration\" \"dr_bucket_replication\" {\r\n\r\n # Must have bucket versioning enabled first\r\n depends_on = [\r\n aws_s3_bucket_versioning.test_bucket_versioning,\r\n aws_s3_bucket_versioning.test_dr_bucket_versioning,\r\n ]\r\n\r\n role = aws_iam_role.dr_replication.arn\r\n bucket = aws_s3_bucket.test_bucket.id\r\n\r\n rule {\r\n id = \"entire_bucket\"\r\n status = \"Enabled\"\r\n\r\n destination {\r\n bucket = aws_s3_bucket.test_dr_bucket.arn\r\n storage_class = \"DEEP_ARCHIVE\"\r\n }\r\n }\r\n}\r\n\r\n```\r\n\r\nExpected: `CKV_AWS_144` will pass.\r\n\r\nActual: \r\n```\r\nCheck: CKV_AWS_144: \"Ensure that S3 bucket has cross-region replication enabled\"\r\n FAILED for resource: aws_s3_bucket.test_bucket\r\n File: /dr_test.tf:11-13\r\n Guide: https://docs.bridgecrew.io/docs/ensure-that-s3-bucket-has-cross-region-replication-enabled\r\n\r\n 11 | resource \"aws_s3_bucket\" \"test_bucket\" {\r\n 12 | bucket = var.bucket_name\r\n 13 | }\r\n```\r\n\r\n**Desktop (please complete the following information):**\r\n - OS: MacOS 10.14.6\r\n - Checkov Version 2.0.1074\r\n\r\n**Additional context**\r\nOn the surface, this looks like related to https://github.com/bridgecrewio/checkov/issues/2399 and https://github.com/bridgecrewio/checkov/pull/2724, but to the `CKV_AWS_144` rule.\n", "before_files": [{"content": "from checkov.common.models.consts import ANY_VALUE\nfrom checkov.terraform.checks.resource.base_resource_value_check import BaseResourceValueCheck\nfrom checkov.common.models.enums import CheckCategories\n\n\nclass S3BucketReplicationConfiguration(BaseResourceValueCheck):\n def __init__(self):\n name = \"Ensure that S3 bucket has cross-region replication enabled\"\n id = \"CKV_AWS_144\"\n supported_resources = ['aws_s3_bucket']\n categories = [CheckCategories.GENERAL_SECURITY]\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n\n def get_inspected_key(self):\n return \"replication_configuration/[0]/role\"\n\n def get_expected_value(self):\n return ANY_VALUE\n\n\ncheck = S3BucketReplicationConfiguration()\n", "path": "checkov/terraform/checks/resource/aws/S3BucketReplicationConfiguration.py"}], "after_files": [{"content": null, "path": "checkov/terraform/checks/resource/aws/S3BucketReplicationConfiguration.py"}]} | 1,546 | 265 |
gh_patches_debug_38571 | rasdani/github-patches | git_diff | archlinux__archinstall-1659 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
"Save configuration" Improve UX
*Feature Request*
I have some time in the next week, so I may try to implement this depending on how many roadblocks I hit. Currently, to save a configuration, you are prompted to enter a directory. If it isn't valid, you have to scratch your head wondering what the exact paths are, because you can't discover any directory without exiting `archinstall`.
It would be great if there was a sort of tab completion, or even filesystem traversal similar to `ncdu` to find a directory to save the configuration in.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `archinstall/lib/user_interaction/save_conf.py`
Content:
```
1 from __future__ import annotations
2
3 from pathlib import Path
4 from typing import Any, Dict, TYPE_CHECKING
5
6 from ..configuration import ConfigurationOutput
7 from ..menu import Menu
8 from ..menu.menu import MenuSelectionType
9 from ..output import log
10
11 if TYPE_CHECKING:
12 _: Any
13
14
15 def save_config(config: Dict):
16
17 def preview(selection: str):
18 if options['user_config'] == selection:
19 json_config = config_output.user_config_to_json()
20 return f'{config_output.user_configuration_file}\n{json_config}'
21 elif options['user_creds'] == selection:
22 if json_config := config_output.user_credentials_to_json():
23 return f'{config_output.user_credentials_file}\n{json_config}'
24 else:
25 return str(_('No configuration'))
26 elif options['disk_layout'] == selection:
27 if json_config := config_output.disk_layout_to_json():
28 return f'{config_output.disk_layout_file}\n{json_config}'
29 else:
30 return str(_('No configuration'))
31 elif options['all'] == selection:
32 output = f'{config_output.user_configuration_file}\n'
33 if json_config := config_output.user_credentials_to_json():
34 output += f'{config_output.user_credentials_file}\n'
35 if json_config := config_output.disk_layout_to_json():
36 output += f'{config_output.disk_layout_file}\n'
37 return output[:-1]
38 return None
39
40 config_output = ConfigurationOutput(config)
41
42 options = {
43 'user_config': str(_('Save user configuration')),
44 'user_creds': str(_('Save user credentials')),
45 'disk_layout': str(_('Save disk layout')),
46 'all': str(_('Save all'))
47 }
48
49 choice = Menu(
50 _('Choose which configuration to save'),
51 list(options.values()),
52 sort=False,
53 skip=True,
54 preview_size=0.75,
55 preview_command=preview
56 ).run()
57
58 if choice.type_ == MenuSelectionType.Skip:
59 return
60
61 while True:
62 path = input(_('Enter a directory for the configuration(s) to be saved: ')).strip(' ')
63 dest_path = Path(path)
64 if dest_path.exists() and dest_path.is_dir():
65 break
66 log(_('Not a valid directory: {}').format(dest_path), fg='red')
67
68 if options['user_config'] == choice.value:
69 config_output.save_user_config(dest_path)
70 elif options['user_creds'] == choice.value:
71 config_output.save_user_creds(dest_path)
72 elif options['disk_layout'] == choice.value:
73 config_output.save_disk_layout(dest_path)
74 elif options['all'] == choice.value:
75 config_output.save_user_config(dest_path)
76 config_output.save_user_creds(dest_path)
77 config_output.save_disk_layout(dest_path)
78
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/archinstall/lib/user_interaction/save_conf.py b/archinstall/lib/user_interaction/save_conf.py
--- a/archinstall/lib/user_interaction/save_conf.py
+++ b/archinstall/lib/user_interaction/save_conf.py
@@ -1,9 +1,12 @@
from __future__ import annotations
+import logging
+
from pathlib import Path
from typing import Any, Dict, TYPE_CHECKING
from ..configuration import ConfigurationOutput
+from ..general import SysCommand
from ..menu import Menu
from ..menu.menu import MenuSelectionType
from ..output import log
@@ -58,20 +61,75 @@
if choice.type_ == MenuSelectionType.Skip:
return
- while True:
- path = input(_('Enter a directory for the configuration(s) to be saved: ')).strip(' ')
- dest_path = Path(path)
- if dest_path.exists() and dest_path.is_dir():
- break
- log(_('Not a valid directory: {}').format(dest_path), fg='red')
-
- if options['user_config'] == choice.value:
- config_output.save_user_config(dest_path)
- elif options['user_creds'] == choice.value:
- config_output.save_user_creds(dest_path)
- elif options['disk_layout'] == choice.value:
- config_output.save_disk_layout(dest_path)
- elif options['all'] == choice.value:
- config_output.save_user_config(dest_path)
- config_output.save_user_creds(dest_path)
- config_output.save_disk_layout(dest_path)
+ dirs_to_exclude = [
+ '/bin',
+ '/dev',
+ '/lib',
+ '/lib64',
+ '/lost+found',
+ '/opt',
+ '/proc',
+ '/run',
+ '/sbin',
+ '/srv',
+ '/sys',
+ '/usr',
+ '/var',
+ ]
+ log(
+ _('When picking a directory to save configuration files to,'
+ ' by default we will ignore the following folders: ') + ','.join(dirs_to_exclude),
+ level=logging.DEBUG
+ )
+
+ log(_('Finding possible directories to save configuration files ...'), level=logging.INFO)
+
+ find_exclude = '-path ' + ' -prune -o -path '.join(dirs_to_exclude) + ' -prune '
+ file_picker_command = f'find / {find_exclude} -o -type d -print0'
+ possible_save_dirs = list(
+ filter(None, SysCommand(file_picker_command).decode().split('\x00'))
+ )
+
+ selection = Menu(
+ _('Select directory (or directories) for saving configuration files'),
+ possible_save_dirs,
+ multi=True,
+ skip=True,
+ allow_reset=False,
+ ).run()
+
+ match selection.type_:
+ case MenuSelectionType.Skip:
+ return
+ case _:
+ save_dirs = selection.value
+
+ prompt = _('Do you want to save {} configuration file(s) in the following locations?\n\n{}').format(
+ list(options.keys())[list(options.values()).index(choice.value)],
+ save_dirs
+ )
+ save_confirmation = Menu(prompt, Menu.yes_no(), default_option=Menu.yes()).run()
+ if save_confirmation == Menu.no():
+ return
+
+ log(
+ _('Saving {} configuration files to {}').format(
+ list(options.keys())[list(options.values()).index(choice.value)],
+ save_dirs
+ ),
+ level=logging.DEBUG
+ )
+
+ if save_dirs is not None:
+ for save_dir_str in save_dirs:
+ save_dir = Path(save_dir_str)
+ if options['user_config'] == choice.value:
+ config_output.save_user_config(save_dir)
+ elif options['user_creds'] == choice.value:
+ config_output.save_user_creds(save_dir)
+ elif options['disk_layout'] == choice.value:
+ config_output.save_disk_layout(save_dir)
+ elif options['all'] == choice.value:
+ config_output.save_user_config(save_dir)
+ config_output.save_user_creds(save_dir)
+ config_output.save_disk_layout(save_dir)
| {"golden_diff": "diff --git a/archinstall/lib/user_interaction/save_conf.py b/archinstall/lib/user_interaction/save_conf.py\n--- a/archinstall/lib/user_interaction/save_conf.py\n+++ b/archinstall/lib/user_interaction/save_conf.py\n@@ -1,9 +1,12 @@\n from __future__ import annotations\n \n+import logging\n+\n from pathlib import Path\n from typing import Any, Dict, TYPE_CHECKING\n \n from ..configuration import ConfigurationOutput\n+from ..general import SysCommand\n from ..menu import Menu\n from ..menu.menu import MenuSelectionType\n from ..output import log\n@@ -58,20 +61,75 @@\n \tif choice.type_ == MenuSelectionType.Skip:\n \t\treturn\n \n-\twhile True:\n-\t\tpath = input(_('Enter a directory for the configuration(s) to be saved: ')).strip(' ')\n-\t\tdest_path = Path(path)\n-\t\tif dest_path.exists() and dest_path.is_dir():\n-\t\t\tbreak\n-\t\tlog(_('Not a valid directory: {}').format(dest_path), fg='red')\n-\n-\tif options['user_config'] == choice.value:\n-\t\tconfig_output.save_user_config(dest_path)\n-\telif options['user_creds'] == choice.value:\n-\t\tconfig_output.save_user_creds(dest_path)\n-\telif options['disk_layout'] == choice.value:\n-\t\tconfig_output.save_disk_layout(dest_path)\n-\telif options['all'] == choice.value:\n-\t\tconfig_output.save_user_config(dest_path)\n-\t\tconfig_output.save_user_creds(dest_path)\n-\t\tconfig_output.save_disk_layout(dest_path)\n+\tdirs_to_exclude = [\n+\t\t'/bin',\n+\t\t'/dev',\n+\t\t'/lib',\n+\t\t'/lib64',\n+\t\t'/lost+found',\n+\t\t'/opt',\n+\t\t'/proc',\n+\t\t'/run',\n+\t\t'/sbin',\n+\t\t'/srv',\n+\t\t'/sys',\n+\t\t'/usr',\n+\t\t'/var',\n+\t]\n+\tlog(\n+\t\t_('When picking a directory to save configuration files to,'\n+\t\t' by default we will ignore the following folders: ') + ','.join(dirs_to_exclude),\n+\t\tlevel=logging.DEBUG\n+\t)\n+\n+\tlog(_('Finding possible directories to save configuration files ...'), level=logging.INFO)\n+\t\n+\tfind_exclude = '-path ' + ' -prune -o -path '.join(dirs_to_exclude) + ' -prune '\n+\tfile_picker_command = f'find / {find_exclude} -o -type d -print0'\n+\tpossible_save_dirs = list(\n+\t\tfilter(None, SysCommand(file_picker_command).decode().split('\\x00'))\n+\t)\n+\n+\tselection = Menu(\n+\t\t_('Select directory (or directories) for saving configuration files'),\n+\t\tpossible_save_dirs,\n+\t\tmulti=True,\n+\t\tskip=True,\n+\t\tallow_reset=False,\n+\t).run()\n+\n+\tmatch selection.type_:\n+\t\tcase MenuSelectionType.Skip:\n+\t\t\treturn\n+\t\tcase _:\n+\t\t\tsave_dirs = selection.value\n+\n+\tprompt = _('Do you want to save {} configuration file(s) in the following locations?\\n\\n{}').format(\n+\t\tlist(options.keys())[list(options.values()).index(choice.value)],\n+\t\tsave_dirs\n+\t)\n+\tsave_confirmation = Menu(prompt, Menu.yes_no(), default_option=Menu.yes()).run()\n+\tif save_confirmation == Menu.no():\n+\t\treturn\n+\t\n+\tlog(\n+\t\t_('Saving {} configuration files to {}').format(\n+\t\t\tlist(options.keys())[list(options.values()).index(choice.value)],\n+\t\t\tsave_dirs\n+\t\t),\n+\t\tlevel=logging.DEBUG\n+\t)\n+\t\n+\tif save_dirs is not None:\n+\t\tfor save_dir_str in save_dirs:\n+\t\t\tsave_dir = Path(save_dir_str)\n+\t\t\tif options['user_config'] == choice.value:\n+\t\t\t\tconfig_output.save_user_config(save_dir)\n+\t\t\telif options['user_creds'] == choice.value:\n+\t\t\t\tconfig_output.save_user_creds(save_dir)\n+\t\t\telif options['disk_layout'] == choice.value:\n+\t\t\t\tconfig_output.save_disk_layout(save_dir)\n+\t\t\telif options['all'] == 
choice.value:\n+\t\t\t\tconfig_output.save_user_config(save_dir)\n+\t\t\t\tconfig_output.save_user_creds(save_dir)\n+\t\t\t\tconfig_output.save_disk_layout(save_dir)\n", "issue": "\"Save configuration\" Improve UX\n*Feature Request*\r\n\r\nI have some time in the next week, I may try and implement this depending on how many roadblocks I hit. Currently, to save a configuration, you are prompted to enter a directory. If it isn't valid, you have to scratch your head wondering what the exact paths are, because you can't discover any directory without exiting `archinstall`. \r\n\r\nIt would be great if there was a sort of tab completion, or even filesystem traversal similar to `ncdu` to find a directory to save the configuration in.\n", "before_files": [{"content": "from __future__ import annotations\n\nfrom pathlib import Path\nfrom typing import Any, Dict, TYPE_CHECKING\n\nfrom ..configuration import ConfigurationOutput\nfrom ..menu import Menu\nfrom ..menu.menu import MenuSelectionType\nfrom ..output import log\n\nif TYPE_CHECKING:\n\t_: Any\n\n\ndef save_config(config: Dict):\n\n\tdef preview(selection: str):\n\t\tif options['user_config'] == selection:\n\t\t\tjson_config = config_output.user_config_to_json()\n\t\t\treturn f'{config_output.user_configuration_file}\\n{json_config}'\n\t\telif options['user_creds'] == selection:\n\t\t\tif json_config := config_output.user_credentials_to_json():\n\t\t\t\treturn f'{config_output.user_credentials_file}\\n{json_config}'\n\t\t\telse:\n\t\t\t\treturn str(_('No configuration'))\n\t\telif options['disk_layout'] == selection:\n\t\t\tif json_config := config_output.disk_layout_to_json():\n\t\t\t\treturn f'{config_output.disk_layout_file}\\n{json_config}'\n\t\t\telse:\n\t\t\t\treturn str(_('No configuration'))\n\t\telif options['all'] == selection:\n\t\t\toutput = f'{config_output.user_configuration_file}\\n'\n\t\t\tif json_config := config_output.user_credentials_to_json():\n\t\t\t\toutput += f'{config_output.user_credentials_file}\\n'\n\t\t\tif json_config := config_output.disk_layout_to_json():\n\t\t\t\toutput += f'{config_output.disk_layout_file}\\n'\n\t\t\treturn output[:-1]\n\t\treturn None\n\n\tconfig_output = ConfigurationOutput(config)\n\n\toptions = {\n\t\t'user_config': str(_('Save user configuration')),\n\t\t'user_creds': str(_('Save user credentials')),\n\t\t'disk_layout': str(_('Save disk layout')),\n\t\t'all': str(_('Save all'))\n\t}\n\n\tchoice = Menu(\n\t\t_('Choose which configuration to save'),\n\t\tlist(options.values()),\n\t\tsort=False,\n\t\tskip=True,\n\t\tpreview_size=0.75,\n\t\tpreview_command=preview\n\t).run()\n\n\tif choice.type_ == MenuSelectionType.Skip:\n\t\treturn\n\n\twhile True:\n\t\tpath = input(_('Enter a directory for the configuration(s) to be saved: ')).strip(' ')\n\t\tdest_path = Path(path)\n\t\tif dest_path.exists() and dest_path.is_dir():\n\t\t\tbreak\n\t\tlog(_('Not a valid directory: {}').format(dest_path), fg='red')\n\n\tif options['user_config'] == choice.value:\n\t\tconfig_output.save_user_config(dest_path)\n\telif options['user_creds'] == choice.value:\n\t\tconfig_output.save_user_creds(dest_path)\n\telif options['disk_layout'] == choice.value:\n\t\tconfig_output.save_disk_layout(dest_path)\n\telif options['all'] == choice.value:\n\t\tconfig_output.save_user_config(dest_path)\n\t\tconfig_output.save_user_creds(dest_path)\n\t\tconfig_output.save_disk_layout(dest_path)\n", "path": "archinstall/lib/user_interaction/save_conf.py"}], "after_files": [{"content": "from __future__ import annotations\n\nimport 
logging\n\nfrom pathlib import Path\nfrom typing import Any, Dict, TYPE_CHECKING\n\nfrom ..configuration import ConfigurationOutput\nfrom ..general import SysCommand\nfrom ..menu import Menu\nfrom ..menu.menu import MenuSelectionType\nfrom ..output import log\n\nif TYPE_CHECKING:\n\t_: Any\n\n\ndef save_config(config: Dict):\n\n\tdef preview(selection: str):\n\t\tif options['user_config'] == selection:\n\t\t\tjson_config = config_output.user_config_to_json()\n\t\t\treturn f'{config_output.user_configuration_file}\\n{json_config}'\n\t\telif options['user_creds'] == selection:\n\t\t\tif json_config := config_output.user_credentials_to_json():\n\t\t\t\treturn f'{config_output.user_credentials_file}\\n{json_config}'\n\t\t\telse:\n\t\t\t\treturn str(_('No configuration'))\n\t\telif options['disk_layout'] == selection:\n\t\t\tif json_config := config_output.disk_layout_to_json():\n\t\t\t\treturn f'{config_output.disk_layout_file}\\n{json_config}'\n\t\t\telse:\n\t\t\t\treturn str(_('No configuration'))\n\t\telif options['all'] == selection:\n\t\t\toutput = f'{config_output.user_configuration_file}\\n'\n\t\t\tif json_config := config_output.user_credentials_to_json():\n\t\t\t\toutput += f'{config_output.user_credentials_file}\\n'\n\t\t\tif json_config := config_output.disk_layout_to_json():\n\t\t\t\toutput += f'{config_output.disk_layout_file}\\n'\n\t\t\treturn output[:-1]\n\t\treturn None\n\n\tconfig_output = ConfigurationOutput(config)\n\n\toptions = {\n\t\t'user_config': str(_('Save user configuration')),\n\t\t'user_creds': str(_('Save user credentials')),\n\t\t'disk_layout': str(_('Save disk layout')),\n\t\t'all': str(_('Save all'))\n\t}\n\n\tchoice = Menu(\n\t\t_('Choose which configuration to save'),\n\t\tlist(options.values()),\n\t\tsort=False,\n\t\tskip=True,\n\t\tpreview_size=0.75,\n\t\tpreview_command=preview\n\t).run()\n\n\tif choice.type_ == MenuSelectionType.Skip:\n\t\treturn\n\n\tdirs_to_exclude = [\n\t\t'/bin',\n\t\t'/dev',\n\t\t'/lib',\n\t\t'/lib64',\n\t\t'/lost+found',\n\t\t'/opt',\n\t\t'/proc',\n\t\t'/run',\n\t\t'/sbin',\n\t\t'/srv',\n\t\t'/sys',\n\t\t'/usr',\n\t\t'/var',\n\t]\n\tlog(\n\t\t_('When picking a directory to save configuration files to,'\n\t\t' by default we will ignore the following folders: ') + ','.join(dirs_to_exclude),\n\t\tlevel=logging.DEBUG\n\t)\n\n\tlog(_('Finding possible directories to save configuration files ...'), level=logging.INFO)\n\t\n\tfind_exclude = '-path ' + ' -prune -o -path '.join(dirs_to_exclude) + ' -prune '\n\tfile_picker_command = f'find / {find_exclude} -o -type d -print0'\n\tpossible_save_dirs = list(\n\t\tfilter(None, SysCommand(file_picker_command).decode().split('\\x00'))\n\t)\n\n\tselection = Menu(\n\t\t_('Select directory (or directories) for saving configuration files'),\n\t\tpossible_save_dirs,\n\t\tmulti=True,\n\t\tskip=True,\n\t\tallow_reset=False,\n\t).run()\n\n\tmatch selection.type_:\n\t\tcase MenuSelectionType.Skip:\n\t\t\treturn\n\t\tcase _:\n\t\t\tsave_dirs = selection.value\n\n\tprompt = _('Do you want to save {} configuration file(s) in the following locations?\\n\\n{}').format(\n\t\tlist(options.keys())[list(options.values()).index(choice.value)],\n\t\tsave_dirs\n\t)\n\tsave_confirmation = Menu(prompt, Menu.yes_no(), default_option=Menu.yes()).run()\n\tif save_confirmation == Menu.no():\n\t\treturn\n\t\n\tlog(\n\t\t_('Saving {} configuration files to {}').format(\n\t\t\tlist(options.keys())[list(options.values()).index(choice.value)],\n\t\t\tsave_dirs\n\t\t),\n\t\tlevel=logging.DEBUG\n\t)\n\t\n\tif save_dirs is not 
None:\n\t\tfor save_dir_str in save_dirs:\n\t\t\tsave_dir = Path(save_dir_str)\n\t\t\tif options['user_config'] == choice.value:\n\t\t\t\tconfig_output.save_user_config(save_dir)\n\t\t\telif options['user_creds'] == choice.value:\n\t\t\t\tconfig_output.save_user_creds(save_dir)\n\t\t\telif options['disk_layout'] == choice.value:\n\t\t\t\tconfig_output.save_disk_layout(save_dir)\n\t\t\telif options['all'] == choice.value:\n\t\t\t\tconfig_output.save_user_config(save_dir)\n\t\t\t\tconfig_output.save_user_creds(save_dir)\n\t\t\t\tconfig_output.save_disk_layout(save_dir)\n", "path": "archinstall/lib/user_interaction/save_conf.py"}]} | 1,116 | 923 |
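The golden diff above shells out to `find / -path ... -prune -o -type d -print0` to enumerate candidate save directories while skipping system paths. A minimal pure-Python sketch of the same pruning idea, with the exclusion list taken from the diff; the helper name and everything else here are assumptions for illustration, not archinstall API:

```python
from pathlib import Path

DIRS_TO_EXCLUDE = {
    "/bin", "/dev", "/lib", "/lib64", "/lost+found", "/opt",
    "/proc", "/run", "/sbin", "/srv", "/sys", "/usr", "/var",
}

def candidate_save_dirs(root: str = "/") -> list[str]:
    """Walk the tree, pruning excluded subtrees like find's -prune."""
    found: list[str] = []
    stack = [Path(root)]
    while stack:
        current = stack.pop()
        if str(current) in DIRS_TO_EXCLUDE:
            continue  # skip the whole subtree
        found.append(str(current))
        try:
            stack.extend(
                child for child in current.iterdir()
                if child.is_dir() and not child.is_symlink()
            )
        except OSError:
            continue  # unreadable directory: ignore it and keep walking
    return found
```

Pruning whole subtrees (rather than filtering individual results) is what keeps the scan fast enough to run interactively, since `/proc`, `/sys`, and `/usr` would otherwise dominate the walk.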
gh_patches_debug_14642 | rasdani/github-patches | git_diff | mitmproxy__mitmproxy-6819 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Mitmdump does not exit
#### Problem Description
Mitmdump does not exit automatically when executing:
`mitmdump -nr infile -w outfile`
Until version 10.0.0 it was working properly, and when running mitmdump with "-n" the process finished automatically once the outfile was written.
#### Steps to reproduce the behavior:
1. Generate a mitm file
2. Execute mitmdump -nr infile -w outfile
#### System Information
Mitmproxy: 10.3.0 binary
Python: 3.12.3
OpenSSL: OpenSSL 3.2.1 30 Jan 2024
Platform: Linux-6.5.0-27-generic-x86_64-with-glibc2.35
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mitmproxy/addons/readfile.py`
Content:
```
1 import asyncio
2 import logging
3 import os.path
4 import sys
5 from typing import BinaryIO
6 from typing import Optional
7
8 from mitmproxy import command
9 from mitmproxy import ctx
10 from mitmproxy import exceptions
11 from mitmproxy import flowfilter
12 from mitmproxy import io
13
14 logger = logging.getLogger(__name__)
15
16
17 class ReadFile:
18 """
19 An addon that handles reading from file on startup.
20 """
21
22 def __init__(self):
23 self.filter = None
24 self._read_task: asyncio.Task | None = None
25
26 def load(self, loader):
27 loader.add_option("rfile", Optional[str], None, "Read flows from file.")
28 loader.add_option(
29 "readfile_filter", Optional[str], None, "Read only matching flows."
30 )
31
32 def configure(self, updated):
33 if "readfile_filter" in updated:
34 if ctx.options.readfile_filter:
35 try:
36 self.filter = flowfilter.parse(ctx.options.readfile_filter)
37 except ValueError as e:
38 raise exceptions.OptionsError(str(e)) from e
39 else:
40 self.filter = None
41
42 async def load_flows(self, fo: BinaryIO) -> int:
43 cnt = 0
44 freader = io.FlowReader(fo)
45 try:
46 for flow in freader.stream():
47 if self.filter and not self.filter(flow):
48 continue
49 await ctx.master.load_flow(flow)
50 cnt += 1
51 except (OSError, exceptions.FlowReadException) as e:
52 if cnt:
53 logging.warning("Flow file corrupted - loaded %i flows." % cnt)
54 else:
55 logging.error("Flow file corrupted.")
56 raise exceptions.FlowReadException(str(e)) from e
57 else:
58 return cnt
59
60 async def load_flows_from_path(self, path: str) -> int:
61 path = os.path.expanduser(path)
62 try:
63 with open(path, "rb") as f:
64 return await self.load_flows(f)
65 except OSError as e:
66 logging.error(f"Cannot load flows: {e}")
67 raise exceptions.FlowReadException(str(e)) from e
68
69 async def doread(self, rfile: str) -> None:
70 try:
71 await self.load_flows_from_path(rfile)
72 except exceptions.FlowReadException as e:
73 logger.exception(f"Failed to read {ctx.options.rfile}: {e}")
74 finally:
75 self._read_task = None
76
77 def running(self):
78 if ctx.options.rfile:
79 self._read_task = asyncio.create_task(self.doread(ctx.options.rfile))
80
81 @command.command("readfile.reading")
82 def reading(self) -> bool:
83 return bool(self._read_task)
84
85
86 class ReadFileStdin(ReadFile):
87 """Support the special case of "-" for reading from stdin"""
88
89 async def load_flows_from_path(self, path: str) -> int:
90 if path == "-": # pragma: no cover
91 # Need to think about how to test this. This function is scheduled
92 # onto the event loop, where a sys.stdin mock has no effect.
93 return await self.load_flows(sys.stdin.buffer)
94 else:
95 return await super().load_flows_from_path(path)
96
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mitmproxy/addons/readfile.py b/mitmproxy/addons/readfile.py
--- a/mitmproxy/addons/readfile.py
+++ b/mitmproxy/addons/readfile.py
@@ -71,8 +71,6 @@
await self.load_flows_from_path(rfile)
except exceptions.FlowReadException as e:
logger.exception(f"Failed to read {ctx.options.rfile}: {e}")
- finally:
- self._read_task = None
def running(self):
if ctx.options.rfile:
@@ -80,7 +78,7 @@
@command.command("readfile.reading")
def reading(self) -> bool:
- return bool(self._read_task)
+ return bool(self._read_task and not self._read_task.done())
class ReadFileStdin(ReadFile):
| {"golden_diff": "diff --git a/mitmproxy/addons/readfile.py b/mitmproxy/addons/readfile.py\n--- a/mitmproxy/addons/readfile.py\n+++ b/mitmproxy/addons/readfile.py\n@@ -71,8 +71,6 @@\n await self.load_flows_from_path(rfile)\n except exceptions.FlowReadException as e:\n logger.exception(f\"Failed to read {ctx.options.rfile}: {e}\")\n- finally:\n- self._read_task = None\n \n def running(self):\n if ctx.options.rfile:\n@@ -80,7 +78,7 @@\n \n @command.command(\"readfile.reading\")\n def reading(self) -> bool:\n- return bool(self._read_task)\n+ return bool(self._read_task and not self._read_task.done())\n \n \n class ReadFileStdin(ReadFile):\n", "issue": "Mitmdump does not exit\n#### Problem Description\r\nMitmdump does not exit automatically when executing:\r\n\r\n`mitmdump -nr infile -w outfile\r\n`\r\n\r\nUntil version 10.0.0 it was working properly and when running mitmdump with \"-n\" the process finished automatically once the outfile was written.\r\n\r\n#### Steps to reproduce the behavior:\r\n1. Generate a mitm file\r\n2. Execute mitmdump -nr infile -w outfile\r\n\r\n#### System Information\r\nMitmproxy: 10.3.0 binary\r\nPython: 3.12.3\r\nOpenSSL: OpenSSL 3.2.1 30 Jan 2024\r\nPlatform: Linux-6.5.0-27-generic-x86_64-with-glibc2.35\r\n\n", "before_files": [{"content": "import asyncio\nimport logging\nimport os.path\nimport sys\nfrom typing import BinaryIO\nfrom typing import Optional\n\nfrom mitmproxy import command\nfrom mitmproxy import ctx\nfrom mitmproxy import exceptions\nfrom mitmproxy import flowfilter\nfrom mitmproxy import io\n\nlogger = logging.getLogger(__name__)\n\n\nclass ReadFile:\n \"\"\"\n An addon that handles reading from file on startup.\n \"\"\"\n\n def __init__(self):\n self.filter = None\n self._read_task: asyncio.Task | None = None\n\n def load(self, loader):\n loader.add_option(\"rfile\", Optional[str], None, \"Read flows from file.\")\n loader.add_option(\n \"readfile_filter\", Optional[str], None, \"Read only matching flows.\"\n )\n\n def configure(self, updated):\n if \"readfile_filter\" in updated:\n if ctx.options.readfile_filter:\n try:\n self.filter = flowfilter.parse(ctx.options.readfile_filter)\n except ValueError as e:\n raise exceptions.OptionsError(str(e)) from e\n else:\n self.filter = None\n\n async def load_flows(self, fo: BinaryIO) -> int:\n cnt = 0\n freader = io.FlowReader(fo)\n try:\n for flow in freader.stream():\n if self.filter and not self.filter(flow):\n continue\n await ctx.master.load_flow(flow)\n cnt += 1\n except (OSError, exceptions.FlowReadException) as e:\n if cnt:\n logging.warning(\"Flow file corrupted - loaded %i flows.\" % cnt)\n else:\n logging.error(\"Flow file corrupted.\")\n raise exceptions.FlowReadException(str(e)) from e\n else:\n return cnt\n\n async def load_flows_from_path(self, path: str) -> int:\n path = os.path.expanduser(path)\n try:\n with open(path, \"rb\") as f:\n return await self.load_flows(f)\n except OSError as e:\n logging.error(f\"Cannot load flows: {e}\")\n raise exceptions.FlowReadException(str(e)) from e\n\n async def doread(self, rfile: str) -> None:\n try:\n await self.load_flows_from_path(rfile)\n except exceptions.FlowReadException as e:\n logger.exception(f\"Failed to read {ctx.options.rfile}: {e}\")\n finally:\n self._read_task = None\n\n def running(self):\n if ctx.options.rfile:\n self._read_task = asyncio.create_task(self.doread(ctx.options.rfile))\n\n @command.command(\"readfile.reading\")\n def reading(self) -> bool:\n return bool(self._read_task)\n\n\nclass ReadFileStdin(ReadFile):\n 
\"\"\"Support the special case of \"-\" for reading from stdin\"\"\"\n\n async def load_flows_from_path(self, path: str) -> int:\n if path == \"-\": # pragma: no cover\n # Need to think about how to test this. This function is scheduled\n # onto the event loop, where a sys.stdin mock has no effect.\n return await self.load_flows(sys.stdin.buffer)\n else:\n return await super().load_flows_from_path(path)\n", "path": "mitmproxy/addons/readfile.py"}], "after_files": [{"content": "import asyncio\nimport logging\nimport os.path\nimport sys\nfrom typing import BinaryIO\nfrom typing import Optional\n\nfrom mitmproxy import command\nfrom mitmproxy import ctx\nfrom mitmproxy import exceptions\nfrom mitmproxy import flowfilter\nfrom mitmproxy import io\n\nlogger = logging.getLogger(__name__)\n\n\nclass ReadFile:\n \"\"\"\n An addon that handles reading from file on startup.\n \"\"\"\n\n def __init__(self):\n self.filter = None\n self._read_task: asyncio.Task | None = None\n\n def load(self, loader):\n loader.add_option(\"rfile\", Optional[str], None, \"Read flows from file.\")\n loader.add_option(\n \"readfile_filter\", Optional[str], None, \"Read only matching flows.\"\n )\n\n def configure(self, updated):\n if \"readfile_filter\" in updated:\n if ctx.options.readfile_filter:\n try:\n self.filter = flowfilter.parse(ctx.options.readfile_filter)\n except ValueError as e:\n raise exceptions.OptionsError(str(e)) from e\n else:\n self.filter = None\n\n async def load_flows(self, fo: BinaryIO) -> int:\n cnt = 0\n freader = io.FlowReader(fo)\n try:\n for flow in freader.stream():\n if self.filter and not self.filter(flow):\n continue\n await ctx.master.load_flow(flow)\n cnt += 1\n except (OSError, exceptions.FlowReadException) as e:\n if cnt:\n logging.warning(\"Flow file corrupted - loaded %i flows.\" % cnt)\n else:\n logging.error(\"Flow file corrupted.\")\n raise exceptions.FlowReadException(str(e)) from e\n else:\n return cnt\n\n async def load_flows_from_path(self, path: str) -> int:\n path = os.path.expanduser(path)\n try:\n with open(path, \"rb\") as f:\n return await self.load_flows(f)\n except OSError as e:\n logging.error(f\"Cannot load flows: {e}\")\n raise exceptions.FlowReadException(str(e)) from e\n\n async def doread(self, rfile: str) -> None:\n try:\n await self.load_flows_from_path(rfile)\n except exceptions.FlowReadException as e:\n logger.exception(f\"Failed to read {ctx.options.rfile}: {e}\")\n\n def running(self):\n if ctx.options.rfile:\n self._read_task = asyncio.create_task(self.doread(ctx.options.rfile))\n\n @command.command(\"readfile.reading\")\n def reading(self) -> bool:\n return bool(self._read_task and not self._read_task.done())\n\n\nclass ReadFileStdin(ReadFile):\n \"\"\"Support the special case of \"-\" for reading from stdin\"\"\"\n\n async def load_flows_from_path(self, path: str) -> int:\n if path == \"-\": # pragma: no cover\n # Need to think about how to test this. This function is scheduled\n # onto the event loop, where a sys.stdin mock has no effect.\n return await self.load_flows(sys.stdin.buffer)\n else:\n return await super().load_flows_from_path(path)\n", "path": "mitmproxy/addons/readfile.py"}]} | 1,308 | 184 |
gh_patches_debug_33189 | rasdani/github-patches | git_diff | arviz-devs__arviz-1221 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Inconsistent capitalization in `plot_hdi` docstring
**Describe the bug**
Docs have inconsistent capitalization

**To Reproduce**
https://arviz-devs.github.io/arviz/generated/arviz.plot_hdi.html#arviz.plot_hdi
**Expected behavior**
Consistency across arg docstrings
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `arviz/plots/hdiplot.py`
Content:
```
1 """Plot highest density intervals for regression data."""
2 import warnings
3
4 import numpy as np
5 from scipy.interpolate import griddata
6 from scipy.signal import savgol_filter
7
8 from ..stats import hdi
9 from .plot_utils import get_plotting_function, matplotlib_kwarg_dealiaser
10 from ..rcparams import rcParams
11 from ..utils import credible_interval_warning
12
13
14 def plot_hdi(
15 x,
16 y,
17 hdi_prob=None,
18 color="C1",
19 circular=False,
20 smooth=True,
21 smooth_kwargs=None,
22 fill_kwargs=None,
23 plot_kwargs=None,
24 ax=None,
25 backend=None,
26 backend_kwargs=None,
27 show=None,
28 credible_interval=None,
29 ):
30 r"""
31 Plot hdi intervals for regression data.
32
33 Parameters
34 ----------
35 x : array-like
36 Values to plot
37 y : array-like
38 values from which to compute the hdi. Assumed shape (chain, draw, \*shape).
39 hdi_prob : float, optional
40 Probability for the highest density interval. Defaults to 0.94.
41 color : str
42 Color used for the limits of the hdi and fill. Should be a valid matplotlib color
43 circular : bool, optional
44 Whether to compute the hdi taking into account `x` is a circular variable
45 (in the range [-np.pi, np.pi]) or not. Defaults to False (i.e non-circular variables).
46 smooth : boolean
47 If True the result will be smoothed by first computing a linear interpolation of the data
48 over a regular grid and then applying the Savitzky-Golay filter to the interpolated data.
49 Defaults to True.
50 smooth_kwargs : dict, optional
51 Additional keywords modifying the Savitzky-Golay filter. See Scipy's documentation for
52 details
53 fill_kwargs : dict
54 Keywords passed to `fill_between` (use fill_kwargs={'alpha': 0} to disable fill).
55 plot_kwargs : dict
56 Keywords passed to hdi limits
57 ax: axes, optional
58 Matplotlib axes or bokeh figures.
59 backend: str, optional
60 Select plotting backend {"matplotlib","bokeh"}. Default "matplotlib".
61 backend_kwargs: bool, optional
62 These are kwargs specific to the backend being used. For additional documentation
63 check the plotting method of the backend.
64 show : bool, optional
65 Call backend show function.
66 credible_interval: float, optional
67 deprecated: Please see hdi_prob
68
69 Returns
70 -------
71 axes : matplotlib axes or bokeh figures
72 """
73 if credible_interval:
74 hdi_prob = credible_interval_warning(credible_interval, hdi_prob)
75
76 plot_kwargs = matplotlib_kwarg_dealiaser(plot_kwargs, "plot")
77 plot_kwargs.setdefault("color", color)
78 plot_kwargs.setdefault("alpha", 0)
79
80 fill_kwargs = matplotlib_kwarg_dealiaser(fill_kwargs, "hexbin")
81 fill_kwargs.setdefault("color", color)
82 fill_kwargs.setdefault("alpha", 0.5)
83
84 x = np.asarray(x)
85 y = np.asarray(y)
86
87 x_shape = x.shape
88 y_shape = y.shape
89 if y_shape[-len(x_shape) :] != x_shape:
90 msg = "Dimension mismatch for x: {} and y: {}."
91 msg += " y-dimensions should be (chain, draw, *x.shape) or"
92 msg += " (draw, *x.shape)"
93 raise TypeError(msg.format(x_shape, y_shape))
94
95 if len(y_shape[: -len(x_shape)]) > 1:
96 new_shape = tuple([-1] + list(x_shape))
97 y = y.reshape(new_shape)
98
99 if hdi_prob is None:
100 hdi_prob = rcParams["stats.hdi_prob"]
101 else:
102 if not 1 >= hdi_prob > 0:
103 raise ValueError("The value of hdi_prob should be in the interval (0, 1]")
104
105 hdi_ = hdi(y, hdi_prob=hdi_prob, circular=circular, multimodal=False)
106
107 if smooth:
108 if smooth_kwargs is None:
109 smooth_kwargs = {}
110 smooth_kwargs.setdefault("window_length", 55)
111 smooth_kwargs.setdefault("polyorder", 2)
112 x_data = np.linspace(x.min(), x.max(), 200)
113 x_data[0] = (x_data[0] + x_data[1]) / 2
114 hdi_interp = griddata(x, hdi_, x_data)
115 y_data = savgol_filter(hdi_interp, axis=0, **smooth_kwargs)
116 else:
117 idx = np.argsort(x)
118 x_data = x[idx]
119 y_data = hdi_[idx]
120
121 hdiplot_kwargs = dict(
122 ax=ax,
123 x_data=x_data,
124 y_data=y_data,
125 plot_kwargs=plot_kwargs,
126 fill_kwargs=fill_kwargs,
127 backend_kwargs=backend_kwargs,
128 show=show,
129 )
130
131 if backend is None:
132 backend = rcParams["plot.backend"]
133 backend = backend.lower()
134
135 # TODO: Add backend kwargs
136 plot = get_plotting_function("plot_hdi", "hdiplot", backend)
137 ax = plot(**hdiplot_kwargs)
138 return ax
139
140
141 def plot_hpd(*args, **kwargs): # noqa: D103
142 warnings.warn("plot_hdi has been deprecated, please use plot_hdi", DeprecationWarning)
143 return plot_hdi(*args, **kwargs)
144
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/arviz/plots/hdiplot.py b/arviz/plots/hdiplot.py
--- a/arviz/plots/hdiplot.py
+++ b/arviz/plots/hdiplot.py
@@ -33,13 +33,13 @@
Parameters
----------
x : array-like
- Values to plot
+ Values to plot.
y : array-like
- values from which to compute the hdi. Assumed shape (chain, draw, \*shape).
+ Values from which to compute the hdi. Assumed shape (chain, draw, \*shape).
hdi_prob : float, optional
Probability for the highest density interval. Defaults to 0.94.
color : str
- Color used for the limits of the hdi and fill. Should be a valid matplotlib color
+ Color used for the limits of the hdi and fill. Should be a valid matplotlib color.
circular : bool, optional
Whether to compute the hdi taking into account `x` is a circular variable
(in the range [-np.pi, np.pi]) or not. Defaults to False (i.e non-circular variables).
@@ -49,11 +49,11 @@
Defaults to True.
smooth_kwargs : dict, optional
Additional keywords modifying the Savitzky-Golay filter. See Scipy's documentation for
- details
+ details.
fill_kwargs : dict
Keywords passed to `fill_between` (use fill_kwargs={'alpha': 0} to disable fill).
plot_kwargs : dict
- Keywords passed to hdi limits
+ Keywords passed to hdi limits.
ax: axes, optional
Matplotlib axes or bokeh figures.
backend: str, optional
@@ -64,7 +64,7 @@
show : bool, optional
Call backend show function.
credible_interval: float, optional
- deprecated: Please see hdi_prob
+ Deprecated: Please see hdi_prob
Returns
-------
| {"golden_diff": "diff --git a/arviz/plots/hdiplot.py b/arviz/plots/hdiplot.py\n--- a/arviz/plots/hdiplot.py\n+++ b/arviz/plots/hdiplot.py\n@@ -33,13 +33,13 @@\n Parameters\n ----------\n x : array-like\n- Values to plot\n+ Values to plot.\n y : array-like\n- values from which to compute the hdi. Assumed shape (chain, draw, \\*shape).\n+ Values from which to compute the hdi. Assumed shape (chain, draw, \\*shape).\n hdi_prob : float, optional\n Probability for the highest density interval. Defaults to 0.94.\n color : str\n- Color used for the limits of the hdi and fill. Should be a valid matplotlib color\n+ Color used for the limits of the hdi and fill. Should be a valid matplotlib color.\n circular : bool, optional\n Whether to compute the hdi taking into account `x` is a circular variable\n (in the range [-np.pi, np.pi]) or not. Defaults to False (i.e non-circular variables).\n@@ -49,11 +49,11 @@\n Defaults to True.\n smooth_kwargs : dict, optional\n Additional keywords modifying the Savitzky-Golay filter. See Scipy's documentation for\n- details\n+ details.\n fill_kwargs : dict\n Keywords passed to `fill_between` (use fill_kwargs={'alpha': 0} to disable fill).\n plot_kwargs : dict\n- Keywords passed to hdi limits\n+ Keywords passed to hdi limits.\n ax: axes, optional\n Matplotlib axes or bokeh figures.\n backend: str, optional\n@@ -64,7 +64,7 @@\n show : bool, optional\n Call backend show function.\n credible_interval: float, optional\n- deprecated: Please see hdi_prob\n+ Deprecated: Please see hdi_prob\n \n Returns\n -------\n", "issue": "Inconsistent capitalization in `plot_hdi` docstring\n**Describe the bug**\r\nDocs have inconsistent capitalization\r\n\r\n\r\n\r\n**To Reproduce**\r\nhttps://arviz-devs.github.io/arviz/generated/arviz.plot_hdi.html#arviz.plot_hdi\r\n\r\n**Expected behavior**\r\nConsistency across arg docstrings\r\n\n", "before_files": [{"content": "\"\"\"Plot highest density intervals for regression data.\"\"\"\nimport warnings\n\nimport numpy as np\nfrom scipy.interpolate import griddata\nfrom scipy.signal import savgol_filter\n\nfrom ..stats import hdi\nfrom .plot_utils import get_plotting_function, matplotlib_kwarg_dealiaser\nfrom ..rcparams import rcParams\nfrom ..utils import credible_interval_warning\n\n\ndef plot_hdi(\n x,\n y,\n hdi_prob=None,\n color=\"C1\",\n circular=False,\n smooth=True,\n smooth_kwargs=None,\n fill_kwargs=None,\n plot_kwargs=None,\n ax=None,\n backend=None,\n backend_kwargs=None,\n show=None,\n credible_interval=None,\n):\n r\"\"\"\n Plot hdi intervals for regression data.\n\n Parameters\n ----------\n x : array-like\n Values to plot\n y : array-like\n values from which to compute the hdi. Assumed shape (chain, draw, \\*shape).\n hdi_prob : float, optional\n Probability for the highest density interval. Defaults to 0.94.\n color : str\n Color used for the limits of the hdi and fill. Should be a valid matplotlib color\n circular : bool, optional\n Whether to compute the hdi taking into account `x` is a circular variable\n (in the range [-np.pi, np.pi]) or not. Defaults to False (i.e non-circular variables).\n smooth : boolean\n If True the result will be smoothed by first computing a linear interpolation of the data\n over a regular grid and then applying the Savitzky-Golay filter to the interpolated data.\n Defaults to True.\n smooth_kwargs : dict, optional\n Additional keywords modifying the Savitzky-Golay filter. 
See Scipy's documentation for\n details\n fill_kwargs : dict\n Keywords passed to `fill_between` (use fill_kwargs={'alpha': 0} to disable fill).\n plot_kwargs : dict\n Keywords passed to hdi limits\n ax: axes, optional\n Matplotlib axes or bokeh figures.\n backend: str, optional\n Select plotting backend {\"matplotlib\",\"bokeh\"}. Default \"matplotlib\".\n backend_kwargs: bool, optional\n These are kwargs specific to the backend being used. For additional documentation\n check the plotting method of the backend.\n show : bool, optional\n Call backend show function.\n credible_interval: float, optional\n deprecated: Please see hdi_prob\n\n Returns\n -------\n axes : matplotlib axes or bokeh figures\n \"\"\"\n if credible_interval:\n hdi_prob = credible_interval_warning(credible_interval, hdi_prob)\n\n plot_kwargs = matplotlib_kwarg_dealiaser(plot_kwargs, \"plot\")\n plot_kwargs.setdefault(\"color\", color)\n plot_kwargs.setdefault(\"alpha\", 0)\n\n fill_kwargs = matplotlib_kwarg_dealiaser(fill_kwargs, \"hexbin\")\n fill_kwargs.setdefault(\"color\", color)\n fill_kwargs.setdefault(\"alpha\", 0.5)\n\n x = np.asarray(x)\n y = np.asarray(y)\n\n x_shape = x.shape\n y_shape = y.shape\n if y_shape[-len(x_shape) :] != x_shape:\n msg = \"Dimension mismatch for x: {} and y: {}.\"\n msg += \" y-dimensions should be (chain, draw, *x.shape) or\"\n msg += \" (draw, *x.shape)\"\n raise TypeError(msg.format(x_shape, y_shape))\n\n if len(y_shape[: -len(x_shape)]) > 1:\n new_shape = tuple([-1] + list(x_shape))\n y = y.reshape(new_shape)\n\n if hdi_prob is None:\n hdi_prob = rcParams[\"stats.hdi_prob\"]\n else:\n if not 1 >= hdi_prob > 0:\n raise ValueError(\"The value of hdi_prob should be in the interval (0, 1]\")\n\n hdi_ = hdi(y, hdi_prob=hdi_prob, circular=circular, multimodal=False)\n\n if smooth:\n if smooth_kwargs is None:\n smooth_kwargs = {}\n smooth_kwargs.setdefault(\"window_length\", 55)\n smooth_kwargs.setdefault(\"polyorder\", 2)\n x_data = np.linspace(x.min(), x.max(), 200)\n x_data[0] = (x_data[0] + x_data[1]) / 2\n hdi_interp = griddata(x, hdi_, x_data)\n y_data = savgol_filter(hdi_interp, axis=0, **smooth_kwargs)\n else:\n idx = np.argsort(x)\n x_data = x[idx]\n y_data = hdi_[idx]\n\n hdiplot_kwargs = dict(\n ax=ax,\n x_data=x_data,\n y_data=y_data,\n plot_kwargs=plot_kwargs,\n fill_kwargs=fill_kwargs,\n backend_kwargs=backend_kwargs,\n show=show,\n )\n\n if backend is None:\n backend = rcParams[\"plot.backend\"]\n backend = backend.lower()\n\n # TODO: Add backend kwargs\n plot = get_plotting_function(\"plot_hdi\", \"hdiplot\", backend)\n ax = plot(**hdiplot_kwargs)\n return ax\n\n\ndef plot_hpd(*args, **kwargs): # noqa: D103\n warnings.warn(\"plot_hdi has been deprecated, please use plot_hdi\", DeprecationWarning)\n return plot_hdi(*args, **kwargs)\n", "path": "arviz/plots/hdiplot.py"}], "after_files": [{"content": "\"\"\"Plot highest density intervals for regression data.\"\"\"\nimport warnings\n\nimport numpy as np\nfrom scipy.interpolate import griddata\nfrom scipy.signal import savgol_filter\n\nfrom ..stats import hdi\nfrom .plot_utils import get_plotting_function, matplotlib_kwarg_dealiaser\nfrom ..rcparams import rcParams\nfrom ..utils import credible_interval_warning\n\n\ndef plot_hdi(\n x,\n y,\n hdi_prob=None,\n color=\"C1\",\n circular=False,\n smooth=True,\n smooth_kwargs=None,\n fill_kwargs=None,\n plot_kwargs=None,\n ax=None,\n backend=None,\n backend_kwargs=None,\n show=None,\n credible_interval=None,\n):\n r\"\"\"\n Plot hdi intervals for regression data.\n\n Parameters\n 
----------\n x : array-like\n Values to plot.\n y : array-like\n Values from which to compute the hdi. Assumed shape (chain, draw, \\*shape).\n hdi_prob : float, optional\n Probability for the highest density interval. Defaults to 0.94.\n color : str\n Color used for the limits of the hdi and fill. Should be a valid matplotlib color.\n circular : bool, optional\n Whether to compute the hdi taking into account `x` is a circular variable\n (in the range [-np.pi, np.pi]) or not. Defaults to False (i.e non-circular variables).\n smooth : boolean\n If True the result will be smoothed by first computing a linear interpolation of the data\n over a regular grid and then applying the Savitzky-Golay filter to the interpolated data.\n Defaults to True.\n smooth_kwargs : dict, optional\n Additional keywords modifying the Savitzky-Golay filter. See Scipy's documentation for\n details.\n fill_kwargs : dict\n Keywords passed to `fill_between` (use fill_kwargs={'alpha': 0} to disable fill).\n plot_kwargs : dict\n Keywords passed to hdi limits.\n ax: axes, optional\n Matplotlib axes or bokeh figures.\n backend: str, optional\n Select plotting backend {\"matplotlib\",\"bokeh\"}. Default \"matplotlib\".\n backend_kwargs: bool, optional\n These are kwargs specific to the backend being used. For additional documentation\n check the plotting method of the backend.\n show : bool, optional\n Call backend show function.\n credible_interval: float, optional\n Deprecated: Please see hdi_prob\n\n Returns\n -------\n axes : matplotlib axes or bokeh figures\n \"\"\"\n if credible_interval:\n hdi_prob = credible_interval_warning(credible_interval, hdi_prob)\n\n plot_kwargs = matplotlib_kwarg_dealiaser(plot_kwargs, \"plot\")\n plot_kwargs.setdefault(\"color\", color)\n plot_kwargs.setdefault(\"alpha\", 0)\n\n fill_kwargs = matplotlib_kwarg_dealiaser(fill_kwargs, \"hexbin\")\n fill_kwargs.setdefault(\"color\", color)\n fill_kwargs.setdefault(\"alpha\", 0.5)\n\n x = np.asarray(x)\n y = np.asarray(y)\n\n x_shape = x.shape\n y_shape = y.shape\n if y_shape[-len(x_shape) :] != x_shape:\n msg = \"Dimension mismatch for x: {} and y: {}.\"\n msg += \" y-dimensions should be (chain, draw, *x.shape) or\"\n msg += \" (draw, *x.shape)\"\n raise TypeError(msg.format(x_shape, y_shape))\n\n if len(y_shape[: -len(x_shape)]) > 1:\n new_shape = tuple([-1] + list(x_shape))\n y = y.reshape(new_shape)\n\n if hdi_prob is None:\n hdi_prob = rcParams[\"stats.hdi_prob\"]\n else:\n if not 1 >= hdi_prob > 0:\n raise ValueError(\"The value of hdi_prob should be in the interval (0, 1]\")\n\n hdi_ = hdi(y, hdi_prob=hdi_prob, circular=circular, multimodal=False)\n\n if smooth:\n if smooth_kwargs is None:\n smooth_kwargs = {}\n smooth_kwargs.setdefault(\"window_length\", 55)\n smooth_kwargs.setdefault(\"polyorder\", 2)\n x_data = np.linspace(x.min(), x.max(), 200)\n x_data[0] = (x_data[0] + x_data[1]) / 2\n hdi_interp = griddata(x, hdi_, x_data)\n y_data = savgol_filter(hdi_interp, axis=0, **smooth_kwargs)\n else:\n idx = np.argsort(x)\n x_data = x[idx]\n y_data = hdi_[idx]\n\n hdiplot_kwargs = dict(\n ax=ax,\n x_data=x_data,\n y_data=y_data,\n plot_kwargs=plot_kwargs,\n fill_kwargs=fill_kwargs,\n backend_kwargs=backend_kwargs,\n show=show,\n )\n\n if backend is None:\n backend = rcParams[\"plot.backend\"]\n backend = backend.lower()\n\n # TODO: Add backend kwargs\n plot = get_plotting_function(\"plot_hdi\", \"hdiplot\", backend)\n ax = plot(**hdiplot_kwargs)\n return ax\n\n\ndef plot_hpd(*args, **kwargs): # noqa: D103\n warnings.warn(\"plot_hdi has 
been deprecated, please use plot_hdi\", DeprecationWarning)\n return plot_hdi(*args, **kwargs)\n", "path": "arviz/plots/hdiplot.py"}]} | 1,888 | 444 |
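Beyond the capitalization fixes, the `plot_hdi` signature shown in this record is easiest to read next to a call. A minimal usage sketch with synthetic data, matching the documented requirement that `y` have shape `(chain, draw, *x.shape)`; it assumes arviz, numpy, and a matplotlib backend are installed:

```python
import arviz as az
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 100)
# y has shape (chain, draw, *x.shape) as the docstring requires
y = rng.normal(loc=2 + 3 * x, scale=0.5, size=(4, 500, 100))
ax = az.plot_hdi(x, y, hdi_prob=0.9, color="C1", smooth=True)
```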
gh_patches_debug_13929 | rasdani/github-patches | git_diff | bookwyrm-social__bookwyrm-1740

We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Books on multiple shelves duplicated in the "all books" view
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bookwyrm/views/shelf/shelf.py`
Content:
```
1 """ shelf views """
2 from collections import namedtuple
3
4 from django.db.models import OuterRef, Subquery, F
5 from django.contrib.auth.decorators import login_required
6 from django.core.paginator import Paginator
7 from django.http import HttpResponseBadRequest
8 from django.shortcuts import get_object_or_404, redirect
9 from django.template.response import TemplateResponse
10 from django.utils.decorators import method_decorator
11 from django.utils.translation import gettext_lazy as _
12 from django.views import View
13
14 from bookwyrm import forms, models
15 from bookwyrm.activitypub import ActivitypubResponse
16 from bookwyrm.settings import PAGE_LENGTH
17 from bookwyrm.views.helpers import is_api_request, get_user_from_username
18
19
20 # pylint: disable=no-self-use
21 class Shelf(View):
22 """shelf page"""
23
24 def get(self, request, username, shelf_identifier=None):
25 """display a shelf"""
26 user = get_user_from_username(request.user, username)
27
28 is_self = user == request.user
29
30 if is_self:
31 shelves = user.shelf_set.all()
32 else:
33 shelves = models.Shelf.privacy_filter(request.user).filter(user=user).all()
34
35 # get the shelf and make sure the logged in user should be able to see it
36 if shelf_identifier:
37 shelf = get_object_or_404(user.shelf_set, identifier=shelf_identifier)
38 shelf.raise_visible_to_user(request.user)
39 books = shelf.books
40 else:
41 # this is a constructed "all books" view, with a fake "shelf" obj
42 FakeShelf = namedtuple(
43 "Shelf", ("identifier", "name", "user", "books", "privacy")
44 )
45 books = (
46 models.Edition.viewer_aware_objects(request.user)
47 .filter(
48 # privacy is ensured because the shelves are already filtered above
49 shelfbook__shelf__in=shelves
50 )
51 .distinct()
52 )
53 shelf = FakeShelf("all", _("All books"), user, books, "public")
54
55 if is_api_request(request) and shelf_identifier:
56 return ActivitypubResponse(shelf.to_activity(**request.GET))
57
58 reviews = models.Review.objects
59 if not is_self:
60 reviews = models.Review.privacy_filter(request.user)
61
62 reviews = reviews.filter(
63 user=user,
64 rating__isnull=False,
65 book__id=OuterRef("id"),
66 deleted=False,
67 ).order_by("-published_date")
68
69 reading = models.ReadThrough.objects
70
71 reading = reading.filter(user=user, book__id=OuterRef("id")).order_by(
72 "start_date"
73 )
74
75 books = books.annotate(
76 rating=Subquery(reviews.values("rating")[:1]),
77 shelved_date=F("shelfbook__shelved_date"),
78 start_date=Subquery(reading.values("start_date")[:1]),
79 finish_date=Subquery(reading.values("finish_date")[:1]),
80 author=Subquery(
81 models.Book.objects.filter(id=OuterRef("id")).values("authors__name")[
82 :1
83 ]
84 ),
85 ).prefetch_related("authors")
86
87 books = sort_books(books, request.GET.get("sort"))
88
89 paginated = Paginator(
90 books,
91 PAGE_LENGTH,
92 )
93 page = paginated.get_page(request.GET.get("page"))
94 data = {
95 "user": user,
96 "is_self": is_self,
97 "shelves": shelves,
98 "shelf": shelf,
99 "books": page,
100 "edit_form": forms.ShelfForm(instance=shelf if shelf_identifier else None),
101 "create_form": forms.ShelfForm(),
102 "sort": request.GET.get("sort"),
103 "page_range": paginated.get_elided_page_range(
104 page.number, on_each_side=2, on_ends=1
105 ),
106 }
107
108 return TemplateResponse(request, "shelf/shelf.html", data)
109
110 @method_decorator(login_required, name="dispatch")
111 # pylint: disable=unused-argument
112 def post(self, request, username, shelf_identifier):
113 """edit a shelf"""
114 user = get_user_from_username(request.user, username)
115 shelf = get_object_or_404(user.shelf_set, identifier=shelf_identifier)
116 shelf.raise_not_editable(request.user)
117
118 # you can't change the name of the default shelves
119 if not shelf.editable and request.POST.get("name") != shelf.name:
120 return HttpResponseBadRequest()
121
122 form = forms.ShelfForm(request.POST, instance=shelf)
123 if not form.is_valid():
124 return redirect(shelf.local_path)
125 shelf = form.save()
126 return redirect(shelf.local_path)
127
128
129 def sort_books(books, sort):
130 """Books in shelf sorting"""
131 sort_fields = [
132 "title",
133 "author",
134 "shelved_date",
135 "start_date",
136 "finish_date",
137 "rating",
138 ]
139
140 if sort in sort_fields:
141 books = books.order_by(sort)
142 elif sort and sort[1:] in sort_fields:
143 books = books.order_by(F(sort[1:]).desc(nulls_last=True))
144 else:
145 books = books.order_by("-shelved_date")
146 return books
147
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
Golden diff:

diff --git a/bookwyrm/views/shelf/shelf.py b/bookwyrm/views/shelf/shelf.py
--- a/bookwyrm/views/shelf/shelf.py
+++ b/bookwyrm/views/shelf/shelf.py
@@ -72,9 +72,13 @@
"start_date"
)
+ if shelf_identifier:
+ books = books.annotate(shelved_date=F("shelfbook__shelved_date"))
+ else:
+ # sorting by shelved date will cause duplicates in the "all books" view
+ books = books.annotate(shelved_date=F("updated_date"))
books = books.annotate(
rating=Subquery(reviews.values("rating")[:1]),
- shelved_date=F("shelfbook__shelved_date"),
start_date=Subquery(reading.values("start_date")[:1]),
finish_date=Subquery(reading.values("finish_date")[:1]),
author=Subquery(
| {"golden_diff": "diff --git a/bookwyrm/views/shelf/shelf.py b/bookwyrm/views/shelf/shelf.py\n--- a/bookwyrm/views/shelf/shelf.py\n+++ b/bookwyrm/views/shelf/shelf.py\n@@ -72,9 +72,13 @@\n \"start_date\"\n )\n \n+ if shelf_identifier:\n+ books = books.annotate(shelved_date=F(\"shelfbook__shelved_date\"))\n+ else:\n+ # sorting by shelved date will cause duplicates in the \"all books\" view\n+ books = books.annotate(shelved_date=F(\"updated_date\"))\n books = books.annotate(\n rating=Subquery(reviews.values(\"rating\")[:1]),\n- shelved_date=F(\"shelfbook__shelved_date\"),\n start_date=Subquery(reading.values(\"start_date\")[:1]),\n finish_date=Subquery(reading.values(\"finish_date\")[:1]),\n author=Subquery(\n", "issue": "Books on multiple shelves duplicated in the \"all books\" view\n<img width=\"1172\" alt=\"Screen Shot 2021-12-02 at 11 47 40 AM\" src=\"https://user-images.githubusercontent.com/1807695/144495864-f94425a7-b11f-4df4-8ab7-c850d5a17996.png\">\r\n\n", "before_files": [{"content": "\"\"\" shelf views \"\"\"\nfrom collections import namedtuple\n\nfrom django.db.models import OuterRef, Subquery, F\nfrom django.contrib.auth.decorators import login_required\nfrom django.core.paginator import Paginator\nfrom django.http import HttpResponseBadRequest\nfrom django.shortcuts import get_object_or_404, redirect\nfrom django.template.response import TemplateResponse\nfrom django.utils.decorators import method_decorator\nfrom django.utils.translation import gettext_lazy as _\nfrom django.views import View\n\nfrom bookwyrm import forms, models\nfrom bookwyrm.activitypub import ActivitypubResponse\nfrom bookwyrm.settings import PAGE_LENGTH\nfrom bookwyrm.views.helpers import is_api_request, get_user_from_username\n\n\n# pylint: disable=no-self-use\nclass Shelf(View):\n \"\"\"shelf page\"\"\"\n\n def get(self, request, username, shelf_identifier=None):\n \"\"\"display a shelf\"\"\"\n user = get_user_from_username(request.user, username)\n\n is_self = user == request.user\n\n if is_self:\n shelves = user.shelf_set.all()\n else:\n shelves = models.Shelf.privacy_filter(request.user).filter(user=user).all()\n\n # get the shelf and make sure the logged in user should be able to see it\n if shelf_identifier:\n shelf = get_object_or_404(user.shelf_set, identifier=shelf_identifier)\n shelf.raise_visible_to_user(request.user)\n books = shelf.books\n else:\n # this is a constructed \"all books\" view, with a fake \"shelf\" obj\n FakeShelf = namedtuple(\n \"Shelf\", (\"identifier\", \"name\", \"user\", \"books\", \"privacy\")\n )\n books = (\n models.Edition.viewer_aware_objects(request.user)\n .filter(\n # privacy is ensured because the shelves are already filtered above\n shelfbook__shelf__in=shelves\n )\n .distinct()\n )\n shelf = FakeShelf(\"all\", _(\"All books\"), user, books, \"public\")\n\n if is_api_request(request) and shelf_identifier:\n return ActivitypubResponse(shelf.to_activity(**request.GET))\n\n reviews = models.Review.objects\n if not is_self:\n reviews = models.Review.privacy_filter(request.user)\n\n reviews = reviews.filter(\n user=user,\n rating__isnull=False,\n book__id=OuterRef(\"id\"),\n deleted=False,\n ).order_by(\"-published_date\")\n\n reading = models.ReadThrough.objects\n\n reading = reading.filter(user=user, book__id=OuterRef(\"id\")).order_by(\n \"start_date\"\n )\n\n books = books.annotate(\n rating=Subquery(reviews.values(\"rating\")[:1]),\n shelved_date=F(\"shelfbook__shelved_date\"),\n start_date=Subquery(reading.values(\"start_date\")[:1]),\n 
finish_date=Subquery(reading.values(\"finish_date\")[:1]),\n author=Subquery(\n models.Book.objects.filter(id=OuterRef(\"id\")).values(\"authors__name\")[\n :1\n ]\n ),\n ).prefetch_related(\"authors\")\n\n books = sort_books(books, request.GET.get(\"sort\"))\n\n paginated = Paginator(\n books,\n PAGE_LENGTH,\n )\n page = paginated.get_page(request.GET.get(\"page\"))\n data = {\n \"user\": user,\n \"is_self\": is_self,\n \"shelves\": shelves,\n \"shelf\": shelf,\n \"books\": page,\n \"edit_form\": forms.ShelfForm(instance=shelf if shelf_identifier else None),\n \"create_form\": forms.ShelfForm(),\n \"sort\": request.GET.get(\"sort\"),\n \"page_range\": paginated.get_elided_page_range(\n page.number, on_each_side=2, on_ends=1\n ),\n }\n\n return TemplateResponse(request, \"shelf/shelf.html\", data)\n\n @method_decorator(login_required, name=\"dispatch\")\n # pylint: disable=unused-argument\n def post(self, request, username, shelf_identifier):\n \"\"\"edit a shelf\"\"\"\n user = get_user_from_username(request.user, username)\n shelf = get_object_or_404(user.shelf_set, identifier=shelf_identifier)\n shelf.raise_not_editable(request.user)\n\n # you can't change the name of the default shelves\n if not shelf.editable and request.POST.get(\"name\") != shelf.name:\n return HttpResponseBadRequest()\n\n form = forms.ShelfForm(request.POST, instance=shelf)\n if not form.is_valid():\n return redirect(shelf.local_path)\n shelf = form.save()\n return redirect(shelf.local_path)\n\n\ndef sort_books(books, sort):\n \"\"\"Books in shelf sorting\"\"\"\n sort_fields = [\n \"title\",\n \"author\",\n \"shelved_date\",\n \"start_date\",\n \"finish_date\",\n \"rating\",\n ]\n\n if sort in sort_fields:\n books = books.order_by(sort)\n elif sort and sort[1:] in sort_fields:\n books = books.order_by(F(sort[1:]).desc(nulls_last=True))\n else:\n books = books.order_by(\"-shelved_date\")\n return books\n", "path": "bookwyrm/views/shelf/shelf.py"}], "after_files": [{"content": "\"\"\" shelf views \"\"\"\nfrom collections import namedtuple\n\nfrom django.db.models import OuterRef, Subquery, F\nfrom django.contrib.auth.decorators import login_required\nfrom django.core.paginator import Paginator\nfrom django.http import HttpResponseBadRequest\nfrom django.shortcuts import get_object_or_404, redirect\nfrom django.template.response import TemplateResponse\nfrom django.utils.decorators import method_decorator\nfrom django.utils.translation import gettext_lazy as _\nfrom django.views import View\n\nfrom bookwyrm import forms, models\nfrom bookwyrm.activitypub import ActivitypubResponse\nfrom bookwyrm.settings import PAGE_LENGTH\nfrom bookwyrm.views.helpers import is_api_request, get_user_from_username\n\n\n# pylint: disable=no-self-use\nclass Shelf(View):\n \"\"\"shelf page\"\"\"\n\n def get(self, request, username, shelf_identifier=None):\n \"\"\"display a shelf\"\"\"\n user = get_user_from_username(request.user, username)\n\n is_self = user == request.user\n\n if is_self:\n shelves = user.shelf_set.all()\n else:\n shelves = models.Shelf.privacy_filter(request.user).filter(user=user).all()\n\n # get the shelf and make sure the logged in user should be able to see it\n if shelf_identifier:\n shelf = get_object_or_404(user.shelf_set, identifier=shelf_identifier)\n shelf.raise_visible_to_user(request.user)\n books = shelf.books\n else:\n # this is a constructed \"all books\" view, with a fake \"shelf\" obj\n FakeShelf = namedtuple(\n \"Shelf\", (\"identifier\", \"name\", \"user\", \"books\", \"privacy\")\n )\n books = (\n 
models.Edition.viewer_aware_objects(request.user)\n .filter(\n # privacy is ensured because the shelves are already filtered above\n shelfbook__shelf__in=shelves\n )\n .distinct()\n )\n shelf = FakeShelf(\"all\", _(\"All books\"), user, books, \"public\")\n\n if is_api_request(request) and shelf_identifier:\n return ActivitypubResponse(shelf.to_activity(**request.GET))\n\n reviews = models.Review.objects\n if not is_self:\n reviews = models.Review.privacy_filter(request.user)\n\n reviews = reviews.filter(\n user=user,\n rating__isnull=False,\n book__id=OuterRef(\"id\"),\n deleted=False,\n ).order_by(\"-published_date\")\n\n reading = models.ReadThrough.objects\n\n reading = reading.filter(user=user, book__id=OuterRef(\"id\")).order_by(\n \"start_date\"\n )\n\n if shelf_identifier:\n books = books.annotate(shelved_date=F(\"shelfbook__shelved_date\"))\n else:\n # sorting by shelved date will cause duplicates in the \"all books\" view\n books = books.annotate(shelved_date=F(\"updated_date\"))\n books = books.annotate(\n rating=Subquery(reviews.values(\"rating\")[:1]),\n start_date=Subquery(reading.values(\"start_date\")[:1]),\n finish_date=Subquery(reading.values(\"finish_date\")[:1]),\n author=Subquery(\n models.Book.objects.filter(id=OuterRef(\"id\")).values(\"authors__name\")[\n :1\n ]\n ),\n ).prefetch_related(\"authors\")\n\n books = sort_books(books, request.GET.get(\"sort\"))\n\n paginated = Paginator(\n books,\n PAGE_LENGTH,\n )\n page = paginated.get_page(request.GET.get(\"page\"))\n data = {\n \"user\": user,\n \"is_self\": is_self,\n \"shelves\": shelves,\n \"shelf\": shelf,\n \"books\": page,\n \"edit_form\": forms.ShelfForm(instance=shelf if shelf_identifier else None),\n \"create_form\": forms.ShelfForm(),\n \"sort\": request.GET.get(\"sort\"),\n \"page_range\": paginated.get_elided_page_range(\n page.number, on_each_side=2, on_ends=1\n ),\n }\n\n return TemplateResponse(request, \"shelf/shelf.html\", data)\n\n @method_decorator(login_required, name=\"dispatch\")\n # pylint: disable=unused-argument\n def post(self, request, username, shelf_identifier):\n \"\"\"edit a shelf\"\"\"\n user = get_user_from_username(request.user, username)\n shelf = get_object_or_404(user.shelf_set, identifier=shelf_identifier)\n shelf.raise_not_editable(request.user)\n\n # you can't change the name of the default shelves\n if not shelf.editable and request.POST.get(\"name\") != shelf.name:\n return HttpResponseBadRequest()\n\n form = forms.ShelfForm(request.POST, instance=shelf)\n if not form.is_valid():\n return redirect(shelf.local_path)\n shelf = form.save()\n return redirect(shelf.local_path)\n\n\ndef sort_books(books, sort):\n \"\"\"Books in shelf sorting\"\"\"\n sort_fields = [\n \"title\",\n \"author\",\n \"shelved_date\",\n \"start_date\",\n \"finish_date\",\n \"rating\",\n ]\n\n if sort in sort_fields:\n books = books.order_by(sort)\n elif sort and sort[1:] in sort_fields:\n books = books.order_by(F(sort[1:]).desc(nulls_last=True))\n else:\n books = books.order_by(\"-shelved_date\")\n return books\n", "path": "bookwyrm/views/shelf/shelf.py"}]} | 1,805 | 206 |
gh_patches_debug_34322 | rasdani/github-patches | git_diff | networkx__networkx-2525

We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Create bridges.py
Contains three simple functions for the detection and retrieval of bridges or local bridges in an undirected network.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `networkx/algorithms/bridges.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 # bridges.py - bridge-finding algorithms
3 #
4 # Copyright 2004-2016 NetworkX developers.
5 #
6 # This file is part of NetworkX.
7 #
8 # NetworkX is distributed under a BSD license; see LICENSE.txt for more
9 # information.
10 """Bridge-finding algorithms."""
11 from itertools import chain
12
13 import networkx as nx
14 from networkx.utils import not_implemented_for
15
16 __all__ = ['bridges', 'has_bridges']
17
18
19 @not_implemented_for('multigraph')
20 @not_implemented_for('directed')
21 def bridges(G, root=None):
22 """Generate all bridges in a graph.
23
24 A *bridge* in a graph is an edge whose removal causes the number of
25 connected components of the graph to increase.
26
27 Parameters
28 ----------
29 G : undirected graph
30
31 root : node (optional)
32 A node in the graph `G`. If specified, only the bridges in the
33 connected component containing this node will be returned.
34
35 Yields
36 ------
37 e : edge
38 An edge in the graph whose removal disconnects the graph (or
39 causes the number of connected components to increase).
40
41 Raises
42 ------
43 NodeNotFound
44 If `root` is not in the graph `G`.
45
46 Examples
47 --------
48 The barbell graph with parameter zero has a single bridge::
49
50 >>> G = nx.barbell_graph(10, 0)
51 >>> list(nx.bridges(G))
52 [(9, 10)]
53
54 Notes
55 -----
56 This implementation uses the :func:`networkx.chain_decomposition`
57 function, so it shares its worst-case time complexity, :math:`O(m +
58 n)`, ignoring polylogarithmic factors, where *n* is the number of
59 nodes in the graph and *m* is the number of edges.
60
61 """
62 chains = nx.chain_decomposition(G, root=root)
63 chain_edges = set(chain.from_iterable(chains))
64 for u, v in G.edges():
65 if (u, v) not in chain_edges and (v, u) not in chain_edges:
66 yield u, v
67
68
69 @not_implemented_for('multigraph')
70 @not_implemented_for('directed')
71 def has_bridges(G, root=None):
72 """Decide whether a graph has any bridges.
73
74 A *bridge* in a graph is an edge whose removal causes the number of
75 connected components of the graph to increase.
76
77 Parameters
78 ----------
79 G : undirected graph
80
81 root : node (optional)
82 A node in the graph `G`. If specified, only the bridges in the
83 connected component containing this node will be considered.
84
85 Returns
86 -------
87 bool
88 Whether the graph (or the connected component containing `root`)
89 has any bridges.
90
91 Raises
92 ------
93 NodeNotFound
94 If `root` is not in the graph `G`.
95
96 Examples
97 --------
98 The barbell graph with parameter zero has a single bridge::
99
100 >>> G = nx.barbell_graph(10, 0)
101 >>> nx.has_bridges(G)
102 True
103
104 On the other hand, the cycle graph has no bridges::
105
106 >>> G = nx.cycle_graph(5)
107 >>> nx.has_bridges(G)
108 False
109
110 Notes
111 -----
112 This implementation uses the :func:`networkx.bridges` function, so
113 it shares its worst-case time complexity, :math:`O(m + n)`, ignoring
114 polylogarithmic factors, where *n* is the number of nodes in the
115 graph and *m* is the number of edges.
116
117 """
118 try:
119 next(bridges(G))
120 except StopIteration:
121 return False
122 else:
123 return True
124
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
Golden diff:

diff --git a/networkx/algorithms/bridges.py b/networkx/algorithms/bridges.py
--- a/networkx/algorithms/bridges.py
+++ b/networkx/algorithms/bridges.py
@@ -1,7 +1,7 @@
# -*- coding: utf-8 -*-
# bridges.py - bridge-finding algorithms
#
-# Copyright 2004-2016 NetworkX developers.
+# Copyright 2004-2017 NetworkX developers.
#
# This file is part of NetworkX.
#
@@ -13,7 +13,7 @@
import networkx as nx
from networkx.utils import not_implemented_for
-__all__ = ['bridges', 'has_bridges']
+__all__ = ['bridges', 'has_bridges', 'local_bridges']
@not_implemented_for('multigraph')
@@ -121,3 +121,61 @@
return False
else:
return True
+
+
+@not_implemented_for('multigraph')
+@not_implemented_for('directed')
+def local_bridges(G, with_span=True, weight=None):
+ """Iterate over local bridges of `G` optionally computing the span
+
+ A *local bridge* is an edge whose endpoints have no common neighbors.
+ That is, the edge is not part of a triangle in the graph.
+
+ The *span* of a *local bridge* is the shortest path length between
+ the endpoints if the local bridge is removed.
+
+ Parameters
+ ----------
+ G : undirected graph
+
+ with_span : bool
+ If True, yield a 3-tuple `(u, v, span)`
+
+ weight : function, string or None (default: None)
+ If function, used to compute edge weights for the span.
+ If string, the edge data attribute used in calculating span.
+ If None, all edges have weight 1.
+
+ Yields
+ ------
+ e : edge
+ The local bridges as an edge 2-tuple of nodes `(u, v)` or
+ as a 3-tuple `(u, v, span)` when `with_span is True`.
+
+ Examples
+ --------
+ A cycle graph has every edge a local bridge with span N-1.
+
+ >>> G = nx.cycle_graph(9)
+ >>> (0, 8, 8) in set(nx.local_bridges(G))
+ True
+ """
+ if with_span is not True:
+ for u, v in G.edges:
+ if not (set(G[u]) & set(G[v])):
+ yield u, v
+ else:
+ wt = nx.weighted._weight_function(G, weight)
+ for u, v in G.edges:
+ if not (set(G[u]) & set(G[v])):
+ enodes = {u, v}
+ def hide_edge(n, nbr, d):
+ if n not in enodes or nbr not in enodes:
+ return wt(n, nbr, d)
+ return None
+
+ try:
+ span = nx.shortest_path_length(G, u, v, weight=hide_edge)
+ yield u, v, span
+ except nx.NetworkXNoPath:
+ yield u, v, float('inf')
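
For readers who want the definition without the NetworkX plumbing, here is a hedged, dependency-free sketch of the same idea on a plain adjacency-dict graph: an edge is a local bridge when its endpoints share no neighbor, and its span is the shortest alternative path once the edge itself is ignored. This mirrors the semantics of the `local_bridges` function added above, but it is not the library code.

```python
# Illustrative, dependency-free sketch of the local-bridge definition.
from collections import deque

def bfs_span(adj, src, dst):
    """Shortest path length from src to dst, ignoring the edge (src, dst)."""
    seen, frontier = {src}, deque([(src, 0)])
    while frontier:
        node, d = frontier.popleft()
        for nbr in adj[node]:
            if node == src and nbr == dst:  # skip the local bridge itself
                continue
            if nbr == dst:
                return d + 1
            if nbr not in seen:
                seen.add(nbr)
                frontier.append((nbr, d + 1))
    return float("inf")

def local_bridges(adj):
    for u in adj:
        for v in adj[u]:
            if u < v and not (adj[u] & adj[v]):  # no common neighbor
                yield u, v, bfs_span(adj, u, v)

# A 5-cycle: every edge is a local bridge with span 4.
cycle = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
print(sorted(local_bridges(cycle)))
```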
| {"golden_diff": "diff --git a/networkx/algorithms/bridges.py b/networkx/algorithms/bridges.py\n--- a/networkx/algorithms/bridges.py\n+++ b/networkx/algorithms/bridges.py\n@@ -1,7 +1,7 @@\n # -*- coding: utf-8 -*-\n # bridges.py - bridge-finding algorithms\n #\n-# Copyright 2004-2016 NetworkX developers.\n+# Copyright 2004-2017 NetworkX developers.\n #\n # This file is part of NetworkX.\n #\n@@ -13,7 +13,7 @@\n import networkx as nx\n from networkx.utils import not_implemented_for\n \n-__all__ = ['bridges', 'has_bridges']\n+__all__ = ['bridges', 'has_bridges', 'local_bridges']\n \n \n @not_implemented_for('multigraph')\n@@ -121,3 +121,61 @@\n return False\n else:\n return True\n+\n+\n+@not_implemented_for('multigraph')\n+@not_implemented_for('directed')\n+def local_bridges(G, with_span=True, weight=None):\n+ \"\"\"Iterate over local bridges of `G` optionally computing the span\n+\n+ A *local bridge* is an edge whose endpoints have no common neighbors.\n+ That is, the edge is not part of a triangle in the graph.\n+\n+ The *span* of a *local bridge* is the shortest path length between\n+ the endpoints if the local bridge is removed.\n+\n+ Parameters\n+ ----------\n+ G : undirected graph\n+\n+ with_span : bool\n+ If True, yield a 3-tuple `(u, v, span)`\n+\n+ weight : function, string or None (default: None)\n+ If function, used to compute edge weights for the span.\n+ If string, the edge data attribute used in calculating span.\n+ If None, all edges have weight 1.\n+\n+ Yields\n+ ------\n+ e : edge\n+ The local bridges as an edge 2-tuple of nodes `(u, v)` or\n+ as a 3-tuple `(u, v, span)` when `with_span is True`.\n+\n+ Examples\n+ --------\n+ A cycle graph has every edge a local bridge with span N-1.\n+\n+ >>> G = nx.cycle_graph(9)\n+ >>> (0, 8, 8) in set(nx.local_bridges(G))\n+ True\n+ \"\"\"\n+ if with_span is not True:\n+ for u, v in G.edges:\n+ if not (set(G[u]) & set(G[v])):\n+ yield u, v\n+ else:\n+ wt = nx.weighted._weight_function(G, weight)\n+ for u, v in G.edges:\n+ if not (set(G[u]) & set(G[v])):\n+ enodes = {u, v}\n+ def hide_edge(n, nbr, d):\n+ if n not in enodes or nbr not in enodes:\n+ return wt(n, nbr, d)\n+ return None\n+\n+ try:\n+ span = nx.shortest_path_length(G, u, v, weight=hide_edge)\n+ yield u, v, span\n+ except nx.NetworkXNoPath:\n+ yield u, v, float('inf')\n", "issue": "Create bridges.py\nContains three simple functions for detection and retrieval of bridges or local bridges in a undirected network.\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# bridges.py - bridge-finding algorithms\n#\n# Copyright 2004-2016 NetworkX developers.\n#\n# This file is part of NetworkX.\n#\n# NetworkX is distributed under a BSD license; see LICENSE.txt for more\n# information.\n\"\"\"Bridge-finding algorithms.\"\"\"\nfrom itertools import chain\n\nimport networkx as nx\nfrom networkx.utils import not_implemented_for\n\n__all__ = ['bridges', 'has_bridges']\n\n\n@not_implemented_for('multigraph')\n@not_implemented_for('directed')\ndef bridges(G, root=None):\n \"\"\"Generate all bridges in a graph.\n\n A *bridge* in a graph is an edge whose removal causes the number of\n connected components of the graph to increase.\n\n Parameters\n ----------\n G : undirected graph\n\n root : node (optional)\n A node in the graph `G`. 
If specified, only the bridges in the\n connected component containing this node will be returned.\n\n Yields\n ------\n e : edge\n An edge in the graph whose removal disconnects the graph (or\n causes the number of connected components to increase).\n\n Raises\n ------\n NodeNotFound\n If `root` is not in the graph `G`.\n\n Examples\n --------\n The barbell graph with parameter zero has a single bridge::\n\n >>> G = nx.barbell_graph(10, 0)\n >>> list(nx.bridges(G))\n [(9, 10)]\n\n Notes\n -----\n This implementation uses the :func:`networkx.chain_decomposition`\n function, so it shares its worst-case time complexity, :math:`O(m +\n n)`, ignoring polylogarithmic factors, where *n* is the number of\n nodes in the graph and *m* is the number of edges.\n\n \"\"\"\n chains = nx.chain_decomposition(G, root=root)\n chain_edges = set(chain.from_iterable(chains))\n for u, v in G.edges():\n if (u, v) not in chain_edges and (v, u) not in chain_edges:\n yield u, v\n\n\n@not_implemented_for('multigraph')\n@not_implemented_for('directed')\ndef has_bridges(G, root=None):\n \"\"\"Decide whether a graph has any bridges.\n\n A *bridge* in a graph is an edge whose removal causes the number of\n connected components of the graph to increase.\n\n Parameters\n ----------\n G : undirected graph\n\n root : node (optional)\n A node in the graph `G`. If specified, only the bridges in the\n connected component containing this node will be considered.\n\n Returns\n -------\n bool\n Whether the graph (or the connected component containing `root`)\n has any bridges.\n\n Raises\n ------\n NodeNotFound\n If `root` is not in the graph `G`.\n\n Examples\n --------\n The barbell graph with parameter zero has a single bridge::\n\n >>> G = nx.barbell_graph(10, 0)\n >>> nx.has_bridges(G)\n True\n\n On the other hand, the cycle graph has no bridges::\n\n >>> G = nx.cycle_graph(5)\n >>> nx.has_bridges(G)\n False\n\n Notes\n -----\n This implementation uses the :func:`networkx.bridges` function, so\n it shares its worst-case time complexity, :math:`O(m + n)`, ignoring\n polylogarithmic factors, where *n* is the number of nodes in the\n graph and *m* is the number of edges.\n\n \"\"\"\n try:\n next(bridges(G))\n except StopIteration:\n return False\n else:\n return True\n", "path": "networkx/algorithms/bridges.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n# bridges.py - bridge-finding algorithms\n#\n# Copyright 2004-2017 NetworkX developers.\n#\n# This file is part of NetworkX.\n#\n# NetworkX is distributed under a BSD license; see LICENSE.txt for more\n# information.\n\"\"\"Bridge-finding algorithms.\"\"\"\nfrom itertools import chain\n\nimport networkx as nx\nfrom networkx.utils import not_implemented_for\n\n__all__ = ['bridges', 'has_bridges', 'local_bridges']\n\n\n@not_implemented_for('multigraph')\n@not_implemented_for('directed')\ndef bridges(G, root=None):\n \"\"\"Generate all bridges in a graph.\n\n A *bridge* in a graph is an edge whose removal causes the number of\n connected components of the graph to increase.\n\n Parameters\n ----------\n G : undirected graph\n\n root : node (optional)\n A node in the graph `G`. 
If specified, only the bridges in the\n connected component containing this node will be returned.\n\n Yields\n ------\n e : edge\n An edge in the graph whose removal disconnects the graph (or\n causes the number of connected components to increase).\n\n Raises\n ------\n NodeNotFound\n If `root` is not in the graph `G`.\n\n Examples\n --------\n The barbell graph with parameter zero has a single bridge::\n\n >>> G = nx.barbell_graph(10, 0)\n >>> list(nx.bridges(G))\n [(9, 10)]\n\n Notes\n -----\n This implementation uses the :func:`networkx.chain_decomposition`\n function, so it shares its worst-case time complexity, :math:`O(m +\n n)`, ignoring polylogarithmic factors, where *n* is the number of\n nodes in the graph and *m* is the number of edges.\n\n \"\"\"\n chains = nx.chain_decomposition(G, root=root)\n chain_edges = set(chain.from_iterable(chains))\n for u, v in G.edges():\n if (u, v) not in chain_edges and (v, u) not in chain_edges:\n yield u, v\n\n\n@not_implemented_for('multigraph')\n@not_implemented_for('directed')\ndef has_bridges(G, root=None):\n \"\"\"Decide whether a graph has any bridges.\n\n A *bridge* in a graph is an edge whose removal causes the number of\n connected components of the graph to increase.\n\n Parameters\n ----------\n G : undirected graph\n\n root : node (optional)\n A node in the graph `G`. If specified, only the bridges in the\n connected component containing this node will be considered.\n\n Returns\n -------\n bool\n Whether the graph (or the connected component containing `root`)\n has any bridges.\n\n Raises\n ------\n NodeNotFound\n If `root` is not in the graph `G`.\n\n Examples\n --------\n The barbell graph with parameter zero has a single bridge::\n\n >>> G = nx.barbell_graph(10, 0)\n >>> nx.has_bridges(G)\n True\n\n On the other hand, the cycle graph has no bridges::\n\n >>> G = nx.cycle_graph(5)\n >>> nx.has_bridges(G)\n False\n\n Notes\n -----\n This implementation uses the :func:`networkx.bridges` function, so\n it shares its worst-case time complexity, :math:`O(m + n)`, ignoring\n polylogarithmic factors, where *n* is the number of nodes in the\n graph and *m* is the number of edges.\n\n \"\"\"\n try:\n next(bridges(G))\n except StopIteration:\n return False\n else:\n return True\n\n\n@not_implemented_for('multigraph')\n@not_implemented_for('directed')\ndef local_bridges(G, with_span=True, weight=None):\n \"\"\"Iterate over local bridges of `G` optionally computing the span\n\n A *local bridge* is an edge whose endpoints have no common neighbors.\n That is, the edge is not part of a triangle in the graph.\n\n The *span* of a *local bridge* is the shortest path length between\n the endpoints if the local bridge is removed.\n\n Parameters\n ----------\n G : undirected graph\n\n with_span : bool\n If True, yield a 3-tuple `(u, v, span)`\n\n weight : function, string or None (default: None)\n If function, used to compute edge weights for the span.\n If string, the edge data attribute used in calculating span.\n If None, all edges have weight 1.\n\n Yields\n ------\n e : edge\n The local bridges as an edge 2-tuple of nodes `(u, v)` or\n as a 3-tuple `(u, v, span)` when `with_span is True`.\n\n Examples\n --------\n A cycle graph has every edge a local bridge with span N-1.\n\n >>> G = nx.cycle_graph(9)\n >>> (0, 8, 8) in set(nx.local_bridges(G))\n True\n \"\"\"\n if with_span is not True:\n for u, v in G.edges:\n if not (set(G[u]) & set(G[v])):\n yield u, v\n else:\n wt = nx.weighted._weight_function(G, weight)\n for u, v in G.edges:\n if not 
(set(G[u]) & set(G[v])):\n enodes = {u, v}\n def hide_edge(n, nbr, d):\n if n not in enodes or nbr not in enodes:\n return wt(n, nbr, d)\n return None\n\n try:\n span = nx.shortest_path_length(G, u, v, weight=hide_edge)\n yield u, v, span\n except nx.NetworkXNoPath:\n yield u, v, float('inf')\n", "path": "networkx/algorithms/bridges.py"}]} | 1,377 | 744 |
gh_patches_debug_37021 | rasdani/github-patches | git_diff | Cloud-CV__EvalAI-855

We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add nullable attribute to the Team Model
The following changes are required in the Team model:
- [x] Convert the URLFields to CharFields that hold URLs
- [x] Change the fields `email`, `github_url`, `linkedin_url`, `personal_website` to nullable fields
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `apps/web/models.py`
Content:
```
1 from __future__ import unicode_literals
2
3 from django.db import models
4
5 from base.models import (TimeStampedModel, )
6
7
8 class Contact(TimeStampedModel):
9 """Model representing details of User submitting queries."""
10 name = models.CharField(max_length=100,)
11 email = models.EmailField(max_length=70,)
12 message = models.CharField(max_length=500,)
13
14 def __unicode__(self):
15 return "%s: %s: %s" % (self.name, self.email, self.message)
16
17 class Meta:
18 app_label = 'web'
19 db_table = 'contact'
20
21
22 class Team(models.Model):
23 """Model representing details of Team"""
24
25 # Team Type Options
26 CORE_TEAM = 'Core Team'
27 CONTRIBUTOR = 'Contributor'
28
29 TEAM_TYPE_OPTIONS = (
30 (CORE_TEAM, CORE_TEAM),
31 (CONTRIBUTOR, CONTRIBUTOR),
32 )
33
34 name = models.CharField(max_length=100)
35 email = models.EmailField(max_length=70, null=True)
36 description = models.TextField(null=True)
37 headshot = models.ImageField(upload_to="headshots", null=True, blank=True)
38 visible = models.BooleanField(default=True)
39 github_url = models.CharField(max_length=200, null=True)
40 linkedin_url = models.CharField(max_length=200, null=True)
41 personal_website = models.CharField(max_length=200, null=True)
42 background_image = models.ImageField(upload_to="bg-images", null=True, blank=True)
43 team_type = models.CharField(choices=TEAM_TYPE_OPTIONS, max_length=50)
44
45 def __unicode__(self):
46 return self.name
47
48 class Meta:
49 app_label = 'web'
50 db_table = 'teams'
51
```
Path: `apps/web/migrations/0004_change_team_model_field_type.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 # Generated by Django 1.10.2 on 2017-03-19 21:06
3 from __future__ import unicode_literals
4
5 from django.db import migrations, models
6
7
8 class Migration(migrations.Migration):
9
10 dependencies = [
11 ('web', '0003_added_description_and_background_image_to_team_model'),
12 ]
13
14 operations = [
15 migrations.AlterField(
16 model_name='team',
17 name='email',
18 field=models.EmailField(max_length=70, null=True),
19 ),
20 migrations.AlterField(
21 model_name='team',
22 name='github_url',
23 field=models.CharField(max_length=200, null=True),
24 ),
25 migrations.AlterField(
26 model_name='team',
27 name='linkedin_url',
28 field=models.CharField(max_length=200, null=True),
29 ),
30 migrations.AlterField(
31 model_name='team',
32 name='personal_website',
33 field=models.CharField(max_length=200, null=True),
34 ),
35 ]
36
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
Golden diff:

diff --git a/apps/web/migrations/0004_change_team_model_field_type.py b/apps/web/migrations/0004_change_team_model_field_type.py
--- a/apps/web/migrations/0004_change_team_model_field_type.py
+++ b/apps/web/migrations/0004_change_team_model_field_type.py
@@ -1,5 +1,5 @@
# -*- coding: utf-8 -*-
-# Generated by Django 1.10.2 on 2017-03-19 21:06
+# Generated by Django 1.10.2 on 2017-03-19 21:58
from __future__ import unicode_literals
from django.db import migrations, models
@@ -15,21 +15,21 @@
migrations.AlterField(
model_name='team',
name='email',
- field=models.EmailField(max_length=70, null=True),
+ field=models.EmailField(blank=True, max_length=70, null=True),
),
migrations.AlterField(
model_name='team',
name='github_url',
- field=models.CharField(max_length=200, null=True),
+ field=models.CharField(blank=True, max_length=200, null=True),
),
migrations.AlterField(
model_name='team',
name='linkedin_url',
- field=models.CharField(max_length=200, null=True),
+ field=models.CharField(blank=True, max_length=200, null=True),
),
migrations.AlterField(
model_name='team',
name='personal_website',
- field=models.CharField(max_length=200, null=True),
+ field=models.CharField(blank=True, max_length=200, null=True),
),
]
diff --git a/apps/web/models.py b/apps/web/models.py
--- a/apps/web/models.py
+++ b/apps/web/models.py
@@ -32,13 +32,13 @@
)
name = models.CharField(max_length=100)
- email = models.EmailField(max_length=70, null=True)
+ email = models.EmailField(max_length=70, null=True, blank=True)
description = models.TextField(null=True)
headshot = models.ImageField(upload_to="headshots", null=True, blank=True)
visible = models.BooleanField(default=True)
- github_url = models.CharField(max_length=200, null=True)
- linkedin_url = models.CharField(max_length=200, null=True)
- personal_website = models.CharField(max_length=200, null=True)
+ github_url = models.CharField(max_length=200, null=True, blank=True)
+ linkedin_url = models.CharField(max_length=200, null=True, blank=True)
+ personal_website = models.CharField(max_length=200, null=True, blank=True)
background_image = models.ImageField(upload_to="bg-images", null=True, blank=True)
team_type = models.CharField(choices=TEAM_TYPE_OPTIONS, max_length=50)
| {"golden_diff": "diff --git a/apps/web/migrations/0004_change_team_model_field_type.py b/apps/web/migrations/0004_change_team_model_field_type.py\n--- a/apps/web/migrations/0004_change_team_model_field_type.py\n+++ b/apps/web/migrations/0004_change_team_model_field_type.py\n@@ -1,5 +1,5 @@\n # -*- coding: utf-8 -*-\n-# Generated by Django 1.10.2 on 2017-03-19 21:06\n+# Generated by Django 1.10.2 on 2017-03-19 21:58\n from __future__ import unicode_literals\n \n from django.db import migrations, models\n@@ -15,21 +15,21 @@\n migrations.AlterField(\n model_name='team',\n name='email',\n- field=models.EmailField(max_length=70, null=True),\n+ field=models.EmailField(blank=True, max_length=70, null=True),\n ),\n migrations.AlterField(\n model_name='team',\n name='github_url',\n- field=models.CharField(max_length=200, null=True),\n+ field=models.CharField(blank=True, max_length=200, null=True),\n ),\n migrations.AlterField(\n model_name='team',\n name='linkedin_url',\n- field=models.CharField(max_length=200, null=True),\n+ field=models.CharField(blank=True, max_length=200, null=True),\n ),\n migrations.AlterField(\n model_name='team',\n name='personal_website',\n- field=models.CharField(max_length=200, null=True),\n+ field=models.CharField(blank=True, max_length=200, null=True),\n ),\n ]\ndiff --git a/apps/web/models.py b/apps/web/models.py\n--- a/apps/web/models.py\n+++ b/apps/web/models.py\n@@ -32,13 +32,13 @@\n )\n \n name = models.CharField(max_length=100)\n- email = models.EmailField(max_length=70, null=True)\n+ email = models.EmailField(max_length=70, null=True, blank=True)\n description = models.TextField(null=True)\n headshot = models.ImageField(upload_to=\"headshots\", null=True, blank=True)\n visible = models.BooleanField(default=True)\n- github_url = models.CharField(max_length=200, null=True)\n- linkedin_url = models.CharField(max_length=200, null=True)\n- personal_website = models.CharField(max_length=200, null=True)\n+ github_url = models.CharField(max_length=200, null=True, blank=True)\n+ linkedin_url = models.CharField(max_length=200, null=True, blank=True)\n+ personal_website = models.CharField(max_length=200, null=True, blank=True)\n background_image = models.ImageField(upload_to=\"bg-images\", null=True, blank=True)\n team_type = models.CharField(choices=TEAM_TYPE_OPTIONS, max_length=50)\n", "issue": "Add nullable attribute to the Team Model\nFollowing changes are required in the Team model: \r\n\r\n- [x] Convert the URLFields to CharField with URL\r\n\r\n- [x] Change the fields `email`, `github_url`, `linkedin_url`, `personal_website` to nullable fields\n", "before_files": [{"content": "from __future__ import unicode_literals\n\nfrom django.db import models\n\nfrom base.models import (TimeStampedModel, )\n\n\nclass Contact(TimeStampedModel):\n \"\"\"Model representing details of User submitting queries.\"\"\"\n name = models.CharField(max_length=100,)\n email = models.EmailField(max_length=70,)\n message = models.CharField(max_length=500,)\n\n def __unicode__(self):\n return \"%s: %s: %s\" % (self.name, self.email, self.message)\n\n class Meta:\n app_label = 'web'\n db_table = 'contact'\n\n\nclass Team(models.Model):\n \"\"\"Model representing details of Team\"\"\"\n\n # Team Type Options\n CORE_TEAM = 'Core Team'\n CONTRIBUTOR = 'Contributor'\n\n TEAM_TYPE_OPTIONS = (\n (CORE_TEAM, CORE_TEAM),\n (CONTRIBUTOR, CONTRIBUTOR),\n )\n\n name = models.CharField(max_length=100)\n email = models.EmailField(max_length=70, null=True)\n description = models.TextField(null=True)\n 
headshot = models.ImageField(upload_to=\"headshots\", null=True, blank=True)\n visible = models.BooleanField(default=True)\n github_url = models.CharField(max_length=200, null=True)\n linkedin_url = models.CharField(max_length=200, null=True)\n personal_website = models.CharField(max_length=200, null=True)\n background_image = models.ImageField(upload_to=\"bg-images\", null=True, blank=True)\n team_type = models.CharField(choices=TEAM_TYPE_OPTIONS, max_length=50)\n\n def __unicode__(self):\n return self.name\n\n class Meta:\n app_label = 'web'\n db_table = 'teams'\n", "path": "apps/web/models.py"}, {"content": "# -*- coding: utf-8 -*-\n# Generated by Django 1.10.2 on 2017-03-19 21:06\nfrom __future__ import unicode_literals\n\nfrom django.db import migrations, models\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('web', '0003_added_description_and_background_image_to_team_model'),\n ]\n\n operations = [\n migrations.AlterField(\n model_name='team',\n name='email',\n field=models.EmailField(max_length=70, null=True),\n ),\n migrations.AlterField(\n model_name='team',\n name='github_url',\n field=models.CharField(max_length=200, null=True),\n ),\n migrations.AlterField(\n model_name='team',\n name='linkedin_url',\n field=models.CharField(max_length=200, null=True),\n ),\n migrations.AlterField(\n model_name='team',\n name='personal_website',\n field=models.CharField(max_length=200, null=True),\n ),\n ]\n", "path": "apps/web/migrations/0004_change_team_model_field_type.py"}], "after_files": [{"content": "from __future__ import unicode_literals\n\nfrom django.db import models\n\nfrom base.models import (TimeStampedModel, )\n\n\nclass Contact(TimeStampedModel):\n \"\"\"Model representing details of User submitting queries.\"\"\"\n name = models.CharField(max_length=100,)\n email = models.EmailField(max_length=70,)\n message = models.CharField(max_length=500,)\n\n def __unicode__(self):\n return \"%s: %s: %s\" % (self.name, self.email, self.message)\n\n class Meta:\n app_label = 'web'\n db_table = 'contact'\n\n\nclass Team(models.Model):\n \"\"\"Model representing details of Team\"\"\"\n\n # Team Type Options\n CORE_TEAM = 'Core Team'\n CONTRIBUTOR = 'Contributor'\n\n TEAM_TYPE_OPTIONS = (\n (CORE_TEAM, CORE_TEAM),\n (CONTRIBUTOR, CONTRIBUTOR),\n )\n\n name = models.CharField(max_length=100)\n email = models.EmailField(max_length=70, null=True, blank=True)\n description = models.TextField(null=True)\n headshot = models.ImageField(upload_to=\"headshots\", null=True, blank=True)\n visible = models.BooleanField(default=True)\n github_url = models.CharField(max_length=200, null=True, blank=True)\n linkedin_url = models.CharField(max_length=200, null=True, blank=True)\n personal_website = models.CharField(max_length=200, null=True, blank=True)\n background_image = models.ImageField(upload_to=\"bg-images\", null=True, blank=True)\n team_type = models.CharField(choices=TEAM_TYPE_OPTIONS, max_length=50)\n\n def __unicode__(self):\n return self.name\n\n class Meta:\n app_label = 'web'\n db_table = 'teams'\n", "path": "apps/web/models.py"}, {"content": "# -*- coding: utf-8 -*-\n# Generated by Django 1.10.2 on 2017-03-19 21:58\nfrom __future__ import unicode_literals\n\nfrom django.db import migrations, models\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('web', '0003_added_description_and_background_image_to_team_model'),\n ]\n\n operations = [\n migrations.AlterField(\n model_name='team',\n name='email',\n field=models.EmailField(blank=True, max_length=70, 
null=True),\n ),\n migrations.AlterField(\n model_name='team',\n name='github_url',\n field=models.CharField(blank=True, max_length=200, null=True),\n ),\n migrations.AlterField(\n model_name='team',\n name='linkedin_url',\n field=models.CharField(blank=True, max_length=200, null=True),\n ),\n migrations.AlterField(\n model_name='team',\n name='personal_website',\n field=models.CharField(blank=True, max_length=200, null=True),\n ),\n ]\n", "path": "apps/web/migrations/0004_change_team_model_field_type.py"}]} | 1,098 | 665 |
gh_patches_debug_15190 | rasdani/github-patches | git_diff | mirumee__ariadne-490

We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Release Ariadne 0.13
# TODO
- [x] Update dependencies
- [x] Fix linter errors in the repo
- [x] #483
- [x] Update changelog
- [x] Write [release notes](https://github.com/mirumee/ariadne-website/pull/75)
- [x] Reach out to our amazing art team for tweet graphics
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #! /usr/bin/env python
2 import os
3 from setuptools import setup
4
5 CLASSIFIERS = [
6 "Development Status :: 4 - Beta",
7 "Intended Audience :: Developers",
8 "License :: OSI Approved :: BSD License",
9 "Operating System :: OS Independent",
10 "Programming Language :: Python",
11 "Programming Language :: Python :: 3.6",
12 "Programming Language :: Python :: 3.7",
13 "Programming Language :: Python :: 3.8",
14 "Topic :: Software Development :: Libraries :: Python Modules",
15 ]
16
17 README_PATH = os.path.join(os.path.dirname(os.path.abspath(__file__)), "README.md")
18 with open(README_PATH, "r", encoding="utf8") as f:
19 README = f.read()
20
21 setup(
22 name="ariadne",
23 author="Mirumee Software",
24 author_email="[email protected]",
25 description="Ariadne is a Python library for implementing GraphQL servers.",
26 long_description=README,
27 long_description_content_type="text/markdown",
28 license="BSD",
29 version="0.12.0",
30 url="https://github.com/mirumee/ariadne",
31 packages=["ariadne"],
32 include_package_data=True,
33 install_requires=[
34 "graphql-core>=3.1.0",
35 "starlette<0.15",
36 "typing_extensions>=3.6.0",
37 ],
38 extras_require={"asgi-file-uploads": ["python-multipart>=0.0.5"]},
39 classifiers=CLASSIFIERS,
40 platforms=["any"],
41 zip_safe=False,
42 )
43
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
Golden diff:

diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -11,6 +11,7 @@
"Programming Language :: Python :: 3.6",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
+ "Programming Language :: Python :: 3.9",
"Topic :: Software Development :: Libraries :: Python Modules",
]
@@ -26,7 +27,7 @@
long_description=README,
long_description_content_type="text/markdown",
license="BSD",
- version="0.12.0",
+ version="0.13.0",
url="https://github.com/mirumee/ariadne",
packages=["ariadne"],
include_package_data=True,
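
As an optional follow-up to a release like this, the built metadata can be sanity-checked with the standard library's `importlib.metadata` (Python 3.8+). This is purely an illustrative suggestion, not a step from the Ariadne checklist, and it assumes the freshly built package has been installed into the current environment.

```python
# Optional post-release sanity check (illustrative only).
from importlib.metadata import metadata

md = metadata("ariadne")
assert md["Version"] == "0.13.0"
assert "Programming Language :: Python :: 3.9" in md.get_all("Classifier")
print("release metadata looks consistent")
```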
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -11,6 +11,7 @@\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n+ \"Programming Language :: Python :: 3.9\",\n \"Topic :: Software Development :: Libraries :: Python Modules\",\n ]\n \n@@ -26,7 +27,7 @@\n long_description=README,\n long_description_content_type=\"text/markdown\",\n license=\"BSD\",\n- version=\"0.12.0\",\n+ version=\"0.13.0\",\n url=\"https://github.com/mirumee/ariadne\",\n packages=[\"ariadne\"],\n include_package_data=True,\n", "issue": "Release Ariadne 0.13\n# TODO\r\n\r\n- [x] Update dependencies\r\n- [x] Fix linters errors on repo\r\n- [x] #483 \r\n- [x] Update changelog\r\n- [x] Write [release notes](https://github.com/mirumee/ariadne-website/pull/75)\r\n- [x] Reach to our amazing art team for tweet graphics\n", "before_files": [{"content": "#! /usr/bin/env python\nimport os\nfrom setuptools import setup\n\nCLASSIFIERS = [\n \"Development Status :: 4 - Beta\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: BSD License\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Topic :: Software Development :: Libraries :: Python Modules\",\n]\n\nREADME_PATH = os.path.join(os.path.dirname(os.path.abspath(__file__)), \"README.md\")\nwith open(README_PATH, \"r\", encoding=\"utf8\") as f:\n README = f.read()\n\nsetup(\n name=\"ariadne\",\n author=\"Mirumee Software\",\n author_email=\"[email protected]\",\n description=\"Ariadne is a Python library for implementing GraphQL servers.\",\n long_description=README,\n long_description_content_type=\"text/markdown\",\n license=\"BSD\",\n version=\"0.12.0\",\n url=\"https://github.com/mirumee/ariadne\",\n packages=[\"ariadne\"],\n include_package_data=True,\n install_requires=[\n \"graphql-core>=3.1.0\",\n \"starlette<0.15\",\n \"typing_extensions>=3.6.0\",\n ],\n extras_require={\"asgi-file-uploads\": [\"python-multipart>=0.0.5\"]},\n classifiers=CLASSIFIERS,\n platforms=[\"any\"],\n zip_safe=False,\n)\n", "path": "setup.py"}], "after_files": [{"content": "#! 
/usr/bin/env python\nimport os\nfrom setuptools import setup\n\nCLASSIFIERS = [\n \"Development Status :: 4 - Beta\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: BSD License\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Topic :: Software Development :: Libraries :: Python Modules\",\n]\n\nREADME_PATH = os.path.join(os.path.dirname(os.path.abspath(__file__)), \"README.md\")\nwith open(README_PATH, \"r\", encoding=\"utf8\") as f:\n README = f.read()\n\nsetup(\n name=\"ariadne\",\n author=\"Mirumee Software\",\n author_email=\"[email protected]\",\n description=\"Ariadne is a Python library for implementing GraphQL servers.\",\n long_description=README,\n long_description_content_type=\"text/markdown\",\n license=\"BSD\",\n version=\"0.13.0\",\n url=\"https://github.com/mirumee/ariadne\",\n packages=[\"ariadne\"],\n include_package_data=True,\n install_requires=[\n \"graphql-core>=3.1.0\",\n \"starlette<0.15\",\n \"typing_extensions>=3.6.0\",\n ],\n extras_require={\"asgi-file-uploads\": [\"python-multipart>=0.0.5\"]},\n classifiers=CLASSIFIERS,\n platforms=[\"any\"],\n zip_safe=False,\n)\n", "path": "setup.py"}]} | 764 | 179 |
gh_patches_debug_25353 | rasdani/github-patches | git_diff | OpenMined__PySyft-3759

We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Implement Negation operation for FV HE Scheme
## Feature Description
The negation operation of the FV scheme needs to be implemented.
It should negate a ciphertext object and return the result in ciphertext form.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `syft/frameworks/torch/he/fv/evaluator.py`
Content:
```
1 import copy
2
3 from syft.frameworks.torch.he.fv.util.operations import poly_add_mod
4 from syft.frameworks.torch.he.fv.util.operations import multiply_add_plain_with_delta
5 from syft.frameworks.torch.he.fv.ciphertext import CipherText
6 from syft.frameworks.torch.he.fv.plaintext import PlainText
7
8
9 class Evaluator:
10 def __init__(self, context):
11 self.context = context
12 self.coeff_modulus = context.param.coeff_modulus
13 self.plain_modulus = context.param.plain_modulus
14
15 def add(self, op1, op2):
16 """Adds two operands using FV scheme.
17
18 Args:
19 op1 (Ciphertext/Plaintext): First argument.
20 op2 (Ciphertext/Plaintext): Second argument.
21
22 Returns:
23 If both arguments are Plaintext elements then the result will be a Plaintext object
24 otherwise a Ciphertext object with value equivalent to the result of addition
25 operation of two provided arguments.
26 """
27 if isinstance(op1, CipherText) and isinstance(op2, CipherText):
28 return self._add_cipher_cipher(op1, op2)
29
30 elif isinstance(op1, PlainText) and isinstance(op2, PlainText):
31 return self._add_plain_plain(op1, op2)
32
33 elif isinstance(op1, PlainText) and isinstance(op2, CipherText):
34 return self._add_plain_cipher(op1, op2)
35
36 elif isinstance(op1, CipherText) and isinstance(op2, PlainText):
37 return self._add_plain_cipher(op2, op1)
38
39 else:
40 raise TypeError(f"Addition Operation not supported between {type(op1)} and {type(op2)}")
41
42 def _add_cipher_cipher(self, ct1, ct2):
43 """Adds two ciphertexts.
44
45 Args:
46 ct1 (Ciphertext): First argument.
47 ct2 (Ciphertext): Second argument.
48
49 Returns:
50 A Ciphertext object with value equivalent to result of addition of two provided
51 arguments.
52 """
53 ct1, ct2 = copy.deepcopy(ct1.data), copy.deepcopy(ct2.data)
54 result = ct2 if len(ct2) > len(ct1) else ct1
55
56 for i in range(min(len(ct1), len(ct2))):
57 for j in range(len(self.coeff_modulus)):
58 result[i][j] = poly_add_mod(ct1[i][j], ct2[i][j], self.coeff_modulus[j])
59
60 return CipherText(result)
61
62 def _add_plain_cipher(self, pt, ct):
63 """Adds a ciphertext and a plaintext.
64
65 Args:
66 pt (Plaintext): First argument.
67 ct (Ciphertext): Second argument.
68 Returns:
69 A Ciphertext object with value equivalent to result of addition of two provided
70 arguments.
71 """
72 ct = copy.deepcopy(ct)
73 return multiply_add_plain_with_delta(ct, pt, self.context)
74
75 def _add_plain_plain(self, pt1, pt2):
76 """Adds two plaintexts object.
77
78 Args:
79 pt1 (Plaintext): First argument.
80 pt2 (Plaintext): Second argument.
81
82 Returns:
83 A Plaintext object with value equivalent to result of addition of two provided
84 arguments.
85 """
86 pt1, pt2 = copy.deepcopy(pt1), copy.deepcopy(pt2)
87 return PlainText(poly_add_mod(pt1.data, pt2.data, self.plain_modulus))
88
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/syft/frameworks/torch/he/fv/evaluator.py b/syft/frameworks/torch/he/fv/evaluator.py
--- a/syft/frameworks/torch/he/fv/evaluator.py
+++ b/syft/frameworks/torch/he/fv/evaluator.py
@@ -1,6 +1,7 @@
import copy
from syft.frameworks.torch.he.fv.util.operations import poly_add_mod
+from syft.frameworks.torch.he.fv.util.operations import negate_mod
from syft.frameworks.torch.he.fv.util.operations import multiply_add_plain_with_delta
from syft.frameworks.torch.he.fv.ciphertext import CipherText
from syft.frameworks.torch.he.fv.plaintext import PlainText
@@ -39,6 +40,24 @@
else:
raise TypeError(f"Addition Operation not supported between {type(op1)} and {type(op2)}")
+ def negate(self, ct):
+ """Negate a cipher i.e -(ct_value)
+
+ Args:
+ ct (Ciphertext): Ciphertext to be negated.
+
+ Returns:
+ A Ciphertext object with value equivalent to result of -(ct_value).
+ """
+ result = copy.deepcopy(ct.data)
+
+ for i in range(len(result)):
+ for j in range(len(result[i])):
+ for k in range(len(result[i][j])):
+ result[i][j][k] = negate_mod(ct.data[i][j][k], self.coeff_modulus[j])
+
+ return CipherText(result)
+
def _add_cipher_cipher(self, ct1, ct2):
"""Adds two ciphertexts.
| {"golden_diff": "diff --git a/syft/frameworks/torch/he/fv/evaluator.py b/syft/frameworks/torch/he/fv/evaluator.py\n--- a/syft/frameworks/torch/he/fv/evaluator.py\n+++ b/syft/frameworks/torch/he/fv/evaluator.py\n@@ -1,6 +1,7 @@\n import copy\n \n from syft.frameworks.torch.he.fv.util.operations import poly_add_mod\n+from syft.frameworks.torch.he.fv.util.operations import negate_mod\n from syft.frameworks.torch.he.fv.util.operations import multiply_add_plain_with_delta\n from syft.frameworks.torch.he.fv.ciphertext import CipherText\n from syft.frameworks.torch.he.fv.plaintext import PlainText\n@@ -39,6 +40,24 @@\n else:\n raise TypeError(f\"Addition Operation not supported between {type(op1)} and {type(op2)}\")\n \n+ def negate(self, ct):\n+ \"\"\"Negate a cipher i.e -(ct_value)\n+\n+ Args:\n+ ct (Ciphertext): Ciphertext to be negated.\n+\n+ Returns:\n+ A Ciphertext object with value equivalent to result of -(ct_value).\n+ \"\"\"\n+ result = copy.deepcopy(ct.data)\n+\n+ for i in range(len(result)):\n+ for j in range(len(result[i])):\n+ for k in range(len(result[i][j])):\n+ result[i][j][k] = negate_mod(ct.data[i][j][k], self.coeff_modulus[j])\n+\n+ return CipherText(result)\n+\n def _add_cipher_cipher(self, ct1, ct2):\n \"\"\"Adds two ciphertexts.\n", "issue": "Implement Negation operation for FV HE Scheme\n## Feature Description\r\nNegation operations of FV Scheme need to be implemented.\r\n\r\nIt should Negate a ciphertext object and return the result in ciphertext form.\n", "before_files": [{"content": "import copy\n\nfrom syft.frameworks.torch.he.fv.util.operations import poly_add_mod\nfrom syft.frameworks.torch.he.fv.util.operations import multiply_add_plain_with_delta\nfrom syft.frameworks.torch.he.fv.ciphertext import CipherText\nfrom syft.frameworks.torch.he.fv.plaintext import PlainText\n\n\nclass Evaluator:\n def __init__(self, context):\n self.context = context\n self.coeff_modulus = context.param.coeff_modulus\n self.plain_modulus = context.param.plain_modulus\n\n def add(self, op1, op2):\n \"\"\"Adds two operands using FV scheme.\n\n Args:\n op1 (Ciphertext/Plaintext): First argument.\n op2 (Ciphertext/Plaintext): Second argument.\n\n Returns:\n If both arguments are Plaintext elements then the result will be a Plaintext object\n otherwise a Ciphertext object with value equivalent to the result of addition\n operation of two provided arguments.\n \"\"\"\n if isinstance(op1, CipherText) and isinstance(op2, CipherText):\n return self._add_cipher_cipher(op1, op2)\n\n elif isinstance(op1, PlainText) and isinstance(op2, PlainText):\n return self._add_plain_plain(op1, op2)\n\n elif isinstance(op1, PlainText) and isinstance(op2, CipherText):\n return self._add_plain_cipher(op1, op2)\n\n elif isinstance(op1, CipherText) and isinstance(op2, PlainText):\n return self._add_plain_cipher(op2, op1)\n\n else:\n raise TypeError(f\"Addition Operation not supported between {type(op1)} and {type(op2)}\")\n\n def _add_cipher_cipher(self, ct1, ct2):\n \"\"\"Adds two ciphertexts.\n\n Args:\n ct1 (Ciphertext): First argument.\n ct2 (Ciphertext): Second argument.\n\n Returns:\n A Ciphertext object with value equivalent to result of addition of two provided\n arguments.\n \"\"\"\n ct1, ct2 = copy.deepcopy(ct1.data), copy.deepcopy(ct2.data)\n result = ct2 if len(ct2) > len(ct1) else ct1\n\n for i in range(min(len(ct1), len(ct2))):\n for j in range(len(self.coeff_modulus)):\n result[i][j] = poly_add_mod(ct1[i][j], ct2[i][j], self.coeff_modulus[j])\n\n return CipherText(result)\n\n def 
_add_plain_cipher(self, pt, ct):\n \"\"\"Adds a ciphertext and a plaintext.\n\n Args:\n pt (Plaintext): First argument.\n ct (Ciphertext): Second argument.\n Returns:\n A Ciphertext object with value equivalent to result of addition of two provided\n arguments.\n \"\"\"\n ct = copy.deepcopy(ct)\n return multiply_add_plain_with_delta(ct, pt, self.context)\n\n def _add_plain_plain(self, pt1, pt2):\n \"\"\"Adds two plaintexts object.\n\n Args:\n pt1 (Plaintext): First argument.\n pt2 (Plaintext): Second argument.\n\n Returns:\n A Plaintext object with value equivalent to result of addition of two provided\n arguments.\n \"\"\"\n pt1, pt2 = copy.deepcopy(pt1), copy.deepcopy(pt2)\n return PlainText(poly_add_mod(pt1.data, pt2.data, self.plain_modulus))\n", "path": "syft/frameworks/torch/he/fv/evaluator.py"}], "after_files": [{"content": "import copy\n\nfrom syft.frameworks.torch.he.fv.util.operations import poly_add_mod\nfrom syft.frameworks.torch.he.fv.util.operations import negate_mod\nfrom syft.frameworks.torch.he.fv.util.operations import multiply_add_plain_with_delta\nfrom syft.frameworks.torch.he.fv.ciphertext import CipherText\nfrom syft.frameworks.torch.he.fv.plaintext import PlainText\n\n\nclass Evaluator:\n def __init__(self, context):\n self.context = context\n self.coeff_modulus = context.param.coeff_modulus\n self.plain_modulus = context.param.plain_modulus\n\n def add(self, op1, op2):\n \"\"\"Adds two operands using FV scheme.\n\n Args:\n op1 (Ciphertext/Plaintext): First argument.\n op2 (Ciphertext/Plaintext): Second argument.\n\n Returns:\n If both arguments are Plaintext elements then the result will be a Plaintext object\n otherwise a Ciphertext object with value equivalent to the result of addition\n operation of two provided arguments.\n \"\"\"\n if isinstance(op1, CipherText) and isinstance(op2, CipherText):\n return self._add_cipher_cipher(op1, op2)\n\n elif isinstance(op1, PlainText) and isinstance(op2, PlainText):\n return self._add_plain_plain(op1, op2)\n\n elif isinstance(op1, PlainText) and isinstance(op2, CipherText):\n return self._add_plain_cipher(op1, op2)\n\n elif isinstance(op1, CipherText) and isinstance(op2, PlainText):\n return self._add_plain_cipher(op2, op1)\n\n else:\n raise TypeError(f\"Addition Operation not supported between {type(op1)} and {type(op2)}\")\n\n def negate(self, ct):\n \"\"\"Negate a cipher i.e -(ct_value)\n\n Args:\n ct (Ciphertext): Ciphertext to be negated.\n\n Returns:\n A Ciphertext object with value equivalent to result of -(ct_value).\n \"\"\"\n result = copy.deepcopy(ct.data)\n\n for i in range(len(result)):\n for j in range(len(result[i])):\n for k in range(len(result[i][j])):\n result[i][j][k] = negate_mod(ct.data[i][j][k], self.coeff_modulus[j])\n\n return CipherText(result)\n\n def _add_cipher_cipher(self, ct1, ct2):\n \"\"\"Adds two ciphertexts.\n\n Args:\n ct1 (Ciphertext): First argument.\n ct2 (Ciphertext): Second argument.\n\n Returns:\n A Ciphertext object with value equivalent to result of addition of two provided\n arguments.\n \"\"\"\n ct1, ct2 = copy.deepcopy(ct1.data), copy.deepcopy(ct2.data)\n result = ct2 if len(ct2) > len(ct1) else ct1\n\n for i in range(min(len(ct1), len(ct2))):\n for j in range(len(self.coeff_modulus)):\n result[i][j] = poly_add_mod(ct1[i][j], ct2[i][j], self.coeff_modulus[j])\n\n return CipherText(result)\n\n def _add_plain_cipher(self, pt, ct):\n \"\"\"Adds a ciphertext and a plaintext.\n\n Args:\n pt (Plaintext): First argument.\n ct (Ciphertext): Second argument.\n Returns:\n A Ciphertext 
object with value equivalent to result of addition of two provided\n arguments.\n \"\"\"\n ct = copy.deepcopy(ct)\n return multiply_add_plain_with_delta(ct, pt, self.context)\n\n def _add_plain_plain(self, pt1, pt2):\n \"\"\"Adds two plaintexts object.\n\n Args:\n pt1 (Plaintext): First argument.\n pt2 (Plaintext): Second argument.\n\n Returns:\n A Plaintext object with value equivalent to result of addition of two provided\n arguments.\n \"\"\"\n pt1, pt2 = copy.deepcopy(pt1), copy.deepcopy(pt2)\n return PlainText(poly_add_mod(pt1.data, pt2.data, self.plain_modulus))\n", "path": "syft/frameworks/torch/he/fv/evaluator.py"}]} | 1,217 | 368 |
gh_patches_debug_28241 | rasdani/github-patches | git_diff | svthalia__concrexit-2589 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
TypeError: PromotionRequestAdmin.mark_finished() takes 2 positional arguments but 3 were given
Sentry Issue: [CONCREXIT-JD](https://sentry.io/organizations/thalia/issues/3668103253/?referrer=github_integration)
```
TypeError: PromotionRequestAdmin.mark_finished() takes 2 positional arguments but 3 were given
(5 additional frame(s) were not displayed)
...
File "django/contrib/admin/sites.py", line 242, in inner
return view(request, *args, **kwargs)
File "django/utils/decorators.py", line 46, in _wrapper
return bound_method(*args, **kwargs)
File "django/utils/decorators.py", line 133, in _wrapped_view
response = view_func(request, *args, **kwargs)
File "django/contrib/admin/options.py", line 1959, in changelist_view
response = self.response_action(
File "django/contrib/admin/options.py", line 1588, in response_action
response = func(self, request, queryset)
```
--- END ISSUE ---
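The traceback's final frame, `response = func(self, request, queryset)`, spells out the contract: Django invokes admin actions with the model admin, the request, and the selected queryset, so a method-style action must accept `request`. A minimal sketch of a correctly-shaped action follows; the model and field names are hypothetical.

```python
# Hedged sketch of a Django admin action with the expected signature.
# `Task` and its `status` field are made up for illustration.
from django.contrib import admin

class TaskAdmin(admin.ModelAdmin):
    actions = ["mark_finished"]

    def mark_finished(self, request, queryset):
        # Called by Django as func(self, request, queryset) from
        # ModelAdmin.response_action(), hence three arguments.
        queryset.update(status="finished")
```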
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `website/promotion/admin.py`
Content:
```
1 """Registers admin interfaces for the models defined in this module."""
2 from django.contrib import admin
3 from django.contrib.admin import ModelAdmin
4
5 from promotion.forms import PromotionRequestForm
6 from events.services import is_organiser
7
8 from .models import PromotionChannel, PromotionRequest
9
10
11 @admin.register(PromotionRequest)
12 class PromotionRequestAdmin(admin.ModelAdmin):
13 """This manages the admin interface for the model items."""
14
15 list_display = ("event", "publish_date", "channel", "assigned_to", "status")
16 list_filter = (
17 "publish_date",
18 "assigned_to",
19 "status",
20 )
21 date_hierarchy = "publish_date"
22 form = PromotionRequestForm
23 actions = ["mark_not_started", "mark_started", "mark_finished", "mark_published"]
24
25 def has_change_permission(self, request, obj=None):
26 if obj is not None and not is_organiser(request.member, obj.event):
27 return False
28 return super().has_change_permission(request, obj)
29
30 def mark_not_started(self, queryset):
31 """Change the status of the event to published."""
32 self._change_published(queryset, PromotionRequest.NOT_STARTED)
33
34 mark_not_started.short_description = "Mark requests as not started"
35
36 def mark_started(self, queryset):
37 """Change the status of the event to published."""
38 self._change_published(queryset, PromotionRequest.STARTED)
39
40 mark_started.short_description = "Mark requests as started"
41
42 def mark_finished(self, queryset):
43 """Change the status of the event to published."""
44 self._change_published(queryset, PromotionRequest.FINISHED)
45
46 mark_finished.short_description = "Mark requests as finished"
47
48 def mark_published(self, queryset):
49 """Change the status of the event to published."""
50 self._change_published(queryset, PromotionRequest.PUBLISHED)
51
52 mark_published.short_description = "Mark requests as published"
53
54 @staticmethod
55 def _change_published(queryset, status):
56 queryset.update(status=status)
57
58
59 @admin.register(PromotionChannel)
60 class PromotionChannelAdmin(ModelAdmin):
61 pass
62
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/website/promotion/admin.py b/website/promotion/admin.py
--- a/website/promotion/admin.py
+++ b/website/promotion/admin.py
@@ -27,25 +27,25 @@
return False
return super().has_change_permission(request, obj)
- def mark_not_started(self, queryset):
+ def mark_not_started(self, request, queryset):
"""Change the status of the event to published."""
self._change_published(queryset, PromotionRequest.NOT_STARTED)
mark_not_started.short_description = "Mark requests as not started"
- def mark_started(self, queryset):
+ def mark_started(self, request, queryset):
"""Change the status of the event to published."""
self._change_published(queryset, PromotionRequest.STARTED)
mark_started.short_description = "Mark requests as started"
- def mark_finished(self, queryset):
+ def mark_finished(self, request, queryset):
"""Change the status of the event to published."""
self._change_published(queryset, PromotionRequest.FINISHED)
mark_finished.short_description = "Mark requests as finished"
- def mark_published(self, queryset):
+ def mark_published(self, request, queryset):
"""Change the status of the event to published."""
self._change_published(queryset, PromotionRequest.PUBLISHED)
| {"golden_diff": "diff --git a/website/promotion/admin.py b/website/promotion/admin.py\n--- a/website/promotion/admin.py\n+++ b/website/promotion/admin.py\n@@ -27,25 +27,25 @@\n return False\n return super().has_change_permission(request, obj)\n \n- def mark_not_started(self, queryset):\n+ def mark_not_started(self, request, queryset):\n \"\"\"Change the status of the event to published.\"\"\"\n self._change_published(queryset, PromotionRequest.NOT_STARTED)\n \n mark_not_started.short_description = \"Mark requests as not started\"\n \n- def mark_started(self, queryset):\n+ def mark_started(self, request, queryset):\n \"\"\"Change the status of the event to published.\"\"\"\n self._change_published(queryset, PromotionRequest.STARTED)\n \n mark_started.short_description = \"Mark requests as started\"\n \n- def mark_finished(self, queryset):\n+ def mark_finished(self, request, queryset):\n \"\"\"Change the status of the event to published.\"\"\"\n self._change_published(queryset, PromotionRequest.FINISHED)\n \n mark_finished.short_description = \"Mark requests as finished\"\n \n- def mark_published(self, queryset):\n+ def mark_published(self, request, queryset):\n \"\"\"Change the status of the event to published.\"\"\"\n self._change_published(queryset, PromotionRequest.PUBLISHED)\n", "issue": "TypeError: PromotionRequestAdmin.mark_finished() takes 2 positional arguments but 3 were given\nSentry Issue: [CONCREXIT-JD](https://sentry.io/organizations/thalia/issues/3668103253/?referrer=github_integration)\n\n```\nTypeError: PromotionRequestAdmin.mark_finished() takes 2 positional arguments but 3 were given\n(5 additional frame(s) were not displayed)\n...\n File \"django/contrib/admin/sites.py\", line 242, in inner\n return view(request, *args, **kwargs)\n File \"django/utils/decorators.py\", line 46, in _wrapper\n return bound_method(*args, **kwargs)\n File \"django/utils/decorators.py\", line 133, in _wrapped_view\n response = view_func(request, *args, **kwargs)\n File \"django/contrib/admin/options.py\", line 1959, in changelist_view\n response = self.response_action(\n File \"django/contrib/admin/options.py\", line 1588, in response_action\n response = func(self, request, queryset)\n```\n", "before_files": [{"content": "\"\"\"Registers admin interfaces for the models defined in this module.\"\"\"\nfrom django.contrib import admin\nfrom django.contrib.admin import ModelAdmin\n\nfrom promotion.forms import PromotionRequestForm\nfrom events.services import is_organiser\n\nfrom .models import PromotionChannel, PromotionRequest\n\n\[email protected](PromotionRequest)\nclass PromotionRequestAdmin(admin.ModelAdmin):\n \"\"\"This manages the admin interface for the model items.\"\"\"\n\n list_display = (\"event\", \"publish_date\", \"channel\", \"assigned_to\", \"status\")\n list_filter = (\n \"publish_date\",\n \"assigned_to\",\n \"status\",\n )\n date_hierarchy = \"publish_date\"\n form = PromotionRequestForm\n actions = [\"mark_not_started\", \"mark_started\", \"mark_finished\", \"mark_published\"]\n\n def has_change_permission(self, request, obj=None):\n if obj is not None and not is_organiser(request.member, obj.event):\n return False\n return super().has_change_permission(request, obj)\n\n def mark_not_started(self, queryset):\n \"\"\"Change the status of the event to published.\"\"\"\n self._change_published(queryset, PromotionRequest.NOT_STARTED)\n\n mark_not_started.short_description = \"Mark requests as not started\"\n\n def mark_started(self, queryset):\n \"\"\"Change the status of the 
event to published.\"\"\"\n self._change_published(queryset, PromotionRequest.STARTED)\n\n mark_started.short_description = \"Mark requests as started\"\n\n def mark_finished(self, queryset):\n \"\"\"Change the status of the event to published.\"\"\"\n self._change_published(queryset, PromotionRequest.FINISHED)\n\n mark_finished.short_description = \"Mark requests as finished\"\n\n def mark_published(self, queryset):\n \"\"\"Change the status of the event to published.\"\"\"\n self._change_published(queryset, PromotionRequest.PUBLISHED)\n\n mark_published.short_description = \"Mark requests as published\"\n\n @staticmethod\n def _change_published(queryset, status):\n queryset.update(status=status)\n\n\[email protected](PromotionChannel)\nclass PromotionChannelAdmin(ModelAdmin):\n pass\n", "path": "website/promotion/admin.py"}], "after_files": [{"content": "\"\"\"Registers admin interfaces for the models defined in this module.\"\"\"\nfrom django.contrib import admin\nfrom django.contrib.admin import ModelAdmin\n\nfrom promotion.forms import PromotionRequestForm\nfrom events.services import is_organiser\n\nfrom .models import PromotionChannel, PromotionRequest\n\n\[email protected](PromotionRequest)\nclass PromotionRequestAdmin(admin.ModelAdmin):\n \"\"\"This manages the admin interface for the model items.\"\"\"\n\n list_display = (\"event\", \"publish_date\", \"channel\", \"assigned_to\", \"status\")\n list_filter = (\n \"publish_date\",\n \"assigned_to\",\n \"status\",\n )\n date_hierarchy = \"publish_date\"\n form = PromotionRequestForm\n actions = [\"mark_not_started\", \"mark_started\", \"mark_finished\", \"mark_published\"]\n\n def has_change_permission(self, request, obj=None):\n if obj is not None and not is_organiser(request.member, obj.event):\n return False\n return super().has_change_permission(request, obj)\n\n def mark_not_started(self, request, queryset):\n \"\"\"Change the status of the event to published.\"\"\"\n self._change_published(queryset, PromotionRequest.NOT_STARTED)\n\n mark_not_started.short_description = \"Mark requests as not started\"\n\n def mark_started(self, request, queryset):\n \"\"\"Change the status of the event to published.\"\"\"\n self._change_published(queryset, PromotionRequest.STARTED)\n\n mark_started.short_description = \"Mark requests as started\"\n\n def mark_finished(self, request, queryset):\n \"\"\"Change the status of the event to published.\"\"\"\n self._change_published(queryset, PromotionRequest.FINISHED)\n\n mark_finished.short_description = \"Mark requests as finished\"\n\n def mark_published(self, request, queryset):\n \"\"\"Change the status of the event to published.\"\"\"\n self._change_published(queryset, PromotionRequest.PUBLISHED)\n\n mark_published.short_description = \"Mark requests as published\"\n\n @staticmethod\n def _change_published(queryset, status):\n queryset.update(status=status)\n\n\[email protected](PromotionChannel)\nclass PromotionChannelAdmin(ModelAdmin):\n pass\n", "path": "website/promotion/admin.py"}]} | 1,057 | 290 |
gh_patches_debug_13577 | rasdani/github-patches | git_diff | localstack__localstack-1397 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Service "s3" not yet available, retrying...
Hello there
After installing localstack and trying to start several services on my machine, s3 always failed to start.
The command I am using to start them up is:
`SERVICES=sqs,sns,s3,lambda DEBUG=1 localstack start`
(With `DEBUG=1` in place already for debugging)
First few lines of the output are:
```
2018-06-19T10:05:57:WARNING:infra.py: Service "s3" not yet available, retrying...
2018-06-19T10:06:00:WARNING:infra.py: Service "s3" not yet available, retrying...
2018-06-19T10:06:05:WARNING:infra.py: Service "s3" not yet available, retrying...
2018-06-19T10:06:08:WARNING:infra.py: Service "s3" not yet available, retrying...
2018-06-19T10:06:12:WARNING:infra.py: Service "s3" not yet available, retrying...
2018-06-19T10:06:15:WARNING:infra.py: Service "s3" not yet available, retrying...
2018-06-19T10:06:19:WARNING:infra.py: Service "s3" not yet available, retrying...
2018-06-19T10:06:22:ERROR:localstack.services.s3.s3_starter: S3 health check failed: An error occurred (ExpiredToken) when calling the AssumeRole operation: The security token included in the request is expired Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/localstack/services/infra.py", line 344, in check_infra
raise e
File "/usr/local/lib/python3.6/site-packages/localstack/services/infra.py", line 341, in check_infra
plugin.check(expect_shutdown=expect_shutdown, print_error=print_error)
File "/usr/local/lib/python3.6/site-packages/localstack/services/infra.py", line 80, in check
return self.check_function(expect_shutdown=expect_shutdown, print_error=print_error)
File "/usr/local/lib/python3.6/site-packages/localstack/services/s3/s3_starter.py", line 23, in check_s3
assert isinstance(out['Buckets'], list)
TypeError: 'NoneType' object is not subscriptable
During handling of the above exception, another exception occurred:
...
```
I have been trying to tackle this problem for a few hours already, without any success. I tried the latest version of localstack as well as 0.8.6.1, which works on another machine.
I am installing it through pip (`pip install localstack`).
Thanks for any help in advance!
--- END ISSUE ---
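One detail worth noting from the files below: the S3 backend is spawned as a subprocess, and the spawn command matters. Launching it with a bare `python` resolves the interpreter via `PATH`, which can pick a different Python (and a different set of installed packages or credential handling) than the one running localstack, and may explain environment-dependent failures like the `ExpiredToken` above. A hedged sketch of the difference, with an illustrative port number:

```python
import sys

backend_port = 4572  # hypothetical port, for illustration only

# Fragile: "python" is resolved via PATH and may be a different interpreter
# or virtualenv than the one currently running.
cmd_fragile = 'python "%s" s3 -p %s -H 0.0.0.0' % (__file__, backend_port)

# Robust: sys.executable is the interpreter executing this very process.
cmd_robust = '%s "%s" s3 -p %s -H 0.0.0.0' % (sys.executable, __file__, backend_port)
```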
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `localstack/services/s3/s3_starter.py`
Content:
```
1 import sys
2 import logging
3 import traceback
4 from moto.s3 import models as s3_models
5 from moto.server import main as moto_main
6 from localstack import config
7 from localstack.constants import DEFAULT_PORT_S3_BACKEND
8 from localstack.utils.aws import aws_stack
9 from localstack.utils.common import wait_for_port_open
10 from localstack.services.infra import (
11 get_service_protocol, start_proxy_for_service, do_run, setup_logging)
12
13 LOGGER = logging.getLogger(__name__)
14
15 # max file size for S3 objects (in MB)
16 S3_MAX_FILE_SIZE_MB = 128
17
18
19 def check_s3(expect_shutdown=False, print_error=False):
20 out = None
21 try:
22 # wait for port to be opened
23 wait_for_port_open(DEFAULT_PORT_S3_BACKEND)
24 # check S3
25 out = aws_stack.connect_to_service(service_name='s3').list_buckets()
26 except Exception as e:
27 if print_error:
28 LOGGER.error('S3 health check failed: %s %s' % (e, traceback.format_exc()))
29 if expect_shutdown:
30 assert out is None
31 else:
32 assert isinstance(out['Buckets'], list)
33
34
35 def start_s3(port=None, backend_port=None, asynchronous=None, update_listener=None):
36 port = port or config.PORT_S3
37 backend_port = DEFAULT_PORT_S3_BACKEND
38 cmd = 'python "%s" s3 -p %s -H 0.0.0.0' % (__file__, backend_port)
39 print('Starting mock S3 (%s port %s)...' % (get_service_protocol(), port))
40 start_proxy_for_service('s3', port, backend_port, update_listener)
41 env_vars = {'PYTHONPATH': ':'.join(sys.path)}
42 return do_run(cmd, asynchronous, env_vars=env_vars)
43
44
45 def apply_patches():
46 s3_models.DEFAULT_KEY_BUFFER_SIZE = S3_MAX_FILE_SIZE_MB * 1024 * 1024
47
48 def init(self, name, value, storage='STANDARD', etag=None, is_versioned=False, version_id=0, max_buffer_size=None):
49 return original_init(self, name, value, storage=storage, etag=etag, is_versioned=is_versioned,
50 version_id=version_id, max_buffer_size=s3_models.DEFAULT_KEY_BUFFER_SIZE)
51
52 original_init = s3_models.FakeKey.__init__
53 s3_models.FakeKey.__init__ = init
54
55
56 def main():
57 setup_logging()
58 # patch moto implementation
59 apply_patches()
60 # start API
61 sys.exit(moto_main())
62
63
64 if __name__ == '__main__':
65 main()
66
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/localstack/services/s3/s3_starter.py b/localstack/services/s3/s3_starter.py
--- a/localstack/services/s3/s3_starter.py
+++ b/localstack/services/s3/s3_starter.py
@@ -35,7 +35,7 @@
def start_s3(port=None, backend_port=None, asynchronous=None, update_listener=None):
port = port or config.PORT_S3
backend_port = DEFAULT_PORT_S3_BACKEND
- cmd = 'python "%s" s3 -p %s -H 0.0.0.0' % (__file__, backend_port)
+ cmd = '%s "%s" s3 -p %s -H 0.0.0.0' % (sys.executable, __file__, backend_port)
print('Starting mock S3 (%s port %s)...' % (get_service_protocol(), port))
start_proxy_for_service('s3', port, backend_port, update_listener)
env_vars = {'PYTHONPATH': ':'.join(sys.path)}
| {"golden_diff": "diff --git a/localstack/services/s3/s3_starter.py b/localstack/services/s3/s3_starter.py\n--- a/localstack/services/s3/s3_starter.py\n+++ b/localstack/services/s3/s3_starter.py\n@@ -35,7 +35,7 @@\n def start_s3(port=None, backend_port=None, asynchronous=None, update_listener=None):\n port = port or config.PORT_S3\n backend_port = DEFAULT_PORT_S3_BACKEND\n- cmd = 'python \"%s\" s3 -p %s -H 0.0.0.0' % (__file__, backend_port)\n+ cmd = '%s \"%s\" s3 -p %s -H 0.0.0.0' % (sys.executable, __file__, backend_port)\n print('Starting mock S3 (%s port %s)...' % (get_service_protocol(), port))\n start_proxy_for_service('s3', port, backend_port, update_listener)\n env_vars = {'PYTHONPATH': ':'.join(sys.path)}\n", "issue": "Service \"s3\" not yet available, retrying...\nHello there\n\nAfter installing localstack and trying to start several services on my machine, s3 always failed to start\nThe command I am using to start them up is\n\n`SERVICES=sqs,sns,s3,lambda DEBUG=1 localstack start`\n(With `DEBUG=1` in place already for debugging)\n\nFirst few lines of the output are:\n\n```\n2018-06-19T10:05:57:WARNING:infra.py: Service \"s3\" not yet available, retrying...\n2018-06-19T10:06:00:WARNING:infra.py: Service \"s3\" not yet available, retrying...\n2018-06-19T10:06:05:WARNING:infra.py: Service \"s3\" not yet available, retrying...\n2018-06-19T10:06:08:WARNING:infra.py: Service \"s3\" not yet available, retrying...\n2018-06-19T10:06:12:WARNING:infra.py: Service \"s3\" not yet available, retrying...\n2018-06-19T10:06:15:WARNING:infra.py: Service \"s3\" not yet available, retrying...\n2018-06-19T10:06:19:WARNING:infra.py: Service \"s3\" not yet available, retrying...\n2018-06-19T10:06:22:ERROR:localstack.services.s3.s3_starter: S3 health check failed: An error occurred (ExpiredToken) when calling the AssumeRole operation: The security token included in the request is expired Traceback (most recent call last):\n File \"/usr/local/lib/python3.6/site-packages/localstack/services/infra.py\", line 344, in check_infra\n raise e\n File \"/usr/local/lib/python3.6/site-packages/localstack/services/infra.py\", line 341, in check_infra\n plugin.check(expect_shutdown=expect_shutdown, print_error=print_error)\n File \"/usr/local/lib/python3.6/site-packages/localstack/services/infra.py\", line 80, in check\n return self.check_function(expect_shutdown=expect_shutdown, print_error=print_error)\n File \"/usr/local/lib/python3.6/site-packages/localstack/services/s3/s3_starter.py\", line 23, in check_s3\n assert isinstance(out['Buckets'], list)\nTypeError: 'NoneType' object is not subscriptable\n\nDuring handling of the above exception, another exception occurred:\n...\n```\n\nI have been trying to tackle this problem for a few hours already, without any success, I tried the latest verion of localstack as well as 0.8.6.1 which works on another machine\n\nI am installing it trough pip (`pip install localstack`)\n\nThanks for any help in advance!\n\n\n\n\u2506Issue is synchronized with this [Jira Bug](https://localstack.atlassian.net/browse/LOC-309) by [Unito](https://www.unito.io/learn-more)\n\n", "before_files": [{"content": "import sys\nimport logging\nimport traceback\nfrom moto.s3 import models as s3_models\nfrom moto.server import main as moto_main\nfrom localstack import config\nfrom localstack.constants import DEFAULT_PORT_S3_BACKEND\nfrom localstack.utils.aws import aws_stack\nfrom localstack.utils.common import wait_for_port_open\nfrom localstack.services.infra import (\n get_service_protocol, 
start_proxy_for_service, do_run, setup_logging)\n\nLOGGER = logging.getLogger(__name__)\n\n# max file size for S3 objects (in MB)\nS3_MAX_FILE_SIZE_MB = 128\n\n\ndef check_s3(expect_shutdown=False, print_error=False):\n out = None\n try:\n # wait for port to be opened\n wait_for_port_open(DEFAULT_PORT_S3_BACKEND)\n # check S3\n out = aws_stack.connect_to_service(service_name='s3').list_buckets()\n except Exception as e:\n if print_error:\n LOGGER.error('S3 health check failed: %s %s' % (e, traceback.format_exc()))\n if expect_shutdown:\n assert out is None\n else:\n assert isinstance(out['Buckets'], list)\n\n\ndef start_s3(port=None, backend_port=None, asynchronous=None, update_listener=None):\n port = port or config.PORT_S3\n backend_port = DEFAULT_PORT_S3_BACKEND\n cmd = 'python \"%s\" s3 -p %s -H 0.0.0.0' % (__file__, backend_port)\n print('Starting mock S3 (%s port %s)...' % (get_service_protocol(), port))\n start_proxy_for_service('s3', port, backend_port, update_listener)\n env_vars = {'PYTHONPATH': ':'.join(sys.path)}\n return do_run(cmd, asynchronous, env_vars=env_vars)\n\n\ndef apply_patches():\n s3_models.DEFAULT_KEY_BUFFER_SIZE = S3_MAX_FILE_SIZE_MB * 1024 * 1024\n\n def init(self, name, value, storage='STANDARD', etag=None, is_versioned=False, version_id=0, max_buffer_size=None):\n return original_init(self, name, value, storage=storage, etag=etag, is_versioned=is_versioned,\n version_id=version_id, max_buffer_size=s3_models.DEFAULT_KEY_BUFFER_SIZE)\n\n original_init = s3_models.FakeKey.__init__\n s3_models.FakeKey.__init__ = init\n\n\ndef main():\n setup_logging()\n # patch moto implementation\n apply_patches()\n # start API\n sys.exit(moto_main())\n\n\nif __name__ == '__main__':\n main()\n", "path": "localstack/services/s3/s3_starter.py"}], "after_files": [{"content": "import sys\nimport logging\nimport traceback\nfrom moto.s3 import models as s3_models\nfrom moto.server import main as moto_main\nfrom localstack import config\nfrom localstack.constants import DEFAULT_PORT_S3_BACKEND\nfrom localstack.utils.aws import aws_stack\nfrom localstack.utils.common import wait_for_port_open\nfrom localstack.services.infra import (\n get_service_protocol, start_proxy_for_service, do_run, setup_logging)\n\nLOGGER = logging.getLogger(__name__)\n\n# max file size for S3 objects (in MB)\nS3_MAX_FILE_SIZE_MB = 128\n\n\ndef check_s3(expect_shutdown=False, print_error=False):\n out = None\n try:\n # wait for port to be opened\n wait_for_port_open(DEFAULT_PORT_S3_BACKEND)\n # check S3\n out = aws_stack.connect_to_service(service_name='s3').list_buckets()\n except Exception as e:\n if print_error:\n LOGGER.error('S3 health check failed: %s %s' % (e, traceback.format_exc()))\n if expect_shutdown:\n assert out is None\n else:\n assert isinstance(out['Buckets'], list)\n\n\ndef start_s3(port=None, backend_port=None, asynchronous=None, update_listener=None):\n port = port or config.PORT_S3\n backend_port = DEFAULT_PORT_S3_BACKEND\n cmd = '%s \"%s\" s3 -p %s -H 0.0.0.0' % (sys.executable, __file__, backend_port)\n print('Starting mock S3 (%s port %s)...' 
% (get_service_protocol(), port))\n start_proxy_for_service('s3', port, backend_port, update_listener)\n env_vars = {'PYTHONPATH': ':'.join(sys.path)}\n return do_run(cmd, asynchronous, env_vars=env_vars)\n\n\ndef apply_patches():\n s3_models.DEFAULT_KEY_BUFFER_SIZE = S3_MAX_FILE_SIZE_MB * 1024 * 1024\n\n def init(self, name, value, storage='STANDARD', etag=None, is_versioned=False, version_id=0, max_buffer_size=None):\n return original_init(self, name, value, storage=storage, etag=etag, is_versioned=is_versioned,\n version_id=version_id, max_buffer_size=s3_models.DEFAULT_KEY_BUFFER_SIZE)\n\n original_init = s3_models.FakeKey.__init__\n s3_models.FakeKey.__init__ = init\n\n\ndef main():\n setup_logging()\n # patch moto implementation\n apply_patches()\n # start API\n sys.exit(moto_main())\n\n\nif __name__ == '__main__':\n main()\n", "path": "localstack/services/s3/s3_starter.py"}]} | 1,669 | 226 |
gh_patches_debug_6625 | rasdani/github-patches | git_diff | ray-project__ray-2784 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[xray] Users get no warning for infeasible tasks.
Start Ray with
```
RAY_USE_XRAY=1 ray start --head --redis-port=6379 --num-gpus=0
```
Then start `RAY_USE_XRAY=1 ipython` and run
```python
import ray
ray.init(redis_address='localhost:6379')
@ray.remote(num_gpus=1)
def f():
return 1
f.remote()
```
`f` will never execute because it is infeasible, and yet the user will get no warning.
--- END ISSUE ---
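For reference, the feasibility condition behind the expected warning: a task is infeasible when it requests more of some resource than the cluster can ever provide, here `num_gpus=1` against a cluster started with `--num-gpus=0`. A hedged sketch of that check follows; the dictionaries and names are illustrative, not Ray's internal scheduler API.

```python
# Hedged sketch; names are illustrative, not Ray internals.
cluster_resources = {"CPU": 4.0, "GPU": 0.0}   # started with --num-gpus=0
task_requirements = {"GPU": 1.0}               # @ray.remote(num_gpus=1)

infeasible = any(
    amount > cluster_resources.get(resource, 0.0)
    for resource, amount in task_requirements.items()
)
if infeasible:
    print("WARNING: task requires resources the cluster can never satisfy")
```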
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `python/ray/ray_constants.py`
Content:
```
1 from __future__ import absolute_import
2 from __future__ import division
3 from __future__ import print_function
4 """Ray constants used in the Python code."""
5
6 import os
7
8 import ray
9
10
11 def env_integer(key, default):
12 if key in os.environ:
13 return int(os.environ[key])
14 return default
15
16
17 ID_SIZE = 20
18 NIL_JOB_ID = ray.ObjectID(ID_SIZE * b"\x00")
19
20 # If a remote function or actor (or some other export) has serialized size
21 # greater than this quantity, print an warning.
22 PICKLE_OBJECT_WARNING_SIZE = 10**7
23
24 # The maximum resource quantity that is allowed. TODO(rkn): This could be
25 # relaxed, but the current implementation of the node manager will be slower
26 # for large resource quantities due to bookkeeping of specific resource IDs.
27 MAX_RESOURCE_QUANTITY = 512
28
29 # Different types of Ray errors that can be pushed to the driver.
30 # TODO(rkn): These should be defined in flatbuffers and must be synced with
31 # the existing C++ definitions.
32 WAIT_FOR_CLASS_PUSH_ERROR = "wait_for_class"
33 PICKLING_LARGE_OBJECT_PUSH_ERROR = "pickling_large_object"
34 WAIT_FOR_FUNCTION_PUSH_ERROR = "wait_for_function"
35 TASK_PUSH_ERROR = "task"
36 REGISTER_REMOTE_FUNCTION_PUSH_ERROR = "register_remote_function"
37 FUNCTION_TO_RUN_PUSH_ERROR = "function_to_run"
38 VERSION_MISMATCH_PUSH_ERROR = "version_mismatch"
39 CHECKPOINT_PUSH_ERROR = "checkpoint"
40 REGISTER_ACTOR_PUSH_ERROR = "register_actor"
41 WORKER_CRASH_PUSH_ERROR = "worker_crash"
42 WORKER_DIED_PUSH_ERROR = "worker_died"
43 PUT_RECONSTRUCTION_PUSH_ERROR = "put_reconstruction"
44 HASH_MISMATCH_PUSH_ERROR = "object_hash_mismatch"
45
46 # Abort autoscaling if more than this number of errors are encountered. This
47 # is a safety feature to prevent e.g. runaway node launches.
48 AUTOSCALER_MAX_NUM_FAILURES = env_integer("AUTOSCALER_MAX_NUM_FAILURES", 5)
49
50 # The maximum number of nodes to launch in a single request.
51 # Multiple requests may be made for this batch size, up to
52 # the limit of AUTOSCALER_MAX_CONCURRENT_LAUNCHES.
53 AUTOSCALER_MAX_LAUNCH_BATCH = env_integer("AUTOSCALER_MAX_LAUNCH_BATCH", 5)
54
55 # Max number of nodes to launch at a time.
56 AUTOSCALER_MAX_CONCURRENT_LAUNCHES = env_integer(
57 "AUTOSCALER_MAX_CONCURRENT_LAUNCHES", 10)
58
59 # Interval at which to perform autoscaling updates.
60 AUTOSCALER_UPDATE_INTERVAL_S = env_integer("AUTOSCALER_UPDATE_INTERVAL_S", 5)
61
62 # The autoscaler will attempt to restart Ray on nodes it hasn't heard from
63 # in more than this interval.
64 AUTOSCALER_HEARTBEAT_TIMEOUT_S = env_integer("AUTOSCALER_HEARTBEAT_TIMEOUT_S",
65 30)
66
67 # Max number of retries to AWS (default is 5, time increases exponentially)
68 BOTO_MAX_RETRIES = env_integer("BOTO_MAX_RETRIES", 12)
69
70 # Default logger format: only contains the message.
71 LOGGER_FORMAT = "%(message)s"
72 LOGGER_FORMAT_HELP = "The logging format. default='%(message)s'"
73 LOGGER_LEVEL = "info"
74 LOGGER_LEVEL_CHOICES = ['debug', 'info', 'warning', 'error', 'critical']
75 LOGGER_LEVEL_HELP = ("The logging level threshold, choices=['debug', 'info',"
76 " 'warning', 'error', 'critical'], default='info'")
77
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/python/ray/ray_constants.py b/python/ray/ray_constants.py
--- a/python/ray/ray_constants.py
+++ b/python/ray/ray_constants.py
@@ -42,6 +42,7 @@
WORKER_DIED_PUSH_ERROR = "worker_died"
PUT_RECONSTRUCTION_PUSH_ERROR = "put_reconstruction"
HASH_MISMATCH_PUSH_ERROR = "object_hash_mismatch"
+INFEASIBLE_TASK_ERROR = "infeasible_task"
# Abort autoscaling if more than this number of errors are encountered. This
# is a safety feature to prevent e.g. runaway node launches.
| {"golden_diff": "diff --git a/python/ray/ray_constants.py b/python/ray/ray_constants.py\n--- a/python/ray/ray_constants.py\n+++ b/python/ray/ray_constants.py\n@@ -42,6 +42,7 @@\n WORKER_DIED_PUSH_ERROR = \"worker_died\"\n PUT_RECONSTRUCTION_PUSH_ERROR = \"put_reconstruction\"\n HASH_MISMATCH_PUSH_ERROR = \"object_hash_mismatch\"\n+INFEASIBLE_TASK_ERROR = \"infeasible_task\"\n \n # Abort autoscaling if more than this number of errors are encountered. This\n # is a safety feature to prevent e.g. runaway node launches.\n", "issue": "[xray] Users get no warning for infeasible tasks.\nStart Ray with \r\n\r\n```\r\nRAY_USE_XRAY=1 ray start --head --redis-port=6379 --num-gpus=0\r\n```\r\n\r\nThen start `RAY_USE_XRAY=1 ipython` and run\r\n\r\n```python\r\nimport ray\r\n\r\nray.init(redis_address='localhost:6379')\r\n\r\[email protected](num_gpus=1)\r\ndef f():\r\n return 1\r\n\r\nf.remote()\r\n```\r\n\r\n`f` will never execute because it is infeasible, and yet the user will get no warning.\n", "before_files": [{"content": "from __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\"\"\"Ray constants used in the Python code.\"\"\"\n\nimport os\n\nimport ray\n\n\ndef env_integer(key, default):\n if key in os.environ:\n return int(os.environ[key])\n return default\n\n\nID_SIZE = 20\nNIL_JOB_ID = ray.ObjectID(ID_SIZE * b\"\\x00\")\n\n# If a remote function or actor (or some other export) has serialized size\n# greater than this quantity, print an warning.\nPICKLE_OBJECT_WARNING_SIZE = 10**7\n\n# The maximum resource quantity that is allowed. TODO(rkn): This could be\n# relaxed, but the current implementation of the node manager will be slower\n# for large resource quantities due to bookkeeping of specific resource IDs.\nMAX_RESOURCE_QUANTITY = 512\n\n# Different types of Ray errors that can be pushed to the driver.\n# TODO(rkn): These should be defined in flatbuffers and must be synced with\n# the existing C++ definitions.\nWAIT_FOR_CLASS_PUSH_ERROR = \"wait_for_class\"\nPICKLING_LARGE_OBJECT_PUSH_ERROR = \"pickling_large_object\"\nWAIT_FOR_FUNCTION_PUSH_ERROR = \"wait_for_function\"\nTASK_PUSH_ERROR = \"task\"\nREGISTER_REMOTE_FUNCTION_PUSH_ERROR = \"register_remote_function\"\nFUNCTION_TO_RUN_PUSH_ERROR = \"function_to_run\"\nVERSION_MISMATCH_PUSH_ERROR = \"version_mismatch\"\nCHECKPOINT_PUSH_ERROR = \"checkpoint\"\nREGISTER_ACTOR_PUSH_ERROR = \"register_actor\"\nWORKER_CRASH_PUSH_ERROR = \"worker_crash\"\nWORKER_DIED_PUSH_ERROR = \"worker_died\"\nPUT_RECONSTRUCTION_PUSH_ERROR = \"put_reconstruction\"\nHASH_MISMATCH_PUSH_ERROR = \"object_hash_mismatch\"\n\n# Abort autoscaling if more than this number of errors are encountered. This\n# is a safety feature to prevent e.g. 
runaway node launches.\nAUTOSCALER_MAX_NUM_FAILURES = env_integer(\"AUTOSCALER_MAX_NUM_FAILURES\", 5)\n\n# The maximum number of nodes to launch in a single request.\n# Multiple requests may be made for this batch size, up to\n# the limit of AUTOSCALER_MAX_CONCURRENT_LAUNCHES.\nAUTOSCALER_MAX_LAUNCH_BATCH = env_integer(\"AUTOSCALER_MAX_LAUNCH_BATCH\", 5)\n\n# Max number of nodes to launch at a time.\nAUTOSCALER_MAX_CONCURRENT_LAUNCHES = env_integer(\n \"AUTOSCALER_MAX_CONCURRENT_LAUNCHES\", 10)\n\n# Interval at which to perform autoscaling updates.\nAUTOSCALER_UPDATE_INTERVAL_S = env_integer(\"AUTOSCALER_UPDATE_INTERVAL_S\", 5)\n\n# The autoscaler will attempt to restart Ray on nodes it hasn't heard from\n# in more than this interval.\nAUTOSCALER_HEARTBEAT_TIMEOUT_S = env_integer(\"AUTOSCALER_HEARTBEAT_TIMEOUT_S\",\n 30)\n\n# Max number of retries to AWS (default is 5, time increases exponentially)\nBOTO_MAX_RETRIES = env_integer(\"BOTO_MAX_RETRIES\", 12)\n\n# Default logger format: only contains the message.\nLOGGER_FORMAT = \"%(message)s\"\nLOGGER_FORMAT_HELP = \"The logging format. default='%(message)s'\"\nLOGGER_LEVEL = \"info\"\nLOGGER_LEVEL_CHOICES = ['debug', 'info', 'warning', 'error', 'critical']\nLOGGER_LEVEL_HELP = (\"The logging level threshold, choices=['debug', 'info',\"\n \" 'warning', 'error', 'critical'], default='info'\")\n", "path": "python/ray/ray_constants.py"}], "after_files": [{"content": "from __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\"\"\"Ray constants used in the Python code.\"\"\"\n\nimport os\n\nimport ray\n\n\ndef env_integer(key, default):\n if key in os.environ:\n return int(os.environ[key])\n return default\n\n\nID_SIZE = 20\nNIL_JOB_ID = ray.ObjectID(ID_SIZE * b\"\\x00\")\n\n# If a remote function or actor (or some other export) has serialized size\n# greater than this quantity, print an warning.\nPICKLE_OBJECT_WARNING_SIZE = 10**7\n\n# The maximum resource quantity that is allowed. TODO(rkn): This could be\n# relaxed, but the current implementation of the node manager will be slower\n# for large resource quantities due to bookkeeping of specific resource IDs.\nMAX_RESOURCE_QUANTITY = 512\n\n# Different types of Ray errors that can be pushed to the driver.\n# TODO(rkn): These should be defined in flatbuffers and must be synced with\n# the existing C++ definitions.\nWAIT_FOR_CLASS_PUSH_ERROR = \"wait_for_class\"\nPICKLING_LARGE_OBJECT_PUSH_ERROR = \"pickling_large_object\"\nWAIT_FOR_FUNCTION_PUSH_ERROR = \"wait_for_function\"\nTASK_PUSH_ERROR = \"task\"\nREGISTER_REMOTE_FUNCTION_PUSH_ERROR = \"register_remote_function\"\nFUNCTION_TO_RUN_PUSH_ERROR = \"function_to_run\"\nVERSION_MISMATCH_PUSH_ERROR = \"version_mismatch\"\nCHECKPOINT_PUSH_ERROR = \"checkpoint\"\nREGISTER_ACTOR_PUSH_ERROR = \"register_actor\"\nWORKER_CRASH_PUSH_ERROR = \"worker_crash\"\nWORKER_DIED_PUSH_ERROR = \"worker_died\"\nPUT_RECONSTRUCTION_PUSH_ERROR = \"put_reconstruction\"\nHASH_MISMATCH_PUSH_ERROR = \"object_hash_mismatch\"\nINFEASIBLE_TASK_ERROR = \"infeasible_task\"\n\n# Abort autoscaling if more than this number of errors are encountered. This\n# is a safety feature to prevent e.g. 
runaway node launches.\nAUTOSCALER_MAX_NUM_FAILURES = env_integer(\"AUTOSCALER_MAX_NUM_FAILURES\", 5)\n\n# The maximum number of nodes to launch in a single request.\n# Multiple requests may be made for this batch size, up to\n# the limit of AUTOSCALER_MAX_CONCURRENT_LAUNCHES.\nAUTOSCALER_MAX_LAUNCH_BATCH = env_integer(\"AUTOSCALER_MAX_LAUNCH_BATCH\", 5)\n\n# Max number of nodes to launch at a time.\nAUTOSCALER_MAX_CONCURRENT_LAUNCHES = env_integer(\n \"AUTOSCALER_MAX_CONCURRENT_LAUNCHES\", 10)\n\n# Interval at which to perform autoscaling updates.\nAUTOSCALER_UPDATE_INTERVAL_S = env_integer(\"AUTOSCALER_UPDATE_INTERVAL_S\", 5)\n\n# The autoscaler will attempt to restart Ray on nodes it hasn't heard from\n# in more than this interval.\nAUTOSCALER_HEARTBEAT_TIMEOUT_S = env_integer(\"AUTOSCALER_HEARTBEAT_TIMEOUT_S\",\n 30)\n\n# Max number of retries to AWS (default is 5, time increases exponentially)\nBOTO_MAX_RETRIES = env_integer(\"BOTO_MAX_RETRIES\", 12)\n\n# Default logger format: only contains the message.\nLOGGER_FORMAT = \"%(message)s\"\nLOGGER_FORMAT_HELP = \"The logging format. default='%(message)s'\"\nLOGGER_LEVEL = \"info\"\nLOGGER_LEVEL_CHOICES = ['debug', 'info', 'warning', 'error', 'critical']\nLOGGER_LEVEL_HELP = (\"The logging level threshold, choices=['debug', 'info',\"\n \" 'warning', 'error', 'critical'], default='info'\")\n", "path": "python/ray/ray_constants.py"}]} | 1,281 | 133 |
gh_patches_debug_17784 | rasdani/github-patches | git_diff | lmfit__lmfit-py-150 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Cannot deploy to the PyPI repo due to tuples in the `setup.py` attributes
Due to a Python bug (http://bugs.python.org/issue19610), I cannot install and deploy lmfit with `python setup.py install`.
I discovered this issue while trying to fix #149.
--- END ISSUE ---
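The linked CPython bug concerns distutils mishandling tuple-valued metadata fields; passing plain lists for fields such as `requires` and `platforms` sidesteps it, which is exactly what the eventual fix does. A minimal sketch of the safe shape, with placeholder package metadata:

```python
from setuptools import setup

# Hedged sketch: name and version are placeholders. The point is that
# sequence-valued metadata fields are lists, not tuples, to avoid the
# distutils behaviour described in CPython issue 19610.
setup(
    name="example-package",
    version="0.1.0",
    requires=["numpy", "scipy"],                 # list, not ("numpy", "scipy")
    platforms=["Windows", "Linux", "Mac OS X"],  # list, not a tuple
)
```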
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2 # from distutils.core import setup
3 from setuptools import setup
4
5 import lmfit as lmfit
6 import numpy, scipy
7
8 long_desc = """A library for least-squares minimization and data fitting in
9 Python. Built on top of scipy.optimize, lmfit provides a Parameter object
10 which can be set as fixed or free, can have upper and/or lower bounds, or
11 can be written in terms of algebraic constraints of other Parameters. The
12 user writes a function to be minimized as a function of these Parameters,
13 and the scipy.optimize methods are used to find the optimal values for the
14 Parameters. The Levenberg-Marquardt (leastsq) is the default minimization
15 algorithm, and provides estimated standard errors and correlations between
16 varied Parameters. Other minimization methods, including Nelder-Mead's
17 downhill simplex, Powell's method, BFGS, Sequential Least Squares, and
18 others are also supported. Bounds and contraints can be placed on
19 Parameters for all of these methods.
20
21 In addition, methods for explicitly calculating confidence intervals are
22 provided for exploring minmization problems where the approximation of
23 estimating Parameter uncertainties from the covariance matrix is
24 questionable. """
25
26
27 setup(name = 'lmfit',
28 version = lmfit.__version__,
29 author = 'LMFit Development Team',
30 author_email = '[email protected]',
31 url = 'http://lmfit.github.io/lmfit-py/',
32 download_url = 'http://lmfit.github.io//lmfit-py/',
33 requires = ('numpy', 'scipy'),
34 license = 'BSD',
35 description = "Least-Squares Minimization with Bounds and Constraints",
36 long_description = long_desc,
37 platforms = ('Windows', 'Linux', 'Mac OS X'),
38 classifiers=['Intended Audience :: Science/Research',
39 'Operating System :: OS Independent',
40 'Programming Language :: Python',
41 'Topic :: Scientific/Engineering',
42 ],
43 # test_suite='nose.collector',
44 # test_requires=['Nose'],
45 package_dir = {'lmfit': 'lmfit'},
46 packages = ['lmfit', 'lmfit.ui', 'lmfit.uncertainties'],
47 )
48
49
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -30,11 +30,11 @@
author_email = '[email protected]',
url = 'http://lmfit.github.io/lmfit-py/',
download_url = 'http://lmfit.github.io//lmfit-py/',
- requires = ('numpy', 'scipy'),
+ requires = ['numpy', 'scipy'],
license = 'BSD',
description = "Least-Squares Minimization with Bounds and Constraints",
long_description = long_desc,
- platforms = ('Windows', 'Linux', 'Mac OS X'),
+ platforms = ['Windows', 'Linux', 'Mac OS X'],
classifiers=['Intended Audience :: Science/Research',
'Operating System :: OS Independent',
'Programming Language :: Python',
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -30,11 +30,11 @@\n author_email = '[email protected]',\n url = 'http://lmfit.github.io/lmfit-py/',\n download_url = 'http://lmfit.github.io//lmfit-py/',\n- requires = ('numpy', 'scipy'),\n+ requires = ['numpy', 'scipy'],\n license = 'BSD',\n description = \"Least-Squares Minimization with Bounds and Constraints\",\n long_description = long_desc,\n- platforms = ('Windows', 'Linux', 'Mac OS X'),\n+ platforms = ['Windows', 'Linux', 'Mac OS X'],\n classifiers=['Intended Audience :: Science/Research',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python',\n", "issue": "Cannot deploy to pypi repo dues to tuples in the `setup.py` attributes\nDue to a python-bug (http://bugs.python.org/issue19610) i cannot install and deploy lmfit with `python setup install`\n\nI discovered this issue while trying to fix #149 \n\n", "before_files": [{"content": "#!/usr/bin/env python\n# from distutils.core import setup\nfrom setuptools import setup\n\nimport lmfit as lmfit\nimport numpy, scipy\n\nlong_desc = \"\"\"A library for least-squares minimization and data fitting in\nPython. Built on top of scipy.optimize, lmfit provides a Parameter object\nwhich can be set as fixed or free, can have upper and/or lower bounds, or\ncan be written in terms of algebraic constraints of other Parameters. The\nuser writes a function to be minimized as a function of these Parameters,\nand the scipy.optimize methods are used to find the optimal values for the\nParameters. The Levenberg-Marquardt (leastsq) is the default minimization\nalgorithm, and provides estimated standard errors and correlations between\nvaried Parameters. Other minimization methods, including Nelder-Mead's\ndownhill simplex, Powell's method, BFGS, Sequential Least Squares, and\nothers are also supported. Bounds and contraints can be placed on\nParameters for all of these methods.\n\nIn addition, methods for explicitly calculating confidence intervals are\nprovided for exploring minmization problems where the approximation of\nestimating Parameter uncertainties from the covariance matrix is\nquestionable. \"\"\"\n\n\nsetup(name = 'lmfit',\n version = lmfit.__version__,\n author = 'LMFit Development Team',\n author_email = '[email protected]',\n url = 'http://lmfit.github.io/lmfit-py/',\n download_url = 'http://lmfit.github.io//lmfit-py/',\n requires = ('numpy', 'scipy'),\n license = 'BSD',\n description = \"Least-Squares Minimization with Bounds and Constraints\",\n long_description = long_desc,\n platforms = ('Windows', 'Linux', 'Mac OS X'),\n classifiers=['Intended Audience :: Science/Research',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python',\n 'Topic :: Scientific/Engineering',\n ],\n # test_suite='nose.collector',\n # test_requires=['Nose'],\n package_dir = {'lmfit': 'lmfit'},\n packages = ['lmfit', 'lmfit.ui', 'lmfit.uncertainties'],\n )\n\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# from distutils.core import setup\nfrom setuptools import setup\n\nimport lmfit as lmfit\nimport numpy, scipy\n\nlong_desc = \"\"\"A library for least-squares minimization and data fitting in\nPython. Built on top of scipy.optimize, lmfit provides a Parameter object\nwhich can be set as fixed or free, can have upper and/or lower bounds, or\ncan be written in terms of algebraic constraints of other Parameters. 
The\nuser writes a function to be minimized as a function of these Parameters,\nand the scipy.optimize methods are used to find the optimal values for the\nParameters. The Levenberg-Marquardt (leastsq) is the default minimization\nalgorithm, and provides estimated standard errors and correlations between\nvaried Parameters. Other minimization methods, including Nelder-Mead's\ndownhill simplex, Powell's method, BFGS, Sequential Least Squares, and\nothers are also supported. Bounds and contraints can be placed on\nParameters for all of these methods.\n\nIn addition, methods for explicitly calculating confidence intervals are\nprovided for exploring minmization problems where the approximation of\nestimating Parameter uncertainties from the covariance matrix is\nquestionable. \"\"\"\n\n\nsetup(name = 'lmfit',\n version = lmfit.__version__,\n author = 'LMFit Development Team',\n author_email = '[email protected]',\n url = 'http://lmfit.github.io/lmfit-py/',\n download_url = 'http://lmfit.github.io//lmfit-py/',\n requires = ['numpy', 'scipy'],\n license = 'BSD',\n description = \"Least-Squares Minimization with Bounds and Constraints\",\n long_description = long_desc,\n platforms = ['Windows', 'Linux', 'Mac OS X'],\n classifiers=['Intended Audience :: Science/Research',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python',\n 'Topic :: Scientific/Engineering',\n ],\n # test_suite='nose.collector',\n # test_requires=['Nose'],\n package_dir = {'lmfit': 'lmfit'},\n packages = ['lmfit', 'lmfit.ui', 'lmfit.uncertainties'],\n )\n\n", "path": "setup.py"}]} | 878 | 185 |
gh_patches_debug_22010 | rasdani/github-patches | git_diff | ckan__ckan-561 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Postgresql 8.4 error when running paster db init
When running the paster db init command with the CKAN 2.0 beta, an error is encountered that appears to be related to the use of the left() string function in ckan/migration/versions/067_turn_extras_to_strings.py. According to the documentation and my own simple test, this function is not supported in Postgresql 8.4. For a stack trace, see: https://gist.github.com/thriuin/5067819.
Is there a new minimum version of Postgresql required -- documentation still says 8.4, which unfortunately is what comes with Red Hat Enterprise.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ckan/migration/versions/067_turn_extras_to_strings.py`
Content:
```
1 import json
2
3 def upgrade(migrate_engine):
4
5 with migrate_engine.begin() as connection:
6 tables = 'package_extra group_extra'
7 revision_tables = 'package_extra_revision group_extra_revision'
8
9 for table in tables.split():
10 sql = """select id, value from {table} where left(value,1) = '"' """.format(table=table)
11 results = connection.execute(sql)
12 for result in results:
13 id, value = result
14 update_sql = 'update {table} set value = %s where id = %s'
15 connection.execute(update_sql.format(table=table),
16 json.loads(value), id)
17
18 for table in revision_tables.split():
19 sql = """select id, revision_id, value from {table} where left(value,1) = '"' """.format(table=table)
20
21 results = connection.execute(sql)
22 for result in results:
23 id, revision_id, value = result
24 update_sql = 'update {table} set value = %s where id = %s and revision_id = %s'
25 connection.execute(update_sql.format(table=table),
26 json.loads(value), id, revision_id)
27
28
29
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ckan/migration/versions/067_turn_extras_to_strings.py b/ckan/migration/versions/067_turn_extras_to_strings.py
--- a/ckan/migration/versions/067_turn_extras_to_strings.py
+++ b/ckan/migration/versions/067_turn_extras_to_strings.py
@@ -7,7 +7,7 @@
revision_tables = 'package_extra_revision group_extra_revision'
for table in tables.split():
- sql = """select id, value from {table} where left(value,1) = '"' """.format(table=table)
+ sql = """select id, value from {table} where substr(value,0,1) = '"' """.format(table=table)
results = connection.execute(sql)
for result in results:
id, value = result
@@ -16,7 +16,7 @@
json.loads(value), id)
for table in revision_tables.split():
- sql = """select id, revision_id, value from {table} where left(value,1) = '"' """.format(table=table)
+ sql = """select id, revision_id, value from {table} where substr(value,0,1) = '"' """.format(table=table)
results = connection.execute(sql)
for result in results:
| {"golden_diff": "diff --git a/ckan/migration/versions/067_turn_extras_to_strings.py b/ckan/migration/versions/067_turn_extras_to_strings.py\n--- a/ckan/migration/versions/067_turn_extras_to_strings.py\n+++ b/ckan/migration/versions/067_turn_extras_to_strings.py\n@@ -7,7 +7,7 @@\n revision_tables = 'package_extra_revision group_extra_revision'\n \n for table in tables.split():\n- sql = \"\"\"select id, value from {table} where left(value,1) = '\"' \"\"\".format(table=table)\n+ sql = \"\"\"select id, value from {table} where substr(value,0,1) = '\"' \"\"\".format(table=table)\n results = connection.execute(sql)\n for result in results:\n id, value = result\n@@ -16,7 +16,7 @@\n json.loads(value), id)\n \n for table in revision_tables.split():\n- sql = \"\"\"select id, revision_id, value from {table} where left(value,1) = '\"' \"\"\".format(table=table)\n+ sql = \"\"\"select id, revision_id, value from {table} where substr(value,0,1) = '\"' \"\"\".format(table=table)\n \n results = connection.execute(sql)\n for result in results:\n", "issue": "Postgresql 8.4 error when running paster db init\nWhen running the paster db init command with the CKAN 2.0 beta, there is an error encountered that appears to be related to use of the left() string function in ckan/migration/versions/067_turn_extras_to_strings.py. According to the documentation and my own simple test, this function is not support in Postgresql 8.4. For a stack trace, see: https://gist.github.com/thriuin/5067819.\n\nIs there a new minimum version of Postgresql required -- documentation still says 8.4 which unfortunately is what comes with RedHat Enterprise.\n\n", "before_files": [{"content": "import json\n\ndef upgrade(migrate_engine):\n\n with migrate_engine.begin() as connection:\n tables = 'package_extra group_extra'\n revision_tables = 'package_extra_revision group_extra_revision'\n\n for table in tables.split():\n sql = \"\"\"select id, value from {table} where left(value,1) = '\"' \"\"\".format(table=table)\n results = connection.execute(sql)\n for result in results:\n id, value = result\n update_sql = 'update {table} set value = %s where id = %s'\n connection.execute(update_sql.format(table=table),\n json.loads(value), id)\n\n for table in revision_tables.split():\n sql = \"\"\"select id, revision_id, value from {table} where left(value,1) = '\"' \"\"\".format(table=table)\n\n results = connection.execute(sql)\n for result in results:\n id, revision_id, value = result\n update_sql = 'update {table} set value = %s where id = %s and revision_id = %s'\n connection.execute(update_sql.format(table=table),\n json.loads(value), id, revision_id)\n\n\n", "path": "ckan/migration/versions/067_turn_extras_to_strings.py"}], "after_files": [{"content": "import json\n\ndef upgrade(migrate_engine):\n\n with migrate_engine.begin() as connection:\n tables = 'package_extra group_extra'\n revision_tables = 'package_extra_revision group_extra_revision'\n\n for table in tables.split():\n sql = \"\"\"select id, value from {table} where substr(value,0,1) = '\"' \"\"\".format(table=table)\n results = connection.execute(sql)\n for result in results:\n id, value = result\n update_sql = 'update {table} set value = %s where id = %s'\n connection.execute(update_sql.format(table=table),\n json.loads(value), id)\n\n for table in revision_tables.split():\n sql = \"\"\"select id, revision_id, value from {table} where substr(value,0,1) = '\"' \"\"\".format(table=table)\n\n results = connection.execute(sql)\n for result in results:\n id, revision_id, value = 
result\n update_sql = 'update {table} set value = %s where id = %s and revision_id = %s'\n connection.execute(update_sql.format(table=table),\n json.loads(value), id, revision_id)\n\n\n", "path": "ckan/migration/versions/067_turn_extras_to_strings.py"}]} | 704 | 290 |
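Why the substitution in this record helps: PostgreSQL only gained the `left()` string function in 9.1, while `substr()` has been available much longer, including in 8.4. A standalone sketch of the same first-character check (the connection URL is a placeholder; note that PostgreSQL string positions are 1-based, so `substr(value, 1, 1)` reads the first character):

```python
from sqlalchemy import create_engine, text

# Placeholder connection URL, not CKAN's real configuration.
engine = create_engine("postgresql:///ckan_test")

with engine.begin() as connection:
    # substr() is available on PostgreSQL 8.4; left() only arrived in 9.1.
    sql = text("""select id, value from package_extra
                  where substr(value, 1, 1) = '"'""")
    for id_, value in connection.execute(sql):
        print(id_, value)
```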
gh_patches_debug_5351 | rasdani/github-patches | git_diff | coala__coala-2795 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Make exception tracebacks default
Instead of asking the user to run coala with `-L DEBUG`
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `coalib/output/printers/LogPrinter.py`
Content:
```
1 import traceback
2
3 from pyprint.ColorPrinter import ColorPrinter
4
5 from coalib.output.printers.LOG_LEVEL import LOG_LEVEL, LOG_LEVEL_COLORS
6 from coalib.processes.communication.LogMessage import LogMessage
7
8
9 class LogPrinter:
10 """
11 The LogPrinter class allows to print log messages to an underlying Printer.
12
13 This class is an adapter, means you can create a LogPrinter from every
14 existing Printer instance.
15 """
16
17 def __init__(self,
18 printer,
19 log_level=LOG_LEVEL.INFO,
20 timestamp_format="%X"):
21 """
22 Creates a new log printer from an existing Printer.
23
24 :param printer: The underlying Printer where log messages
25 shall be written to. If you inherit from
26 LogPrinter, set it to self.
27 :param log_level: The minimum log level, everything below will
28 not be logged.
29 :param timestamp_format: The format string for the
30 datetime.today().strftime(format) method.
31 """
32 self._printer = printer
33 self.log_level = log_level
34 self.timestamp_format = timestamp_format
35
36 @property
37 def printer(self):
38 """
39 Returns the underlying printer where logs are printed to.
40 """
41 return self._printer
42
43 def _get_log_prefix(self, log_level, timestamp):
44 datetime_string = timestamp.strftime(self.timestamp_format)
45
46 if datetime_string != "":
47 datetime_string = "[" + datetime_string + "]"
48
49 return '[{}]{}'.format(LOG_LEVEL.reverse.get(log_level, "ERROR"),
50 datetime_string)
51
52 def debug(self, *messages, delimiter=" ", timestamp=None, **kwargs):
53 self.log_message(LogMessage(LOG_LEVEL.DEBUG,
54 *messages,
55 delimiter=delimiter,
56 timestamp=timestamp),
57 **kwargs)
58
59 def info(self, *messages, delimiter=" ", timestamp=None, **kwargs):
60 self.log_message(LogMessage(LOG_LEVEL.INFO,
61 *messages,
62 delimiter=delimiter,
63 timestamp=timestamp),
64 **kwargs)
65
66 def warn(self, *messages, delimiter=" ", timestamp=None, **kwargs):
67 self.log_message(LogMessage(LOG_LEVEL.WARNING,
68 *messages,
69 delimiter=delimiter,
70 timestamp=timestamp),
71 **kwargs)
72
73 def err(self, *messages, delimiter=" ", timestamp=None, **kwargs):
74 self.log_message(LogMessage(LOG_LEVEL.ERROR,
75 *messages,
76 delimiter=delimiter,
77 timestamp=timestamp),
78 **kwargs)
79
80 def log(self, log_level, message, timestamp=None, **kwargs):
81 self.log_message(LogMessage(log_level,
82 message,
83 timestamp=timestamp),
84 **kwargs)
85
86 def log_exception(self,
87 message,
88 exception,
89 log_level=LOG_LEVEL.ERROR,
90 timestamp=None,
91 **kwargs):
92 """
93 If the log_level of the printer is greater than DEBUG, it prints
94 only the message. If it is DEBUG or lower, it shows the message
95 along with the traceback of the exception.
96
97 :param message: The message to print.
98 :param exception: The exception to print.
99 :param log_level: The log_level of this message (not used when
100 logging the traceback. Tracebacks always have
101 a level of DEBUG).
102 :param timestamp: The time at which this log occurred. Defaults to
103 the current time.
104 :param kwargs: Keyword arguments to be passed when logging the
105 message (not used when logging the traceback).
106 """
107 if not isinstance(exception, BaseException):
108 raise TypeError("log_exception can only log derivatives of "
109 "BaseException.")
110
111 traceback_str = "\n".join(
112 traceback.format_exception(type(exception),
113 exception,
114 exception.__traceback__))
115
116 self.log(log_level, message, timestamp=timestamp, **kwargs)
117 self.log_message(
118 LogMessage(LOG_LEVEL.DEBUG,
119 "Exception was:" + "\n" + traceback_str,
120 timestamp=timestamp),
121 **kwargs)
122
123 def log_message(self, log_message, **kwargs):
124 if not isinstance(log_message, LogMessage):
125 raise TypeError("log_message should be of type LogMessage.")
126
127 if log_message.log_level < self.log_level:
128 return
129
130 self._print_log_message(
131 self._get_log_prefix(log_message.log_level, log_message.timestamp),
132 log_message,
133 **kwargs)
134
135 def _print_log_message(self, prefix, log_message, **kwargs):
136 """
137 Override this if you want to influence how the log message is printed.
138
139 If the underlying printer is a ColorPrinter, then colored logging is
140 used. You can turn it off in the underlying ColorPrinter if you want to
141 print uncolored.
142
143 :param prefix: The prefix to print (as string).
144 :param log_message: The LogMessage object to print.
145 :param kwargs: Any other keyword arguments.
146 """
147 if isinstance(self._printer, ColorPrinter):
148 self.printer.print(prefix,
149 end=" ",
150 color=LOG_LEVEL_COLORS[log_message.log_level],
151 **kwargs)
152 self.printer.print(log_message.message, **kwargs)
153 else:
154 self.printer.print(prefix, log_message.message, **kwargs)
155
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/coalib/output/printers/LogPrinter.py b/coalib/output/printers/LogPrinter.py
--- a/coalib/output/printers/LogPrinter.py
+++ b/coalib/output/printers/LogPrinter.py
@@ -115,7 +115,7 @@
self.log(log_level, message, timestamp=timestamp, **kwargs)
self.log_message(
- LogMessage(LOG_LEVEL.DEBUG,
+ LogMessage(LOG_LEVEL.INFO,
"Exception was:" + "\n" + traceback_str,
timestamp=timestamp),
**kwargs)
| {"golden_diff": "diff --git a/coalib/output/printers/LogPrinter.py b/coalib/output/printers/LogPrinter.py\n--- a/coalib/output/printers/LogPrinter.py\n+++ b/coalib/output/printers/LogPrinter.py\n@@ -115,7 +115,7 @@\n \n self.log(log_level, message, timestamp=timestamp, **kwargs)\n self.log_message(\n- LogMessage(LOG_LEVEL.DEBUG,\n+ LogMessage(LOG_LEVEL.INFO,\n \"Exception was:\" + \"\\n\" + traceback_str,\n timestamp=timestamp),\n **kwargs)\n", "issue": "Make exception tracebacks default\nInstead of asking the user to run coala with `-L DEBUG`\n\n", "before_files": [{"content": "import traceback\n\nfrom pyprint.ColorPrinter import ColorPrinter\n\nfrom coalib.output.printers.LOG_LEVEL import LOG_LEVEL, LOG_LEVEL_COLORS\nfrom coalib.processes.communication.LogMessage import LogMessage\n\n\nclass LogPrinter:\n \"\"\"\n The LogPrinter class allows to print log messages to an underlying Printer.\n\n This class is an adapter, means you can create a LogPrinter from every\n existing Printer instance.\n \"\"\"\n\n def __init__(self,\n printer,\n log_level=LOG_LEVEL.INFO,\n timestamp_format=\"%X\"):\n \"\"\"\n Creates a new log printer from an existing Printer.\n\n :param printer: The underlying Printer where log messages\n shall be written to. If you inherit from\n LogPrinter, set it to self.\n :param log_level: The minimum log level, everything below will\n not be logged.\n :param timestamp_format: The format string for the\n datetime.today().strftime(format) method.\n \"\"\"\n self._printer = printer\n self.log_level = log_level\n self.timestamp_format = timestamp_format\n\n @property\n def printer(self):\n \"\"\"\n Returns the underlying printer where logs are printed to.\n \"\"\"\n return self._printer\n\n def _get_log_prefix(self, log_level, timestamp):\n datetime_string = timestamp.strftime(self.timestamp_format)\n\n if datetime_string != \"\":\n datetime_string = \"[\" + datetime_string + \"]\"\n\n return '[{}]{}'.format(LOG_LEVEL.reverse.get(log_level, \"ERROR\"),\n datetime_string)\n\n def debug(self, *messages, delimiter=\" \", timestamp=None, **kwargs):\n self.log_message(LogMessage(LOG_LEVEL.DEBUG,\n *messages,\n delimiter=delimiter,\n timestamp=timestamp),\n **kwargs)\n\n def info(self, *messages, delimiter=\" \", timestamp=None, **kwargs):\n self.log_message(LogMessage(LOG_LEVEL.INFO,\n *messages,\n delimiter=delimiter,\n timestamp=timestamp),\n **kwargs)\n\n def warn(self, *messages, delimiter=\" \", timestamp=None, **kwargs):\n self.log_message(LogMessage(LOG_LEVEL.WARNING,\n *messages,\n delimiter=delimiter,\n timestamp=timestamp),\n **kwargs)\n\n def err(self, *messages, delimiter=\" \", timestamp=None, **kwargs):\n self.log_message(LogMessage(LOG_LEVEL.ERROR,\n *messages,\n delimiter=delimiter,\n timestamp=timestamp),\n **kwargs)\n\n def log(self, log_level, message, timestamp=None, **kwargs):\n self.log_message(LogMessage(log_level,\n message,\n timestamp=timestamp),\n **kwargs)\n\n def log_exception(self,\n message,\n exception,\n log_level=LOG_LEVEL.ERROR,\n timestamp=None,\n **kwargs):\n \"\"\"\n If the log_level of the printer is greater than DEBUG, it prints\n only the message. If it is DEBUG or lower, it shows the message\n along with the traceback of the exception.\n\n :param message: The message to print.\n :param exception: The exception to print.\n :param log_level: The log_level of this message (not used when\n logging the traceback. Tracebacks always have\n a level of DEBUG).\n :param timestamp: The time at which this log occurred. 
Defaults to\n the current time.\n :param kwargs: Keyword arguments to be passed when logging the\n message (not used when logging the traceback).\n \"\"\"\n if not isinstance(exception, BaseException):\n raise TypeError(\"log_exception can only log derivatives of \"\n \"BaseException.\")\n\n traceback_str = \"\\n\".join(\n traceback.format_exception(type(exception),\n exception,\n exception.__traceback__))\n\n self.log(log_level, message, timestamp=timestamp, **kwargs)\n self.log_message(\n LogMessage(LOG_LEVEL.DEBUG,\n \"Exception was:\" + \"\\n\" + traceback_str,\n timestamp=timestamp),\n **kwargs)\n\n def log_message(self, log_message, **kwargs):\n if not isinstance(log_message, LogMessage):\n raise TypeError(\"log_message should be of type LogMessage.\")\n\n if log_message.log_level < self.log_level:\n return\n\n self._print_log_message(\n self._get_log_prefix(log_message.log_level, log_message.timestamp),\n log_message,\n **kwargs)\n\n def _print_log_message(self, prefix, log_message, **kwargs):\n \"\"\"\n Override this if you want to influence how the log message is printed.\n\n If the underlying printer is a ColorPrinter, then colored logging is\n used. You can turn it off in the underlying ColorPrinter if you want to\n print uncolored.\n\n :param prefix: The prefix to print (as string).\n :param log_message: The LogMessage object to print.\n :param kwargs: Any other keyword arguments.\n \"\"\"\n if isinstance(self._printer, ColorPrinter):\n self.printer.print(prefix,\n end=\" \",\n color=LOG_LEVEL_COLORS[log_message.log_level],\n **kwargs)\n self.printer.print(log_message.message, **kwargs)\n else:\n self.printer.print(prefix, log_message.message, **kwargs)\n", "path": "coalib/output/printers/LogPrinter.py"}], "after_files": [{"content": "import traceback\n\nfrom pyprint.ColorPrinter import ColorPrinter\n\nfrom coalib.output.printers.LOG_LEVEL import LOG_LEVEL, LOG_LEVEL_COLORS\nfrom coalib.processes.communication.LogMessage import LogMessage\n\n\nclass LogPrinter:\n \"\"\"\n The LogPrinter class allows to print log messages to an underlying Printer.\n\n This class is an adapter, means you can create a LogPrinter from every\n existing Printer instance.\n \"\"\"\n\n def __init__(self,\n printer,\n log_level=LOG_LEVEL.INFO,\n timestamp_format=\"%X\"):\n \"\"\"\n Creates a new log printer from an existing Printer.\n\n :param printer: The underlying Printer where log messages\n shall be written to. 
If you inherit from\n LogPrinter, set it to self.\n :param log_level: The minimum log level, everything below will\n not be logged.\n :param timestamp_format: The format string for the\n datetime.today().strftime(format) method.\n \"\"\"\n self._printer = printer\n self.log_level = log_level\n self.timestamp_format = timestamp_format\n\n @property\n def printer(self):\n \"\"\"\n Returns the underlying printer where logs are printed to.\n \"\"\"\n return self._printer\n\n def _get_log_prefix(self, log_level, timestamp):\n datetime_string = timestamp.strftime(self.timestamp_format)\n\n if datetime_string != \"\":\n datetime_string = \"[\" + datetime_string + \"]\"\n\n return '[{}]{}'.format(LOG_LEVEL.reverse.get(log_level, \"ERROR\"),\n datetime_string)\n\n def debug(self, *messages, delimiter=\" \", timestamp=None, **kwargs):\n self.log_message(LogMessage(LOG_LEVEL.DEBUG,\n *messages,\n delimiter=delimiter,\n timestamp=timestamp),\n **kwargs)\n\n def info(self, *messages, delimiter=\" \", timestamp=None, **kwargs):\n self.log_message(LogMessage(LOG_LEVEL.INFO,\n *messages,\n delimiter=delimiter,\n timestamp=timestamp),\n **kwargs)\n\n def warn(self, *messages, delimiter=\" \", timestamp=None, **kwargs):\n self.log_message(LogMessage(LOG_LEVEL.WARNING,\n *messages,\n delimiter=delimiter,\n timestamp=timestamp),\n **kwargs)\n\n def err(self, *messages, delimiter=\" \", timestamp=None, **kwargs):\n self.log_message(LogMessage(LOG_LEVEL.ERROR,\n *messages,\n delimiter=delimiter,\n timestamp=timestamp),\n **kwargs)\n\n def log(self, log_level, message, timestamp=None, **kwargs):\n self.log_message(LogMessage(log_level,\n message,\n timestamp=timestamp),\n **kwargs)\n\n def log_exception(self,\n message,\n exception,\n log_level=LOG_LEVEL.ERROR,\n timestamp=None,\n **kwargs):\n \"\"\"\n If the log_level of the printer is greater than DEBUG, it prints\n only the message. If it is DEBUG or lower, it shows the message\n along with the traceback of the exception.\n\n :param message: The message to print.\n :param exception: The exception to print.\n :param log_level: The log_level of this message (not used when\n logging the traceback. Tracebacks always have\n a level of DEBUG).\n :param timestamp: The time at which this log occurred. Defaults to\n the current time.\n :param kwargs: Keyword arguments to be passed when logging the\n message (not used when logging the traceback).\n \"\"\"\n if not isinstance(exception, BaseException):\n raise TypeError(\"log_exception can only log derivatives of \"\n \"BaseException.\")\n\n traceback_str = \"\\n\".join(\n traceback.format_exception(type(exception),\n exception,\n exception.__traceback__))\n\n self.log(log_level, message, timestamp=timestamp, **kwargs)\n self.log_message(\n LogMessage(LOG_LEVEL.INFO,\n \"Exception was:\" + \"\\n\" + traceback_str,\n timestamp=timestamp),\n **kwargs)\n\n def log_message(self, log_message, **kwargs):\n if not isinstance(log_message, LogMessage):\n raise TypeError(\"log_message should be of type LogMessage.\")\n\n if log_message.log_level < self.log_level:\n return\n\n self._print_log_message(\n self._get_log_prefix(log_message.log_level, log_message.timestamp),\n log_message,\n **kwargs)\n\n def _print_log_message(self, prefix, log_message, **kwargs):\n \"\"\"\n Override this if you want to influence how the log message is printed.\n\n If the underlying printer is a ColorPrinter, then colored logging is\n used. 
You can turn it off in the underlying ColorPrinter if you want to\n print uncolored.\n\n :param prefix: The prefix to print (as string).\n :param log_message: The LogMessage object to print.\n :param kwargs: Any other keyword arguments.\n \"\"\"\n if isinstance(self._printer, ColorPrinter):\n self.printer.print(prefix,\n end=\" \",\n color=LOG_LEVEL_COLORS[log_message.log_level],\n **kwargs)\n self.printer.print(log_message.message, **kwargs)\n else:\n self.printer.print(prefix, log_message.message, **kwargs)\n", "path": "coalib/output/printers/LogPrinter.py"}]} | 1,727 | 125 |
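The one-line change in this record works because `LogPrinter.log_message()` discards anything below the printer's `log_level`, which defaults to `LOG_LEVEL.INFO`; logging the traceback at `INFO` rather than `DEBUG` therefore makes it visible without `-L DEBUG`. A self-contained sketch of that filtering rule (the numeric level values are illustrative, not coala's actual constants):

```python
DEBUG, INFO, WARNING, ERROR = 0, 1, 2, 3  # illustrative ordering only

class TinyLogPrinter:
    def __init__(self, log_level=INFO):  # INFO is the default threshold
        self.log_level = log_level

    def log(self, level, message):
        if level < self.log_level:  # same gate as LogPrinter.log_message()
            return
        print(message)

printer = TinyLogPrinter()
printer.log(DEBUG, "traceback at DEBUG: hidden by default")
printer.log(INFO, "traceback at INFO: shown by default")
```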
gh_patches_debug_21305 | rasdani/github-patches | git_diff | pre-commit__pre-commit-335 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Latest virtualenv breaks pre-commit
See also #299
Failure looks like:
```
17:00:19 hookid: sort-simple-yaml
17:00:19
17:00:19 bash: /nail/home/push/.pre-commit/reposkzFrD//tmp/tmp.cEk6TCoZOS/srv-configs/py_env-default/bin/activate: No such file or directory
```
```
$ pip install virtualenv --upgrade
Downloading/unpacking virtualenv
Downloading virtualenv-14.0.0-py2.py3-none-any.whl (1.8MB): 1.8MB downloaded
Installing collected packages: virtualenv
Successfully installed virtualenv
Cleaning up...
$ python
Python 2.6.7 (r267:88850, Dec 2 2011, 20:27:26)
[GCC 4.4.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import virtualenv
>>> virtualenv.path_locations('foo')
('/nail/home/asottile/foo', '/nail/home/asottile/foo/lib/python2.6', '/nail/home/asottile/foo/include/python2.6', '/nail/home/asottile/foo/bin')
>>>
$ pip install virtualenv==1.11.5
Downloading/unpacking virtualenv==1.11.5
Downloading virtualenv-1.11.5.tar.gz (1.8MB): 1.8MB downloaded
Running setup.py (path:/nail/home/asottile/venv/build/virtualenv/setup.py) egg_info for package virtualenv
warning: no previously-included files matching '*' found under directory 'docs/_templates'
warning: no previously-included files matching '*' found under directory 'docs/_build'
Installing collected packages: virtualenv
Found existing installation: virtualenv 14.0.0
Uninstalling virtualenv:
Successfully uninstalled virtualenv
Running setup.py install for virtualenv
warning: no previously-included files matching '*' found under directory 'docs/_templates'
warning: no previously-included files matching '*' found under directory 'docs/_build'
Installing virtualenv script to /nail/home/asottile/venv/bin
Installing virtualenv-2.6 script to /nail/home/asottile/venv/bin
Successfully installed virtualenv
Cleaning up...
$ python
Python 2.6.7 (r267:88850, Dec 2 2011, 20:27:26)
[GCC 4.4.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import virtualenv
>>> virtualenv.path_locations('foo')
('foo', 'foo/lib/python2.6', 'foo/include/python2.6', 'foo/bin')
>>>
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pre_commit/languages/python.py`
Content:
```
1 from __future__ import unicode_literals
2
3 import contextlib
4 import distutils.spawn
5 import os
6 import sys
7
8 import virtualenv
9
10 from pre_commit.languages import helpers
11 from pre_commit.util import clean_path_on_failure
12 from pre_commit.util import shell_escape
13
14
15 ENVIRONMENT_DIR = 'py_env'
16
17
18 class PythonEnv(helpers.Environment):
19 @property
20 def env_prefix(self):
21 return ". '{{prefix}}{0}activate' &&".format(
22 virtualenv.path_locations(
23 helpers.environment_dir(ENVIRONMENT_DIR, self.language_version)
24 )[-1].rstrip(os.sep) + os.sep,
25 )
26
27
28 @contextlib.contextmanager
29 def in_env(repo_cmd_runner, language_version):
30 yield PythonEnv(repo_cmd_runner, language_version)
31
32
33 def norm_version(version):
34 if os.name == 'nt': # pragma: no cover (windows)
35 # Try looking up by name
36 if distutils.spawn.find_executable(version):
37 return version
38
39 # If it is in the form pythonx.x search in the default
40 # place on windows
41 if version.startswith('python'):
42 return r'C:\{0}\python.exe'.format(version.replace('.', ''))
43
44 # Otherwise assume it is a path
45 return os.path.expanduser(version)
46
47
48 def install_environment(
49 repo_cmd_runner,
50 version='default',
51 additional_dependencies=None,
52 ):
53 assert repo_cmd_runner.exists('setup.py')
54 directory = helpers.environment_dir(ENVIRONMENT_DIR, version)
55
56 # Install a virtualenv
57 with clean_path_on_failure(repo_cmd_runner.path(directory)):
58 venv_cmd = [
59 sys.executable, '-m', 'virtualenv',
60 '{{prefix}}{0}'.format(directory)
61 ]
62 if version != 'default':
63 venv_cmd.extend(['-p', norm_version(version)])
64 repo_cmd_runner.run(venv_cmd)
65 with in_env(repo_cmd_runner, version) as env:
66 env.run("cd '{prefix}' && pip install .", encoding=None)
67 if additional_dependencies:
68 env.run(
69 "cd '{prefix}' && pip install " +
70 ' '.join(
71 shell_escape(dep) for dep in additional_dependencies
72 ),
73 encoding=None,
74 )
75
76
77 def run_hook(repo_cmd_runner, hook, file_args):
78 with in_env(repo_cmd_runner, hook['language_version']) as env:
79 return helpers.run_hook(env, hook, file_args)
80
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pre_commit/languages/python.py b/pre_commit/languages/python.py
--- a/pre_commit/languages/python.py
+++ b/pre_commit/languages/python.py
@@ -5,8 +5,6 @@
import os
import sys
-import virtualenv
-
from pre_commit.languages import helpers
from pre_commit.util import clean_path_on_failure
from pre_commit.util import shell_escape
@@ -15,13 +13,22 @@
ENVIRONMENT_DIR = 'py_env'
+def bin_dir(venv):
+ """On windows there's a different directory for the virtualenv"""
+ if os.name == 'nt': # pragma: no cover (windows)
+ return os.path.join(venv, 'Scripts')
+ else:
+ return os.path.join(venv, 'bin')
+
+
class PythonEnv(helpers.Environment):
@property
def env_prefix(self):
- return ". '{{prefix}}{0}activate' &&".format(
- virtualenv.path_locations(
+ return ". '{{prefix}}{0}{1}activate' &&".format(
+ bin_dir(
helpers.environment_dir(ENVIRONMENT_DIR, self.language_version)
- )[-1].rstrip(os.sep) + os.sep,
+ ),
+ os.sep,
)
| {"golden_diff": "diff --git a/pre_commit/languages/python.py b/pre_commit/languages/python.py\n--- a/pre_commit/languages/python.py\n+++ b/pre_commit/languages/python.py\n@@ -5,8 +5,6 @@\n import os\n import sys\n \n-import virtualenv\n-\n from pre_commit.languages import helpers\n from pre_commit.util import clean_path_on_failure\n from pre_commit.util import shell_escape\n@@ -15,13 +13,22 @@\n ENVIRONMENT_DIR = 'py_env'\n \n \n+def bin_dir(venv):\n+ \"\"\"On windows there's a different directory for the virtualenv\"\"\"\n+ if os.name == 'nt': # pragma: no cover (windows)\n+ return os.path.join(venv, 'Scripts')\n+ else:\n+ return os.path.join(venv, 'bin')\n+\n+\n class PythonEnv(helpers.Environment):\n @property\n def env_prefix(self):\n- return \". '{{prefix}}{0}activate' &&\".format(\n- virtualenv.path_locations(\n+ return \". '{{prefix}}{0}{1}activate' &&\".format(\n+ bin_dir(\n helpers.environment_dir(ENVIRONMENT_DIR, self.language_version)\n- )[-1].rstrip(os.sep) + os.sep,\n+ ),\n+ os.sep,\n )\n", "issue": "Latest virtualenv breaks pre-commit\nSee also #299 \n\nFailure looks like:\n\n```\n17:00:19 hookid: sort-simple-yaml\n17:00:19 \n17:00:19 bash: /nail/home/push/.pre-commit/reposkzFrD//tmp/tmp.cEk6TCoZOS/srv-configs/py_env-default/bin/activate: No such file or directory\n```\n\n```\n$ pip install virtualenv --upgrade\nDownloading/unpacking virtualenv\n Downloading virtualenv-14.0.0-py2.py3-none-any.whl (1.8MB): 1.8MB downloaded\nInstalling collected packages: virtualenv\nSuccessfully installed virtualenv\nCleaning up...\n$ python\nPython 2.6.7 (r267:88850, Dec 2 2011, 20:27:26) \n[GCC 4.4.3] on linux2\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> import virtualenv\n>>> virtualenv.path_locations('foo')\n('/nail/home/asottile/foo', '/nail/home/asottile/foo/lib/python2.6', '/nail/home/asottile/foo/include/python2.6', '/nail/home/asottile/foo/bin')\n>>> \n$ pip install virtualenv==1.11.5\nDownloading/unpacking virtualenv==1.11.5\n Downloading virtualenv-1.11.5.tar.gz (1.8MB): 1.8MB downloaded\n Running setup.py (path:/nail/home/asottile/venv/build/virtualenv/setup.py) egg_info for package virtualenv\n warning: no previously-included files matching '*' found under directory 'docs/_templates'\n warning: no previously-included files matching '*' found under directory 'docs/_build'\nInstalling collected packages: virtualenv\n Found existing installation: virtualenv 14.0.0\n Uninstalling virtualenv:\n Successfully uninstalled virtualenv\n Running setup.py install for virtualenv\n warning: no previously-included files matching '*' found under directory 'docs/_templates'\n warning: no previously-included files matching '*' found under directory 'docs/_build'\n Installing virtualenv script to /nail/home/asottile/venv/bin\n Installing virtualenv-2.6 script to /nail/home/asottile/venv/bin\nSuccessfully installed virtualenv\nCleaning up...\n$ python\nPython 2.6.7 (r267:88850, Dec 2 2011, 20:27:26) \n[GCC 4.4.3] on linux2\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> import virtualenv\n>>> virtualenv.path_locations('foo')\n('foo', 'foo/lib/python2.6', 'foo/include/python2.6', 'foo/bin')\n>>>\n```\n\n", "before_files": [{"content": "from __future__ import unicode_literals\n\nimport contextlib\nimport distutils.spawn\nimport os\nimport sys\n\nimport virtualenv\n\nfrom pre_commit.languages import helpers\nfrom pre_commit.util import clean_path_on_failure\nfrom pre_commit.util import shell_escape\n\n\nENVIRONMENT_DIR = 
'py_env'\n\n\nclass PythonEnv(helpers.Environment):\n @property\n def env_prefix(self):\n return \". '{{prefix}}{0}activate' &&\".format(\n virtualenv.path_locations(\n helpers.environment_dir(ENVIRONMENT_DIR, self.language_version)\n )[-1].rstrip(os.sep) + os.sep,\n )\n\n\[email protected]\ndef in_env(repo_cmd_runner, language_version):\n yield PythonEnv(repo_cmd_runner, language_version)\n\n\ndef norm_version(version):\n if os.name == 'nt': # pragma: no cover (windows)\n # Try looking up by name\n if distutils.spawn.find_executable(version):\n return version\n\n # If it is in the form pythonx.x search in the default\n # place on windows\n if version.startswith('python'):\n return r'C:\\{0}\\python.exe'.format(version.replace('.', ''))\n\n # Otherwise assume it is a path\n return os.path.expanduser(version)\n\n\ndef install_environment(\n repo_cmd_runner,\n version='default',\n additional_dependencies=None,\n):\n assert repo_cmd_runner.exists('setup.py')\n directory = helpers.environment_dir(ENVIRONMENT_DIR, version)\n\n # Install a virtualenv\n with clean_path_on_failure(repo_cmd_runner.path(directory)):\n venv_cmd = [\n sys.executable, '-m', 'virtualenv',\n '{{prefix}}{0}'.format(directory)\n ]\n if version != 'default':\n venv_cmd.extend(['-p', norm_version(version)])\n repo_cmd_runner.run(venv_cmd)\n with in_env(repo_cmd_runner, version) as env:\n env.run(\"cd '{prefix}' && pip install .\", encoding=None)\n if additional_dependencies:\n env.run(\n \"cd '{prefix}' && pip install \" +\n ' '.join(\n shell_escape(dep) for dep in additional_dependencies\n ),\n encoding=None,\n )\n\n\ndef run_hook(repo_cmd_runner, hook, file_args):\n with in_env(repo_cmd_runner, hook['language_version']) as env:\n return helpers.run_hook(env, hook, file_args)\n", "path": "pre_commit/languages/python.py"}], "after_files": [{"content": "from __future__ import unicode_literals\n\nimport contextlib\nimport distutils.spawn\nimport os\nimport sys\n\nfrom pre_commit.languages import helpers\nfrom pre_commit.util import clean_path_on_failure\nfrom pre_commit.util import shell_escape\n\n\nENVIRONMENT_DIR = 'py_env'\n\n\ndef bin_dir(venv):\n \"\"\"On windows there's a different directory for the virtualenv\"\"\"\n if os.name == 'nt': # pragma: no cover (windows)\n return os.path.join(venv, 'Scripts')\n else:\n return os.path.join(venv, 'bin')\n\n\nclass PythonEnv(helpers.Environment):\n @property\n def env_prefix(self):\n return \". 
'{{prefix}}{0}{1}activate' &&\".format(\n bin_dir(\n helpers.environment_dir(ENVIRONMENT_DIR, self.language_version)\n ),\n os.sep,\n )\n\n\[email protected]\ndef in_env(repo_cmd_runner, language_version):\n yield PythonEnv(repo_cmd_runner, language_version)\n\n\ndef norm_version(version):\n if os.name == 'nt': # pragma: no cover (windows)\n # Try looking up by name\n if distutils.spawn.find_executable(version):\n return version\n\n # If it is in the form pythonx.x search in the default\n # place on windows\n if version.startswith('python'):\n return r'C:\\{0}\\python.exe'.format(version.replace('.', ''))\n\n # Otherwise assume it is a path\n return os.path.expanduser(version)\n\n\ndef install_environment(\n repo_cmd_runner,\n version='default',\n additional_dependencies=None,\n):\n assert repo_cmd_runner.exists('setup.py')\n directory = helpers.environment_dir(ENVIRONMENT_DIR, version)\n\n # Install a virtualenv\n with clean_path_on_failure(repo_cmd_runner.path(directory)):\n venv_cmd = [\n sys.executable, '-m', 'virtualenv',\n '{{prefix}}{0}'.format(directory)\n ]\n if version != 'default':\n venv_cmd.extend(['-p', norm_version(version)])\n repo_cmd_runner.run(venv_cmd)\n with in_env(repo_cmd_runner, version) as env:\n env.run(\"cd '{prefix}' && pip install .\", encoding=None)\n if additional_dependencies:\n env.run(\n \"cd '{prefix}' && pip install \" +\n ' '.join(\n shell_escape(dep) for dep in additional_dependencies\n ),\n encoding=None,\n )\n\n\ndef run_hook(repo_cmd_runner, hook, file_args):\n with in_env(repo_cmd_runner, hook['language_version']) as env:\n return helpers.run_hook(env, hook, file_args)\n", "path": "pre_commit/languages/python.py"}]} | 1,585 | 278 |
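Context for this record's patch: virtualenv 14.0 changed what `virtualenv.path_locations()` returns (as the issue's REPL transcripts show), which broke the string-built `activate` path; the fix stops depending on that function and derives the scripts directory itself. A standalone sketch of the replacement logic:

```python
import os

def bin_dir(venv):
    """Virtualenvs keep scripts in Scripts/ on Windows and bin/ elsewhere."""
    if os.name == "nt":
        return os.path.join(venv, "Scripts")
    return os.path.join(venv, "bin")

# e.g. the activate script of a "py_env-default" environment:
print(os.path.join(bin_dir("py_env-default"), "activate"))
```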
gh_patches_debug_4069 | rasdani/github-patches | git_diff | goauthentik__authentik-7454 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Patreon login doesn't work/setup is not intuitive
**Describe the bug**
While trying to set up the Patreon social integration, I realised that the required fields of Consumer Key and Consumer Secret don't seem to apply to the data that Patreon provides with its API - or at least the terminology is confusing. But outside of that, the default scopes that it seems to be presenting Patreon with upon setup are not part of Patreon's API, and will always error out with an "Invalid Scope" unless manually replaced with the correct ones. If this social integration is working and I'm mistaken, it is missing documentation that would definitely make it easier on new users.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to the social integration settings.
2. Click on the Patreon integration.
3. Enter the Client ID and Secret into the Key and Secret fields (assuming that's what you're supposed to use)
4. Get an invalid_scope error when trying to sign in
**Expected behavior**
Should allow users to log in via Patreon.
**Screenshots**
N/A
**Logs**
N/A
**Version and Deployment (please complete the following information):**
authentik version: 2023.6.1
Deployment: TrueNAS
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `authentik/sources/oauth/types/patreon.py`
Content:
```
1 """Patreon OAuth Views"""
2 from typing import Any
3
4 from authentik.sources.oauth.clients.oauth2 import UserprofileHeaderAuthClient
5 from authentik.sources.oauth.models import OAuthSource
6 from authentik.sources.oauth.types.registry import SourceType, registry
7 from authentik.sources.oauth.views.callback import OAuthCallback
8 from authentik.sources.oauth.views.redirect import OAuthRedirect
9
10
11 class PatreonOAuthRedirect(OAuthRedirect):
12 """Patreon OAuth2 Redirect"""
13
14 def get_additional_parameters(self, source: OAuthSource): # pragma: no cover
15 return {
16 "scope": ["openid", "email", "profile"],
17 }
18
19
20 class PatreonOAuthCallback(OAuthCallback):
21 """Patreon OAuth2 Callback"""
22
23 client_class: UserprofileHeaderAuthClient
24
25 def get_user_id(self, info: dict[str, str]) -> str:
26 return info.get("data", {}).get("id")
27
28 def get_user_enroll_context(
29 self,
30 info: dict[str, Any],
31 ) -> dict[str, Any]:
32 return {
33 "username": info.get("data", {}).get("attributes", {}).get("vanity"),
34 "email": info.get("data", {}).get("attributes", {}).get("email"),
35 "name": info.get("data", {}).get("attributes", {}).get("full_name"),
36 }
37
38
39 @registry.register()
40 class PatreonType(SourceType):
41 """OpenIDConnect Type definition"""
42
43 callback_view = PatreonOAuthCallback
44 redirect_view = PatreonOAuthRedirect
45 name = "Patreon"
46 slug = "patreon"
47
48 authorization_url = "https://www.patreon.com/oauth2/authorize"
49 access_token_url = "https://www.patreon.com/api/oauth2/token" # nosec
50 profile_url = "https://www.patreon.com/api/oauth2/api/current_user"
51
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/authentik/sources/oauth/types/patreon.py b/authentik/sources/oauth/types/patreon.py
--- a/authentik/sources/oauth/types/patreon.py
+++ b/authentik/sources/oauth/types/patreon.py
@@ -12,8 +12,9 @@
"""Patreon OAuth2 Redirect"""
def get_additional_parameters(self, source: OAuthSource): # pragma: no cover
+ # https://docs.patreon.com/#scopes
return {
- "scope": ["openid", "email", "profile"],
+ "scope": ["identity", "identity[email]"],
}
| {"golden_diff": "diff --git a/authentik/sources/oauth/types/patreon.py b/authentik/sources/oauth/types/patreon.py\n--- a/authentik/sources/oauth/types/patreon.py\n+++ b/authentik/sources/oauth/types/patreon.py\n@@ -12,8 +12,9 @@\n \"\"\"Patreon OAuth2 Redirect\"\"\"\r\n \r\n def get_additional_parameters(self, source: OAuthSource): # pragma: no cover\r\n+ # https://docs.patreon.com/#scopes\r\n return {\r\n- \"scope\": [\"openid\", \"email\", \"profile\"],\r\n+ \"scope\": [\"identity\", \"identity[email]\"],\r\n }\n", "issue": "Patreon login doesn't work/setup is not intuitive\n**Describe the bug**\r\nWhile trying to set up the Patreon social integration, I realised that the required fields of Consumer Key and Consumer Secret don't seem to apply to the data that Patreon provides with its API - or at least the terminology is confusing. But outside of that, the default scopes that it seems to be presenting Patreon with upon setup are not part of Patreon's API, and will always error out with an \"Invalid Scope\" unless manually replaced with the correct ones. If this social integration is working and I'm mistaken, it is missing documentation that would definitely make it easier on new users.\r\n\r\nTo Reproduce\r\nSteps to reproduce the behavior:\r\n\r\n1. Go to the social integration settings.\r\n2. Click on the Patreon integration.\r\n3. Enter the Client ID and Secret into the Key and Secret fields (assuming that's what you're supposed to use)\r\n4. Get an invalid_scope error when trying to sign in\r\n\r\nExpected behavior\r\nShould allow users to log in via Patreon.\r\n\r\nScreenshots\r\nN/A\r\n\r\nLogs\r\nN/A\r\n\r\nVersion and Deployment (please complete the following information):\r\n\r\nauthentik version: 2023.6.1\r\nDeployment: TrueNAS\r\n\r\n\r\n\r\n\r\n\r\n\r\n\n", "before_files": [{"content": "\"\"\"Patreon OAuth Views\"\"\"\r\nfrom typing import Any\r\n\r\nfrom authentik.sources.oauth.clients.oauth2 import UserprofileHeaderAuthClient\r\nfrom authentik.sources.oauth.models import OAuthSource\r\nfrom authentik.sources.oauth.types.registry import SourceType, registry\r\nfrom authentik.sources.oauth.views.callback import OAuthCallback\r\nfrom authentik.sources.oauth.views.redirect import OAuthRedirect\r\n\r\n\r\nclass PatreonOAuthRedirect(OAuthRedirect):\r\n \"\"\"Patreon OAuth2 Redirect\"\"\"\r\n\r\n def get_additional_parameters(self, source: OAuthSource): # pragma: no cover\r\n return {\r\n \"scope\": [\"openid\", \"email\", \"profile\"],\r\n }\r\n\r\n\r\nclass PatreonOAuthCallback(OAuthCallback):\r\n \"\"\"Patreon OAuth2 Callback\"\"\"\r\n\r\n client_class: UserprofileHeaderAuthClient\r\n\r\n def get_user_id(self, info: dict[str, str]) -> str:\r\n return info.get(\"data\", {}).get(\"id\")\r\n\r\n def get_user_enroll_context(\r\n self,\r\n info: dict[str, Any],\r\n ) -> dict[str, Any]:\r\n return {\r\n \"username\": info.get(\"data\", {}).get(\"attributes\", {}).get(\"vanity\"),\r\n \"email\": info.get(\"data\", {}).get(\"attributes\", {}).get(\"email\"),\r\n \"name\": info.get(\"data\", {}).get(\"attributes\", {}).get(\"full_name\"),\r\n }\r\n\r\n\r\[email protected]()\r\nclass PatreonType(SourceType):\r\n \"\"\"OpenIDConnect Type definition\"\"\"\r\n\r\n callback_view = PatreonOAuthCallback\r\n redirect_view = PatreonOAuthRedirect\r\n name = \"Patreon\"\r\n slug = \"patreon\"\r\n\r\n authorization_url = \"https://www.patreon.com/oauth2/authorize\"\r\n access_token_url = \"https://www.patreon.com/api/oauth2/token\" # nosec\r\n profile_url = 
\"https://www.patreon.com/api/oauth2/api/current_user\"\r\n", "path": "authentik/sources/oauth/types/patreon.py"}], "after_files": [{"content": "\"\"\"Patreon OAuth Views\"\"\"\r\nfrom typing import Any\r\n\r\nfrom authentik.sources.oauth.clients.oauth2 import UserprofileHeaderAuthClient\r\nfrom authentik.sources.oauth.models import OAuthSource\r\nfrom authentik.sources.oauth.types.registry import SourceType, registry\r\nfrom authentik.sources.oauth.views.callback import OAuthCallback\r\nfrom authentik.sources.oauth.views.redirect import OAuthRedirect\r\n\r\n\r\nclass PatreonOAuthRedirect(OAuthRedirect):\r\n \"\"\"Patreon OAuth2 Redirect\"\"\"\r\n\r\n def get_additional_parameters(self, source: OAuthSource): # pragma: no cover\r\n # https://docs.patreon.com/#scopes\r\n return {\r\n \"scope\": [\"identity\", \"identity[email]\"],\r\n }\r\n\r\n\r\nclass PatreonOAuthCallback(OAuthCallback):\r\n \"\"\"Patreon OAuth2 Callback\"\"\"\r\n\r\n client_class: UserprofileHeaderAuthClient\r\n\r\n def get_user_id(self, info: dict[str, str]) -> str:\r\n return info.get(\"data\", {}).get(\"id\")\r\n\r\n def get_user_enroll_context(\r\n self,\r\n info: dict[str, Any],\r\n ) -> dict[str, Any]:\r\n return {\r\n \"username\": info.get(\"data\", {}).get(\"attributes\", {}).get(\"vanity\"),\r\n \"email\": info.get(\"data\", {}).get(\"attributes\", {}).get(\"email\"),\r\n \"name\": info.get(\"data\", {}).get(\"attributes\", {}).get(\"full_name\"),\r\n }\r\n\r\n\r\[email protected]()\r\nclass PatreonType(SourceType):\r\n \"\"\"OpenIDConnect Type definition\"\"\"\r\n\r\n callback_view = PatreonOAuthCallback\r\n redirect_view = PatreonOAuthRedirect\r\n name = \"Patreon\"\r\n slug = \"patreon\"\r\n\r\n authorization_url = \"https://www.patreon.com/oauth2/authorize\"\r\n access_token_url = \"https://www.patreon.com/api/oauth2/token\" # nosec\r\n profile_url = \"https://www.patreon.com/api/oauth2/api/current_user\"\r\n", "path": "authentik/sources/oauth/types/patreon.py"}]} | 1,009 | 138 |
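The scope change in this record matches Patreon's own OAuth2 scope names (see https://docs.patreon.com/#scopes): `identity` grants basic profile access and `identity[email]` adds the email address, whereas generic OpenID Connect scopes are rejected with `invalid_scope`. A sketch of the authorize URL such a redirect would produce (the client id and redirect URI are placeholders):

```python
from urllib.parse import urlencode

params = {
    "response_type": "code",
    "client_id": "CLIENT_ID",                        # placeholder
    "redirect_uri": "https://example.com/callback",  # placeholder
    # Patreon-specific scopes; "openid email profile" is not accepted.
    "scope": "identity identity[email]",
}
print("https://www.patreon.com/oauth2/authorize?" + urlencode(params))
```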
gh_patches_debug_1861 | rasdani/github-patches | git_diff | carpentries__amy-690 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
No reverse match for rest_framework namespace
For a very strange reason, the error shows up when accessing these URLs:
https://github.com/swcarpentry/amy/blob/develop/api/urls.py#L57
I wasn't able to get rid of it; it's not being used at all, so maybe it should be removed…?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `api/urls.py`
Content:
```
1 from django.conf.urls import url, include
2 from rest_framework_nested import routers
3 from rest_framework.urlpatterns import format_suffix_patterns
4
5 from . import views
6
7 # new in Django 1.9: this defines a namespace for URLs; there's no need for
8 # `namespace='api'` in the include()
9 app_name = 'api'
10
11 # routers generate URLs for methods like `.list` or `.retrieve`
12 router = routers.SimpleRouter()
13 router.register('reports', views.ReportsViewSet, base_name='reports')
14 router.register('persons', views.PersonViewSet)
15 awards_router = routers.NestedSimpleRouter(router, 'persons', lookup='person')
16 awards_router.register('awards', views.AwardViewSet, base_name='person-awards')
17 person_task_router = routers.NestedSimpleRouter(router, 'persons',
18 lookup='person')
19 person_task_router.register('tasks', views.PersonTaskViewSet,
20 base_name='person-tasks')
21 router.register('events', views.EventViewSet)
22 tasks_router = routers.NestedSimpleRouter(router, 'events', lookup='event')
23 tasks_router.register('tasks', views.TaskViewSet, base_name='event-tasks')
24 todos_router = routers.NestedSimpleRouter(router, 'events', lookup='event')
25 todos_router.register('todos', views.TodoViewSet, base_name='event-todos')
26 router.register('hosts', views.HostViewSet)
27 router.register('airports', views.AirportViewSet)
28
29 urlpatterns = [
30 url('^$', views.ApiRoot.as_view(), name='root'),
31 # TODO: turn these export views into ViewSets and add them to the router
32 url('^export/badges/$',
33 views.ExportBadgesView.as_view(),
34 name='export-badges'),
35 url('^export/instructors/$',
36 views.ExportInstructorLocationsView.as_view(),
37 name='export-instructors'),
38 url('^export/members/$',
39 views.ExportMembersView.as_view(),
40 name='export-members'),
41 url('^events/published/$',
42 views.PublishedEvents.as_view(),
43 name='events-published'),
44 url('^todos/user/$',
45 views.UserTodoItems.as_view(),
46 name='user-todos'),
47
48 url('^', include(router.urls)),
49 url('^', include(awards_router.urls)),
50 url('^', include(person_task_router.urls)),
51 url('^', include(tasks_router.urls)),
52 url('^', include(todos_router.urls)),
53 ]
54
55 # for login-logout functionality
56 urlpatterns += [
57 url(r'^api-auth/',
58 include('rest_framework.urls', namespace='rest_framework')),
59 ]
60
61 urlpatterns = format_suffix_patterns(urlpatterns) # allow to specify format
62
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/api/urls.py b/api/urls.py
--- a/api/urls.py
+++ b/api/urls.py
@@ -52,10 +52,4 @@
url('^', include(todos_router.urls)),
]
-# for login-logout functionality
-urlpatterns += [
- url(r'^api-auth/',
- include('rest_framework.urls', namespace='rest_framework')),
-]
-
urlpatterns = format_suffix_patterns(urlpatterns) # allow to specify format
| {"golden_diff": "diff --git a/api/urls.py b/api/urls.py\n--- a/api/urls.py\n+++ b/api/urls.py\n@@ -52,10 +52,4 @@\n url('^', include(todos_router.urls)),\n ]\n \n-# for login-logout functionality\n-urlpatterns += [\n- url(r'^api-auth/',\n- include('rest_framework.urls', namespace='rest_framework')),\n-]\n-\n urlpatterns = format_suffix_patterns(urlpatterns) # allow to specify format\n", "issue": "No reverse match for rest_framework namespace\nThe error for a very strange reason shows when accessing these URLs:\nhttps://github.com/swcarpentry/amy/blob/develop/api/urls.py#L57\n\nI wasn't able to get rid of it; it's not being used at all, so maybe it should be removed\u2026?\n\n", "before_files": [{"content": "from django.conf.urls import url, include\nfrom rest_framework_nested import routers\nfrom rest_framework.urlpatterns import format_suffix_patterns\n\nfrom . import views\n\n# new in Django 1.9: this defines a namespace for URLs; there's no need for\n# `namespace='api'` in the include()\napp_name = 'api'\n\n# routers generate URLs for methods like `.list` or `.retrieve`\nrouter = routers.SimpleRouter()\nrouter.register('reports', views.ReportsViewSet, base_name='reports')\nrouter.register('persons', views.PersonViewSet)\nawards_router = routers.NestedSimpleRouter(router, 'persons', lookup='person')\nawards_router.register('awards', views.AwardViewSet, base_name='person-awards')\nperson_task_router = routers.NestedSimpleRouter(router, 'persons',\n lookup='person')\nperson_task_router.register('tasks', views.PersonTaskViewSet,\n base_name='person-tasks')\nrouter.register('events', views.EventViewSet)\ntasks_router = routers.NestedSimpleRouter(router, 'events', lookup='event')\ntasks_router.register('tasks', views.TaskViewSet, base_name='event-tasks')\ntodos_router = routers.NestedSimpleRouter(router, 'events', lookup='event')\ntodos_router.register('todos', views.TodoViewSet, base_name='event-todos')\nrouter.register('hosts', views.HostViewSet)\nrouter.register('airports', views.AirportViewSet)\n\nurlpatterns = [\n url('^$', views.ApiRoot.as_view(), name='root'),\n # TODO: turn these export views into ViewSets and add them to the router\n url('^export/badges/$',\n views.ExportBadgesView.as_view(),\n name='export-badges'),\n url('^export/instructors/$',\n views.ExportInstructorLocationsView.as_view(),\n name='export-instructors'),\n url('^export/members/$',\n views.ExportMembersView.as_view(),\n name='export-members'),\n url('^events/published/$',\n views.PublishedEvents.as_view(),\n name='events-published'),\n url('^todos/user/$',\n views.UserTodoItems.as_view(),\n name='user-todos'),\n\n url('^', include(router.urls)),\n url('^', include(awards_router.urls)),\n url('^', include(person_task_router.urls)),\n url('^', include(tasks_router.urls)),\n url('^', include(todos_router.urls)),\n]\n\n# for login-logout functionality\nurlpatterns += [\n url(r'^api-auth/',\n include('rest_framework.urls', namespace='rest_framework')),\n]\n\nurlpatterns = format_suffix_patterns(urlpatterns) # allow to specify format\n", "path": "api/urls.py"}], "after_files": [{"content": "from django.conf.urls import url, include\nfrom rest_framework_nested import routers\nfrom rest_framework.urlpatterns import format_suffix_patterns\n\nfrom . 
import views\n\n# new in Django 1.9: this defines a namespace for URLs; there's no need for\n# `namespace='api'` in the include()\napp_name = 'api'\n\n# routers generate URLs for methods like `.list` or `.retrieve`\nrouter = routers.SimpleRouter()\nrouter.register('reports', views.ReportsViewSet, base_name='reports')\nrouter.register('persons', views.PersonViewSet)\nawards_router = routers.NestedSimpleRouter(router, 'persons', lookup='person')\nawards_router.register('awards', views.AwardViewSet, base_name='person-awards')\nperson_task_router = routers.NestedSimpleRouter(router, 'persons',\n lookup='person')\nperson_task_router.register('tasks', views.PersonTaskViewSet,\n base_name='person-tasks')\nrouter.register('events', views.EventViewSet)\ntasks_router = routers.NestedSimpleRouter(router, 'events', lookup='event')\ntasks_router.register('tasks', views.TaskViewSet, base_name='event-tasks')\ntodos_router = routers.NestedSimpleRouter(router, 'events', lookup='event')\ntodos_router.register('todos', views.TodoViewSet, base_name='event-todos')\nrouter.register('hosts', views.HostViewSet)\nrouter.register('airports', views.AirportViewSet)\n\nurlpatterns = [\n url('^$', views.ApiRoot.as_view(), name='root'),\n # TODO: turn these export views into ViewSets and add them to the router\n url('^export/badges/$',\n views.ExportBadgesView.as_view(),\n name='export-badges'),\n url('^export/instructors/$',\n views.ExportInstructorLocationsView.as_view(),\n name='export-instructors'),\n url('^export/members/$',\n views.ExportMembersView.as_view(),\n name='export-members'),\n url('^events/published/$',\n views.PublishedEvents.as_view(),\n name='events-published'),\n url('^todos/user/$',\n views.UserTodoItems.as_view(),\n name='user-todos'),\n\n url('^', include(router.urls)),\n url('^', include(awards_router.urls)),\n url('^', include(person_task_router.urls)),\n url('^', include(tasks_router.urls)),\n url('^', include(todos_router.urls)),\n]\n\nurlpatterns = format_suffix_patterns(urlpatterns) # allow to specify format\n", "path": "api/urls.py"}]} | 985 | 104 |
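One reading of the failure fixed in this record: because `api/urls.py` declares `app_name = 'api'`, the nested `include('rest_framework.urls', namespace='rest_framework')` ends up under the `api` namespace, so reverses of names like `rest_framework:login` (which DRF's browsable-API templates perform) no longer match. The patch simply drops the unused include; if the login/logout views were still wanted, they could be registered outside the namespaced URLconf, roughly like this (a sketch only; whether the `namespace` argument is required depends on the Django and DRF versions in use):

```python
# Hypothetical project-level urls.py, separate from the namespaced api/urls.py.
from django.conf.urls import include, url

urlpatterns = [
    url(r'^api/', include('api.urls')),  # the app_name='api' URLconf above
    # DRF login/logout kept out of the 'api' namespace entirely:
    url(r'^api-auth/', include('rest_framework.urls',
                               namespace='rest_framework')),
]
```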
gh_patches_debug_8186 | rasdani/github-patches | git_diff | saleor__saleor-5117 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Allow to search for products by SKU in admin dashboard
### What I'm trying to achieve
I'm looking to find a product by its SKU
### Describe a proposed solution
Tweak search engine settings to return products with full/partial SKU match.
--- END ISSUE ---
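For illustration, one possible sketch of SKU-aware matching in the existing Django/Postgres backend, assuming SKUs live on a related variant model and that the Postgres full-text `__search` lookup is available (as it already is for `description` in the backend shown below):

```python
from django.contrib.postgres.search import TrigramSimilarity
from django.db.models import Q

from saleor.product.models import Product


def search_with_sku(phrase):
    """Storefront search extended to also match full or partial variant SKUs."""
    name_sim = TrigramSimilarity("name", phrase)
    ft_in_description = Q(description__search=phrase)
    ft_by_sku = Q(variants__sku__search=phrase)  # assumes a `variants` relation with an `sku` field
    name_similar = Q(name_sim__gt=0.2)
    return Product.objects.annotate(name_sim=name_sim).filter(
        ft_in_description | name_similar | ft_by_sku
    )
```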
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `saleor/search/backends/postgresql_storefront.py`
Content:
```
1 from django.contrib.postgres.search import TrigramSimilarity
2 from django.db.models import Q
3
4 from ...product.models import Product
5
6
7 def search(phrase):
8 """Return matching products for storefront views.
9
10 Fuzzy storefront search that is resistant to small typing errors made
11 by user. Name is matched using trigram similarity, description uses
12 standard postgres full text search.
13
14 Args:
15 phrase (str): searched phrase
16
17 """
18 name_sim = TrigramSimilarity("name", phrase)
19 published = Q(is_published=True)
20 ft_in_description = Q(description__search=phrase)
21 name_similar = Q(name_sim__gt=0.2)
22 return Product.objects.annotate(name_sim=name_sim).filter(
23 (ft_in_description | name_similar) & published
24 )
25
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/saleor/search/backends/postgresql_storefront.py b/saleor/search/backends/postgresql_storefront.py
--- a/saleor/search/backends/postgresql_storefront.py
+++ b/saleor/search/backends/postgresql_storefront.py
@@ -16,9 +16,9 @@
"""
name_sim = TrigramSimilarity("name", phrase)
- published = Q(is_published=True)
ft_in_description = Q(description__search=phrase)
+ ft_by_sku = Q(variants__sku__search=phrase)
name_similar = Q(name_sim__gt=0.2)
return Product.objects.annotate(name_sim=name_sim).filter(
- (ft_in_description | name_similar) & published
+ (ft_in_description | name_similar | ft_by_sku)
)
| {"golden_diff": "diff --git a/saleor/search/backends/postgresql_storefront.py b/saleor/search/backends/postgresql_storefront.py\n--- a/saleor/search/backends/postgresql_storefront.py\n+++ b/saleor/search/backends/postgresql_storefront.py\n@@ -16,9 +16,9 @@\n \n \"\"\"\n name_sim = TrigramSimilarity(\"name\", phrase)\n- published = Q(is_published=True)\n ft_in_description = Q(description__search=phrase)\n+ ft_by_sku = Q(variants__sku__search=phrase)\n name_similar = Q(name_sim__gt=0.2)\n return Product.objects.annotate(name_sim=name_sim).filter(\n- (ft_in_description | name_similar) & published\n+ (ft_in_description | name_similar | ft_by_sku)\n )\n", "issue": "Allow to search for products by SKU in admin dashboard\n### What I'm trying to achieve\r\nI'm looking to find a product by its SKU\r\n\r\n### Describe a proposed solution\r\nTweak search engine settings to return products with full/partial SKU match.\r\n\r\n\n", "before_files": [{"content": "from django.contrib.postgres.search import TrigramSimilarity\nfrom django.db.models import Q\n\nfrom ...product.models import Product\n\n\ndef search(phrase):\n \"\"\"Return matching products for storefront views.\n\n Fuzzy storefront search that is resistant to small typing errors made\n by user. Name is matched using trigram similarity, description uses\n standard postgres full text search.\n\n Args:\n phrase (str): searched phrase\n\n \"\"\"\n name_sim = TrigramSimilarity(\"name\", phrase)\n published = Q(is_published=True)\n ft_in_description = Q(description__search=phrase)\n name_similar = Q(name_sim__gt=0.2)\n return Product.objects.annotate(name_sim=name_sim).filter(\n (ft_in_description | name_similar) & published\n )\n", "path": "saleor/search/backends/postgresql_storefront.py"}], "after_files": [{"content": "from django.contrib.postgres.search import TrigramSimilarity\nfrom django.db.models import Q\n\nfrom ...product.models import Product\n\n\ndef search(phrase):\n \"\"\"Return matching products for storefront views.\n\n Fuzzy storefront search that is resistant to small typing errors made\n by user. Name is matched using trigram similarity, description uses\n standard postgres full text search.\n\n Args:\n phrase (str): searched phrase\n\n \"\"\"\n name_sim = TrigramSimilarity(\"name\", phrase)\n ft_in_description = Q(description__search=phrase)\n ft_by_sku = Q(variants__sku__search=phrase)\n name_similar = Q(name_sim__gt=0.2)\n return Product.objects.annotate(name_sim=name_sim).filter(\n (ft_in_description | name_similar | ft_by_sku)\n )\n", "path": "saleor/search/backends/postgresql_storefront.py"}]} | 528 | 179 |
gh_patches_debug_17752 | rasdani/github-patches | git_diff | nf-core__tools-1590 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Lint warning on Nextflow minimum version badge
### Description of the bug
`nf-core lint` complains that the minimum version badge for Nextflow could not be found, even though it is present in the `README.md`.
It occurred after the `template-merge-2.4`.
It appears to be a bug.
### Command used and terminal output
```console
(nextflow2) rnavar$ nf-core lint
,--./,-.
___ __ __ __ ___ /,-._.--~\
|\ | |__ __ / ` / \ |__) |__ } {
| \| | \__, \__/ | \ |___ \`-._,-`-,
`._,._,'
nf-core/tools version 2.4.1 - https://nf-co.re
INFO Testing pipeline: . __init__.py:244
╭─ [!] 1 Pipeline Test Warning ───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ readme: README did not have a Nextflow minimum version badge. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
```
### System information
_No response_
--- END ISSUE ---
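The mismatch can be reproduced in isolation. A sketch, assuming the post-2.4 template badge no longer carries the `labelColor` query parameter (the badge URL below is illustrative):

```python
import re

# Badge line as the updated pipeline template renders it (no `?labelColor=000000`).
readme_line = (
    "[]"
    "(https://www.nextflow.io/)"
)

# Regex used by the lint check, which still insists on `?labelColor=000000`.
nf_badge_re = (
    r"\[!\[Nextflow\]\(https://img\.shields\.io/badge/nextflow%20DSL2-"
    r"%E2%89%A5([\d\.]+)-23aa62\.svg\?labelColor=000000\)\]\(https://www\.nextflow\.io/\)"
)

print(re.search(nf_badge_re, readme_line))  # None -> triggers the "no badge" warning
```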
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `nf_core/lint/readme.py`
Content:
```
1 #!/usr/bin/env python
2
3 import os
4 import re
5
6
7 def readme(self):
8 """Repository ``README.md`` tests
9
10 The ``README.md`` files for a project are very important and must meet some requirements:
11
12 * Nextflow badge
13
14 * If no Nextflow badge is found, a warning is given
15 * If a badge is found but the version doesn't match the minimum version in the config file, the test fails
16 * Example badge code:
17
18 .. code-block:: md
19
20 [](https://www.nextflow.io/)
21
22 * Bioconda badge
23
24 * If your pipeline contains a file called ``environment.yml`` in the root directory, a bioconda badge is required
25 * Required badge code:
26
27 .. code-block:: md
28
29 [](https://bioconda.github.io/)
30
31 .. note:: These badges are a markdown image ```` *inside* a markdown link ``[markdown image](<link URL>)``, so a bit fiddly to write.
32 """
33 passed = []
34 warned = []
35 failed = []
36
37 with open(os.path.join(self.wf_path, "README.md"), "r") as fh:
38 content = fh.read()
39
40 # Check that there is a readme badge showing the minimum required version of Nextflow
41 # [](https://www.nextflow.io/)
42 # and that it has the correct version
43 nf_badge_re = r"\[!\[Nextflow\]\(https://img\.shields\.io/badge/nextflow%20DSL2-%E2%89%A5([\d\.]+)-23aa62\.svg\?labelColor=000000\)\]\(https://www\.nextflow\.io/\)"
44 match = re.search(nf_badge_re, content)
45 if match:
46 nf_badge_version = match.group(1).strip("'\"")
47 try:
48 assert nf_badge_version == self.minNextflowVersion
49 except (AssertionError, KeyError):
50 failed.append(
51 "README Nextflow minimum version badge does not match config. Badge: `{}`, Config: `{}`".format(
52 nf_badge_version, self.minNextflowVersion
53 )
54 )
55 else:
56 passed.append(
57 "README Nextflow minimum version badge matched config. Badge: `{}`, Config: `{}`".format(
58 nf_badge_version, self.minNextflowVersion
59 )
60 )
61 else:
62 warned.append("README did not have a Nextflow minimum version badge.")
63
64 # Check that the minimum version mentioned in the quick start section is consistent
65 # Looking for: "1. Install [`Nextflow`](https://www.nextflow.io/docs/latest/getstarted.html#installation) (`>=21.10.3`)"
66 nf_version_re = r"1\.\s*Install\s*\[`Nextflow`\]\(https://www.nextflow.io/docs/latest/getstarted.html#installation\)\s*\(`>=(\d*\.\d*\.\d*)`\)"
67 match = re.search(nf_version_re, content)
68 if match:
69 nf_quickstart_version = match.group(1)
70 try:
71 assert nf_quickstart_version == self.minNextflowVersion
72 except (AssertionError, KeyError):
73 failed.append(
74 f"README Nextflow minimium version in Quick Start section does not match config. README: `{nf_quickstart_version}`, Config `{self.minNextflowVersion}`"
75 )
76 else:
77 passed.append(
78 f"README Nextflow minimum version in Quick Start section matched config. README: `{nf_quickstart_version}`, Config: `{self.minNextflowVersion}`"
79 )
80 else:
81 warned.append("README did not have a Nextflow minimum version mentioned in Quick Start section.")
82
83 return {"passed": passed, "warned": warned, "failed": failed}
84
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/nf_core/lint/readme.py b/nf_core/lint/readme.py
--- a/nf_core/lint/readme.py
+++ b/nf_core/lint/readme.py
@@ -38,9 +38,9 @@
content = fh.read()
# Check that there is a readme badge showing the minimum required version of Nextflow
- # [](https://www.nextflow.io/)
+ # [](https://www.nextflow.io/)
# and that it has the correct version
- nf_badge_re = r"\[!\[Nextflow\]\(https://img\.shields\.io/badge/nextflow%20DSL2-%E2%89%A5([\d\.]+)-23aa62\.svg\?labelColor=000000\)\]\(https://www\.nextflow\.io/\)"
+ nf_badge_re = r"\[!\[Nextflow\]\(https://img\.shields\.io/badge/nextflow%20DSL2-%E2%89%A5([\d\.]+)-23aa62\.svg\)\]\(https://www\.nextflow\.io/\)"
match = re.search(nf_badge_re, content)
if match:
nf_badge_version = match.group(1).strip("'\"")
| {"golden_diff": "diff --git a/nf_core/lint/readme.py b/nf_core/lint/readme.py\n--- a/nf_core/lint/readme.py\n+++ b/nf_core/lint/readme.py\n@@ -38,9 +38,9 @@\n content = fh.read()\n \n # Check that there is a readme badge showing the minimum required version of Nextflow\n- # [](https://www.nextflow.io/)\n+ # [](https://www.nextflow.io/)\n # and that it has the correct version\n- nf_badge_re = r\"\\[!\\[Nextflow\\]\\(https://img\\.shields\\.io/badge/nextflow%20DSL2-%E2%89%A5([\\d\\.]+)-23aa62\\.svg\\?labelColor=000000\\)\\]\\(https://www\\.nextflow\\.io/\\)\"\n+ nf_badge_re = r\"\\[!\\[Nextflow\\]\\(https://img\\.shields\\.io/badge/nextflow%20DSL2-%E2%89%A5([\\d\\.]+)-23aa62\\.svg\\)\\]\\(https://www\\.nextflow\\.io/\\)\"\n match = re.search(nf_badge_re, content)\n if match:\n nf_badge_version = match.group(1).strip(\"'\\\"\")\n", "issue": "Lint warning on Nextflow minimum version badge\n### Description of the bug\n\n`nf-core lint` complains that the minimum version badge for Nextflow could not found, however it was present in the `README.md`.\r\nIt occurred after the `template-merge-2.4`\r\nIt appears to be a bug.\r\n\r\n\n\n### Command used and terminal output\n\n```console\n(nextflow2) rnavar$ nf-core lint\r\n\r\n\r\n\r\n ,--./,-.\r\n\r\n ___ __ __ __ ___ /,-._.--~\\\r\n\r\n |\\ | |__ __ / ` / \\ |__) |__ } {\r\n\r\n | \\| | \\__, \\__/ | \\ |___ \\`-._,-`-,\r\n\r\n `._,._,'\r\n\r\n\r\n\r\n nf-core/tools version 2.4.1 - https://nf-co.re\r\n\r\n\r\n\r\n\r\n\r\nINFO Testing pipeline: . __init__.py:244\r\n\r\n\r\n\r\n\u256d\u2500 [!] 1 Pipeline Test Warning \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\r\n\r\n\u2502 \u2502\r\n\r\n\u2502 readme: README did not have a Nextflow minimum version badge. 
\u2502\r\n\r\n\u2502 \u2502\r\n\r\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\n```\n\n\n### System information\n\n_No response_\n", "before_files": [{"content": "#!/usr/bin/env python\n\nimport os\nimport re\n\n\ndef readme(self):\n \"\"\"Repository ``README.md`` tests\n\n The ``README.md`` files for a project are very important and must meet some requirements:\n\n * Nextflow badge\n\n * If no Nextflow badge is found, a warning is given\n * If a badge is found but the version doesn't match the minimum version in the config file, the test fails\n * Example badge code:\n\n .. code-block:: md\n\n [](https://www.nextflow.io/)\n\n * Bioconda badge\n\n * If your pipeline contains a file called ``environment.yml`` in the root directory, a bioconda badge is required\n * Required badge code:\n\n .. code-block:: md\n\n [](https://bioconda.github.io/)\n\n .. note:: These badges are a markdown image ```` *inside* a markdown link ``[markdown image](<link URL>)``, so a bit fiddly to write.\n \"\"\"\n passed = []\n warned = []\n failed = []\n\n with open(os.path.join(self.wf_path, \"README.md\"), \"r\") as fh:\n content = fh.read()\n\n # Check that there is a readme badge showing the minimum required version of Nextflow\n # [](https://www.nextflow.io/)\n # and that it has the correct version\n nf_badge_re = r\"\\[!\\[Nextflow\\]\\(https://img\\.shields\\.io/badge/nextflow%20DSL2-%E2%89%A5([\\d\\.]+)-23aa62\\.svg\\?labelColor=000000\\)\\]\\(https://www\\.nextflow\\.io/\\)\"\n match = re.search(nf_badge_re, content)\n if match:\n nf_badge_version = match.group(1).strip(\"'\\\"\")\n try:\n assert nf_badge_version == self.minNextflowVersion\n except (AssertionError, KeyError):\n failed.append(\n \"README Nextflow minimum version badge does not match config. Badge: `{}`, Config: `{}`\".format(\n nf_badge_version, self.minNextflowVersion\n )\n )\n else:\n passed.append(\n \"README Nextflow minimum version badge matched config. Badge: `{}`, Config: `{}`\".format(\n nf_badge_version, self.minNextflowVersion\n )\n )\n else:\n warned.append(\"README did not have a Nextflow minimum version badge.\")\n\n # Check that the minimum version mentioned in the quick start section is consistent\n # Looking for: \"1. 
Install [`Nextflow`](https://www.nextflow.io/docs/latest/getstarted.html#installation) (`>=21.10.3`)\"\n nf_version_re = r\"1\\.\\s*Install\\s*\\[`Nextflow`\\]\\(https://www.nextflow.io/docs/latest/getstarted.html#installation\\)\\s*\\(`>=(\\d*\\.\\d*\\.\\d*)`\\)\"\n match = re.search(nf_version_re, content)\n if match:\n nf_quickstart_version = match.group(1)\n try:\n assert nf_quickstart_version == self.minNextflowVersion\n except (AssertionError, KeyError):\n failed.append(\n f\"README Nextflow minimium version in Quick Start section does not match config. README: `{nf_quickstart_version}`, Config `{self.minNextflowVersion}`\"\n )\n else:\n passed.append(\n f\"README Nextflow minimum version in Quick Start section matched config. README: `{nf_quickstart_version}`, Config: `{self.minNextflowVersion}`\"\n )\n else:\n warned.append(\"README did not have a Nextflow minimum version mentioned in Quick Start section.\")\n\n return {\"passed\": passed, \"warned\": warned, \"failed\": failed}\n", "path": "nf_core/lint/readme.py"}], "after_files": [{"content": "#!/usr/bin/env python\n\nimport os\nimport re\n\n\ndef readme(self):\n \"\"\"Repository ``README.md`` tests\n\n The ``README.md`` files for a project are very important and must meet some requirements:\n\n * Nextflow badge\n\n * If no Nextflow badge is found, a warning is given\n * If a badge is found but the version doesn't match the minimum version in the config file, the test fails\n * Example badge code:\n\n .. code-block:: md\n\n [](https://www.nextflow.io/)\n\n * Bioconda badge\n\n * If your pipeline contains a file called ``environment.yml`` in the root directory, a bioconda badge is required\n * Required badge code:\n\n .. code-block:: md\n\n [](https://bioconda.github.io/)\n\n .. note:: These badges are a markdown image ```` *inside* a markdown link ``[markdown image](<link URL>)``, so a bit fiddly to write.\n \"\"\"\n passed = []\n warned = []\n failed = []\n\n with open(os.path.join(self.wf_path, \"README.md\"), \"r\") as fh:\n content = fh.read()\n\n # Check that there is a readme badge showing the minimum required version of Nextflow\n # [](https://www.nextflow.io/)\n # and that it has the correct version\n nf_badge_re = r\"\\[!\\[Nextflow\\]\\(https://img\\.shields\\.io/badge/nextflow%20DSL2-%E2%89%A5([\\d\\.]+)-23aa62\\.svg\\)\\]\\(https://www\\.nextflow\\.io/\\)\"\n match = re.search(nf_badge_re, content)\n if match:\n nf_badge_version = match.group(1).strip(\"'\\\"\")\n try:\n assert nf_badge_version == self.minNextflowVersion\n except (AssertionError, KeyError):\n failed.append(\n \"README Nextflow minimum version badge does not match config. Badge: `{}`, Config: `{}`\".format(\n nf_badge_version, self.minNextflowVersion\n )\n )\n else:\n passed.append(\n \"README Nextflow minimum version badge matched config. Badge: `{}`, Config: `{}`\".format(\n nf_badge_version, self.minNextflowVersion\n )\n )\n else:\n warned.append(\"README did not have a Nextflow minimum version badge.\")\n\n # Check that the minimum version mentioned in the quick start section is consistent\n # Looking for: \"1. 
Install [`Nextflow`](https://www.nextflow.io/docs/latest/getstarted.html#installation) (`>=21.10.3`)\"\n nf_version_re = r\"1\\.\\s*Install\\s*\\[`Nextflow`\\]\\(https://www.nextflow.io/docs/latest/getstarted.html#installation\\)\\s*\\(`>=(\\d*\\.\\d*\\.\\d*)`\\)\"\n match = re.search(nf_version_re, content)\n if match:\n nf_quickstart_version = match.group(1)\n try:\n assert nf_quickstart_version == self.minNextflowVersion\n except (AssertionError, KeyError):\n failed.append(\n f\"README Nextflow minimium version in Quick Start section does not match config. README: `{nf_quickstart_version}`, Config `{self.minNextflowVersion}`\"\n )\n else:\n passed.append(\n f\"README Nextflow minimum version in Quick Start section matched config. README: `{nf_quickstart_version}`, Config: `{self.minNextflowVersion}`\"\n )\n else:\n warned.append(\"README did not have a Nextflow minimum version mentioned in Quick Start section.\")\n\n return {\"passed\": passed, \"warned\": warned, \"failed\": failed}\n", "path": "nf_core/lint/readme.py"}]} | 1,665 | 392 |
gh_patches_debug_8873 | rasdani/github-patches | git_diff | kubeflow__pipelines-4132 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
allow output artifact store configuration (vs hard coded)
It seems like the output artifacts are always stored in a specific MinIO service, port, namespace, bucket, secrets, etc. (`minio-service.kubeflow:9000`).
See: https://github.com/kubeflow/pipelines/blob/f40a22a3f4a8e06d20cf3e3f425b5058d5c87e0b/sdk/python/kfp/compiler/_op_to_template.py#L148
It would be great to make this flexible, e.g. allow using S3, or change namespace or bucket names.
I suggest making it configurable; I can do such a PR if we agree it's needed.
Flexible pipeline service (host) path in client SDK
When creating an SDK `Client()`, the path to the `ml-pipeline` API service is loaded from a hard-coded value (`ml-pipeline.kubeflow.svc.cluster.local:8888`) which indicates a specific k8s namespace. It can be valuable to load that default value from an env variable, i.e. changing the line in `_client.py` from:
`config.host = host if host else Client.IN_CLUSTER_DNS_NAME`
to:
`config.host = host or os.environ.get('ML_PIPELINE_DNS_NAME',Client.IN_CLUSTER_DNS_NAME)`
Also note that when a user provides the `host` parameter, the IPython output points to the API server and not to the UI service (see the logic in `_get_url_prefix()`); this seems like a potential bug.
If it's acceptable, I can submit a PR for the line change above.
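A self-contained sketch of the suggested fallback chain (the env var name `ML_PIPELINE_DNS_NAME` comes from the proposal above and is otherwise hypothetical):

```python
import os

IN_CLUSTER_DNS_NAME = 'ml-pipeline.kubeflow.svc.cluster.local:8888'  # current hard-coded default


def resolve_pipeline_host(host=None):
    """Prefer an explicit host, then an environment override, then the in-cluster default."""
    return host or os.environ.get('ML_PIPELINE_DNS_NAME', IN_CLUSTER_DNS_NAME)
```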
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sdk/python/setup.py`
Content:
```
1 # Copyright 2018 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import os
16 import re
17 from setuptools import setup
18
19 NAME = 'kfp'
20 #VERSION = .... Change the version in kfp/__init__.py
21
22 REQUIRES = [
23 'PyYAML',
24 'google-cloud-storage>=1.13.0',
25 'kubernetes>=8.0.0, <12.0.0',
26 'google-auth>=1.6.1',
27 'requests_toolbelt>=0.8.0',
28 'cloudpickle',
29 # Update the upper version whenever a new major version of the
30 # kfp-server-api package is released.
31 # Update the lower version when kfp sdk depends on new apis/fields in
32 # kfp-server-api.
33 # Note, please also update ./requirements.in
34 'kfp-server-api>=0.2.5, <2.0.0',
35 'jsonschema >= 3.0.1',
36 'tabulate',
37 'click',
38 'Deprecated',
39 'strip-hints',
40 ]
41
42
43 def find_version(*file_path_parts):
44 here = os.path.abspath(os.path.dirname(__file__))
45 with open(os.path.join(here, *file_path_parts), 'r') as fp:
46 version_file_text = fp.read()
47
48 version_match = re.search(
49 r"^__version__ = ['\"]([^'\"]*)['\"]",
50 version_file_text,
51 re.M,
52 )
53 if version_match:
54 return version_match.group(1)
55
56 raise RuntimeError('Unable to find version string.')
57
58
59 setup(
60 name=NAME,
61 version=find_version('kfp', '__init__.py'),
62 description='KubeFlow Pipelines SDK',
63 author='google',
64 install_requires=REQUIRES,
65 packages=[
66 'kfp',
67 'kfp.cli',
68 'kfp.cli.diagnose_me',
69 'kfp.compiler',
70 'kfp.components',
71 'kfp.components.structures',
72 'kfp.components.structures.kubernetes',
73 'kfp.containers',
74 'kfp.dsl',
75 'kfp.dsl.extensions',
76 'kfp.notebook',
77 ],
78 classifiers=[
79 'Intended Audience :: Developers',
80 'Intended Audience :: Education',
81 'Intended Audience :: Science/Research',
82 'License :: OSI Approved :: Apache Software License',
83 'Programming Language :: Python :: 3',
84 'Programming Language :: Python :: 3.5',
85 'Programming Language :: Python :: 3.6',
86 'Programming Language :: Python :: 3.7',
87 'Topic :: Scientific/Engineering',
88 'Topic :: Scientific/Engineering :: Artificial Intelligence',
89 'Topic :: Software Development',
90 'Topic :: Software Development :: Libraries',
91 'Topic :: Software Development :: Libraries :: Python Modules',
92 ],
93 python_requires='>=3.5.3',
94 include_package_data=True,
95 entry_points={
96 'console_scripts': [
97 'dsl-compile = kfp.compiler.main:main', 'kfp=kfp.__main__:main'
98 ]
99 })
100
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/sdk/python/setup.py b/sdk/python/setup.py
--- a/sdk/python/setup.py
+++ b/sdk/python/setup.py
@@ -39,6 +39,10 @@
'strip-hints',
]
+TESTS_REQUIRE = [
+ 'mock',
+]
+
def find_version(*file_path_parts):
here = os.path.abspath(os.path.dirname(__file__))
@@ -62,6 +66,7 @@
description='KubeFlow Pipelines SDK',
author='google',
install_requires=REQUIRES,
+ tests_require=TESTS_REQUIRE,
packages=[
'kfp',
'kfp.cli',
| {"golden_diff": "diff --git a/sdk/python/setup.py b/sdk/python/setup.py\n--- a/sdk/python/setup.py\n+++ b/sdk/python/setup.py\n@@ -39,6 +39,10 @@\n 'strip-hints',\n ]\n \n+TESTS_REQUIRE = [\n+ 'mock',\n+]\n+\n \n def find_version(*file_path_parts):\n here = os.path.abspath(os.path.dirname(__file__))\n@@ -62,6 +66,7 @@\n description='KubeFlow Pipelines SDK',\n author='google',\n install_requires=REQUIRES,\n+ tests_require=TESTS_REQUIRE,\n packages=[\n 'kfp',\n 'kfp.cli',\n", "issue": "allow output artifact store configuration (vs hard coded)\nit seems like the output artifacts are always stored in a specific minio service, port, namespace, bucket, secrets, etc (`minio-service.kubeflow:9000`). \r\n\r\nsee: https://github.com/kubeflow/pipelines/blob/f40a22a3f4a8e06d20cf3e3f425b5058d5c87e0b/sdk/python/kfp/compiler/_op_to_template.py#L148\r\n\r\nit would be great to make it flexible, e.g. allow using S3, or change namespace or bucket names.\r\ni suggest making it configurable, i can do such PR if we agree its needed. \nflexible pipeline service (host) path in client SDK \nwhen creating an SDK `Client()` the path to `ml-pipeline` API service is loaded from a hard coded value (`ml-pipeline.kubeflow.svc.cluster.local:8888`) which indicate a specific k8s namespace. it can be valuable to load that default value from an env variable, i.e. changing the line in `_client.py` from:\r\n\r\n`config.host = host if host else Client.IN_CLUSTER_DNS_NAME`\r\n\r\nto:\r\n\r\n`config.host = host or os.environ.get('ML_PIPELINE_DNS_NAME',Client.IN_CLUSTER_DNS_NAME)`\r\n\r\nalso note that when a user provide the `host` parameter, the ipython output points to the API server and not to the UI service (see the logic in `_get_url_prefix()`), it seems like a potential bug\r\n\r\nif its acceptable i can submit a PR for the line change above\r\n \n", "before_files": [{"content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\nimport re\nfrom setuptools import setup\n\nNAME = 'kfp'\n#VERSION = .... 
Change the version in kfp/__init__.py\n\nREQUIRES = [\n 'PyYAML',\n 'google-cloud-storage>=1.13.0',\n 'kubernetes>=8.0.0, <12.0.0',\n 'google-auth>=1.6.1',\n 'requests_toolbelt>=0.8.0',\n 'cloudpickle',\n # Update the upper version whenever a new major version of the\n # kfp-server-api package is released.\n # Update the lower version when kfp sdk depends on new apis/fields in\n # kfp-server-api.\n # Note, please also update ./requirements.in\n 'kfp-server-api>=0.2.5, <2.0.0',\n 'jsonschema >= 3.0.1',\n 'tabulate',\n 'click',\n 'Deprecated',\n 'strip-hints',\n]\n\n\ndef find_version(*file_path_parts):\n here = os.path.abspath(os.path.dirname(__file__))\n with open(os.path.join(here, *file_path_parts), 'r') as fp:\n version_file_text = fp.read()\n\n version_match = re.search(\n r\"^__version__ = ['\\\"]([^'\\\"]*)['\\\"]\",\n version_file_text,\n re.M,\n )\n if version_match:\n return version_match.group(1)\n\n raise RuntimeError('Unable to find version string.')\n\n\nsetup(\n name=NAME,\n version=find_version('kfp', '__init__.py'),\n description='KubeFlow Pipelines SDK',\n author='google',\n install_requires=REQUIRES,\n packages=[\n 'kfp',\n 'kfp.cli',\n 'kfp.cli.diagnose_me',\n 'kfp.compiler',\n 'kfp.components',\n 'kfp.components.structures',\n 'kfp.components.structures.kubernetes',\n 'kfp.containers',\n 'kfp.dsl',\n 'kfp.dsl.extensions',\n 'kfp.notebook',\n ],\n classifiers=[\n 'Intended Audience :: Developers',\n 'Intended Audience :: Education',\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Topic :: Scientific/Engineering',\n 'Topic :: Scientific/Engineering :: Artificial Intelligence',\n 'Topic :: Software Development',\n 'Topic :: Software Development :: Libraries',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n ],\n python_requires='>=3.5.3',\n include_package_data=True,\n entry_points={\n 'console_scripts': [\n 'dsl-compile = kfp.compiler.main:main', 'kfp=kfp.__main__:main'\n ]\n })\n", "path": "sdk/python/setup.py"}], "after_files": [{"content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\nimport re\nfrom setuptools import setup\n\nNAME = 'kfp'\n#VERSION = .... 
Change the version in kfp/__init__.py\n\nREQUIRES = [\n 'PyYAML',\n 'google-cloud-storage>=1.13.0',\n 'kubernetes>=8.0.0, <12.0.0',\n 'google-auth>=1.6.1',\n 'requests_toolbelt>=0.8.0',\n 'cloudpickle',\n # Update the upper version whenever a new major version of the\n # kfp-server-api package is released.\n # Update the lower version when kfp sdk depends on new apis/fields in\n # kfp-server-api.\n # Note, please also update ./requirements.in\n 'kfp-server-api>=0.2.5, <2.0.0',\n 'jsonschema >= 3.0.1',\n 'tabulate',\n 'click',\n 'Deprecated',\n 'strip-hints',\n]\n\nTESTS_REQUIRE = [\n 'mock',\n]\n\n\ndef find_version(*file_path_parts):\n here = os.path.abspath(os.path.dirname(__file__))\n with open(os.path.join(here, *file_path_parts), 'r') as fp:\n version_file_text = fp.read()\n\n version_match = re.search(\n r\"^__version__ = ['\\\"]([^'\\\"]*)['\\\"]\",\n version_file_text,\n re.M,\n )\n if version_match:\n return version_match.group(1)\n\n raise RuntimeError('Unable to find version string.')\n\n\nsetup(\n name=NAME,\n version=find_version('kfp', '__init__.py'),\n description='KubeFlow Pipelines SDK',\n author='google',\n install_requires=REQUIRES,\n tests_require=TESTS_REQUIRE,\n packages=[\n 'kfp',\n 'kfp.cli',\n 'kfp.cli.diagnose_me',\n 'kfp.compiler',\n 'kfp.components',\n 'kfp.components.structures',\n 'kfp.components.structures.kubernetes',\n 'kfp.containers',\n 'kfp.dsl',\n 'kfp.dsl.extensions',\n 'kfp.notebook',\n ],\n classifiers=[\n 'Intended Audience :: Developers',\n 'Intended Audience :: Education',\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Topic :: Scientific/Engineering',\n 'Topic :: Scientific/Engineering :: Artificial Intelligence',\n 'Topic :: Software Development',\n 'Topic :: Software Development :: Libraries',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n ],\n python_requires='>=3.5.3',\n include_package_data=True,\n entry_points={\n 'console_scripts': [\n 'dsl-compile = kfp.compiler.main:main', 'kfp=kfp.__main__:main'\n ]\n })\n", "path": "sdk/python/setup.py"}]} | 1,575 | 141 |
gh_patches_debug_29733 | rasdani/github-patches | git_diff | bridgecrewio__checkov-4879 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Unable to Evaluate Final Result from condition
**Describe the issue**
CKV_GCP_43: "Ensure KMS encryption keys are rotated within a period of 90 days"
**Examples**
Check: CKV_GCP_43: "Ensure KMS encryption keys are rotated within a period of 90 days"
FAILED for resource: module.kms.google_kms_crypto_key.key
File: /main.tf:11-29
Calling File: /example/production/main.tf:1-6
Guide: https://docs.bridgecrew.io/docs/bc_gcp_general_4
11 | resource "google_kms_crypto_key" "key" {
12 | count = var.prevent_destroy ? length(var.keys) : 0
13 | name = var.keys[count.index]
14 | key_ring = google_kms_key_ring.key_ring.id
15 | rotation_period = contains(["ASYMMETRIC_SIGN", "ASYMMETRIC_DECRYPT"], var.purpose) ? null : var.key_rotation_period
16 | #rotation_period = var.key_rotation_period
17 | purpose = var.purpose
18 |
19 | lifecycle {
20 | prevent_destroy = true
21 | }
22 |
23 | version_template {
24 | algorithm = var.key_algorithm
25 | protection_level = var.key_protection_level
26 | }
27 |
28 | labels = var.labels
29 | }
Checkov should report an error only for ASYMMETRIC key creation, not for the ENCRYPT_DECRYPT purpose of a KMS key. Even after setting the purpose to ENCRYPT_DECRYPT and the key_rotation_period variable to 90 days (7776000s), the check is failing.
**Version (please complete the following information):**
- Checkov Version 2.3.156
**Additional context**
`contains(["ASYMMETRIC_SIGN", "ASYMMETRIC_DECRYPT"], var.purpose) ? null : var.key_rotation_period`
The above line should be evaluated and marked as passed for GCP KMS, since ASYMMETRIC keys do not support automatic rotation.
--- END ISSUE ---
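A sketch of how the check could special-case asymmetric purposes instead of failing them (a hedged illustration; `evaluate_rotation` is a hypothetical wrapper around the existing scan logic):

```python
from checkov.common.models.enums import CheckResult

# GCP KMS cannot auto-rotate asymmetric keys, so a missing rotation_period
# should not be treated as a failure for these purposes.
ASYMMETRIC_KEYS = {"ASYMMETRIC_DECRYPT", "ASYMMETRIC_SIGN"}


def evaluate_rotation(conf):
    purpose = conf.get("purpose")
    if purpose and isinstance(purpose, list) and purpose[0] in ASYMMETRIC_KEYS:
        return CheckResult.UNKNOWN  # skip rather than fail
    ...  # fall through to the existing rotation_period validation
```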
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `checkov/terraform/checks/resource/gcp/GoogleKMSRotationPeriod.py`
Content:
```
1 from typing import Dict, List, Any
2
3 from checkov.common.util.type_forcers import force_int
4
5 from checkov.common.models.enums import CheckResult, CheckCategories
6 from checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck
7
8 # rotation_period time unit is seconds
9 ONE_DAY = 24 * 60 * 60
10 NINETY_DAYS = 90 * ONE_DAY
11
12
13 class GoogleKMSKeyRotationPeriod(BaseResourceCheck):
14 def __init__(self) -> None:
15 name = "Ensure KMS encryption keys are rotated within a period of 90 days"
16 id = "CKV_GCP_43"
17 supported_resources = ["google_kms_crypto_key"]
18 categories = [CheckCategories.GENERAL_SECURITY]
19 super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
20
21 def scan_resource_conf(self, conf: Dict[str, List[Any]]) -> CheckResult:
22 self.evaluated_keys = ["rotation_period"]
23 rotation = conf.get("rotation_period")
24 if rotation and rotation[0]:
25 time = force_int(rotation[0][:-1])
26 if time and ONE_DAY <= time <= NINETY_DAYS:
27 return CheckResult.PASSED
28 return CheckResult.FAILED
29
30
31 check = GoogleKMSKeyRotationPeriod()
32
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/checkov/terraform/checks/resource/gcp/GoogleKMSRotationPeriod.py b/checkov/terraform/checks/resource/gcp/GoogleKMSRotationPeriod.py
--- a/checkov/terraform/checks/resource/gcp/GoogleKMSRotationPeriod.py
+++ b/checkov/terraform/checks/resource/gcp/GoogleKMSRotationPeriod.py
@@ -5,6 +5,7 @@
from checkov.common.models.enums import CheckResult, CheckCategories
from checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck
+ASYMMETRIC_KEYS = {"ASYMMETRIC_DECRYPT", "ASYMMETRIC_SIGN"}
# rotation_period time unit is seconds
ONE_DAY = 24 * 60 * 60
NINETY_DAYS = 90 * ONE_DAY
@@ -14,11 +15,17 @@
def __init__(self) -> None:
name = "Ensure KMS encryption keys are rotated within a period of 90 days"
id = "CKV_GCP_43"
- supported_resources = ["google_kms_crypto_key"]
- categories = [CheckCategories.GENERAL_SECURITY]
+ supported_resources = ("google_kms_crypto_key",)
+ categories = (CheckCategories.GENERAL_SECURITY,)
super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
def scan_resource_conf(self, conf: Dict[str, List[Any]]) -> CheckResult:
+ purpose = conf.get("purpose")
+ if purpose and isinstance(purpose, list) and purpose[0] in ASYMMETRIC_KEYS:
+ # https://cloud.google.com/kms/docs/key-rotation#asymmetric
+ # automatic key rotation is not supported for asymmetric keys
+ return CheckResult.UNKNOWN
+
self.evaluated_keys = ["rotation_period"]
rotation = conf.get("rotation_period")
if rotation and rotation[0]:
| {"golden_diff": "diff --git a/checkov/terraform/checks/resource/gcp/GoogleKMSRotationPeriod.py b/checkov/terraform/checks/resource/gcp/GoogleKMSRotationPeriod.py\n--- a/checkov/terraform/checks/resource/gcp/GoogleKMSRotationPeriod.py\n+++ b/checkov/terraform/checks/resource/gcp/GoogleKMSRotationPeriod.py\n@@ -5,6 +5,7 @@\n from checkov.common.models.enums import CheckResult, CheckCategories\n from checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck\n \n+ASYMMETRIC_KEYS = {\"ASYMMETRIC_DECRYPT\", \"ASYMMETRIC_SIGN\"}\n # rotation_period time unit is seconds\n ONE_DAY = 24 * 60 * 60\n NINETY_DAYS = 90 * ONE_DAY\n@@ -14,11 +15,17 @@\n def __init__(self) -> None:\n name = \"Ensure KMS encryption keys are rotated within a period of 90 days\"\n id = \"CKV_GCP_43\"\n- supported_resources = [\"google_kms_crypto_key\"]\n- categories = [CheckCategories.GENERAL_SECURITY]\n+ supported_resources = (\"google_kms_crypto_key\",)\n+ categories = (CheckCategories.GENERAL_SECURITY,)\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n \n def scan_resource_conf(self, conf: Dict[str, List[Any]]) -> CheckResult:\n+ purpose = conf.get(\"purpose\")\n+ if purpose and isinstance(purpose, list) and purpose[0] in ASYMMETRIC_KEYS:\n+ # https://cloud.google.com/kms/docs/key-rotation#asymmetric\n+ # automatic key rotation is not supported for asymmetric keys\n+ return CheckResult.UNKNOWN\n+\n self.evaluated_keys = [\"rotation_period\"]\n rotation = conf.get(\"rotation_period\")\n if rotation and rotation[0]:\n", "issue": "Unable to Evaluate Final Result from condition \n**Describe the issue**\r\nCKV_GCP_43: \"Ensure KMS encryption keys are rotated within a period of 90 days\"\r\n\r\n**Examples**\r\nCheck: CKV_GCP_43: \"Ensure KMS encryption keys are rotated within a period of 90 days\"\r\n\tFAILED for resource: module.kms.google_kms_crypto_key.key\r\n\tFile: /main.tf:11-29\r\n\tCalling File: /example/production/main.tf:1-6\r\n\tGuide: https://docs.bridgecrew.io/docs/bc_gcp_general_4\r\n\r\n\t\t11 | resource \"google_kms_crypto_key\" \"key\" {\r\n\t\t12 | count = var.prevent_destroy ? length(var.keys) : 0\r\n\t\t13 | name = var.keys[count.index]\r\n\t\t14 | key_ring = google_kms_key_ring.key_ring.id\r\n\t\t15 | rotation_period = contains([\"ASYMMETRIC_SIGN\", \"ASYMMETRIC_DECRYPT\"], var.purpose) ? null : var.key_rotation_period\r\n\t\t16 | #rotation_period = var.key_rotation_period\r\n\t\t17 | purpose = var.purpose\r\n\t\t18 |\r\n\t\t19 | lifecycle {\r\n\t\t20 | prevent_destroy = true\r\n\t\t21 | }\r\n\t\t22 |\r\n\t\t23 | version_template {\r\n\t\t24 | algorithm = var.key_algorithm\r\n\t\t25 | protection_level = var.key_protection_level\r\n\t\t26 | }\r\n\t\t27 |\r\n\t\t28 | labels = var.labels\r\n\t\t29 | }\r\n\r\nCheckov should providing error only in ASYMMETRIC key creation not the ENCRYPT_DCRYPT purpose for KMS key. Even after setting the purpose to ENCRYPT_DCRYPT and key_rotation_period variable to 90 days(7776000s), check is failing.\r\n**Version (please complete the following information):**\r\n - Checkov Version 2.3.156\r\n\r\n**Additional context**\r\n`contains([\"ASYMMETRIC_SIGN\", \"ASYMMETRIC_DECRYPT\"], var.purpose) ? 
null : var.key_rotation_period`\r\nAbove line should be evaluated and marked as passed for GCP KMS as ASYMMETRIC key is not supporting Automatic rotation.\r\n\n", "before_files": [{"content": "from typing import Dict, List, Any\n\nfrom checkov.common.util.type_forcers import force_int\n\nfrom checkov.common.models.enums import CheckResult, CheckCategories\nfrom checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck\n\n# rotation_period time unit is seconds\nONE_DAY = 24 * 60 * 60\nNINETY_DAYS = 90 * ONE_DAY\n\n\nclass GoogleKMSKeyRotationPeriod(BaseResourceCheck):\n def __init__(self) -> None:\n name = \"Ensure KMS encryption keys are rotated within a period of 90 days\"\n id = \"CKV_GCP_43\"\n supported_resources = [\"google_kms_crypto_key\"]\n categories = [CheckCategories.GENERAL_SECURITY]\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n\n def scan_resource_conf(self, conf: Dict[str, List[Any]]) -> CheckResult:\n self.evaluated_keys = [\"rotation_period\"]\n rotation = conf.get(\"rotation_period\")\n if rotation and rotation[0]:\n time = force_int(rotation[0][:-1])\n if time and ONE_DAY <= time <= NINETY_DAYS:\n return CheckResult.PASSED\n return CheckResult.FAILED\n\n\ncheck = GoogleKMSKeyRotationPeriod()\n", "path": "checkov/terraform/checks/resource/gcp/GoogleKMSRotationPeriod.py"}], "after_files": [{"content": "from typing import Dict, List, Any\n\nfrom checkov.common.util.type_forcers import force_int\n\nfrom checkov.common.models.enums import CheckResult, CheckCategories\nfrom checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck\n\nASYMMETRIC_KEYS = {\"ASYMMETRIC_DECRYPT\", \"ASYMMETRIC_SIGN\"}\n# rotation_period time unit is seconds\nONE_DAY = 24 * 60 * 60\nNINETY_DAYS = 90 * ONE_DAY\n\n\nclass GoogleKMSKeyRotationPeriod(BaseResourceCheck):\n def __init__(self) -> None:\n name = \"Ensure KMS encryption keys are rotated within a period of 90 days\"\n id = \"CKV_GCP_43\"\n supported_resources = (\"google_kms_crypto_key\",)\n categories = (CheckCategories.GENERAL_SECURITY,)\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n\n def scan_resource_conf(self, conf: Dict[str, List[Any]]) -> CheckResult:\n purpose = conf.get(\"purpose\")\n if purpose and isinstance(purpose, list) and purpose[0] in ASYMMETRIC_KEYS:\n # https://cloud.google.com/kms/docs/key-rotation#asymmetric\n # automatic key rotation is not supported for asymmetric keys\n return CheckResult.UNKNOWN\n\n self.evaluated_keys = [\"rotation_period\"]\n rotation = conf.get(\"rotation_period\")\n if rotation and rotation[0]:\n time = force_int(rotation[0][:-1])\n if time and ONE_DAY <= time <= NINETY_DAYS:\n return CheckResult.PASSED\n return CheckResult.FAILED\n\n\ncheck = GoogleKMSKeyRotationPeriod()\n", "path": "checkov/terraform/checks/resource/gcp/GoogleKMSRotationPeriod.py"}]} | 1,121 | 417 |
gh_patches_debug_50437 | rasdani/github-patches | git_diff | readthedocs__readthedocs.org-5470 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Build List Screen Performance Issue
While working on #5464, I discovered a performance issue on the [build list screen](http://readthedocs.org/projects/requests/builds/).
There appear to be a couple of duplicated queries on the build list screen. One gets the project for the build (probably from `Build.get_absolute_url`) and the other gets the version (probably from `build_list_detailed.html`). This results in O(n) SQL queries, where n is the number of builds displayed. It should be possible to get the project and version for each build using `select_related`.
<img width="1155" alt="Screen Shot 2019-03-15 at 11 00 13 AM" src="https://user-images.githubusercontent.com/185043/54452149-a0d76e80-4711-11e9-82f4-763418863f94.png">
--- END ISSUE ---
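A sketch of the `select_related` approach against the queryset built in the view below (same models, just joining the two foreign keys up front):

```python
from readthedocs.builds.models import Build


def builds_for_listing(user, project):
    """Fetch builds with their project and version in a single joined query,
    avoiding one extra query per row in the build list template."""
    return Build.objects.public(
        user=user,
        project=project,
    ).select_related('project', 'version')
```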
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `readthedocs/builds/views.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 """Views for builds app."""
4
5 import logging
6 import textwrap
7
8 from django.contrib import messages
9 from django.contrib.auth.decorators import login_required
10 from django.http import (
11 HttpResponseForbidden,
12 HttpResponsePermanentRedirect,
13 HttpResponseRedirect,
14 )
15 from django.shortcuts import get_object_or_404
16 from django.urls import reverse
17 from django.utils.decorators import method_decorator
18 from django.views.generic import DetailView, ListView
19 from requests.utils import quote
20 from urllib.parse import urlparse
21
22 from readthedocs.doc_builder.exceptions import BuildEnvironmentError
23 from readthedocs.builds.models import Build, Version
24 from readthedocs.core.permissions import AdminPermission
25 from readthedocs.core.utils import trigger_build
26 from readthedocs.projects.models import Project
27
28
29 log = logging.getLogger(__name__)
30
31
32 class BuildBase:
33 model = Build
34
35 def get_queryset(self):
36 self.project_slug = self.kwargs.get('project_slug', None)
37 self.project = get_object_or_404(
38 Project.objects.protected(self.request.user),
39 slug=self.project_slug,
40 )
41 queryset = Build.objects.public(
42 user=self.request.user,
43 project=self.project,
44 )
45
46 return queryset
47
48
49 class BuildTriggerMixin:
50
51 @method_decorator(login_required)
52 def post(self, request, project_slug):
53 project = get_object_or_404(Project, slug=project_slug)
54
55 if not AdminPermission.is_admin(request.user, project):
56 return HttpResponseForbidden()
57
58 version_slug = request.POST.get('version_slug')
59 version = get_object_or_404(
60 Version,
61 project=project,
62 slug=version_slug,
63 )
64
65 update_docs_task, build = trigger_build(
66 project=project,
67 version=version,
68 )
69 if (update_docs_task, build) == (None, None):
70 # Build was skipped
71 messages.add_message(
72 request,
73 messages.WARNING,
74 "This project is currently disabled and can't trigger new builds.",
75 )
76 return HttpResponseRedirect(
77 reverse('builds_project_list', args=[project.slug]),
78 )
79
80 return HttpResponseRedirect(
81 reverse('builds_detail', args=[project.slug, build.pk]),
82 )
83
84
85 class BuildList(BuildBase, BuildTriggerMixin, ListView):
86
87 def get_context_data(self, **kwargs):
88 context = super().get_context_data(**kwargs)
89
90 active_builds = self.get_queryset().exclude(
91 state='finished',
92 ).values('id')
93
94 context['project'] = self.project
95 context['active_builds'] = active_builds
96 context['versions'] = Version.objects.public(
97 user=self.request.user,
98 project=self.project,
99 )
100 context['build_qs'] = self.get_queryset()
101
102 return context
103
104
105 class BuildDetail(BuildBase, DetailView):
106 pk_url_kwarg = 'build_pk'
107
108 def get_context_data(self, **kwargs):
109 context = super().get_context_data(**kwargs)
110 context['project'] = self.project
111
112 build = self.get_object()
113 if build.error != BuildEnvironmentError.GENERIC_WITH_BUILD_ID.format(build_id=build.pk):
114 # Do not suggest to open an issue if the error is not generic
115 return context
116
117 scheme = (
118 'https://github.com/rtfd/readthedocs.org/issues/new'
119 '?title={title}{build_id}'
120 '&body={body}'
121 )
122
123 # TODO: we could use ``.github/ISSUE_TEMPLATE.md`` here, but we would
124 # need to add some variables to it which could impact in the UX when
125 # filling an issue from the web
126 body = """
127 ## Details:
128
129 * Project URL: https://readthedocs.org/projects/{project_slug}/
130 * Build URL(if applicable): https://readthedocs.org{build_path}
131 * Read the Docs username(if applicable): {username}
132
133 ## Expected Result
134
135 *A description of what you wanted to happen*
136
137 ## Actual Result
138
139 *A description of what actually happened*""".format(
140 project_slug=self.project,
141 build_path=self.request.path,
142 username=self.request.user,
143 )
144
145 scheme_dict = {
146 'title': quote('Build error with build id #'),
147 'build_id': context['build'].id,
148 'body': quote(textwrap.dedent(body)),
149 }
150
151 issue_url = scheme.format(**scheme_dict)
152 issue_url = urlparse(issue_url).geturl()
153 context['issue_url'] = issue_url
154 return context
155
156
157 # Old build view redirects
158
159
160 def builds_redirect_list(request, project_slug): # pylint: disable=unused-argument
161 return HttpResponsePermanentRedirect(
162 reverse('builds_project_list', args=[project_slug]),
163 )
164
165
166 def builds_redirect_detail(request, project_slug, pk): # pylint: disable=unused-argument
167 return HttpResponsePermanentRedirect(
168 reverse('builds_detail', args=[project_slug, pk]),
169 )
170
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/readthedocs/builds/views.py b/readthedocs/builds/views.py
--- a/readthedocs/builds/views.py
+++ b/readthedocs/builds/views.py
@@ -41,7 +41,7 @@
queryset = Build.objects.public(
user=self.request.user,
project=self.project,
- )
+ ).select_related('project', 'version')
return queryset
| {"golden_diff": "diff --git a/readthedocs/builds/views.py b/readthedocs/builds/views.py\n--- a/readthedocs/builds/views.py\n+++ b/readthedocs/builds/views.py\n@@ -41,7 +41,7 @@\n queryset = Build.objects.public(\n user=self.request.user,\n project=self.project,\n- )\n+ ).select_related('project', 'version')\n \n return queryset\n", "issue": "Build List Screen Performance Issue\nWhile working on #5464, I discovered a performance issue on the [build list screen](http://readthedocs.org/projects/requests/builds/).\r\n\r\nThere appears to be a couple duplicated queries in the build list screen. One is to get the project (probably from `Build.get_absolute_url`) for the build and the other gets the version (probably from `build_list_detailed.html`). This results in O(n) SQL queries where n is the number of builds displayed. It should be possible to get the project and version for each build using `select_related`.\r\n\r\n<img width=\"1155\" alt=\"Screen Shot 2019-03-15 at 11 00 13 AM\" src=\"https://user-images.githubusercontent.com/185043/54452149-a0d76e80-4711-11e9-82f4-763418863f94.png\">\r\n\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\"Views for builds app.\"\"\"\n\nimport logging\nimport textwrap\n\nfrom django.contrib import messages\nfrom django.contrib.auth.decorators import login_required\nfrom django.http import (\n HttpResponseForbidden,\n HttpResponsePermanentRedirect,\n HttpResponseRedirect,\n)\nfrom django.shortcuts import get_object_or_404\nfrom django.urls import reverse\nfrom django.utils.decorators import method_decorator\nfrom django.views.generic import DetailView, ListView\nfrom requests.utils import quote\nfrom urllib.parse import urlparse\n\nfrom readthedocs.doc_builder.exceptions import BuildEnvironmentError\nfrom readthedocs.builds.models import Build, Version\nfrom readthedocs.core.permissions import AdminPermission\nfrom readthedocs.core.utils import trigger_build\nfrom readthedocs.projects.models import Project\n\n\nlog = logging.getLogger(__name__)\n\n\nclass BuildBase:\n model = Build\n\n def get_queryset(self):\n self.project_slug = self.kwargs.get('project_slug', None)\n self.project = get_object_or_404(\n Project.objects.protected(self.request.user),\n slug=self.project_slug,\n )\n queryset = Build.objects.public(\n user=self.request.user,\n project=self.project,\n )\n\n return queryset\n\n\nclass BuildTriggerMixin:\n\n @method_decorator(login_required)\n def post(self, request, project_slug):\n project = get_object_or_404(Project, slug=project_slug)\n\n if not AdminPermission.is_admin(request.user, project):\n return HttpResponseForbidden()\n\n version_slug = request.POST.get('version_slug')\n version = get_object_or_404(\n Version,\n project=project,\n slug=version_slug,\n )\n\n update_docs_task, build = trigger_build(\n project=project,\n version=version,\n )\n if (update_docs_task, build) == (None, None):\n # Build was skipped\n messages.add_message(\n request,\n messages.WARNING,\n \"This project is currently disabled and can't trigger new builds.\",\n )\n return HttpResponseRedirect(\n reverse('builds_project_list', args=[project.slug]),\n )\n\n return HttpResponseRedirect(\n reverse('builds_detail', args=[project.slug, build.pk]),\n )\n\n\nclass BuildList(BuildBase, BuildTriggerMixin, ListView):\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n\n active_builds = self.get_queryset().exclude(\n state='finished',\n ).values('id')\n\n context['project'] = self.project\n 
context['active_builds'] = active_builds\n context['versions'] = Version.objects.public(\n user=self.request.user,\n project=self.project,\n )\n context['build_qs'] = self.get_queryset()\n\n return context\n\n\nclass BuildDetail(BuildBase, DetailView):\n pk_url_kwarg = 'build_pk'\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n context['project'] = self.project\n\n build = self.get_object()\n if build.error != BuildEnvironmentError.GENERIC_WITH_BUILD_ID.format(build_id=build.pk):\n # Do not suggest to open an issue if the error is not generic\n return context\n\n scheme = (\n 'https://github.com/rtfd/readthedocs.org/issues/new'\n '?title={title}{build_id}'\n '&body={body}'\n )\n\n # TODO: we could use ``.github/ISSUE_TEMPLATE.md`` here, but we would\n # need to add some variables to it which could impact in the UX when\n # filling an issue from the web\n body = \"\"\"\n ## Details:\n\n * Project URL: https://readthedocs.org/projects/{project_slug}/\n * Build URL(if applicable): https://readthedocs.org{build_path}\n * Read the Docs username(if applicable): {username}\n\n ## Expected Result\n\n *A description of what you wanted to happen*\n\n ## Actual Result\n\n *A description of what actually happened*\"\"\".format(\n project_slug=self.project,\n build_path=self.request.path,\n username=self.request.user,\n )\n\n scheme_dict = {\n 'title': quote('Build error with build id #'),\n 'build_id': context['build'].id,\n 'body': quote(textwrap.dedent(body)),\n }\n\n issue_url = scheme.format(**scheme_dict)\n issue_url = urlparse(issue_url).geturl()\n context['issue_url'] = issue_url\n return context\n\n\n# Old build view redirects\n\n\ndef builds_redirect_list(request, project_slug): # pylint: disable=unused-argument\n return HttpResponsePermanentRedirect(\n reverse('builds_project_list', args=[project_slug]),\n )\n\n\ndef builds_redirect_detail(request, project_slug, pk): # pylint: disable=unused-argument\n return HttpResponsePermanentRedirect(\n reverse('builds_detail', args=[project_slug, pk]),\n )\n", "path": "readthedocs/builds/views.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\"Views for builds app.\"\"\"\n\nimport logging\nimport textwrap\n\nfrom django.contrib import messages\nfrom django.contrib.auth.decorators import login_required\nfrom django.http import (\n HttpResponseForbidden,\n HttpResponsePermanentRedirect,\n HttpResponseRedirect,\n)\nfrom django.shortcuts import get_object_or_404\nfrom django.urls import reverse\nfrom django.utils.decorators import method_decorator\nfrom django.views.generic import DetailView, ListView\nfrom requests.utils import quote\nfrom urllib.parse import urlparse\n\nfrom readthedocs.doc_builder.exceptions import BuildEnvironmentError\nfrom readthedocs.builds.models import Build, Version\nfrom readthedocs.core.permissions import AdminPermission\nfrom readthedocs.core.utils import trigger_build\nfrom readthedocs.projects.models import Project\n\n\nlog = logging.getLogger(__name__)\n\n\nclass BuildBase:\n model = Build\n\n def get_queryset(self):\n self.project_slug = self.kwargs.get('project_slug', None)\n self.project = get_object_or_404(\n Project.objects.protected(self.request.user),\n slug=self.project_slug,\n )\n queryset = Build.objects.public(\n user=self.request.user,\n project=self.project,\n ).select_related('project', 'version')\n\n return queryset\n\n\nclass BuildTriggerMixin:\n\n @method_decorator(login_required)\n def post(self, request, project_slug):\n project = 
get_object_or_404(Project, slug=project_slug)\n\n if not AdminPermission.is_admin(request.user, project):\n return HttpResponseForbidden()\n\n version_slug = request.POST.get('version_slug')\n version = get_object_or_404(\n Version,\n project=project,\n slug=version_slug,\n )\n\n update_docs_task, build = trigger_build(\n project=project,\n version=version,\n )\n if (update_docs_task, build) == (None, None):\n # Build was skipped\n messages.add_message(\n request,\n messages.WARNING,\n \"This project is currently disabled and can't trigger new builds.\",\n )\n return HttpResponseRedirect(\n reverse('builds_project_list', args=[project.slug]),\n )\n\n return HttpResponseRedirect(\n reverse('builds_detail', args=[project.slug, build.pk]),\n )\n\n\nclass BuildList(BuildBase, BuildTriggerMixin, ListView):\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n\n active_builds = self.get_queryset().exclude(\n state='finished',\n ).values('id')\n\n context['project'] = self.project\n context['active_builds'] = active_builds\n context['versions'] = Version.objects.public(\n user=self.request.user,\n project=self.project,\n )\n context['build_qs'] = self.get_queryset()\n\n return context\n\n\nclass BuildDetail(BuildBase, DetailView):\n pk_url_kwarg = 'build_pk'\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n context['project'] = self.project\n\n build = self.get_object()\n if build.error != BuildEnvironmentError.GENERIC_WITH_BUILD_ID.format(build_id=build.pk):\n # Do not suggest to open an issue if the error is not generic\n return context\n\n scheme = (\n 'https://github.com/rtfd/readthedocs.org/issues/new'\n '?title={title}{build_id}'\n '&body={body}'\n )\n\n # TODO: we could use ``.github/ISSUE_TEMPLATE.md`` here, but we would\n # need to add some variables to it which could impact in the UX when\n # filling an issue from the web\n body = \"\"\"\n ## Details:\n\n * Project URL: https://readthedocs.org/projects/{project_slug}/\n * Build URL(if applicable): https://readthedocs.org{build_path}\n * Read the Docs username(if applicable): {username}\n\n ## Expected Result\n\n *A description of what you wanted to happen*\n\n ## Actual Result\n\n *A description of what actually happened*\"\"\".format(\n project_slug=self.project,\n build_path=self.request.path,\n username=self.request.user,\n )\n\n scheme_dict = {\n 'title': quote('Build error with build id #'),\n 'build_id': context['build'].id,\n 'body': quote(textwrap.dedent(body)),\n }\n\n issue_url = scheme.format(**scheme_dict)\n issue_url = urlparse(issue_url).geturl()\n context['issue_url'] = issue_url\n return context\n\n\n# Old build view redirects\n\n\ndef builds_redirect_list(request, project_slug): # pylint: disable=unused-argument\n return HttpResponsePermanentRedirect(\n reverse('builds_project_list', args=[project_slug]),\n )\n\n\ndef builds_redirect_detail(request, project_slug, pk): # pylint: disable=unused-argument\n return HttpResponsePermanentRedirect(\n reverse('builds_detail', args=[project_slug, pk]),\n )\n", "path": "readthedocs/builds/views.py"}]} | 1,949 | 89 |
gh_patches_debug_29450 | rasdani/github-patches | git_diff | bridgecrewio__checkov-4476 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
AWS_CKV_7 false positive on asymmetric key check in CloudFormation
**Describe the issue**
In terraform, the check avoids false positives with an extra check against symmetric keys before checking whether rotation is enabled. This same check hasn't been configured for cloudformation:
```
def scan_resource_conf(self, conf):
# Only symmetric keys support auto rotation. The attribute is optional and defaults to symmetric.
spec = conf.get('customer_master_key_spec')
if not spec or 'SYMMETRIC_DEFAULT' in spec:
return super().scan_resource_conf(conf)
else:
return CheckResult.PASSED
```
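
For illustration, a minimal sketch of an analogous guard on the CloudFormation side is shown below. The `Properties`/`KeySpec` lookup mirrors the template shape in the example that follows; returning `CheckResult.PASSED` for non-symmetric specs is an assumption carried over from the Terraform check, not a confirmed fix.

```python
from checkov.common.models.enums import CheckCategories, CheckResult
from checkov.cloudformation.checks.resource.base_resource_value_check import BaseResourceValueCheck


class KMSRotation(BaseResourceValueCheck):
    def __init__(self) -> None:
        name = "Ensure rotation for customer created CMKs is enabled"
        id = "CKV_AWS_7"
        supported_resources = ("AWS::KMS::Key",)
        categories = (CheckCategories.ENCRYPTION,)
        super().__init__(name=name, id=id, categories=categories,
                         supported_resources=supported_resources)

    def get_inspected_key(self) -> str:
        return "Properties/EnableKeyRotation"

    def scan_resource_conf(self, conf):
        # Only symmetric keys support auto rotation, so skip asymmetric
        # specs such as RSA_3072 instead of flagging them.
        properties = conf.get("Properties")
        if isinstance(properties, dict):
            spec = properties.get("KeySpec")
            if isinstance(spec, str) and "SYMMETRIC_DEFAULT" not in spec:
                return CheckResult.PASSED  # assumed outcome in this sketch
        return super().scan_resource_conf(conf)
```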
**Examples**
```
RSASigningKey:
Type: 'AWS::KMS::Key'
Properties:
Description: RSA-3072 asymmetric KMS key for signing and verification
KeySpec: RSA_3072
KeyUsage: SIGN_VERIFY
KeyPolicy:
Version: 2012-10-17
Id: key-default-1
Statement:
- Sid: Enable IAM User Permissions
Effect: Allow
Principal:
AWS: 'arn:aws:iam::111122223333:root'
Action: 'kms:*'
Resource: '*'
- Sid: Allow administration of the key
Effect: Allow
Principal:
AWS: 'arn:aws:iam::111122223333:role/Admin'
Action:
- 'kms:Create*'
- 'kms:Describe*'
- 'kms:Enable*'
- 'kms:List*'
- 'kms:Put*'
- 'kms:Update*'
- 'kms:Revoke*'
- 'kms:Disable*'
- 'kms:Get*'
- 'kms:Delete*'
- 'kms:ScheduleKeyDeletion'
- 'kms:CancelKeyDeletion'
Resource: '*'
- Sid: Allow use of the key
Effect: Allow
Principal:
AWS: 'arn:aws:iam::111122223333:role/Developer'
Action:
- 'kms:Sign'
- 'kms:Verify'
- 'kms:DescribeKey'
Resource: '*'
```
**Version (please complete the following information):**
- Checkov Version [e.g. 2.3.0]
**Additional context**
This blocks Checkov from working with asymmetric keys in CloudFormation.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `checkov/cloudformation/checks/resource/aws/KMSRotation.py`
Content:
```
1 from checkov.common.models.enums import CheckCategories
2 from checkov.cloudformation.checks.resource.base_resource_value_check import BaseResourceValueCheck
3
4
5 class KMSRotation(BaseResourceValueCheck):
6 def __init__(self) -> None:
7 name = "Ensure rotation for customer created CMKs is enabled"
8 id = "CKV_AWS_7"
9 supported_resources = ("AWS::KMS::Key",)
10 categories = (CheckCategories.ENCRYPTION,)
11 super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
12
13 def get_inspected_key(self) -> str:
14 return "Properties/EnableKeyRotation"
15
16
17 check = KMSRotation()
18
```
Path: `checkov/terraform/checks/resource/aws/KMSRotation.py`
Content:
```
1 from checkov.terraform.checks.resource.base_resource_value_check import BaseResourceValueCheck
2 from checkov.common.models.enums import CheckCategories, CheckResult
3
4
5 class KMSRotation(BaseResourceValueCheck):
6 def __init__(self):
7 name = "Ensure rotation for customer created CMKs is enabled"
8 id = "CKV_AWS_7"
9 supported_resources = ['aws_kms_key']
10 categories = [CheckCategories.ENCRYPTION]
11 super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
12
13 def get_inspected_key(self):
14 return "enable_key_rotation"
15
16 def scan_resource_conf(self, conf):
17 # Only symmetric keys support auto rotation. The attribute is optional and defaults to symmetric.
18 spec = conf.get('customer_master_key_spec')
19 if not spec or 'SYMMETRIC_DEFAULT' in spec:
20 return super().scan_resource_conf(conf)
21 else:
22 return CheckResult.PASSED
23
24
25 check = KMSRotation()
26
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/checkov/cloudformation/checks/resource/aws/KMSRotation.py b/checkov/cloudformation/checks/resource/aws/KMSRotation.py
--- a/checkov/cloudformation/checks/resource/aws/KMSRotation.py
+++ b/checkov/cloudformation/checks/resource/aws/KMSRotation.py
@@ -1,4 +1,4 @@
-from checkov.common.models.enums import CheckCategories
+from checkov.common.models.enums import CheckCategories, CheckResult
from checkov.cloudformation.checks.resource.base_resource_value_check import BaseResourceValueCheck
@@ -13,5 +13,15 @@
def get_inspected_key(self) -> str:
return "Properties/EnableKeyRotation"
+ def scan_resource_conf(self, conf):
+ # Only symmetric keys support auto rotation. The attribute is optional and defaults to symmetric.
+ properties = conf.get("Properties")
+ if properties and isinstance(properties, dict):
+ spec = properties.get("KeySpec")
+ if spec and isinstance(spec, str):
+ if 'SYMMETRIC_DEFAULT' not in spec and 'HMAC' not in spec:
+ return CheckResult.UNKNOWN
+ return super().scan_resource_conf(conf)
+
check = KMSRotation()
diff --git a/checkov/terraform/checks/resource/aws/KMSRotation.py b/checkov/terraform/checks/resource/aws/KMSRotation.py
--- a/checkov/terraform/checks/resource/aws/KMSRotation.py
+++ b/checkov/terraform/checks/resource/aws/KMSRotation.py
@@ -16,10 +16,10 @@
def scan_resource_conf(self, conf):
# Only symmetric keys support auto rotation. The attribute is optional and defaults to symmetric.
spec = conf.get('customer_master_key_spec')
- if not spec or 'SYMMETRIC_DEFAULT' in spec:
+ if not spec or 'SYMMETRIC_DEFAULT' in spec or 'HMAC' in spec:
return super().scan_resource_conf(conf)
else:
- return CheckResult.PASSED
+ return CheckResult.UNKNOWN
check = KMSRotation()
| {"golden_diff": "diff --git a/checkov/cloudformation/checks/resource/aws/KMSRotation.py b/checkov/cloudformation/checks/resource/aws/KMSRotation.py\n--- a/checkov/cloudformation/checks/resource/aws/KMSRotation.py\n+++ b/checkov/cloudformation/checks/resource/aws/KMSRotation.py\n@@ -1,4 +1,4 @@\n-from checkov.common.models.enums import CheckCategories\n+from checkov.common.models.enums import CheckCategories, CheckResult\n from checkov.cloudformation.checks.resource.base_resource_value_check import BaseResourceValueCheck\n \n \n@@ -13,5 +13,15 @@\n def get_inspected_key(self) -> str:\n return \"Properties/EnableKeyRotation\"\n \n+ def scan_resource_conf(self, conf):\n+ # Only symmetric keys support auto rotation. The attribute is optional and defaults to symmetric.\n+ properties = conf.get(\"Properties\")\n+ if properties and isinstance(properties, dict):\n+ spec = properties.get(\"KeySpec\")\n+ if spec and isinstance(spec, str):\n+ if 'SYMMETRIC_DEFAULT' not in spec and 'HMAC' not in spec:\n+ return CheckResult.UNKNOWN\n+ return super().scan_resource_conf(conf)\n+\n \n check = KMSRotation()\ndiff --git a/checkov/terraform/checks/resource/aws/KMSRotation.py b/checkov/terraform/checks/resource/aws/KMSRotation.py\n--- a/checkov/terraform/checks/resource/aws/KMSRotation.py\n+++ b/checkov/terraform/checks/resource/aws/KMSRotation.py\n@@ -16,10 +16,10 @@\n def scan_resource_conf(self, conf):\n # Only symmetric keys support auto rotation. The attribute is optional and defaults to symmetric.\n spec = conf.get('customer_master_key_spec')\n- if not spec or 'SYMMETRIC_DEFAULT' in spec:\n+ if not spec or 'SYMMETRIC_DEFAULT' in spec or 'HMAC' in spec:\n return super().scan_resource_conf(conf)\n else:\n- return CheckResult.PASSED\n+ return CheckResult.UNKNOWN\n \n \n check = KMSRotation()\n", "issue": "AWS_CKV_7 False Positive on assymetric key check in Cloudformation\n**Describe the issue**\r\nIn terraform, the check avoids false positives with an extra check against symmetric keys before checking whether rotation is enabled. This same check hasn't been configured for cloudformation:\r\n\r\n```\r\ndef scan_resource_conf(self, conf):\r\n # Only symmetric keys support auto rotation. 
The attribute is optional and defaults to symmetric.\r\n spec = conf.get('customer_master_key_spec')\r\n if not spec or 'SYMMETRIC_DEFAULT' in spec:\r\n return super().scan_resource_conf(conf)\r\n else:\r\n return CheckResult.PASSED\r\n```\r\n\r\n**Examples**\r\n\r\n```\r\nRSASigningKey:\r\n Type: 'AWS::KMS::Key'\r\n Properties:\r\n Description: RSA-3072 asymmetric KMS key for signing and verification\r\n KeySpec: RSA_3072\r\n KeyUsage: SIGN_VERIFY\r\n KeyPolicy:\r\n Version: 2012-10-17\r\n Id: key-default-1\r\n Statement:\r\n - Sid: Enable IAM User Permissions\r\n Effect: Allow\r\n Principal:\r\n AWS: 'arn:aws:iam::111122223333:root'\r\n Action: 'kms:*'\r\n Resource: '*'\r\n - Sid: Allow administration of the key\r\n Effect: Allow\r\n Principal:\r\n AWS: 'arn:aws:iam::111122223333:role/Admin'\r\n Action:\r\n - 'kms:Create*'\r\n - 'kms:Describe*'\r\n - 'kms:Enable*'\r\n - 'kms:List*'\r\n - 'kms:Put*'\r\n - 'kms:Update*'\r\n - 'kms:Revoke*'\r\n - 'kms:Disable*'\r\n - 'kms:Get*'\r\n - 'kms:Delete*'\r\n - 'kms:ScheduleKeyDeletion'\r\n - 'kms:CancelKeyDeletion'\r\n Resource: '*'\r\n - Sid: Allow use of the key\r\n Effect: Allow\r\n Principal:\r\n AWS: 'arn:aws:iam::111122223333:role/Developer'\r\n Action:\r\n - 'kms:Sign'\r\n - 'kms:Verify'\r\n - 'kms:DescribeKey'\r\n Resource: '*'\r\n```\r\n\r\n**Version (please complete the following information):**\r\n - Checkov Version [e.g. 2.3.0]\r\n\r\n**Additional context**\r\n\r\nThis blocks checkov working for assymetric keys in CFN.\r\n\n", "before_files": [{"content": "from checkov.common.models.enums import CheckCategories\nfrom checkov.cloudformation.checks.resource.base_resource_value_check import BaseResourceValueCheck\n\n\nclass KMSRotation(BaseResourceValueCheck):\n def __init__(self) -> None:\n name = \"Ensure rotation for customer created CMKs is enabled\"\n id = \"CKV_AWS_7\"\n supported_resources = (\"AWS::KMS::Key\",)\n categories = (CheckCategories.ENCRYPTION,)\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n\n def get_inspected_key(self) -> str:\n return \"Properties/EnableKeyRotation\"\n\n\ncheck = KMSRotation()\n", "path": "checkov/cloudformation/checks/resource/aws/KMSRotation.py"}, {"content": "from checkov.terraform.checks.resource.base_resource_value_check import BaseResourceValueCheck\nfrom checkov.common.models.enums import CheckCategories, CheckResult\n\n\nclass KMSRotation(BaseResourceValueCheck):\n def __init__(self):\n name = \"Ensure rotation for customer created CMKs is enabled\"\n id = \"CKV_AWS_7\"\n supported_resources = ['aws_kms_key']\n categories = [CheckCategories.ENCRYPTION]\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n\n def get_inspected_key(self):\n return \"enable_key_rotation\"\n\n def scan_resource_conf(self, conf):\n # Only symmetric keys support auto rotation. 
The attribute is optional and defaults to symmetric.\n spec = conf.get('customer_master_key_spec')\n if not spec or 'SYMMETRIC_DEFAULT' in spec:\n return super().scan_resource_conf(conf)\n else:\n return CheckResult.PASSED\n\n\ncheck = KMSRotation()\n", "path": "checkov/terraform/checks/resource/aws/KMSRotation.py"}], "after_files": [{"content": "from checkov.common.models.enums import CheckCategories, CheckResult\nfrom checkov.cloudformation.checks.resource.base_resource_value_check import BaseResourceValueCheck\n\n\nclass KMSRotation(BaseResourceValueCheck):\n def __init__(self) -> None:\n name = \"Ensure rotation for customer created CMKs is enabled\"\n id = \"CKV_AWS_7\"\n supported_resources = (\"AWS::KMS::Key\",)\n categories = (CheckCategories.ENCRYPTION,)\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n\n def get_inspected_key(self) -> str:\n return \"Properties/EnableKeyRotation\"\n\n def scan_resource_conf(self, conf):\n # Only symmetric keys support auto rotation. The attribute is optional and defaults to symmetric.\n properties = conf.get(\"Properties\")\n if properties and isinstance(properties, dict):\n spec = properties.get(\"KeySpec\")\n if spec and isinstance(spec, str):\n if 'SYMMETRIC_DEFAULT' not in spec and 'HMAC' not in spec:\n return CheckResult.UNKNOWN\n return super().scan_resource_conf(conf)\n\n\ncheck = KMSRotation()\n", "path": "checkov/cloudformation/checks/resource/aws/KMSRotation.py"}, {"content": "from checkov.terraform.checks.resource.base_resource_value_check import BaseResourceValueCheck\nfrom checkov.common.models.enums import CheckCategories, CheckResult\n\n\nclass KMSRotation(BaseResourceValueCheck):\n def __init__(self):\n name = \"Ensure rotation for customer created CMKs is enabled\"\n id = \"CKV_AWS_7\"\n supported_resources = ['aws_kms_key']\n categories = [CheckCategories.ENCRYPTION]\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n\n def get_inspected_key(self):\n return \"enable_key_rotation\"\n\n def scan_resource_conf(self, conf):\n # Only symmetric keys support auto rotation. The attribute is optional and defaults to symmetric.\n spec = conf.get('customer_master_key_spec')\n if not spec or 'SYMMETRIC_DEFAULT' in spec or 'HMAC' in spec:\n return super().scan_resource_conf(conf)\n else:\n return CheckResult.UNKNOWN\n\n\ncheck = KMSRotation()\n", "path": "checkov/terraform/checks/resource/aws/KMSRotation.py"}]} | 1,291 | 447 |
gh_patches_debug_3608 | rasdani/github-patches | git_diff | bokeh__bokeh-5620 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Correctly handle data values <= 0 on a log scale
This is a continuation of issue #5389, partially addressed by PR #5477. An issue persists where negative data is not handled correctly: all data <= 0 should be discarded before generating the plot.
As is, if `values = np.linspace(-0.1, 0.9)`, a JS error complains that it "could not set initial ranges", probably because `log(n)` is not defined for `n <= 0`.
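
Until the library discards such points itself, a user-side workaround is to mask out non-positive values before plotting. This is a minimal sketch assuming NumPy arrays; the file name and sample data are illustrative:

```python
import numpy as np
from bokeh.plotting import figure, output_file, show

x = np.linspace(0.1, 3.0, 50)
values = np.linspace(-0.1, 0.9, 50)

# log(n) is undefined for n <= 0, so drop those points while keeping
# x and values aligned.
mask = values > 0
x, values = x[mask], values[mask]

output_file("log_filtered.html")  # illustrative file name

p = figure(plot_width=400, plot_height=400, y_axis_type="log")
p.line(x, values, line_width=2)
show(p)
```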
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sphinx/source/docs/user_guide/examples/plotting_log_scale_axis.py`
Content:
```
1 from bokeh.plotting import figure, output_file, show
2
3 x = [0.1, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
4 y = [10**xx for xx in x]
5
6 output_file("log.html")
7
8 # create a new plot with a log axis type
9 p = figure(plot_width=400, plot_height=400,
10 y_axis_type="log", y_range=(10**-1, 10**4))
11
12 p.line(x, y, line_width=2)
13 p.circle(x, y, fill_color="white", size=8)
14
15 show(p)
16
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/sphinx/source/docs/user_guide/examples/plotting_log_scale_axis.py b/sphinx/source/docs/user_guide/examples/plotting_log_scale_axis.py
--- a/sphinx/source/docs/user_guide/examples/plotting_log_scale_axis.py
+++ b/sphinx/source/docs/user_guide/examples/plotting_log_scale_axis.py
@@ -6,8 +6,7 @@
output_file("log.html")
# create a new plot with a log axis type
-p = figure(plot_width=400, plot_height=400,
- y_axis_type="log", y_range=(10**-1, 10**4))
+p = figure(plot_width=400, plot_height=400, y_axis_type="log")
p.line(x, y, line_width=2)
p.circle(x, y, fill_color="white", size=8)
| {"golden_diff": "diff --git a/sphinx/source/docs/user_guide/examples/plotting_log_scale_axis.py b/sphinx/source/docs/user_guide/examples/plotting_log_scale_axis.py\n--- a/sphinx/source/docs/user_guide/examples/plotting_log_scale_axis.py\n+++ b/sphinx/source/docs/user_guide/examples/plotting_log_scale_axis.py\n@@ -6,8 +6,7 @@\n output_file(\"log.html\")\n \n # create a new plot with a log axis type\n-p = figure(plot_width=400, plot_height=400,\n- y_axis_type=\"log\", y_range=(10**-1, 10**4))\n+p = figure(plot_width=400, plot_height=400, y_axis_type=\"log\")\n \n p.line(x, y, line_width=2)\n p.circle(x, y, fill_color=\"white\", size=8)\n", "issue": "Correctly handle data values <= 0 on a log scale\nThis is a continuation from issue #5389, partially adressed by PR #5477. There persists an issue where negative data is not handled correctly. All data <= 0 should be discarded before generating the plot.\r\n\r\nAs is, if `values = np.linspace(-0.1, 0.9), a JS error complains that it \"could not set initial ranges\", probably because `log(n)` for `n<=0` is not defined.\n", "before_files": [{"content": "from bokeh.plotting import figure, output_file, show\n\nx = [0.1, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]\ny = [10**xx for xx in x]\n\noutput_file(\"log.html\")\n\n# create a new plot with a log axis type\np = figure(plot_width=400, plot_height=400,\n y_axis_type=\"log\", y_range=(10**-1, 10**4))\n\np.line(x, y, line_width=2)\np.circle(x, y, fill_color=\"white\", size=8)\n\nshow(p)\n", "path": "sphinx/source/docs/user_guide/examples/plotting_log_scale_axis.py"}], "after_files": [{"content": "from bokeh.plotting import figure, output_file, show\n\nx = [0.1, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]\ny = [10**xx for xx in x]\n\noutput_file(\"log.html\")\n\n# create a new plot with a log axis type\np = figure(plot_width=400, plot_height=400, y_axis_type=\"log\")\n\np.line(x, y, line_width=2)\np.circle(x, y, fill_color=\"white\", size=8)\n\nshow(p)\n", "path": "sphinx/source/docs/user_guide/examples/plotting_log_scale_axis.py"}]} | 554 | 186 |
gh_patches_debug_26426 | rasdani/github-patches | git_diff | weni-ai__bothub-engine-106 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Disallow samples without intent or entities
Disallow sample creation without at least an intent or one entity.
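
One plausible shape for the fix is a serializer-level validator that rejects payloads carrying neither field. This is a sketch in the style of the existing validators; the exact message and wiring are assumptions:

```python
from django.utils.translation import gettext as _
from rest_framework.exceptions import ValidationError


class ExampleWithIntentOrEntityValidator(object):
    """Reject example payloads that carry neither an intent nor entities."""

    def __call__(self, attrs):
        intent = attrs.get('intent')
        entities = attrs.get('entities')
        if not intent and not entities:
            raise ValidationError(_('Define an intent or at least one entity'))
```

A serializer could then enable it by appending an instance to its `validators` list (for example in `__init__`); whether that happens in `NewRepositoryExampleSerializer` specifically is an assumption here.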
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bothub/api/validators.py`
Content:
```
1 from django.utils.translation import gettext as _
2 from rest_framework.exceptions import PermissionDenied
3 from rest_framework.exceptions import ValidationError
4
5 from bothub.common.models import RepositoryTranslatedExample
6
7
8 class CanContributeInRepositoryValidator(object):
9 def __call__(self, value):
10 user_authorization = value.get_user_authorization(
11 self.request.user)
12 if not user_authorization.can_contribute:
13 raise PermissionDenied(
14 _('You can\'t contribute in this repository'))
15
16 def set_context(self, serializer):
17 self.request = serializer.context.get('request')
18
19
20 class CanContributeInRepositoryExampleValidator(object):
21 def __call__(self, value):
22 repository = value.repository_update.repository
23 user_authorization = repository.get_user_authorization(
24 self.request.user)
25 if not user_authorization.can_contribute:
26 raise PermissionDenied(
27 _('You can\'t contribute in this repository'))
28
29 def set_context(self, serializer):
30 self.request = serializer.context.get('request')
31
32
33 class CanContributeInRepositoryTranslatedExampleValidator(object):
34 def __call__(self, value):
35 repository = value.original_example.repository_update.repository
36 user_authorization = repository.get_user_authorization(
37 self.request.user)
38 if not user_authorization.can_contribute:
39 raise PermissionDenied(
40 _('You can\'t contribute in this repository'))
41
42 def set_context(self, serializer):
43 self.request = serializer.context.get('request')
44
45
46 class TranslatedExampleEntitiesValidator(object):
47 def __call__(self, attrs):
48 original_example = attrs.get('original_example')
49 entities_valid = RepositoryTranslatedExample.same_entities_validator(
50 list(map(lambda x: dict(x), attrs.get('entities'))),
51 list(map(lambda x: x.to_dict, original_example.entities.all())))
52 if not entities_valid:
53 raise ValidationError({'entities': _('Invalid entities')})
54
55
56 class TranslatedExampleLanguageValidator(object):
57 def __call__(self, attrs):
58 original_example = attrs.get('original_example')
59 language = attrs.get('language')
60 if original_example.repository_update.language == language:
61 raise ValidationError({'language': _(
62 'Can\'t translate to the same language')})
63
```
Path: `bothub/api/serializers/example.py`
Content:
```
1 from rest_framework import serializers
2
3 from django.utils.translation import gettext as _
4
5 from bothub.common.models import Repository
6 from bothub.common.models import RepositoryExample
7 from bothub.common.models import RepositoryExampleEntity
8
9 from ..fields import EntityText
10 from ..validators import CanContributeInRepositoryExampleValidator
11 from ..validators import CanContributeInRepositoryValidator
12 from .translate import RepositoryTranslatedExampleSerializer
13
14
15 class RepositoryExampleEntitySerializer(serializers.ModelSerializer):
16 class Meta:
17 model = RepositoryExampleEntity
18 fields = [
19 'id',
20 'repository_example',
21 'start',
22 'end',
23 'entity',
24 'created_at',
25 'value',
26 ]
27
28 repository_example = serializers.PrimaryKeyRelatedField(
29 queryset=RepositoryExample.objects,
30 validators=[
31 CanContributeInRepositoryExampleValidator(),
32 ],
33 help_text=_('Example\'s ID'))
34 value = serializers.SerializerMethodField()
35
36 def get_value(self, obj):
37 return obj.value
38
39
40 class NewRepositoryExampleEntitySerializer(serializers.ModelSerializer):
41 class Meta:
42 model = RepositoryExampleEntity
43 fields = [
44 'repository_example',
45 'start',
46 'end',
47 'entity',
48 ]
49
50
51 class RepositoryExampleSerializer(serializers.ModelSerializer):
52 class Meta:
53 model = RepositoryExample
54 fields = [
55 'id',
56 'repository_update',
57 'deleted_in',
58 'text',
59 'intent',
60 'language',
61 'created_at',
62 'entities',
63 'translations',
64 ]
65 read_only_fields = [
66 'repository_update',
67 'deleted_in',
68 ]
69
70 entities = RepositoryExampleEntitySerializer(
71 many=True,
72 read_only=True)
73 translations = RepositoryTranslatedExampleSerializer(
74 many=True,
75 read_only=True)
76 language = serializers.SerializerMethodField()
77
78 def get_language(self, obj):
79 return obj.language
80
81
82 class NewRepositoryExampleSerializer(serializers.ModelSerializer):
83 class Meta:
84 model = RepositoryExample
85 fields = [
86 'id',
87 'repository',
88 'repository_update',
89 'text',
90 'entities',
91 'intent',
92 ]
93
94 id = serializers.PrimaryKeyRelatedField(
95 read_only=True,
96 style={'show': False})
97 text = EntityText(style={'entities_field': 'entities'})
98 repository = serializers.PrimaryKeyRelatedField(
99 queryset=Repository.objects,
100 validators=[
101 CanContributeInRepositoryValidator(),
102 ],
103 source='repository_update',
104 style={'show': False})
105 repository_update = serializers.PrimaryKeyRelatedField(
106 read_only=True,
107 style={'show': False})
108 entities = NewRepositoryExampleEntitySerializer(
109 many=True,
110 style={'text_field': 'text'})
111
112 def validate_repository(self, repository):
113 return repository.current_update()
114
115 def create(self, validated_data):
116 entities_data = validated_data.pop('entities')
117 example = self.Meta.model.objects.create(**validated_data)
118 for entity_data in entities_data:
119 RepositoryExampleEntity.objects.create(
120 repository_example=example,
121 **entity_data)
122 return example
123
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/bothub/api/serializers/example.py b/bothub/api/serializers/example.py
--- a/bothub/api/serializers/example.py
+++ b/bothub/api/serializers/example.py
@@ -9,6 +9,7 @@
from ..fields import EntityText
from ..validators import CanContributeInRepositoryExampleValidator
from ..validators import CanContributeInRepositoryValidator
+from ..validators import ExampleWithIntentOrEntityValidator
from .translate import RepositoryTranslatedExampleSerializer
@@ -109,6 +110,10 @@
many=True,
style={'text_field': 'text'})
+ def __init__(self, *args, **kwargs):
+ super().__init__(*args, **kwargs)
+ self.validators.append(ExampleWithIntentOrEntityValidator())
+
def validate_repository(self, repository):
return repository.current_update()
diff --git a/bothub/api/validators.py b/bothub/api/validators.py
--- a/bothub/api/validators.py
+++ b/bothub/api/validators.py
@@ -60,3 +60,12 @@
if original_example.repository_update.language == language:
raise ValidationError({'language': _(
'Can\'t translate to the same language')})
+
+
+class ExampleWithIntentOrEntityValidator(object):
+ def __call__(self, attrs):
+ intent = attrs.get('intent')
+ entities = attrs.get('entities')
+
+ if not intent and not entities:
+ raise ValidationError(_('Define a intent or one entity'))
| {"golden_diff": "diff --git a/bothub/api/serializers/example.py b/bothub/api/serializers/example.py\n--- a/bothub/api/serializers/example.py\n+++ b/bothub/api/serializers/example.py\n@@ -9,6 +9,7 @@\n from ..fields import EntityText\n from ..validators import CanContributeInRepositoryExampleValidator\n from ..validators import CanContributeInRepositoryValidator\n+from ..validators import ExampleWithIntentOrEntityValidator\n from .translate import RepositoryTranslatedExampleSerializer\n \n \n@@ -109,6 +110,10 @@\n many=True,\n style={'text_field': 'text'})\n \n+ def __init__(self, *args, **kwargs):\n+ super().__init__(*args, **kwargs)\n+ self.validators.append(ExampleWithIntentOrEntityValidator())\n+\n def validate_repository(self, repository):\n return repository.current_update()\n \ndiff --git a/bothub/api/validators.py b/bothub/api/validators.py\n--- a/bothub/api/validators.py\n+++ b/bothub/api/validators.py\n@@ -60,3 +60,12 @@\n if original_example.repository_update.language == language:\n raise ValidationError({'language': _(\n 'Can\\'t translate to the same language')})\n+\n+\n+class ExampleWithIntentOrEntityValidator(object):\n+ def __call__(self, attrs):\n+ intent = attrs.get('intent')\n+ entities = attrs.get('entities')\n+\n+ if not intent and not entities:\n+ raise ValidationError(_('Define a intent or one entity'))\n", "issue": "Disallow samples without intent or entities\nDisallow samples creation without an intent or one entity at least.\n", "before_files": [{"content": "from django.utils.translation import gettext as _\nfrom rest_framework.exceptions import PermissionDenied\nfrom rest_framework.exceptions import ValidationError\n\nfrom bothub.common.models import RepositoryTranslatedExample\n\n\nclass CanContributeInRepositoryValidator(object):\n def __call__(self, value):\n user_authorization = value.get_user_authorization(\n self.request.user)\n if not user_authorization.can_contribute:\n raise PermissionDenied(\n _('You can\\'t contribute in this repository'))\n\n def set_context(self, serializer):\n self.request = serializer.context.get('request')\n\n\nclass CanContributeInRepositoryExampleValidator(object):\n def __call__(self, value):\n repository = value.repository_update.repository\n user_authorization = repository.get_user_authorization(\n self.request.user)\n if not user_authorization.can_contribute:\n raise PermissionDenied(\n _('You can\\'t contribute in this repository'))\n\n def set_context(self, serializer):\n self.request = serializer.context.get('request')\n\n\nclass CanContributeInRepositoryTranslatedExampleValidator(object):\n def __call__(self, value):\n repository = value.original_example.repository_update.repository\n user_authorization = repository.get_user_authorization(\n self.request.user)\n if not user_authorization.can_contribute:\n raise PermissionDenied(\n _('You can\\'t contribute in this repository'))\n\n def set_context(self, serializer):\n self.request = serializer.context.get('request')\n\n\nclass TranslatedExampleEntitiesValidator(object):\n def __call__(self, attrs):\n original_example = attrs.get('original_example')\n entities_valid = RepositoryTranslatedExample.same_entities_validator(\n list(map(lambda x: dict(x), attrs.get('entities'))),\n list(map(lambda x: x.to_dict, original_example.entities.all())))\n if not entities_valid:\n raise ValidationError({'entities': _('Invalid entities')})\n\n\nclass TranslatedExampleLanguageValidator(object):\n def __call__(self, attrs):\n original_example = attrs.get('original_example')\n language = 
attrs.get('language')\n if original_example.repository_update.language == language:\n raise ValidationError({'language': _(\n 'Can\\'t translate to the same language')})\n", "path": "bothub/api/validators.py"}, {"content": "from rest_framework import serializers\n\nfrom django.utils.translation import gettext as _\n\nfrom bothub.common.models import Repository\nfrom bothub.common.models import RepositoryExample\nfrom bothub.common.models import RepositoryExampleEntity\n\nfrom ..fields import EntityText\nfrom ..validators import CanContributeInRepositoryExampleValidator\nfrom ..validators import CanContributeInRepositoryValidator\nfrom .translate import RepositoryTranslatedExampleSerializer\n\n\nclass RepositoryExampleEntitySerializer(serializers.ModelSerializer):\n class Meta:\n model = RepositoryExampleEntity\n fields = [\n 'id',\n 'repository_example',\n 'start',\n 'end',\n 'entity',\n 'created_at',\n 'value',\n ]\n\n repository_example = serializers.PrimaryKeyRelatedField(\n queryset=RepositoryExample.objects,\n validators=[\n CanContributeInRepositoryExampleValidator(),\n ],\n help_text=_('Example\\'s ID'))\n value = serializers.SerializerMethodField()\n\n def get_value(self, obj):\n return obj.value\n\n\nclass NewRepositoryExampleEntitySerializer(serializers.ModelSerializer):\n class Meta:\n model = RepositoryExampleEntity\n fields = [\n 'repository_example',\n 'start',\n 'end',\n 'entity',\n ]\n\n\nclass RepositoryExampleSerializer(serializers.ModelSerializer):\n class Meta:\n model = RepositoryExample\n fields = [\n 'id',\n 'repository_update',\n 'deleted_in',\n 'text',\n 'intent',\n 'language',\n 'created_at',\n 'entities',\n 'translations',\n ]\n read_only_fields = [\n 'repository_update',\n 'deleted_in',\n ]\n\n entities = RepositoryExampleEntitySerializer(\n many=True,\n read_only=True)\n translations = RepositoryTranslatedExampleSerializer(\n many=True,\n read_only=True)\n language = serializers.SerializerMethodField()\n\n def get_language(self, obj):\n return obj.language\n\n\nclass NewRepositoryExampleSerializer(serializers.ModelSerializer):\n class Meta:\n model = RepositoryExample\n fields = [\n 'id',\n 'repository',\n 'repository_update',\n 'text',\n 'entities',\n 'intent',\n ]\n\n id = serializers.PrimaryKeyRelatedField(\n read_only=True,\n style={'show': False})\n text = EntityText(style={'entities_field': 'entities'})\n repository = serializers.PrimaryKeyRelatedField(\n queryset=Repository.objects,\n validators=[\n CanContributeInRepositoryValidator(),\n ],\n source='repository_update',\n style={'show': False})\n repository_update = serializers.PrimaryKeyRelatedField(\n read_only=True,\n style={'show': False})\n entities = NewRepositoryExampleEntitySerializer(\n many=True,\n style={'text_field': 'text'})\n\n def validate_repository(self, repository):\n return repository.current_update()\n\n def create(self, validated_data):\n entities_data = validated_data.pop('entities')\n example = self.Meta.model.objects.create(**validated_data)\n for entity_data in entities_data:\n RepositoryExampleEntity.objects.create(\n repository_example=example,\n **entity_data)\n return example\n", "path": "bothub/api/serializers/example.py"}], "after_files": [{"content": "from django.utils.translation import gettext as _\nfrom rest_framework.exceptions import PermissionDenied\nfrom rest_framework.exceptions import ValidationError\n\nfrom bothub.common.models import RepositoryTranslatedExample\n\n\nclass CanContributeInRepositoryValidator(object):\n def __call__(self, value):\n 
user_authorization = value.get_user_authorization(\n self.request.user)\n if not user_authorization.can_contribute:\n raise PermissionDenied(\n _('You can\\'t contribute in this repository'))\n\n def set_context(self, serializer):\n self.request = serializer.context.get('request')\n\n\nclass CanContributeInRepositoryExampleValidator(object):\n def __call__(self, value):\n repository = value.repository_update.repository\n user_authorization = repository.get_user_authorization(\n self.request.user)\n if not user_authorization.can_contribute:\n raise PermissionDenied(\n _('You can\\'t contribute in this repository'))\n\n def set_context(self, serializer):\n self.request = serializer.context.get('request')\n\n\nclass CanContributeInRepositoryTranslatedExampleValidator(object):\n def __call__(self, value):\n repository = value.original_example.repository_update.repository\n user_authorization = repository.get_user_authorization(\n self.request.user)\n if not user_authorization.can_contribute:\n raise PermissionDenied(\n _('You can\\'t contribute in this repository'))\n\n def set_context(self, serializer):\n self.request = serializer.context.get('request')\n\n\nclass TranslatedExampleEntitiesValidator(object):\n def __call__(self, attrs):\n original_example = attrs.get('original_example')\n entities_valid = RepositoryTranslatedExample.same_entities_validator(\n list(map(lambda x: dict(x), attrs.get('entities'))),\n list(map(lambda x: x.to_dict, original_example.entities.all())))\n if not entities_valid:\n raise ValidationError({'entities': _('Invalid entities')})\n\n\nclass TranslatedExampleLanguageValidator(object):\n def __call__(self, attrs):\n original_example = attrs.get('original_example')\n language = attrs.get('language')\n if original_example.repository_update.language == language:\n raise ValidationError({'language': _(\n 'Can\\'t translate to the same language')})\n\n\nclass ExampleWithIntentOrEntityValidator(object):\n def __call__(self, attrs):\n intent = attrs.get('intent')\n entities = attrs.get('entities')\n\n if not intent and not entities:\n raise ValidationError(_('Define a intent or one entity'))\n", "path": "bothub/api/validators.py"}, {"content": "from rest_framework import serializers\n\nfrom django.utils.translation import gettext as _\n\nfrom bothub.common.models import Repository\nfrom bothub.common.models import RepositoryExample\nfrom bothub.common.models import RepositoryExampleEntity\n\nfrom ..fields import EntityText\nfrom ..validators import CanContributeInRepositoryExampleValidator\nfrom ..validators import CanContributeInRepositoryValidator\nfrom ..validators import ExampleWithIntentOrEntityValidator\nfrom .translate import RepositoryTranslatedExampleSerializer\n\n\nclass RepositoryExampleEntitySerializer(serializers.ModelSerializer):\n class Meta:\n model = RepositoryExampleEntity\n fields = [\n 'id',\n 'repository_example',\n 'start',\n 'end',\n 'entity',\n 'created_at',\n 'value',\n ]\n\n repository_example = serializers.PrimaryKeyRelatedField(\n queryset=RepositoryExample.objects,\n validators=[\n CanContributeInRepositoryExampleValidator(),\n ],\n help_text=_('Example\\'s ID'))\n value = serializers.SerializerMethodField()\n\n def get_value(self, obj):\n return obj.value\n\n\nclass NewRepositoryExampleEntitySerializer(serializers.ModelSerializer):\n class Meta:\n model = RepositoryExampleEntity\n fields = [\n 'repository_example',\n 'start',\n 'end',\n 'entity',\n ]\n\n\nclass RepositoryExampleSerializer(serializers.ModelSerializer):\n class Meta:\n model = 
RepositoryExample\n fields = [\n 'id',\n 'repository_update',\n 'deleted_in',\n 'text',\n 'intent',\n 'language',\n 'created_at',\n 'entities',\n 'translations',\n ]\n read_only_fields = [\n 'repository_update',\n 'deleted_in',\n ]\n\n entities = RepositoryExampleEntitySerializer(\n many=True,\n read_only=True)\n translations = RepositoryTranslatedExampleSerializer(\n many=True,\n read_only=True)\n language = serializers.SerializerMethodField()\n\n def get_language(self, obj):\n return obj.language\n\n\nclass NewRepositoryExampleSerializer(serializers.ModelSerializer):\n class Meta:\n model = RepositoryExample\n fields = [\n 'id',\n 'repository',\n 'repository_update',\n 'text',\n 'entities',\n 'intent',\n ]\n\n id = serializers.PrimaryKeyRelatedField(\n read_only=True,\n style={'show': False})\n text = EntityText(style={'entities_field': 'entities'})\n repository = serializers.PrimaryKeyRelatedField(\n queryset=Repository.objects,\n validators=[\n CanContributeInRepositoryValidator(),\n ],\n source='repository_update',\n style={'show': False})\n repository_update = serializers.PrimaryKeyRelatedField(\n read_only=True,\n style={'show': False})\n entities = NewRepositoryExampleEntitySerializer(\n many=True,\n style={'text_field': 'text'})\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.validators.append(ExampleWithIntentOrEntityValidator())\n\n def validate_repository(self, repository):\n return repository.current_update()\n\n def create(self, validated_data):\n entities_data = validated_data.pop('entities')\n example = self.Meta.model.objects.create(**validated_data)\n for entity_data in entities_data:\n RepositoryExampleEntity.objects.create(\n repository_example=example,\n **entity_data)\n return example\n", "path": "bothub/api/serializers/example.py"}]} | 1,758 | 338 |
gh_patches_debug_14973 | rasdani/github-patches | git_diff | chainer__chainer-104 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Remove `chainer-cuda-requirements`, which is deprecated
`pip install chainer-cuda-deps` is now recommended, and `chainer-cuda-requirements` is deprecated. It will be removed in a future minor release.
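
For reference, a sketch of the trimmed `setup()` call once the deprecated script and the `chainer.requirements` package are removed; fields are copied from the current `setup.py`, and the redacted author email is left as-is:

```python
#!/usr/bin/env python
from setuptools import setup

setup(
    name='chainer',
    version='1.0.1',
    description='A flexible framework of neural networks',
    author='Seiya Tokui',
    author_email='[email protected]',
    url='http://chainer.org/',
    packages=['chainer',
              'chainer.cudnn',
              'chainer.functions',
              'chainer.optimizers',
              'chainer.utils'],
    install_requires=['numpy', 'six>=1.9.0'],
    tests_require=['nose'],
)
```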
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2 from setuptools import setup
3
4 setup(
5 name='chainer',
6 version='1.0.1',
7 description='A flexible framework of neural networks',
8 author='Seiya Tokui',
9 author_email='[email protected]',
10 url='http://chainer.org/',
11 packages=['chainer',
12 'chainer.cudnn',
13 'chainer.functions',
14 'chainer.optimizers',
15 'chainer.requirements',
16 'chainer.utils'],
17 package_data={'chainer.requirements': ['cuda-requirements.txt']},
18 install_requires=['numpy',
19 'six>=1.9.0'],
20 scripts=['scripts/chainer-cuda-requirements'],
21 tests_require=['nose'],
22 )
23
```
Path: `chainer/requirements/__init__.py`
Content:
```
1 import os
2
3
4 def get_cuda_requirements_path():
5 return os.path.join(os.path.dirname(__file__), 'cuda-requirements.txt')
6
7
8 def get_cuda_requirements():
9 with open(get_cuda_requirements_path()) as f:
10 return f.read()
11
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/chainer/requirements/__init__.py b/chainer/requirements/__init__.py
deleted file mode 100644
--- a/chainer/requirements/__init__.py
+++ /dev/null
@@ -1,10 +0,0 @@
-import os
-
-
-def get_cuda_requirements_path():
- return os.path.join(os.path.dirname(__file__), 'cuda-requirements.txt')
-
-
-def get_cuda_requirements():
- with open(get_cuda_requirements_path()) as f:
- return f.read()
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -12,11 +12,8 @@
'chainer.cudnn',
'chainer.functions',
'chainer.optimizers',
- 'chainer.requirements',
'chainer.utils'],
- package_data={'chainer.requirements': ['cuda-requirements.txt']},
install_requires=['numpy',
'six>=1.9.0'],
- scripts=['scripts/chainer-cuda-requirements'],
tests_require=['nose'],
)
| {"golden_diff": "diff --git a/chainer/requirements/__init__.py b/chainer/requirements/__init__.py\ndeleted file mode 100644\n--- a/chainer/requirements/__init__.py\n+++ /dev/null\n@@ -1,10 +0,0 @@\n-import os\n-\n-\n-def get_cuda_requirements_path():\n- return os.path.join(os.path.dirname(__file__), 'cuda-requirements.txt')\n-\n-\n-def get_cuda_requirements():\n- with open(get_cuda_requirements_path()) as f:\n- return f.read()\ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -12,11 +12,8 @@\n 'chainer.cudnn',\n 'chainer.functions',\n 'chainer.optimizers',\n- 'chainer.requirements',\n 'chainer.utils'],\n- package_data={'chainer.requirements': ['cuda-requirements.txt']},\n install_requires=['numpy',\n 'six>=1.9.0'],\n- scripts=['scripts/chainer-cuda-requirements'],\n tests_require=['nose'],\n )\n", "issue": "Remove `chainer-cuda-requirements` that is deprecated\n`pip install chainer-cuda-deps` is recommended, and `chainer-cuda-requirements` is deprecated now. It will be removed in the future minor release.\n\n", "before_files": [{"content": "#!/usr/bin/env python\nfrom setuptools import setup\n\nsetup(\n name='chainer',\n version='1.0.1',\n description='A flexible framework of neural networks',\n author='Seiya Tokui',\n author_email='[email protected]',\n url='http://chainer.org/',\n packages=['chainer',\n 'chainer.cudnn',\n 'chainer.functions',\n 'chainer.optimizers',\n 'chainer.requirements',\n 'chainer.utils'],\n package_data={'chainer.requirements': ['cuda-requirements.txt']},\n install_requires=['numpy',\n 'six>=1.9.0'],\n scripts=['scripts/chainer-cuda-requirements'],\n tests_require=['nose'],\n)\n", "path": "setup.py"}, {"content": "import os\n\n\ndef get_cuda_requirements_path():\n return os.path.join(os.path.dirname(__file__), 'cuda-requirements.txt')\n\n\ndef get_cuda_requirements():\n with open(get_cuda_requirements_path()) as f:\n return f.read()\n", "path": "chainer/requirements/__init__.py"}], "after_files": [{"content": "#!/usr/bin/env python\nfrom setuptools import setup\n\nsetup(\n name='chainer',\n version='1.0.1',\n description='A flexible framework of neural networks',\n author='Seiya Tokui',\n author_email='[email protected]',\n url='http://chainer.org/',\n packages=['chainer',\n 'chainer.cudnn',\n 'chainer.functions',\n 'chainer.optimizers',\n 'chainer.utils'],\n install_requires=['numpy',\n 'six>=1.9.0'],\n tests_require=['nose'],\n)\n", "path": "setup.py"}, {"content": null, "path": "chainer/requirements/__init__.py"}]} | 582 | 234 |
gh_patches_debug_26504 | rasdani/github-patches | git_diff | getsentry__sentry-python-253 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[0.7.0] `CeleryIntegration` captures retries
Greetings fellows!
We are having an issue with `CeleryIntegration` in Sentry SDK.
### Current versions
Python 3.6.7
Django 2.1.5
Celery 4.1.1
Sentry SDK 0.7.0-0.7.1
### Current behavior
In our code (internal and 3rd-party) we use [Celery task retry functionality](http://docs.celeryproject.org/en/latest/userguide/tasks.html#retrying).
> The app.Task.retry() call will raise an exception so any code after the retry won’t be reached. This is the Retry exception, it isn’t handled as an error but rather as a semi-predicate to signify to the worker that the task is to be retried, so that it can store the correct state when a result backend is enabled.
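
For context, the retry pattern in question looks roughly like this; `do_request` and `TransientError` are hypothetical placeholders:

```python
from celery import Celery

app = Celery("tasks")


class TransientError(Exception):  # hypothetical failure type
    pass


def do_request(url):  # hypothetical helper
    raise TransientError(url)


@app.task(bind=True, max_retries=3)
def fetch(self, url):
    try:
        return do_request(url)
    except TransientError as exc:
        # retry() raises celery.exceptions.Retry, so code below this
        # line is never reached; the worker reschedules the task.
        raise self.retry(exc=exc, countdown=10)
```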
We recently switched from Raven to Sentry SDK 0.6.9, and everything seemed to work as before.
But today we updated it to the [0.7.0 release](https://github.com/getsentry/sentry-python/blob/master/CHANGES.md#070) (and later to 0.7.1).
This caused **every [`celery.exceptions.Retry`](http://docs.celeryproject.org/en/latest/reference/celery.exceptions.html#celery.exceptions.Retry) to be sent to Sentry**, which quickly filled our Sentry server with thousands of events.
Previously (with the old SDK and Raven), those exceptions were ignored and not sent to the Sentry server.
### Expected behaviour
`CeleryIntegration` does not flood the Sentry server with every retry exception: essentially the same behavior as in Raven and Sentry SDK < 0.7.0.
### Open questions
I am not sure whether the old behavior was intentional or a mistake.
If it was intended, we should reimplement it in the current integration.
If not, there should be a way to filter/ignore that kind of exception (I am not sure we can cleanly filter all retries from internal and 3rd-party code in `before_send`).
Could you help me to clarify this issue?
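
As a stopgap while this is discussed, a hedged sketch of the `before_send` route mentioned above could look like the following; the DSN is a placeholder, and note this drops every `Retry`, including ones raised by third-party tasks:

```python
import sentry_sdk
from celery.exceptions import Retry
from sentry_sdk.integrations.celery import CeleryIntegration


def drop_celery_retries(event, hint):
    exc_info = hint.get("exc_info")
    if exc_info and isinstance(exc_info[1], Retry):
        return None  # swallow the event entirely
    return event


sentry_sdk.init(
    dsn="https://[email protected]/0",  # placeholder DSN
    integrations=[CeleryIntegration()],
    before_send=drop_celery_retries,
)
```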
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sentry_sdk/integrations/celery.py`
Content:
```
1 from __future__ import absolute_import
2
3 import sys
4
5 from celery.exceptions import SoftTimeLimitExceeded
6
7 from sentry_sdk.hub import Hub
8 from sentry_sdk.utils import capture_internal_exceptions, event_from_exception
9 from sentry_sdk._compat import reraise
10 from sentry_sdk.integrations import Integration
11 from sentry_sdk.integrations.logging import ignore_logger
12
13
14 class CeleryIntegration(Integration):
15 identifier = "celery"
16
17 @staticmethod
18 def setup_once():
19 import celery.app.trace as trace
20
21 old_build_tracer = trace.build_tracer
22
23 def sentry_build_tracer(name, task, *args, **kwargs):
24 # Need to patch both methods because older celery sometimes
25 # short-circuits to task.run if it thinks it's safe.
26 task.__call__ = _wrap_task_call(task.__call__)
27 task.run = _wrap_task_call(task.run)
28 return _wrap_tracer(task, old_build_tracer(name, task, *args, **kwargs))
29
30 trace.build_tracer = sentry_build_tracer
31
32 # This logger logs every status of every task that ran on the worker.
33 # Meaning that every task's breadcrumbs are full of stuff like "Task
34 # <foo> raised unexpected <bar>".
35 ignore_logger("celery.worker.job")
36
37
38 def _wrap_tracer(task, f):
39 # Need to wrap tracer for pushing the scope before prerun is sent, and
40 # popping it after postrun is sent.
41 #
42 # This is the reason we don't use signals for hooking in the first place.
43 # Also because in Celery 3, signal dispatch returns early if one handler
44 # crashes.
45 def _inner(*args, **kwargs):
46 hub = Hub.current
47 if hub.get_integration(CeleryIntegration) is None:
48 return f(*args, **kwargs)
49
50 with hub.push_scope() as scope:
51 scope._name = "celery"
52 scope.add_event_processor(_make_event_processor(task, *args, **kwargs))
53
54 return f(*args, **kwargs)
55
56 return _inner
57
58
59 def _wrap_task_call(f):
60 # Need to wrap task call because the exception is caught before we get to
61 # see it. Also celery's reported stacktrace is untrustworthy.
62 def _inner(*args, **kwargs):
63 try:
64 return f(*args, **kwargs)
65 except Exception:
66 reraise(*_capture_exception())
67
68 return _inner
69
70
71 def _make_event_processor(task, uuid, args, kwargs, request=None):
72 def event_processor(event, hint):
73 with capture_internal_exceptions():
74 event["transaction"] = task.name
75
76 with capture_internal_exceptions():
77 extra = event.setdefault("extra", {})
78 extra["celery-job"] = {
79 "task_name": task.name,
80 "args": args,
81 "kwargs": kwargs,
82 }
83
84 if "exc_info" in hint:
85 with capture_internal_exceptions():
86 if issubclass(hint["exc_info"][0], SoftTimeLimitExceeded):
87 event["fingerprint"] = [
88 "celery",
89 "SoftTimeLimitExceeded",
90 getattr(task, "name", task),
91 ]
92
93 with capture_internal_exceptions():
94 if hasattr(task, "throws") and isinstance(
95 hint["exc_info"][1], task.throws
96 ):
97 return None
98
99 return event
100
101 return event_processor
102
103
104 def _capture_exception():
105 hub = Hub.current
106 exc_info = sys.exc_info()
107
108 if hub.get_integration(CeleryIntegration) is not None:
109 event, hint = event_from_exception(
110 exc_info,
111 client_options=hub.client.options,
112 mechanism={"type": "celery", "handled": False},
113 )
114 hub.capture_event(event, hint=hint)
115
116 return exc_info
117
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/sentry_sdk/integrations/celery.py b/sentry_sdk/integrations/celery.py
--- a/sentry_sdk/integrations/celery.py
+++ b/sentry_sdk/integrations/celery.py
@@ -2,7 +2,7 @@
import sys
-from celery.exceptions import SoftTimeLimitExceeded
+from celery.exceptions import SoftTimeLimitExceeded, Retry
from sentry_sdk.hub import Hub
from sentry_sdk.utils import capture_internal_exceptions, event_from_exception
@@ -82,6 +82,15 @@
}
if "exc_info" in hint:
+ with capture_internal_exceptions():
+ if isinstance(hint["exc_info"][1], Retry):
+ return None
+
+ if hasattr(task, "throws") and isinstance(
+ hint["exc_info"][1], task.throws
+ ):
+ return None
+
with capture_internal_exceptions():
if issubclass(hint["exc_info"][0], SoftTimeLimitExceeded):
event["fingerprint"] = [
@@ -90,12 +99,6 @@
getattr(task, "name", task),
]
- with capture_internal_exceptions():
- if hasattr(task, "throws") and isinstance(
- hint["exc_info"][1], task.throws
- ):
- return None
-
return event
return event_processor
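
The key behavioural change in the diff above is ordering: the `Retry` check now runs before any fingerprinting, so expected Celery retries are dropped instead of being reported. A minimal sketch of that short-circuit shape (the helper name and simplified signature here are illustrative, not the SDK's actual code):

```python
from celery.exceptions import Retry


def make_event_processor(task):
    # Hypothetical simplified version of the SDK's event processor.
    def event_processor(event, hint):
        if "exc_info" in hint:
            exc = hint["exc_info"][1]
            if isinstance(exc, Retry):
                return None  # expected retry: drop the event entirely
            if hasattr(task, "throws") and isinstance(exc, task.throws):
                return None  # exception the task declares as expected
        return event

    return event_processor
```

Returning `None` from an event processor discards the event, which is why moving these checks ahead of the fingerprinting block stops retries from ever reaching the Sentry server.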
| {"golden_diff": "diff --git a/sentry_sdk/integrations/celery.py b/sentry_sdk/integrations/celery.py\n--- a/sentry_sdk/integrations/celery.py\n+++ b/sentry_sdk/integrations/celery.py\n@@ -2,7 +2,7 @@\n \n import sys\n \n-from celery.exceptions import SoftTimeLimitExceeded\n+from celery.exceptions import SoftTimeLimitExceeded, Retry\n \n from sentry_sdk.hub import Hub\n from sentry_sdk.utils import capture_internal_exceptions, event_from_exception\n@@ -82,6 +82,15 @@\n }\n \n if \"exc_info\" in hint:\n+ with capture_internal_exceptions():\n+ if isinstance(hint[\"exc_info\"][1], Retry):\n+ return None\n+\n+ if hasattr(task, \"throws\") and isinstance(\n+ hint[\"exc_info\"][1], task.throws\n+ ):\n+ return None\n+\n with capture_internal_exceptions():\n if issubclass(hint[\"exc_info\"][0], SoftTimeLimitExceeded):\n event[\"fingerprint\"] = [\n@@ -90,12 +99,6 @@\n getattr(task, \"name\", task),\n ]\n \n- with capture_internal_exceptions():\n- if hasattr(task, \"throws\") and isinstance(\n- hint[\"exc_info\"][1], task.throws\n- ):\n- return None\n-\n return event\n \n return event_processor\n", "issue": "[0.7.0] `CeleryIntegration` captures retries\nGreetings fellows!\r\n\r\nWe are having an issue with `CeleryIntegration` in Sentry SDK.\r\n\r\n### Current versions\r\nPython 3.6.7\r\nDjango 2.1.5\r\nCelery 4.1.1\r\nSentry SDK 0.7.0-0.7.1\r\n\r\n### Current behavior\r\nIn our code (internal and 3rd-party) we are using [Celery tasks retry functionality](http://docs.celeryproject.org/en/latest/userguide/tasks.html#retrying).\r\n\r\n> The app.Task.retry() call will raise an exception so any code after the retry won\u2019t be reached. This is the Retry exception, it isn\u2019t handled as an error but rather as a semi-predicate to signify to the worker that the task is to be retried, so that it can store the correct state when a result backend is enabled.\r\n\r\nWe did switch recently from Raven to Sentry SDK 0.6.9, everything seemed working as before.\r\nBut today we updated it to [0.7.0 release](https://github.com/getsentry/sentry-python/blob/master/CHANGES.md#070) (and later to 0.7.1)\r\n\r\nThis caused **every [`celery.exceptions.Retry`](http://docs.celeryproject.org/en/latest/reference/celery.exceptions.html#celery.exceptions.Retry) to be sent to Sentry**, which quickly filled Sentry server with thousands of events.\r\nPreviously (in old SDK and Raven), those exceptions were ignored and not sent to Sentry server.\r\n\r\n### Expected behaviour\r\n`CeleryIntegration` is not flooding Sentry server with every retry exception. 
Basically, the same behavior as it was in Raven and Sentry SDK<0.7.0.\r\n\r\n### Open questions\r\nI am not sure if the old behavior was done intentionally or by mistake.\r\nIf that was intended, we should reimplement it in current integration.\r\nIf not, there should be a way to filter/ignore that kind of exceptions (I am not sure if we can filter all retries from internal and 3rd-party code in`before_send` in a clean way).\r\n\r\nCould you help me to clarify this issue?\n", "before_files": [{"content": "from __future__ import absolute_import\n\nimport sys\n\nfrom celery.exceptions import SoftTimeLimitExceeded\n\nfrom sentry_sdk.hub import Hub\nfrom sentry_sdk.utils import capture_internal_exceptions, event_from_exception\nfrom sentry_sdk._compat import reraise\nfrom sentry_sdk.integrations import Integration\nfrom sentry_sdk.integrations.logging import ignore_logger\n\n\nclass CeleryIntegration(Integration):\n identifier = \"celery\"\n\n @staticmethod\n def setup_once():\n import celery.app.trace as trace\n\n old_build_tracer = trace.build_tracer\n\n def sentry_build_tracer(name, task, *args, **kwargs):\n # Need to patch both methods because older celery sometimes\n # short-circuits to task.run if it thinks it's safe.\n task.__call__ = _wrap_task_call(task.__call__)\n task.run = _wrap_task_call(task.run)\n return _wrap_tracer(task, old_build_tracer(name, task, *args, **kwargs))\n\n trace.build_tracer = sentry_build_tracer\n\n # This logger logs every status of every task that ran on the worker.\n # Meaning that every task's breadcrumbs are full of stuff like \"Task\n # <foo> raised unexpected <bar>\".\n ignore_logger(\"celery.worker.job\")\n\n\ndef _wrap_tracer(task, f):\n # Need to wrap tracer for pushing the scope before prerun is sent, and\n # popping it after postrun is sent.\n #\n # This is the reason we don't use signals for hooking in the first place.\n # Also because in Celery 3, signal dispatch returns early if one handler\n # crashes.\n def _inner(*args, **kwargs):\n hub = Hub.current\n if hub.get_integration(CeleryIntegration) is None:\n return f(*args, **kwargs)\n\n with hub.push_scope() as scope:\n scope._name = \"celery\"\n scope.add_event_processor(_make_event_processor(task, *args, **kwargs))\n\n return f(*args, **kwargs)\n\n return _inner\n\n\ndef _wrap_task_call(f):\n # Need to wrap task call because the exception is caught before we get to\n # see it. 
Also celery's reported stacktrace is untrustworthy.\n def _inner(*args, **kwargs):\n try:\n return f(*args, **kwargs)\n except Exception:\n reraise(*_capture_exception())\n\n return _inner\n\n\ndef _make_event_processor(task, uuid, args, kwargs, request=None):\n def event_processor(event, hint):\n with capture_internal_exceptions():\n event[\"transaction\"] = task.name\n\n with capture_internal_exceptions():\n extra = event.setdefault(\"extra\", {})\n extra[\"celery-job\"] = {\n \"task_name\": task.name,\n \"args\": args,\n \"kwargs\": kwargs,\n }\n\n if \"exc_info\" in hint:\n with capture_internal_exceptions():\n if issubclass(hint[\"exc_info\"][0], SoftTimeLimitExceeded):\n event[\"fingerprint\"] = [\n \"celery\",\n \"SoftTimeLimitExceeded\",\n getattr(task, \"name\", task),\n ]\n\n with capture_internal_exceptions():\n if hasattr(task, \"throws\") and isinstance(\n hint[\"exc_info\"][1], task.throws\n ):\n return None\n\n return event\n\n return event_processor\n\n\ndef _capture_exception():\n hub = Hub.current\n exc_info = sys.exc_info()\n\n if hub.get_integration(CeleryIntegration) is not None:\n event, hint = event_from_exception(\n exc_info,\n client_options=hub.client.options,\n mechanism={\"type\": \"celery\", \"handled\": False},\n )\n hub.capture_event(event, hint=hint)\n\n return exc_info\n", "path": "sentry_sdk/integrations/celery.py"}], "after_files": [{"content": "from __future__ import absolute_import\n\nimport sys\n\nfrom celery.exceptions import SoftTimeLimitExceeded, Retry\n\nfrom sentry_sdk.hub import Hub\nfrom sentry_sdk.utils import capture_internal_exceptions, event_from_exception\nfrom sentry_sdk._compat import reraise\nfrom sentry_sdk.integrations import Integration\nfrom sentry_sdk.integrations.logging import ignore_logger\n\n\nclass CeleryIntegration(Integration):\n identifier = \"celery\"\n\n @staticmethod\n def setup_once():\n import celery.app.trace as trace\n\n old_build_tracer = trace.build_tracer\n\n def sentry_build_tracer(name, task, *args, **kwargs):\n # Need to patch both methods because older celery sometimes\n # short-circuits to task.run if it thinks it's safe.\n task.__call__ = _wrap_task_call(task.__call__)\n task.run = _wrap_task_call(task.run)\n return _wrap_tracer(task, old_build_tracer(name, task, *args, **kwargs))\n\n trace.build_tracer = sentry_build_tracer\n\n # This logger logs every status of every task that ran on the worker.\n # Meaning that every task's breadcrumbs are full of stuff like \"Task\n # <foo> raised unexpected <bar>\".\n ignore_logger(\"celery.worker.job\")\n\n\ndef _wrap_tracer(task, f):\n # Need to wrap tracer for pushing the scope before prerun is sent, and\n # popping it after postrun is sent.\n #\n # This is the reason we don't use signals for hooking in the first place.\n # Also because in Celery 3, signal dispatch returns early if one handler\n # crashes.\n def _inner(*args, **kwargs):\n hub = Hub.current\n if hub.get_integration(CeleryIntegration) is None:\n return f(*args, **kwargs)\n\n with hub.push_scope() as scope:\n scope._name = \"celery\"\n scope.add_event_processor(_make_event_processor(task, *args, **kwargs))\n\n return f(*args, **kwargs)\n\n return _inner\n\n\ndef _wrap_task_call(f):\n # Need to wrap task call because the exception is caught before we get to\n # see it. 
Also celery's reported stacktrace is untrustworthy.\n def _inner(*args, **kwargs):\n try:\n return f(*args, **kwargs)\n except Exception:\n reraise(*_capture_exception())\n\n return _inner\n\n\ndef _make_event_processor(task, uuid, args, kwargs, request=None):\n def event_processor(event, hint):\n with capture_internal_exceptions():\n event[\"transaction\"] = task.name\n\n with capture_internal_exceptions():\n extra = event.setdefault(\"extra\", {})\n extra[\"celery-job\"] = {\n \"task_name\": task.name,\n \"args\": args,\n \"kwargs\": kwargs,\n }\n\n if \"exc_info\" in hint:\n with capture_internal_exceptions():\n if isinstance(hint[\"exc_info\"][1], Retry):\n return None\n\n if hasattr(task, \"throws\") and isinstance(\n hint[\"exc_info\"][1], task.throws\n ):\n return None\n\n with capture_internal_exceptions():\n if issubclass(hint[\"exc_info\"][0], SoftTimeLimitExceeded):\n event[\"fingerprint\"] = [\n \"celery\",\n \"SoftTimeLimitExceeded\",\n getattr(task, \"name\", task),\n ]\n\n return event\n\n return event_processor\n\n\ndef _capture_exception():\n hub = Hub.current\n exc_info = sys.exc_info()\n\n if hub.get_integration(CeleryIntegration) is not None:\n event, hint = event_from_exception(\n exc_info,\n client_options=hub.client.options,\n mechanism={\"type\": \"celery\", \"handled\": False},\n )\n hub.capture_event(event, hint=hint)\n\n return exc_info\n", "path": "sentry_sdk/integrations/celery.py"}]} | 1,797 | 305 |
gh_patches_debug_51565 | rasdani/github-patches | git_diff | ray-project__ray-1413 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Worker dies when passed pandas DataFrame.
### System information
- **Ray version**: 0.3.0
- **Python version**: 3.6.0
- **Exact command to reproduce**:
```python
import pandas as pd
import ray
pd.__version__ # '0.19.2'
ray.init()
df = pd.DataFrame(data={'col1': [1, 2, 3, 4], 'col2': [3, 4, 5, 6]})
@ray.remote
def f(x):
pass
f.remote(df)
```
The last line causes the following error to be printed in the background.
```
A worker died or was killed while executing a task.
```
cc @devin-petersohn
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `python/ray/dataframe/__init__.py`
Content:
```
1 from __future__ import absolute_import
2 from __future__ import division
3 from __future__ import print_function
4
5 from .dataframe import DataFrame
6 from .dataframe import from_pandas
7 from .dataframe import to_pandas
8 from .series import Series
9 import ray
10 import pandas as pd
11
12 __all__ = ["DataFrame", "from_pandas", "to_pandas", "Series"]
13
14 ray.register_custom_serializer(pd.DataFrame, use_pickle=True)
15 ray.register_custom_serializer(pd.core.indexes.base.Index, use_pickle=True)
16
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/python/ray/dataframe/__init__.py b/python/ray/dataframe/__init__.py
--- a/python/ray/dataframe/__init__.py
+++ b/python/ray/dataframe/__init__.py
@@ -6,10 +6,5 @@
from .dataframe import from_pandas
from .dataframe import to_pandas
from .series import Series
-import ray
-import pandas as pd
__all__ = ["DataFrame", "from_pandas", "to_pandas", "Series"]
-
-ray.register_custom_serializer(pd.DataFrame, use_pickle=True)
-ray.register_custom_serializer(pd.core.indexes.base.Index, use_pickle=True)
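
The fix simply deletes the import-time `register_custom_serializer` calls, which ran before any `ray.init()` and left workers in a broken state. If pickle-based serialization were still wanted, one hedged option (assuming the 0.3.x-era API visible in the removed lines) would be to register only after initialization:

```python
import pandas as pd
import ray

ray.init()
# Register only once a cluster exists; signature as in the removed code.
ray.register_custom_serializer(pd.DataFrame, use_pickle=True)


@ray.remote
def f(x):
    return x.shape


df = pd.DataFrame({"col1": [1, 2, 3, 4], "col2": [3, 4, 5, 6]})
print(ray.get(f.remote(df)))  # (4, 2)
```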
| {"golden_diff": "diff --git a/python/ray/dataframe/__init__.py b/python/ray/dataframe/__init__.py\n--- a/python/ray/dataframe/__init__.py\n+++ b/python/ray/dataframe/__init__.py\n@@ -6,10 +6,5 @@\n from .dataframe import from_pandas\n from .dataframe import to_pandas\n from .series import Series\n-import ray\n-import pandas as pd\n \n __all__ = [\"DataFrame\", \"from_pandas\", \"to_pandas\", \"Series\"]\n-\n-ray.register_custom_serializer(pd.DataFrame, use_pickle=True)\n-ray.register_custom_serializer(pd.core.indexes.base.Index, use_pickle=True)\n", "issue": "Worker dies when passed pandas DataFrame.\n### System information\r\n- **Ray version**: 0.3.0\r\n- **Python version**: 3.6.0\r\n- **Exact command to reproduce**:\r\n\r\n```python\r\nimport pandas as pd\r\nimport ray\r\n\r\npd.__version__ # '0.19.2'\r\n\r\nray.init()\r\n\r\ndf = pd.DataFrame(data={'col1': [1, 2, 3, 4], 'col2': [3, 4, 5, 6]})\r\n\r\[email protected]\r\ndef f(x):\r\n pass\r\n\r\nf.remote(df)\r\n```\r\n\r\nThe last line causes the following error to be printed in the background.\r\n\r\n```\r\nA worker died or was killed while executing a task.\r\n```\r\n\r\ncc @devin-petersohn\n", "before_files": [{"content": "from __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nfrom .dataframe import DataFrame\nfrom .dataframe import from_pandas\nfrom .dataframe import to_pandas\nfrom .series import Series\nimport ray\nimport pandas as pd\n\n__all__ = [\"DataFrame\", \"from_pandas\", \"to_pandas\", \"Series\"]\n\nray.register_custom_serializer(pd.DataFrame, use_pickle=True)\nray.register_custom_serializer(pd.core.indexes.base.Index, use_pickle=True)\n", "path": "python/ray/dataframe/__init__.py"}], "after_files": [{"content": "from __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nfrom .dataframe import DataFrame\nfrom .dataframe import from_pandas\nfrom .dataframe import to_pandas\nfrom .series import Series\n\n__all__ = [\"DataFrame\", \"from_pandas\", \"to_pandas\", \"Series\"]\n", "path": "python/ray/dataframe/__init__.py"}]} | 560 | 138 |
gh_patches_debug_549 | rasdani/github-patches | git_diff | mabel-dev__opteryx-1412 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
🪲 ARM test fails
~~~
ValueError: 'orso/bitarray/cbitarray.pyx' doesn't match any files
~~~
https://github.com/mabel-dev/opteryx/actions/runs/7535073365/job/20510453555
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `opteryx/__version__.py`
Content:
```
1 __build__ = 244
2
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """
16 Store the version here so:
17 1) we don't load dependencies by storing it in __init__.py
18 2) we can import it in setup.py for the same reason
19 """
20 from enum import Enum # isort: skip
21
22
23 class VersionStatus(Enum):
24 ALPHA = "alpha"
25 BETA = "beta"
26 RELEASE = "release"
27
28
29 _major = 0
30 _minor = 12
31 _revision = 5
32 _status = VersionStatus.BETA
33
34 __version__ = f"{_major}.{_minor}.{_revision}" + (
35 f"-{_status.value}.{__build__}" if _status != VersionStatus.RELEASE else ""
36 )
37
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/opteryx/__version__.py b/opteryx/__version__.py
--- a/opteryx/__version__.py
+++ b/opteryx/__version__.py
@@ -1,4 +1,4 @@
-__build__ = 244
+__build__ = 248
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
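
The patch is a pure build-number bump; the version string assembled at the bottom of the file changes accordingly. A small illustration of the same f-string logic (values copied from the file, with `_status_value` standing in for `_status.value`):

```python
__build__ = 248
_major, _minor, _revision = 0, 12, 5
_status_value = "beta"  # VersionStatus.BETA in the real module

print(f"{_major}.{_minor}.{_revision}-{_status_value}.{__build__}")  # 0.12.5-beta.248
```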
| {"golden_diff": "diff --git a/opteryx/__version__.py b/opteryx/__version__.py\n--- a/opteryx/__version__.py\n+++ b/opteryx/__version__.py\n@@ -1,4 +1,4 @@\n-__build__ = 244\n+__build__ = 248\n \n # Licensed under the Apache License, Version 2.0 (the \"License\");\n # you may not use this file except in compliance with the License.\n", "issue": "\ud83e\udeb2 ARM test fails \n\r\n~~~\r\nValueError: 'orso/bitarray/cbitarray.pyx' doesn't match any files\r\n~~~\r\n\r\nhttps://github.com/mabel-dev/opteryx/actions/runs/7535073365/job/20510453555\n", "before_files": [{"content": "__build__ = 244\n\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nStore the version here so:\n1) we don't load dependencies by storing it in __init__.py\n2) we can import it in setup.py for the same reason\n\"\"\"\nfrom enum import Enum # isort: skip\n\n\nclass VersionStatus(Enum):\n ALPHA = \"alpha\"\n BETA = \"beta\"\n RELEASE = \"release\"\n\n\n_major = 0\n_minor = 12\n_revision = 5\n_status = VersionStatus.BETA\n\n__version__ = f\"{_major}.{_minor}.{_revision}\" + (\n f\"-{_status.value}.{__build__}\" if _status != VersionStatus.RELEASE else \"\"\n)\n", "path": "opteryx/__version__.py"}], "after_files": [{"content": "__build__ = 248\n\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nStore the version here so:\n1) we don't load dependencies by storing it in __init__.py\n2) we can import it in setup.py for the same reason\n\"\"\"\nfrom enum import Enum # isort: skip\n\n\nclass VersionStatus(Enum):\n ALPHA = \"alpha\"\n BETA = \"beta\"\n RELEASE = \"release\"\n\n\n_major = 0\n_minor = 12\n_revision = 5\n_status = VersionStatus.BETA\n\n__version__ = f\"{_major}.{_minor}.{_revision}\" + (\n f\"-{_status.value}.{__build__}\" if _status != VersionStatus.RELEASE else \"\"\n)\n", "path": "opteryx/__version__.py"}]} | 674 | 101 |
gh_patches_debug_17327 | rasdani/github-patches | git_diff | alltheplaces__alltheplaces-3950 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Primrose Schools
The spider is generating 1,221 errors. Adding an if statement for `content` should fix it. It could also be turned into a sitemap spider.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `locations/spiders/primrose_schools.py`
Content:
```
1 import json
2
3 import scrapy
4
5 from locations.items import GeojsonPointItem
6
7
8 class PrimroseSchoolsSpider(scrapy.Spider):
9 name = "primrose_schools"
10 item_attributes = {"brand": "Primrose Schools", "brand_wikidata": "Q7243677"}
11 allowed_domains = ["primroseschools.com"]
12
13 start_urls = ["https://www.primroseschools.com/find-a-school/"]
14
15 def parse(self, response):
16 with open(
17 "./locations/searchable_points/us_centroids_50mile_radius.csv"
18 ) as points:
19 next(points)
20 for point in points:
21 row = point.replace("\n", "").split(",")
22 lati = row[1]
23 long = row[2]
24 searchurl = "https://www.primroseschools.com/find-a-school/?search_string=USA&latitude={la}&longitude={lo}".format(
25 la=lati, lo=long
26 )
27 yield scrapy.Request(
28 response.urljoin(searchurl), callback=self.parse_search
29 )
30
31 def parse_search(self, response):
32 content = response.xpath('//script[@type="application/json"]/text()').get()
33 schools = json.loads(content)
34 for i in schools:
35 if i["address_1"]:
36 properties = {
37 "name": i["name"],
38 "addr_full": i["address_1"] + " " + i["address_2"],
39 "city": i["city"],
40 "state": i["state"],
41 "postcode": i["zip_code"],
42 "phone": i["phone"],
43 "ref": i["id"],
44 "website": "https://www.primroseschools.com" + i["url"],
45 "lat": float(i["latitude"]),
46 "lon": float(i["longitude"]),
47 }
48 yield GeojsonPointItem(**properties)
49
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/locations/spiders/primrose_schools.py b/locations/spiders/primrose_schools.py
--- a/locations/spiders/primrose_schools.py
+++ b/locations/spiders/primrose_schools.py
@@ -30,12 +30,17 @@
def parse_search(self, response):
content = response.xpath('//script[@type="application/json"]/text()').get()
+ if content is None:
+ return
+
schools = json.loads(content)
for i in schools:
if i["address_1"]:
properties = {
"name": i["name"],
- "addr_full": i["address_1"] + " " + i["address_2"],
+ "street_address": ", ".join(
+ filter(None, [i["address_1"], i["address_2"]])
+ ),
"city": i["city"],
"state": i["state"],
"postcode": i["zip_code"],
| {"golden_diff": "diff --git a/locations/spiders/primrose_schools.py b/locations/spiders/primrose_schools.py\n--- a/locations/spiders/primrose_schools.py\n+++ b/locations/spiders/primrose_schools.py\n@@ -30,12 +30,17 @@\n \n def parse_search(self, response):\n content = response.xpath('//script[@type=\"application/json\"]/text()').get()\n+ if content is None:\n+ return\n+\n schools = json.loads(content)\n for i in schools:\n if i[\"address_1\"]:\n properties = {\n \"name\": i[\"name\"],\n- \"addr_full\": i[\"address_1\"] + \" \" + i[\"address_2\"],\n+ \"street_address\": \", \".join(\n+ filter(None, [i[\"address_1\"], i[\"address_2\"]])\n+ ),\n \"city\": i[\"city\"],\n \"state\": i[\"state\"],\n \"postcode\": i[\"zip_code\"],\n", "issue": "Primrose Schools\nIs generating 1,221 errors. Adding a if statement for `content` should fix it. Could also be turned into a sitemap spider.\n", "before_files": [{"content": "import json\n\nimport scrapy\n\nfrom locations.items import GeojsonPointItem\n\n\nclass PrimroseSchoolsSpider(scrapy.Spider):\n name = \"primrose_schools\"\n item_attributes = {\"brand\": \"Primrose Schools\", \"brand_wikidata\": \"Q7243677\"}\n allowed_domains = [\"primroseschools.com\"]\n\n start_urls = [\"https://www.primroseschools.com/find-a-school/\"]\n\n def parse(self, response):\n with open(\n \"./locations/searchable_points/us_centroids_50mile_radius.csv\"\n ) as points:\n next(points)\n for point in points:\n row = point.replace(\"\\n\", \"\").split(\",\")\n lati = row[1]\n long = row[2]\n searchurl = \"https://www.primroseschools.com/find-a-school/?search_string=USA&latitude={la}&longitude={lo}\".format(\n la=lati, lo=long\n )\n yield scrapy.Request(\n response.urljoin(searchurl), callback=self.parse_search\n )\n\n def parse_search(self, response):\n content = response.xpath('//script[@type=\"application/json\"]/text()').get()\n schools = json.loads(content)\n for i in schools:\n if i[\"address_1\"]:\n properties = {\n \"name\": i[\"name\"],\n \"addr_full\": i[\"address_1\"] + \" \" + i[\"address_2\"],\n \"city\": i[\"city\"],\n \"state\": i[\"state\"],\n \"postcode\": i[\"zip_code\"],\n \"phone\": i[\"phone\"],\n \"ref\": i[\"id\"],\n \"website\": \"https://www.primroseschools.com\" + i[\"url\"],\n \"lat\": float(i[\"latitude\"]),\n \"lon\": float(i[\"longitude\"]),\n }\n yield GeojsonPointItem(**properties)\n", "path": "locations/spiders/primrose_schools.py"}], "after_files": [{"content": "import json\n\nimport scrapy\n\nfrom locations.items import GeojsonPointItem\n\n\nclass PrimroseSchoolsSpider(scrapy.Spider):\n name = \"primrose_schools\"\n item_attributes = {\"brand\": \"Primrose Schools\", \"brand_wikidata\": \"Q7243677\"}\n allowed_domains = [\"primroseschools.com\"]\n\n start_urls = [\"https://www.primroseschools.com/find-a-school/\"]\n\n def parse(self, response):\n with open(\n \"./locations/searchable_points/us_centroids_50mile_radius.csv\"\n ) as points:\n next(points)\n for point in points:\n row = point.replace(\"\\n\", \"\").split(\",\")\n lati = row[1]\n long = row[2]\n searchurl = \"https://www.primroseschools.com/find-a-school/?search_string=USA&latitude={la}&longitude={lo}\".format(\n la=lati, lo=long\n )\n yield scrapy.Request(\n response.urljoin(searchurl), callback=self.parse_search\n )\n\n def parse_search(self, response):\n content = response.xpath('//script[@type=\"application/json\"]/text()').get()\n if content is None:\n return\n\n schools = json.loads(content)\n for i in schools:\n if i[\"address_1\"]:\n properties = {\n \"name\": 
i[\"name\"],\n \"street_address\": \", \".join(\n filter(None, [i[\"address_1\"], i[\"address_2\"]])\n ),\n \"city\": i[\"city\"],\n \"state\": i[\"state\"],\n \"postcode\": i[\"zip_code\"],\n \"phone\": i[\"phone\"],\n \"ref\": i[\"id\"],\n \"website\": \"https://www.primroseschools.com\" + i[\"url\"],\n \"lat\": float(i[\"latitude\"]),\n \"lon\": float(i[\"longitude\"]),\n }\n yield GeojsonPointItem(**properties)\n", "path": "locations/spiders/primrose_schools.py"}]} | 794 | 217 |
gh_patches_debug_16325 | rasdani/github-patches | git_diff | rasterio__rasterio-670 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
rio stack output empty
`rio stack`ing one or more rasters without an explicit band index results in a raster with all nulls
```
$ rio info --tell-me-more tests/data/RGB.byte.tif | jq .stats[0].max
255
$ rio stack tests/data/RGB.byte.tif /tmp/test.tif && \
rio info --tell-me-more /tmp/test.tif | jq .stats[0].max
null
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `rasterio/rio/stack.py`
Content:
```
1 """Commands for operating on bands of datasets."""
2 import logging
3
4 import click
5 from cligj import files_inout_arg, format_opt
6
7 from .helpers import resolve_inout
8 from . import options
9 import rasterio
10 from rasterio.five import zip_longest
11
12
13 # Stack command.
14 @click.command(short_help="Stack a number of bands into a multiband dataset.")
15 @files_inout_arg
16 @options.output_opt
17 @format_opt
18 @options.bidx_mult_opt
19 @options.rgb_opt
20 @options.force_overwrite_opt
21 @options.creation_options
22 @click.pass_context
23 def stack(ctx, files, output, driver, bidx, photometric, force_overwrite,
24 creation_options):
25 """Stack a number of bands from one or more input files into a
26 multiband dataset.
27
28 Input datasets must be of a kind: same data type, dimensions, etc. The
29 output is cloned from the first input.
30
31 By default, rio-stack will take all bands from each input and write them
32 in same order to the output. Optionally, bands for each input may be
33 specified using a simple syntax:
34
35 --bidx N takes the Nth band from the input (first band is 1).
36
37 --bidx M,N,0 takes bands M, N, and O.
38
39 --bidx M..O takes bands M-O, inclusive.
40
41 --bidx ..N takes all bands up to and including N.
42
43 --bidx N.. takes all bands from N to the end.
44
45 Examples, using the Rasterio testing dataset, which produce a copy.
46
47 rio stack RGB.byte.tif -o stacked.tif
48
49 rio stack RGB.byte.tif --bidx 1,2,3 -o stacked.tif
50
51 rio stack RGB.byte.tif --bidx 1..3 -o stacked.tif
52
53 rio stack RGB.byte.tif --bidx ..2 RGB.byte.tif --bidx 3.. -o stacked.tif
54
55 """
56
57 verbosity = (ctx.obj and ctx.obj.get('verbosity')) or 2
58 logger = logging.getLogger('rio')
59 try:
60 with rasterio.drivers(CPL_DEBUG=verbosity>2):
61 output, files = resolve_inout(files=files, output=output,
62 force_overwrite=force_overwrite)
63 output_count = 0
64 indexes = []
65 for path, item in zip_longest(files, bidx, fillvalue=None):
66 with rasterio.open(path) as src:
67 src_indexes = src.indexes
68 if item is None:
69 indexes.append(src_indexes)
70 output_count += len(src_indexes)
71 elif '..' in item:
72 start, stop = map(
73 lambda x: int(x) if x else None, item.split('..'))
74 if start is None:
75 start = 1
76 indexes.append(src_indexes[slice(start-1, stop)])
77 output_count += len(src_indexes[slice(start-1, stop)])
78 else:
79 parts = list(map(int, item.split(',')))
80 if len(parts) == 1:
81 indexes.append(parts[0])
82 output_count += 1
83 else:
84 parts = list(parts)
85 indexes.append(parts)
86 output_count += len(parts)
87
88 with rasterio.open(files[0]) as first:
89 kwargs = first.meta
90 kwargs.update(**creation_options)
91 kwargs['transform'] = kwargs.pop('affine')
92
93 kwargs.update(
94 driver=driver,
95 count=output_count)
96
97 if photometric:
98 kwargs['photometric'] = photometric
99
100 with rasterio.open(output, 'w', **kwargs) as dst:
101 dst_idx = 1
102 for path, index in zip(files, indexes):
103 with rasterio.open(path) as src:
104 if isinstance(index, int):
105 data = src.read(index)
106 dst.write(data, dst_idx)
107 dst_idx += 1
108 elif isinstance(index, list):
109 data = src.read(index)
110 dst.write(data, range(dst_idx, dst_idx+len(index)))
111 dst_idx += len(index)
112
113 except Exception:
114 logger.exception("Exception caught during processing")
115 raise click.Abort()
116
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/rasterio/rio/stack.py b/rasterio/rio/stack.py
--- a/rasterio/rio/stack.py
+++ b/rasterio/rio/stack.py
@@ -1,4 +1,5 @@
"""Commands for operating on bands of datasets."""
+import collections
import logging
import click
@@ -105,7 +106,7 @@
data = src.read(index)
dst.write(data, dst_idx)
dst_idx += 1
- elif isinstance(index, list):
+ elif isinstance(index, collections.Iterable):
data = src.read(index)
dst.write(data, range(dst_idx, dst_idx+len(index)))
dst_idx += len(index)
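
The root cause is a type check: for whole-file inputs the code appends `src.indexes`, which is a tuple, so `isinstance(index, list)` was `False` and no bands were ever written, producing the all-null output. Widening the check to `Iterable` accepts tuples, lists, and ranges alike. A quick demonstration (the patch itself uses `collections.Iterable`, valid on the Pythons of that era; on Python 3.3+ the class lives in `collections.abc`, and the old alias is gone in 3.10):

```python
from collections.abc import Iterable

for index in [(1, 2, 3), [1, 2, 3], range(1, 4)]:
    print(type(index).__name__, isinstance(index, list), isinstance(index, Iterable))
# tuple False True   <- src.indexes is a tuple and failed the old check
# list True True
# range False True
```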
| {"golden_diff": "diff --git a/rasterio/rio/stack.py b/rasterio/rio/stack.py\n--- a/rasterio/rio/stack.py\n+++ b/rasterio/rio/stack.py\n@@ -1,4 +1,5 @@\n \"\"\"Commands for operating on bands of datasets.\"\"\"\n+import collections\n import logging\n \n import click\n@@ -105,7 +106,7 @@\n data = src.read(index)\n dst.write(data, dst_idx)\n dst_idx += 1\n- elif isinstance(index, list):\n+ elif isinstance(index, collections.Iterable):\n data = src.read(index)\n dst.write(data, range(dst_idx, dst_idx+len(index)))\n dst_idx += len(index)\n", "issue": "rio stack output empty\n`rio stack`ing one or more rasters without an explicit band index results in a raster with all nulls\n\n```\n$ rio info --tell-me-more tests/data/RGB.byte.tif | jq .stats[0].max\n255\n$ rio stack tests/data/RGB.byte.tif /tmp/test.tif && \\\n rio info --tell-me-more /tmp/test.tif | jq .stats[0].max\nnull\n```\n\n", "before_files": [{"content": "\"\"\"Commands for operating on bands of datasets.\"\"\"\nimport logging\n\nimport click\nfrom cligj import files_inout_arg, format_opt\n\nfrom .helpers import resolve_inout\nfrom . import options\nimport rasterio\nfrom rasterio.five import zip_longest\n\n\n# Stack command.\[email protected](short_help=\"Stack a number of bands into a multiband dataset.\")\n@files_inout_arg\[email protected]_opt\n@format_opt\[email protected]_mult_opt\[email protected]_opt\[email protected]_overwrite_opt\[email protected]_options\[email protected]_context\ndef stack(ctx, files, output, driver, bidx, photometric, force_overwrite,\n creation_options):\n \"\"\"Stack a number of bands from one or more input files into a\n multiband dataset.\n\n Input datasets must be of a kind: same data type, dimensions, etc. The\n output is cloned from the first input.\n\n By default, rio-stack will take all bands from each input and write them\n in same order to the output. Optionally, bands for each input may be\n specified using a simple syntax:\n\n --bidx N takes the Nth band from the input (first band is 1).\n\n --bidx M,N,0 takes bands M, N, and O.\n\n --bidx M..O takes bands M-O, inclusive.\n\n --bidx ..N takes all bands up to and including N.\n\n --bidx N.. takes all bands from N to the end.\n\n Examples, using the Rasterio testing dataset, which produce a copy.\n\n rio stack RGB.byte.tif -o stacked.tif\n\n rio stack RGB.byte.tif --bidx 1,2,3 -o stacked.tif\n\n rio stack RGB.byte.tif --bidx 1..3 -o stacked.tif\n\n rio stack RGB.byte.tif --bidx ..2 RGB.byte.tif --bidx 3.. -o stacked.tif\n\n \"\"\"\n\n verbosity = (ctx.obj and ctx.obj.get('verbosity')) or 2\n logger = logging.getLogger('rio')\n try:\n with rasterio.drivers(CPL_DEBUG=verbosity>2):\n output, files = resolve_inout(files=files, output=output,\n force_overwrite=force_overwrite)\n output_count = 0\n indexes = []\n for path, item in zip_longest(files, bidx, fillvalue=None):\n with rasterio.open(path) as src:\n src_indexes = src.indexes\n if item is None:\n indexes.append(src_indexes)\n output_count += len(src_indexes)\n elif '..' 
in item:\n start, stop = map(\n lambda x: int(x) if x else None, item.split('..'))\n if start is None:\n start = 1\n indexes.append(src_indexes[slice(start-1, stop)])\n output_count += len(src_indexes[slice(start-1, stop)])\n else:\n parts = list(map(int, item.split(',')))\n if len(parts) == 1:\n indexes.append(parts[0])\n output_count += 1\n else:\n parts = list(parts)\n indexes.append(parts)\n output_count += len(parts)\n\n with rasterio.open(files[0]) as first:\n kwargs = first.meta\n kwargs.update(**creation_options)\n kwargs['transform'] = kwargs.pop('affine')\n\n kwargs.update(\n driver=driver,\n count=output_count)\n\n if photometric:\n kwargs['photometric'] = photometric\n\n with rasterio.open(output, 'w', **kwargs) as dst:\n dst_idx = 1\n for path, index in zip(files, indexes):\n with rasterio.open(path) as src:\n if isinstance(index, int):\n data = src.read(index)\n dst.write(data, dst_idx)\n dst_idx += 1\n elif isinstance(index, list):\n data = src.read(index)\n dst.write(data, range(dst_idx, dst_idx+len(index)))\n dst_idx += len(index)\n\n except Exception:\n logger.exception(\"Exception caught during processing\")\n raise click.Abort()\n", "path": "rasterio/rio/stack.py"}], "after_files": [{"content": "\"\"\"Commands for operating on bands of datasets.\"\"\"\nimport collections\nimport logging\n\nimport click\nfrom cligj import files_inout_arg, format_opt\n\nfrom .helpers import resolve_inout\nfrom . import options\nimport rasterio\nfrom rasterio.five import zip_longest\n\n\n# Stack command.\[email protected](short_help=\"Stack a number of bands into a multiband dataset.\")\n@files_inout_arg\[email protected]_opt\n@format_opt\[email protected]_mult_opt\[email protected]_opt\[email protected]_overwrite_opt\[email protected]_options\[email protected]_context\ndef stack(ctx, files, output, driver, bidx, photometric, force_overwrite,\n creation_options):\n \"\"\"Stack a number of bands from one or more input files into a\n multiband dataset.\n\n Input datasets must be of a kind: same data type, dimensions, etc. The\n output is cloned from the first input.\n\n By default, rio-stack will take all bands from each input and write them\n in same order to the output. Optionally, bands for each input may be\n specified using a simple syntax:\n\n --bidx N takes the Nth band from the input (first band is 1).\n\n --bidx M,N,0 takes bands M, N, and O.\n\n --bidx M..O takes bands M-O, inclusive.\n\n --bidx ..N takes all bands up to and including N.\n\n --bidx N.. takes all bands from N to the end.\n\n Examples, using the Rasterio testing dataset, which produce a copy.\n\n rio stack RGB.byte.tif -o stacked.tif\n\n rio stack RGB.byte.tif --bidx 1,2,3 -o stacked.tif\n\n rio stack RGB.byte.tif --bidx 1..3 -o stacked.tif\n\n rio stack RGB.byte.tif --bidx ..2 RGB.byte.tif --bidx 3.. -o stacked.tif\n\n \"\"\"\n\n verbosity = (ctx.obj and ctx.obj.get('verbosity')) or 2\n logger = logging.getLogger('rio')\n try:\n with rasterio.drivers(CPL_DEBUG=verbosity>2):\n output, files = resolve_inout(files=files, output=output,\n force_overwrite=force_overwrite)\n output_count = 0\n indexes = []\n for path, item in zip_longest(files, bidx, fillvalue=None):\n with rasterio.open(path) as src:\n src_indexes = src.indexes\n if item is None:\n indexes.append(src_indexes)\n output_count += len(src_indexes)\n elif '..' 
in item:\n start, stop = map(\n lambda x: int(x) if x else None, item.split('..'))\n if start is None:\n start = 1\n indexes.append(src_indexes[slice(start-1, stop)])\n output_count += len(src_indexes[slice(start-1, stop)])\n else:\n parts = list(map(int, item.split(',')))\n if len(parts) == 1:\n indexes.append(parts[0])\n output_count += 1\n else:\n parts = list(parts)\n indexes.append(parts)\n output_count += len(parts)\n\n with rasterio.open(files[0]) as first:\n kwargs = first.meta\n kwargs.update(**creation_options)\n kwargs['transform'] = kwargs.pop('affine')\n\n kwargs.update(\n driver=driver,\n count=output_count)\n\n if photometric:\n kwargs['photometric'] = photometric\n\n with rasterio.open(output, 'w', **kwargs) as dst:\n dst_idx = 1\n for path, index in zip(files, indexes):\n with rasterio.open(path) as src:\n if isinstance(index, int):\n data = src.read(index)\n dst.write(data, dst_idx)\n dst_idx += 1\n elif isinstance(index, collections.Iterable):\n data = src.read(index)\n dst.write(data, range(dst_idx, dst_idx+len(index)))\n dst_idx += len(index)\n\n except Exception:\n logger.exception(\"Exception caught during processing\")\n raise click.Abort()\n", "path": "rasterio/rio/stack.py"}]} | 1,483 | 155 |
gh_patches_debug_25922 | rasdani/github-patches | git_diff | freedomofpress__securedrop-5834 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Restoring tarball to Focal shows error for v2
## Description
When restoring a v2-only Xenial backup tarball to a v3-only Focal instance, the restore action fails. It fails even if the admin explicitly requests that the tor config be preserved as-is.
## Steps to Reproduce
I used libvirt-based VMs for testing, and performed all admin actions from a virtualized Tails v4.16 VM.
1. Create a v2-only backup tarball from a Xenial host.
2. Perform a clean install of Focal, with v3-only vars.
3. Attempt to restore the backup: `./securedrop-admin --force restore --preserve-tor-config ~/Persistent/backups/xenial-v2-only/sd-backup-2021-02-26--15-57-06.tar.gz`
## Expected Behavior
Restore action completes, old URLs are restored, and I can proceed with regenerating new v3 URL and finalizing the Xenial -> Focal migration.
## Actual Behavior
Restore action fails. Even when I include the `--preserve-tor-config` flag, it still fails.
## Comments
On one hand, the failure is expected, since Focal is v3-only, but in the context of a migration from Xenial, it's likely we're going to have admins migrating to Focal from a recently created backup, so I recommend we defer the fail-closed behavior to a subsequent release. That'd have bearing on WIP docs changes in e.g. https://github.com/freedomofpress/securedrop-docs/pull/133
The above is a policy question, but this ticket is also pointing out some bugs that should be fixed. For one, `--preserve-tor-config` is not honored, and it should be.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `install_files/ansible-base/roles/restore/files/compare_torrc.py`
Content:
```
1 #!/usr/bin/env python
2
3 #
4 # Compares Tor configurations on the app server and from a backup. If
5 # restoring the backup would alter the server's Tor configuration,
6 # print a warning and exit.
7 #
8
9 from __future__ import print_function
10
11 import os
12 import re
13 import sys
14
15
16 def get_tor_versions(path):
17 """
18 Determine which service versions are offered in the given torrc.
19 """
20 service_re = re.compile(r"HiddenServiceDir\s+(?:.*)/(.*)")
21 versions = set([])
22 with open(path) as f:
23 for line in f:
24 m = service_re.match(line)
25 if m:
26 service = m.group(1)
27 if "v3" in service:
28 versions.add(3)
29 else:
30 versions.add(2)
31
32 return versions
33
34
35 def strset(s):
36 """
37 Sort the given set and join members with "and".
38 """
39 return " and ".join(str(v) for v in sorted(s))
40
41
42 if __name__ == "__main__":
43 tempdir = sys.argv[1]
44
45 server_versions = get_tor_versions(os.path.join(tempdir, "app/etc/tor/torrc"))
46 backup_versions = get_tor_versions(os.path.join(tempdir, "backup/etc/tor/torrc"))
47
48 if server_versions == backup_versions:
49 print("The Tor configuration in the backup matches the server.")
50 sys.exit(0)
51
52 if (3 in server_versions) and (3 in backup_versions):
53 print("V3 services detected in backup and server - proceeding with v3-only restore")
54 sys.exit(0)
55
56 print(
57 "The Tor configuration on the app server offers version {} services.".format(
58 strset(server_versions)
59 )
60 )
61
62 print(
63 "The Tor configuration in this backup offers version {} services.".format(
64 strset(backup_versions)
65 )
66 )
67
68 print("\nRestoring a backup with a different Tor configuration than the server ")
69 print("is currently unsupported. If you require technical assistance, please ")
70 print("contact the SecureDrop team via the support portal or at ")
71 print("[email protected].")
72
73 sys.exit(1)
74
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/install_files/ansible-base/roles/restore/files/compare_torrc.py b/install_files/ansible-base/roles/restore/files/compare_torrc.py
--- a/install_files/ansible-base/roles/restore/files/compare_torrc.py
+++ b/install_files/ansible-base/roles/restore/files/compare_torrc.py
@@ -46,11 +46,11 @@
backup_versions = get_tor_versions(os.path.join(tempdir, "backup/etc/tor/torrc"))
if server_versions == backup_versions:
- print("The Tor configuration in the backup matches the server.")
+ print("Valid configuration: the Tor configuration in the backup matches the server.")
sys.exit(0)
if (3 in server_versions) and (3 in backup_versions):
- print("V3 services detected in backup and server - proceeding with v3-only restore")
+ print("Valid configuration: V3 services only`")
sys.exit(0)
print(
@@ -65,9 +65,11 @@
)
)
- print("\nRestoring a backup with a different Tor configuration than the server ")
- print("is currently unsupported. If you require technical assistance, please ")
- print("contact the SecureDrop team via the support portal or at ")
+ print("\nIncompatible configuration: Restoring a backup including a different ")
+ print("Tor configuration than the server Tor configuration is unsupported. ")
+ print("Optionally, use --preserve-tor-config to apply a data-only backup.")
+ print("If you require technical assistance, please contact the ")
+ print("SecureDrop team via the support portal or at ")
print("[email protected].")
sys.exit(1)
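
The messaging change rides on the existing version-detection logic, which classifies each `HiddenServiceDir` by whether its directory name contains "v3". A hedged, standalone illustration of that classification (the directory names below are made up for the example):

```python
import re

service_re = re.compile(r"HiddenServiceDir\s+(?:.*)/(.*)")

torrc = """\
HiddenServiceDir /var/lib/tor/services/source
HiddenServiceDir /var/lib/tor/services/sourcev3
"""

versions = set()
for line in torrc.splitlines():
    m = service_re.match(line)
    if m:
        versions.add(3 if "v3" in m.group(1) else 2)

print(sorted(versions))  # [2, 3] -> a mixed v2/v3 configuration
```

With the patched messages, a v2-only backup restored onto a v3-only server now points the admin toward `--preserve-tor-config` for a data-only restore instead of only suggesting support contact.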
| {"golden_diff": "diff --git a/install_files/ansible-base/roles/restore/files/compare_torrc.py b/install_files/ansible-base/roles/restore/files/compare_torrc.py\n--- a/install_files/ansible-base/roles/restore/files/compare_torrc.py\n+++ b/install_files/ansible-base/roles/restore/files/compare_torrc.py\n@@ -46,11 +46,11 @@\n backup_versions = get_tor_versions(os.path.join(tempdir, \"backup/etc/tor/torrc\"))\n \n if server_versions == backup_versions:\n- print(\"The Tor configuration in the backup matches the server.\")\n+ print(\"Valid configuration: the Tor configuration in the backup matches the server.\")\n sys.exit(0)\n \n if (3 in server_versions) and (3 in backup_versions):\n- print(\"V3 services detected in backup and server - proceeding with v3-only restore\")\n+ print(\"Valid configuration: V3 services only`\")\n sys.exit(0)\n \n print(\n@@ -65,9 +65,11 @@\n )\n )\n \n- print(\"\\nRestoring a backup with a different Tor configuration than the server \")\n- print(\"is currently unsupported. If you require technical assistance, please \")\n- print(\"contact the SecureDrop team via the support portal or at \")\n+ print(\"\\nIncompatible configuration: Restoring a backup including a different \")\n+ print(\"Tor configuration than the server Tor configuration is unsupported. \")\n+ print(\"Optionally, use --preserve-tor-config to apply a data-only backup.\")\n+ print(\"If you require technical assistance, please contact the \")\n+ print(\"SecureDrop team via the support portal or at \")\n print(\"[email protected].\")\n \n sys.exit(1)\n", "issue": "Restoring tarball to Focal shows error for v2\n## Description\r\n\r\nWhen restoring a v2-only Xenial backup tarball to a v3-only Focal instance, the restore action fails. It fails even if the admin explicitly requests that the tor config be preserved as-is. \r\n\r\n## Steps to Reproduce\r\nI used libvirt-based VMs for testing, and performed all admin actions from a virtualized Tails v4.16 VM.\r\n\r\n1. Create a v2-only backup tarball from a Xenial host.\r\n2. Perform a clean install of Focal, with v3-only vars.\r\n3. Attempt to restore the backup: `./securedrop-admin --force restore --preserve-tor-config ~/Persistent/backups/xenial-v2-only/sd-backup-2021-02-26--15-57-06.tar.gz`\r\n\r\n## Expected Behavior\r\n\r\nRestore action completes, old URLs are restored, and I can proceed with regenerating new v3 URL and finalizing the Xenial -> Focal migration. \r\n\r\n\r\n## Actual Behavior\r\n\r\nRestore action fails. Even when I include the `--preserve-tor-config` flag, it still fails. \r\n\r\n## Comments\r\nOn one hand, the failure is expected, since Focal is v3-only, but in the context of a migration from Xenial, it's likely we're going to have admins migrating to Focal from a recently created backup, so I recommend we defer the fail-closed behavior to a subsequent release. That'd have bearing on WIP docs changes in e..g. https://github.com/freedomofpress/securedrop-docs/pull/133\r\n\r\nThe above is a policy question, but this ticket is also pointing out some bugs that should be fixed. For one, `--preserve-tor-config` is not honored, and it should be.\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\n\n#\n# Compares Tor configurations on the app server and from a backup. 
If\n# restoring the backup would alter the server's Tor configuration,\n# print a warning and exit.\n#\n\nfrom __future__ import print_function\n\nimport os\nimport re\nimport sys\n\n\ndef get_tor_versions(path):\n \"\"\"\n Determine which service versions are offered in the given torrc.\n \"\"\"\n service_re = re.compile(r\"HiddenServiceDir\\s+(?:.*)/(.*)\")\n versions = set([])\n with open(path) as f:\n for line in f:\n m = service_re.match(line)\n if m:\n service = m.group(1)\n if \"v3\" in service:\n versions.add(3)\n else:\n versions.add(2)\n\n return versions\n\n\ndef strset(s):\n \"\"\"\n Sort the given set and join members with \"and\".\n \"\"\"\n return \" and \".join(str(v) for v in sorted(s))\n\n\nif __name__ == \"__main__\":\n tempdir = sys.argv[1]\n\n server_versions = get_tor_versions(os.path.join(tempdir, \"app/etc/tor/torrc\"))\n backup_versions = get_tor_versions(os.path.join(tempdir, \"backup/etc/tor/torrc\"))\n\n if server_versions == backup_versions:\n print(\"The Tor configuration in the backup matches the server.\")\n sys.exit(0)\n\n if (3 in server_versions) and (3 in backup_versions):\n print(\"V3 services detected in backup and server - proceeding with v3-only restore\")\n sys.exit(0)\n\n print(\n \"The Tor configuration on the app server offers version {} services.\".format(\n strset(server_versions)\n )\n )\n\n print(\n \"The Tor configuration in this backup offers version {} services.\".format(\n strset(backup_versions)\n )\n )\n\n print(\"\\nRestoring a backup with a different Tor configuration than the server \")\n print(\"is currently unsupported. If you require technical assistance, please \")\n print(\"contact the SecureDrop team via the support portal or at \")\n print(\"[email protected].\")\n\n sys.exit(1)\n", "path": "install_files/ansible-base/roles/restore/files/compare_torrc.py"}], "after_files": [{"content": "#!/usr/bin/env python\n\n#\n# Compares Tor configurations on the app server and from a backup. 
If\n# restoring the backup would alter the server's Tor configuration,\n# print a warning and exit.\n#\n\nfrom __future__ import print_function\n\nimport os\nimport re\nimport sys\n\n\ndef get_tor_versions(path):\n \"\"\"\n Determine which service versions are offered in the given torrc.\n \"\"\"\n service_re = re.compile(r\"HiddenServiceDir\\s+(?:.*)/(.*)\")\n versions = set([])\n with open(path) as f:\n for line in f:\n m = service_re.match(line)\n if m:\n service = m.group(1)\n if \"v3\" in service:\n versions.add(3)\n else:\n versions.add(2)\n\n return versions\n\n\ndef strset(s):\n \"\"\"\n Sort the given set and join members with \"and\".\n \"\"\"\n return \" and \".join(str(v) for v in sorted(s))\n\n\nif __name__ == \"__main__\":\n tempdir = sys.argv[1]\n\n server_versions = get_tor_versions(os.path.join(tempdir, \"app/etc/tor/torrc\"))\n backup_versions = get_tor_versions(os.path.join(tempdir, \"backup/etc/tor/torrc\"))\n\n if server_versions == backup_versions:\n print(\"Valid configuration: the Tor configuration in the backup matches the server.\")\n sys.exit(0)\n\n if (3 in server_versions) and (3 in backup_versions):\n print(\"Valid configuration: V3 services only`\")\n sys.exit(0)\n\n print(\n \"The Tor configuration on the app server offers version {} services.\".format(\n strset(server_versions)\n )\n )\n\n print(\n \"The Tor configuration in this backup offers version {} services.\".format(\n strset(backup_versions)\n )\n )\n\n print(\"\\nIncompatible configuration: Restoring a backup including a different \")\n print(\"Tor configuration than the server Tor configuration is unsupported. \")\n print(\"Optionally, use --preserve-tor-config to apply a data-only backup.\")\n print(\"If you require technical assistance, please contact the \")\n print(\"SecureDrop team via the support portal or at \")\n print(\"[email protected].\")\n\n sys.exit(1)\n", "path": "install_files/ansible-base/roles/restore/files/compare_torrc.py"}]} | 1,273 | 383 |
gh_patches_debug_20393 | rasdani/github-patches | git_diff | PlasmaPy__PlasmaPy-405 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add doc page on authors and credits
It would be really helpful to have a page in our `docs` directory that lists the Coordinating Committee members and a full list of authors of the code, along with other credits. Some examples are Astropy's [Authors and Credits page](http://docs.astropy.org/en/stable/credits.html), and SunPy's [The Project](http://sunpy.org/team.html). The list of code contributors can already be accessed from our GitHub repository and the commit log; however, this often does not include full names. We might be able to find a way to automate this, though that's low priority. We should do this prior to our 0.1 release.
To help with the organization, we should probably create an `about` subdirectory that will include pages about the PlasmaPy project as a whole, including this one. The `docs/stability.rst` page could go in this directory too.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `plasmapy/constants/__init__.py`
Content:
```
1 """Physical and mathematical constants."""
2
3 from numpy import pi
4
5 from astropy.constants.si import (
6 e,
7 mu0,
8 eps0,
9 k_B,
10 c,
11 G,
12 h,
13 hbar,
14 m_p,
15 m_n,
16 m_e,
17 u,
18 sigma_sb,
19 N_A,
20 R,
21 Ryd,
22 a0,
23 muB,
24 sigma_T,
25 au,
26 pc,
27 kpc,
28 g0,
29 L_sun,
30 M_sun,
31 R_sun,
32 M_earth,
33 R_earth,
34 )
35
36 from astropy.constants import atm
37
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/plasmapy/constants/__init__.py b/plasmapy/constants/__init__.py
--- a/plasmapy/constants/__init__.py
+++ b/plasmapy/constants/__init__.py
@@ -1,4 +1,8 @@
-"""Physical and mathematical constants."""
+"""
+Contains physical and mathematical constants commonly used in plasma
+physics.
+
+"""
from numpy import pi
@@ -34,3 +38,26 @@
)
from astropy.constants import atm
+
+# The following code is modified from astropy.constants to produce a
+# table containing information on the constants contained with PlasmaPy.
+# Mathematical constants can be just entered.
+
+_lines = [
+ 'The following constants are available:\n',
+ '========== ================= ================ ============================================',
+ 'Name Value Units Description',
+ '========== ================= ================ ============================================',
+ " pi 3.141592653589793 Ratio of circumference to diameter of circle",
+]
+
+_constants = [eval(item) for item in dir() if item[0] != '_' and item != 'pi']
+for _const in _constants:
+ _lines.append('{0:^10} {1:^17.12g} {2:^16} {3}'
+ .format(_const.abbrev, _const.value, _const._unit_string, _const.name))
+
+_lines.append(_lines[1])
+
+__doc__ += '\n'.join(_lines)
+
+del _lines, _const, _constants
| {"golden_diff": "diff --git a/plasmapy/constants/__init__.py b/plasmapy/constants/__init__.py\n--- a/plasmapy/constants/__init__.py\n+++ b/plasmapy/constants/__init__.py\n@@ -1,4 +1,8 @@\n-\"\"\"Physical and mathematical constants.\"\"\"\n+\"\"\"\n+Contains physical and mathematical constants commonly used in plasma\n+physics.\n+\n+\"\"\"\n \n from numpy import pi\n \n@@ -34,3 +38,26 @@\n )\n \n from astropy.constants import atm\n+\n+# The following code is modified from astropy.constants to produce a\n+# table containing information on the constants contained with PlasmaPy.\n+# Mathematical constants can be just entered.\n+\n+_lines = [\n+ 'The following constants are available:\\n',\n+ '========== ================= ================ ============================================',\n+ 'Name Value Units Description',\n+ '========== ================= ================ ============================================',\n+ \" pi 3.141592653589793 Ratio of circumference to diameter of circle\",\n+]\n+\n+_constants = [eval(item) for item in dir() if item[0] != '_' and item != 'pi']\n+for _const in _constants:\n+ _lines.append('{0:^10} {1:^17.12g} {2:^16} {3}'\n+ .format(_const.abbrev, _const.value, _const._unit_string, _const.name))\n+\n+_lines.append(_lines[1])\n+\n+__doc__ += '\\n'.join(_lines)\n+\n+del _lines, _const, _constants\n", "issue": "Add doc page on authors and credits\nIt would be really helpful to have a page in our `docs` directory that lists the Coordinating Committee members and a full list of authors of the code, along with other credits. Some examples are Astropy's [Authors and Credits page](http://docs.astropy.org/en/stable/credits.html), and SunPy's [The Project](http://sunpy.org/team.html). The list of code contributors can already be accessed from our GitHub repository and the commit log; however, this often does not include full names. We might be able to find a way to automate this, though that's low priority. We should do this prior to our 0.1 release.\r\n\r\nTo help with the organization, we should probably create an `about` subdirectory that will include pages about the PlasmaPy project as a whole, including this one. 
The `docs/stability.rst` page could go in this directory too.\n", "before_files": [{"content": "\"\"\"Physical and mathematical constants.\"\"\"\n\nfrom numpy import pi\n\nfrom astropy.constants.si import (\n e,\n mu0,\n eps0,\n k_B,\n c,\n G,\n h,\n hbar,\n m_p,\n m_n,\n m_e,\n u,\n sigma_sb,\n N_A,\n R,\n Ryd,\n a0,\n muB,\n sigma_T,\n au,\n pc,\n kpc,\n g0,\n L_sun,\n M_sun,\n R_sun,\n M_earth,\n R_earth,\n)\n\nfrom astropy.constants import atm\n", "path": "plasmapy/constants/__init__.py"}], "after_files": [{"content": "\"\"\"\nContains physical and mathematical constants commonly used in plasma\nphysics.\n\n\"\"\"\n\nfrom numpy import pi\n\nfrom astropy.constants.si import (\n e,\n mu0,\n eps0,\n k_B,\n c,\n G,\n h,\n hbar,\n m_p,\n m_n,\n m_e,\n u,\n sigma_sb,\n N_A,\n R,\n Ryd,\n a0,\n muB,\n sigma_T,\n au,\n pc,\n kpc,\n g0,\n L_sun,\n M_sun,\n R_sun,\n M_earth,\n R_earth,\n)\n\nfrom astropy.constants import atm\n\n# The following code is modified from astropy.constants to produce a\n# table containing information on the constants contained with PlasmaPy.\n# Mathematical constants can be just entered.\n\n_lines = [\n 'The following constants are available:\\n',\n '========== ================= ================ ============================================',\n 'Name Value Units Description',\n '========== ================= ================ ============================================',\n \" pi 3.141592653589793 Ratio of circumference to diameter of circle\",\n]\n\n_constants = [eval(item) for item in dir() if item[0] != '_' and item != 'pi']\nfor _const in _constants:\n _lines.append('{0:^10} {1:^17.12g} {2:^16} {3}'\n .format(_const.abbrev, _const.value, _const._unit_string, _const.name))\n\n_lines.append(_lines[1])\n\n__doc__ += '\\n'.join(_lines)\n\ndel _lines, _const, _constants\n", "path": "plasmapy/constants/__init__.py"}]} | 663 | 350 |
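A side note on the PlasmaPy golden diff above: it builds a documentation table at import time by iterating over the module's public names and appending formatted rows to `__doc__`. A minimal, self-contained sketch of that pattern — the `Const` namedtuple and both sample constants are stand-ins, not PlasmaPy's actual objects:

```python
# Minimal sketch of assembling a docstring table from constants at import
# time; the Const namedtuple and both sample values are stand-ins.
from collections import namedtuple

Const = namedtuple("Const", ["abbrev", "value", "unit", "name"])

constants = [
    Const("c", 299792458.0, "m / s", "Speed of light in vacuum"),
    Const("e", 1.602176634e-19, "C", "Elementary charge"),
]

_lines = ["The following constants are available:\n"]
for _const in constants:  # the diff derives this list via dir() + eval()
    _lines.append("{0:^10} {1:^17.12g} {2:^16} {3}".format(
        _const.abbrev, _const.value, _const.unit, _const.name))

__doc__ = (__doc__ or "") + "\n".join(_lines)
print(__doc__)
```

Using `dir()` plus `eval()` as the diff does works inside a controlled module namespace, but an explicit list (as sketched here) avoids evaluating arbitrary names.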
gh_patches_debug_20348 | rasdani/github-patches | git_diff | google__personfinder-397 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Internal server error on multiview.py with invalid record ID
multiview.py returns Internal server error when one of the specified IDs is invalid. It should return 404 or something instead.
```
AttributeError: 'NoneType' object has no attribute 'person_record_id'
at get (multiview.py:47)
at serve (main.py:622)
at get (main.py:647)
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `app/multiview.py`
Content:
```
1 #!/usr/bin/python2.7
2 # Copyright 2010 Google Inc.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15
16 from model import *
17 from utils import *
18 import pfif
19 import reveal
20 import subscribe
21 import view
22
23 from django.utils.translation import ugettext as _
24
25 # Fields to show for side-by-side comparison.
26 COMPARE_FIELDS = pfif.PFIF_1_4.fields['person'] + ['primary_full_name']
27
28
29 class Handler(BaseHandler):
30 def get(self):
31 # To handle multiple persons, we create a single object where
32 # each property is a list of values, one for each person.
33 # This makes page rendering easier.
34 person = dict([(prop, []) for prop in COMPARE_FIELDS])
35 any_person = dict([(prop, None) for prop in COMPARE_FIELDS])
36
37 # Get all persons from db.
38 # TODO: Can later optimize to use fewer DB calls.
39 for i in [1, 2, 3]:
40 id = self.request.get('id%d' % i)
41 if not id:
42 break
43 p = Person.get(self.repo, id)
44 sanitize_urls(p)
45
46 for prop in COMPARE_FIELDS:
47 val = getattr(p, prop)
48 if prop == 'sex': # convert enum value to localized text
49 val = get_person_sex_text(p)
50 person[prop].append(val)
51 any_person[prop] = any_person[prop] or val
52
53 # Compute the local times for the date fields on the person and format.
54 person['source_datetime_local_string'] = map(
55 self.to_formatted_local_datetime, person['source_date'])
56
57 # Check if private info should be revealed.
58 content_id = 'multiview:' + ','.join(person['person_record_id'])
59 reveal_url = reveal.make_reveal_url(self, content_id)
60 show_private_info = reveal.verify(content_id, self.params.signature)
61
62 standalone = self.request.get('standalone')
63
64 # TODO: Handle no persons found.
65
66 person['profile_pages'] = [view.get_profile_pages(profile_urls, self)
67 for profile_urls in person['profile_urls']]
68 any_person['profile_pages'] = any(person['profile_pages'])
69
70 # Note: we're not showing notes and linked persons information
71 # here at the moment.
72 self.render('multiview.html',
73 person=person, any=any_person, standalone=standalone,
74 cols=len(person['full_name']) + 1,
75 onload_function='view_page_loaded()', markdup=True,
76 show_private_info=show_private_info, reveal_url=reveal_url)
77
78 def post(self):
79 if not self.params.text:
80 return self.error(
81 200, _('Message is required. Please go back and try again.'))
82
83 if not self.params.author_name:
84 return self.error(
85 200, _('Your name is required in the "About you" section. Please go back and try again.'))
86
87 # TODO: To reduce possible abuse, we currently limit to 3 person
88 # match. We could guard using e.g. an XSRF token, which I don't know how
89 # to build in GAE.
90
91 ids = set()
92 for i in [1, 2, 3]:
93 id = getattr(self.params, 'id%d' % i)
94 if not id:
95 break
96 ids.add(id)
97
98 if len(ids) > 1:
99 notes = []
100 for person_id in ids:
101 person = Person.get(self.repo, person_id)
102 person_notes = []
103 for other_id in ids - set([person_id]):
104 note = Note.create_original(
105 self.repo,
106 entry_date=get_utcnow(),
107 person_record_id=person_id,
108 linked_person_record_id=other_id,
109 text=self.params.text,
110 author_name=self.params.author_name,
111 author_phone=self.params.author_phone,
112 author_email=self.params.author_email,
113 source_date=get_utcnow())
114 person_notes.append(note)
115 # Notify person's subscribers of all new duplicates. We do not
116 # follow links since each Person record in the ids list gets its
117 # own note. However, 1) when > 2 records are marked as
118 # duplicates, subscribers will still receive multiple
119 # notifications, and 2) subscribers to already-linked Persons
120 # will not be notified of the new link.
121 subscribe.send_notifications(self, person, person_notes, False)
122 notes += person_notes
123 # Write all notes to store
124 db.put(notes)
125 self.redirect('/view', id=self.params.id1)
126
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/app/multiview.py b/app/multiview.py
--- a/app/multiview.py
+++ b/app/multiview.py
@@ -41,6 +41,11 @@
if not id:
break
p = Person.get(self.repo, id)
+ if not p:
+ return self.error(
+ 404,
+ _("This person's entry does not exist or has been "
+ "deleted."))
sanitize_urls(p)
for prop in COMPARE_FIELDS:
@@ -103,7 +108,7 @@
for other_id in ids - set([person_id]):
note = Note.create_original(
self.repo,
- entry_date=get_utcnow(),
+ entry_date=get_utcnow(),
person_record_id=person_id,
linked_person_record_id=other_id,
text=self.params.text,
| {"golden_diff": "diff --git a/app/multiview.py b/app/multiview.py\n--- a/app/multiview.py\n+++ b/app/multiview.py\n@@ -41,6 +41,11 @@\n if not id:\n break\n p = Person.get(self.repo, id)\n+ if not p:\n+ return self.error(\n+ 404,\n+ _(\"This person's entry does not exist or has been \"\n+ \"deleted.\"))\n sanitize_urls(p)\n \n for prop in COMPARE_FIELDS:\n@@ -103,7 +108,7 @@\n for other_id in ids - set([person_id]):\n note = Note.create_original(\n self.repo,\n- entry_date=get_utcnow(), \n+ entry_date=get_utcnow(),\n person_record_id=person_id,\n linked_person_record_id=other_id,\n text=self.params.text,\n", "issue": "Internal server error on multiview.py with invalid record ID\nmultiview.py returns Internal server error when one of the specified IDs is invalid. It should return 404 or something instead.\r\n\r\n```\r\nAttributeError: 'NoneType' object has no attribute 'person_record_id'\r\nat get (multiview.py:47)\r\nat serve (main.py:622)\r\nat get (main.py:647)\r\n```\n", "before_files": [{"content": "#!/usr/bin/python2.7\n# Copyright 2010 Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom model import *\nfrom utils import *\nimport pfif\nimport reveal\nimport subscribe\nimport view\n\nfrom django.utils.translation import ugettext as _\n\n# Fields to show for side-by-side comparison.\nCOMPARE_FIELDS = pfif.PFIF_1_4.fields['person'] + ['primary_full_name']\n\n\nclass Handler(BaseHandler):\n def get(self):\n # To handle multiple persons, we create a single object where\n # each property is a list of values, one for each person.\n # This makes page rendering easier.\n person = dict([(prop, []) for prop in COMPARE_FIELDS])\n any_person = dict([(prop, None) for prop in COMPARE_FIELDS])\n\n # Get all persons from db.\n # TODO: Can later optimize to use fewer DB calls.\n for i in [1, 2, 3]:\n id = self.request.get('id%d' % i)\n if not id:\n break\n p = Person.get(self.repo, id)\n sanitize_urls(p)\n\n for prop in COMPARE_FIELDS:\n val = getattr(p, prop)\n if prop == 'sex': # convert enum value to localized text\n val = get_person_sex_text(p)\n person[prop].append(val)\n any_person[prop] = any_person[prop] or val\n\n # Compute the local times for the date fields on the person and format.\n person['source_datetime_local_string'] = map(\n self.to_formatted_local_datetime, person['source_date'])\n\n # Check if private info should be revealed.\n content_id = 'multiview:' + ','.join(person['person_record_id'])\n reveal_url = reveal.make_reveal_url(self, content_id)\n show_private_info = reveal.verify(content_id, self.params.signature)\n\n standalone = self.request.get('standalone')\n\n # TODO: Handle no persons found.\n\n person['profile_pages'] = [view.get_profile_pages(profile_urls, self)\n for profile_urls in person['profile_urls']]\n any_person['profile_pages'] = any(person['profile_pages'])\n\n # Note: we're not showing notes and linked persons information\n # here at the moment.\n self.render('multiview.html',\n person=person, any=any_person, standalone=standalone,\n 
cols=len(person['full_name']) + 1,\n onload_function='view_page_loaded()', markdup=True,\n show_private_info=show_private_info, reveal_url=reveal_url)\n\n def post(self):\n if not self.params.text:\n return self.error(\n 200, _('Message is required. Please go back and try again.'))\n\n if not self.params.author_name:\n return self.error(\n 200, _('Your name is required in the \"About you\" section. Please go back and try again.'))\n\n # TODO: To reduce possible abuse, we currently limit to 3 person\n # match. We could guard using e.g. an XSRF token, which I don't know how\n # to build in GAE.\n\n ids = set()\n for i in [1, 2, 3]:\n id = getattr(self.params, 'id%d' % i)\n if not id:\n break\n ids.add(id)\n\n if len(ids) > 1:\n notes = []\n for person_id in ids:\n person = Person.get(self.repo, person_id)\n person_notes = []\n for other_id in ids - set([person_id]):\n note = Note.create_original(\n self.repo,\n entry_date=get_utcnow(), \n person_record_id=person_id,\n linked_person_record_id=other_id,\n text=self.params.text,\n author_name=self.params.author_name,\n author_phone=self.params.author_phone,\n author_email=self.params.author_email,\n source_date=get_utcnow())\n person_notes.append(note)\n # Notify person's subscribers of all new duplicates. We do not\n # follow links since each Person record in the ids list gets its\n # own note. However, 1) when > 2 records are marked as\n # duplicates, subscribers will still receive multiple\n # notifications, and 2) subscribers to already-linked Persons\n # will not be notified of the new link.\n subscribe.send_notifications(self, person, person_notes, False)\n notes += person_notes\n # Write all notes to store\n db.put(notes)\n self.redirect('/view', id=self.params.id1)\n", "path": "app/multiview.py"}], "after_files": [{"content": "#!/usr/bin/python2.7\n# Copyright 2010 Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom model import *\nfrom utils import *\nimport pfif\nimport reveal\nimport subscribe\nimport view\n\nfrom django.utils.translation import ugettext as _\n\n# Fields to show for side-by-side comparison.\nCOMPARE_FIELDS = pfif.PFIF_1_4.fields['person'] + ['primary_full_name']\n\n\nclass Handler(BaseHandler):\n def get(self):\n # To handle multiple persons, we create a single object where\n # each property is a list of values, one for each person.\n # This makes page rendering easier.\n person = dict([(prop, []) for prop in COMPARE_FIELDS])\n any_person = dict([(prop, None) for prop in COMPARE_FIELDS])\n\n # Get all persons from db.\n # TODO: Can later optimize to use fewer DB calls.\n for i in [1, 2, 3]:\n id = self.request.get('id%d' % i)\n if not id:\n break\n p = Person.get(self.repo, id)\n if not p:\n return self.error(\n 404,\n _(\"This person's entry does not exist or has been \"\n \"deleted.\"))\n sanitize_urls(p)\n\n for prop in COMPARE_FIELDS:\n val = getattr(p, prop)\n if prop == 'sex': # convert enum value to localized text\n val = get_person_sex_text(p)\n person[prop].append(val)\n any_person[prop] = 
any_person[prop] or val\n\n # Compute the local times for the date fields on the person and format.\n person['source_datetime_local_string'] = map(\n self.to_formatted_local_datetime, person['source_date'])\n\n # Check if private info should be revealed.\n content_id = 'multiview:' + ','.join(person['person_record_id'])\n reveal_url = reveal.make_reveal_url(self, content_id)\n show_private_info = reveal.verify(content_id, self.params.signature)\n\n standalone = self.request.get('standalone')\n\n # TODO: Handle no persons found.\n\n person['profile_pages'] = [view.get_profile_pages(profile_urls, self)\n for profile_urls in person['profile_urls']]\n any_person['profile_pages'] = any(person['profile_pages'])\n\n # Note: we're not showing notes and linked persons information\n # here at the moment.\n self.render('multiview.html',\n person=person, any=any_person, standalone=standalone,\n cols=len(person['full_name']) + 1,\n onload_function='view_page_loaded()', markdup=True,\n show_private_info=show_private_info, reveal_url=reveal_url)\n\n def post(self):\n if not self.params.text:\n return self.error(\n 200, _('Message is required. Please go back and try again.'))\n\n if not self.params.author_name:\n return self.error(\n 200, _('Your name is required in the \"About you\" section. Please go back and try again.'))\n\n # TODO: To reduce possible abuse, we currently limit to 3 person\n # match. We could guard using e.g. an XSRF token, which I don't know how\n # to build in GAE.\n\n ids = set()\n for i in [1, 2, 3]:\n id = getattr(self.params, 'id%d' % i)\n if not id:\n break\n ids.add(id)\n\n if len(ids) > 1:\n notes = []\n for person_id in ids:\n person = Person.get(self.repo, person_id)\n person_notes = []\n for other_id in ids - set([person_id]):\n note = Note.create_original(\n self.repo,\n entry_date=get_utcnow(),\n person_record_id=person_id,\n linked_person_record_id=other_id,\n text=self.params.text,\n author_name=self.params.author_name,\n author_phone=self.params.author_phone,\n author_email=self.params.author_email,\n source_date=get_utcnow())\n person_notes.append(note)\n # Notify person's subscribers of all new duplicates. We do not\n # follow links since each Person record in the ids list gets its\n # own note. However, 1) when > 2 records are marked as\n # duplicates, subscribers will still receive multiple\n # notifications, and 2) subscribers to already-linked Persons\n # will not be notified of the new link.\n subscribe.send_notifications(self, person, person_notes, False)\n notes += person_notes\n # Write all notes to store\n db.put(notes)\n self.redirect('/view', id=self.params.id1)\n", "path": "app/multiview.py"}]} | 1,718 | 197 |
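The essence of the Person Finder fix above is a None-check before dereferencing the result of `Person.get`. A hedged, dependency-free illustration of the same guard — the in-memory `repo` dict, the `get_person_or_404` name, and the error handling are illustrative stand-ins for the real datastore API:

```python
# Generic None-guard before attribute/key access; the in-memory "repo" and
# the raised exception are illustrative stand-ins for the real handler,
# which calls self.error(404, ...) instead.
repo = {"p1": {"person_record_id": "p1", "full_name": "Jane Doe"}}

def get_person_or_404(person_id):
    person = repo.get(person_id)
    if person is None:
        raise LookupError(
            "This person's entry does not exist or has been deleted.")
    return person

print(get_person_or_404("p1")["full_name"])  # Jane Doe
```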
gh_patches_debug_59251 | rasdani/github-patches | git_diff | ephios-dev__ephios-639 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
PWA does not respect orientation lock on Android
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ephios/core/views/pwa.py`
Content:
```
1 import functools
2
3 from django.conf import settings
4 from django.contrib.staticfiles import finders
5 from django.http import HttpResponse, JsonResponse
6 from django.shortcuts import render
7 from django.utils.translation import get_language
8
9
10 def manifest(request):
11 manifest_json = {
12 "name": "ephios",
13 "short_name": "ephios",
14 "description": "ephios manages events for medical services",
15 "start_url": "/",
16 "display": "standalone",
17 "scope": "/",
18 "orientation": "any",
19 "background_color": "#fff",
20 "theme_color": "#000",
21 "status_bar": "default",
22 "dir": "auto",
23 "icons": settings.PWA_APP_ICONS,
24 "lang": get_language(),
25 }
26 response = JsonResponse(manifest_json)
27 response["Service-Worker-Allowed"] = "/"
28 return response
29
30
31 @functools.lru_cache
32 def serviceworker_js():
33 with open(finders.find("ephios/js/serviceworker.js"), "rb") as sw_js:
34 return sw_js.read()
35
36
37 def serviceworker(request):
38 return HttpResponse(
39 serviceworker_js(),
40 content_type="application/javascript",
41 )
42
43
44 def offline(request):
45 return render(request, "offline.html")
46
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ephios/core/views/pwa.py b/ephios/core/views/pwa.py
--- a/ephios/core/views/pwa.py
+++ b/ephios/core/views/pwa.py
@@ -15,7 +15,6 @@
"start_url": "/",
"display": "standalone",
"scope": "/",
- "orientation": "any",
"background_color": "#fff",
"theme_color": "#000",
"status_bar": "default",
| {"golden_diff": "diff --git a/ephios/core/views/pwa.py b/ephios/core/views/pwa.py\n--- a/ephios/core/views/pwa.py\n+++ b/ephios/core/views/pwa.py\n@@ -15,7 +15,6 @@\n \"start_url\": \"/\",\n \"display\": \"standalone\",\n \"scope\": \"/\",\n- \"orientation\": \"any\",\n \"background_color\": \"#fff\",\n \"theme_color\": \"#000\",\n \"status_bar\": \"default\",\n", "issue": "PWA does not respect orientation lock on Android\n\n", "before_files": [{"content": "import functools\n\nfrom django.conf import settings\nfrom django.contrib.staticfiles import finders\nfrom django.http import HttpResponse, JsonResponse\nfrom django.shortcuts import render\nfrom django.utils.translation import get_language\n\n\ndef manifest(request):\n manifest_json = {\n \"name\": \"ephios\",\n \"short_name\": \"ephios\",\n \"description\": \"ephios manages events for medical services\",\n \"start_url\": \"/\",\n \"display\": \"standalone\",\n \"scope\": \"/\",\n \"orientation\": \"any\",\n \"background_color\": \"#fff\",\n \"theme_color\": \"#000\",\n \"status_bar\": \"default\",\n \"dir\": \"auto\",\n \"icons\": settings.PWA_APP_ICONS,\n \"lang\": get_language(),\n }\n response = JsonResponse(manifest_json)\n response[\"Service-Worker-Allowed\"] = \"/\"\n return response\n\n\[email protected]_cache\ndef serviceworker_js():\n with open(finders.find(\"ephios/js/serviceworker.js\"), \"rb\") as sw_js:\n return sw_js.read()\n\n\ndef serviceworker(request):\n return HttpResponse(\n serviceworker_js(),\n content_type=\"application/javascript\",\n )\n\n\ndef offline(request):\n return render(request, \"offline.html\")\n", "path": "ephios/core/views/pwa.py"}], "after_files": [{"content": "import functools\n\nfrom django.conf import settings\nfrom django.contrib.staticfiles import finders\nfrom django.http import HttpResponse, JsonResponse\nfrom django.shortcuts import render\nfrom django.utils.translation import get_language\n\n\ndef manifest(request):\n manifest_json = {\n \"name\": \"ephios\",\n \"short_name\": \"ephios\",\n \"description\": \"ephios manages events for medical services\",\n \"start_url\": \"/\",\n \"display\": \"standalone\",\n \"scope\": \"/\",\n \"background_color\": \"#fff\",\n \"theme_color\": \"#000\",\n \"status_bar\": \"default\",\n \"dir\": \"auto\",\n \"icons\": settings.PWA_APP_ICONS,\n \"lang\": get_language(),\n }\n response = JsonResponse(manifest_json)\n response[\"Service-Worker-Allowed\"] = \"/\"\n return response\n\n\[email protected]_cache\ndef serviceworker_js():\n with open(finders.find(\"ephios/js/serviceworker.js\"), \"rb\") as sw_js:\n return sw_js.read()\n\n\ndef serviceworker(request):\n return HttpResponse(\n serviceworker_js(),\n content_type=\"application/javascript\",\n )\n\n\ndef offline(request):\n return render(request, \"offline.html\")\n", "path": "ephios/core/views/pwa.py"}]} | 628 | 111 |
gh_patches_debug_31130 | rasdani/github-patches | git_diff | kedro-org__kedro-1789 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add parameters to `%reload_kedro` line magic
## Description
Currently you cannot pass things like `env` or `extra_params` via the line magic, but you can by importing the function.
https://github.com/kedro-org/kedro/blob/5ae97cfb70e5b0d4490132847977d482f13c840f/kedro/extras/extensions/ipython.py#L38
Why don't we introduce feature parity here?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kedro/extras/extensions/ipython.py`
Content:
```
1 # pylint: disable=import-outside-toplevel,global-statement,invalid-name,too-many-locals
2 """
3 This script creates an IPython extension to load Kedro-related variables in
4 local scope.
5 """
6 import logging
7 import sys
8 from pathlib import Path
9 from typing import Any, Dict
10
11 logger = logging.getLogger(__name__)
12 default_project_path = Path.cwd()
13
14
15 def _remove_cached_modules(package_name):
16 to_remove = [mod for mod in sys.modules if mod.startswith(package_name)]
17 # `del` is used instead of `reload()` because: If the new version of a module does not
18 # define a name that was defined by the old version, the old definition remains.
19 for module in to_remove:
20 del sys.modules[module] # pragma: no cover
21
22
23 def _find_kedro_project(current_dir: Path): # pragma: no cover
24 from kedro.framework.startup import _is_project
25
26 while current_dir != current_dir.parent:
27 if _is_project(current_dir):
28 return current_dir
29 current_dir = current_dir.parent
30
31 return None
32
33
34 def reload_kedro(
35 path: str = None, env: str = None, extra_params: Dict[str, Any] = None
36 ):
37 """Line magic which reloads all Kedro default variables.
38 Setting the path will also make it default for subsequent calls.
39 """
40 from IPython import get_ipython
41 from IPython.core.magic import needs_local_scope, register_line_magic
42
43 from kedro.framework.cli import load_entry_points
44 from kedro.framework.project import LOGGING # noqa # pylint:disable=unused-import
45 from kedro.framework.project import configure_project, pipelines
46 from kedro.framework.session import KedroSession
47 from kedro.framework.startup import bootstrap_project
48
49 # If a path is provided, set it as default for subsequent calls
50 global default_project_path
51 if path:
52 default_project_path = Path(path).expanduser().resolve()
53 logger.info("Updated path to Kedro project: %s", default_project_path)
54 else:
55 logger.info("No path argument was provided. Using: %s", default_project_path)
56
57 metadata = bootstrap_project(default_project_path)
58 _remove_cached_modules(metadata.package_name)
59 configure_project(metadata.package_name)
60
61 session = KedroSession.create(
62 metadata.package_name, default_project_path, env=env, extra_params=extra_params
63 )
64 context = session.load_context()
65 catalog = context.catalog
66
67 get_ipython().push(
68 variables={
69 "context": context,
70 "catalog": catalog,
71 "session": session,
72 "pipelines": pipelines,
73 }
74 )
75
76 logger.info("Kedro project %s", str(metadata.project_name))
77 logger.info(
78 "Defined global variable 'context', 'session', 'catalog' and 'pipelines'"
79 )
80
81 for line_magic in load_entry_points("line_magic"):
82 register_line_magic(needs_local_scope(line_magic))
83 logger.info("Registered line magic '%s'", line_magic.__name__) # type: ignore
84
85
86 def load_ipython_extension(ipython):
87 """Main entry point when %load_ext is executed"""
88
89 global default_project_path
90
91 ipython.register_magic_function(reload_kedro, "line", "reload_kedro")
92
93 default_project_path = _find_kedro_project(Path.cwd())
94
95 if default_project_path is None:
96 logger.warning(
97 "Kedro extension was registered but couldn't find a Kedro project. "
98 "Make sure you run '%reload_kedro <project_root>'."
99 )
100 return
101
102 reload_kedro(default_project_path)
103
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/kedro/extras/extensions/ipython.py b/kedro/extras/extensions/ipython.py
--- a/kedro/extras/extensions/ipython.py
+++ b/kedro/extras/extensions/ipython.py
@@ -8,6 +8,9 @@
from pathlib import Path
from typing import Any, Dict
+from kedro.framework.cli.project import PARAMS_ARG_HELP
+from kedro.framework.cli.utils import ENV_HELP, _split_params
+
logger = logging.getLogger(__name__)
default_project_path = Path.cwd()
@@ -84,12 +87,46 @@
def load_ipython_extension(ipython):
- """Main entry point when %load_ext is executed"""
+ """
+ Main entry point when %load_ext is executed.
+ IPython will look for this function specifically.
+ See https://ipython.readthedocs.io/en/stable/config/extensions/index.html
- global default_project_path
+ This function is called when users do `%load_ext kedro.extras.extensions.ipython`.
+ When user use `kedro jupyter notebook` or `jupyter ipython`, this extension is
+ loaded automatically.
+ """
+ from IPython.core.magic_arguments import argument, magic_arguments, parse_argstring
+
+ @magic_arguments()
+ @argument(
+ "path",
+ type=str,
+ help=(
+ "Path to the project root directory. If not given, use the previously set"
+ "project root."
+ ),
+ nargs="?",
+ default=None,
+ )
+ @argument("-e", "--env", type=str, default=None, help=ENV_HELP)
+ @argument(
+ "--params",
+ type=lambda value: _split_params(None, None, value),
+ default=None,
+ help=PARAMS_ARG_HELP,
+ )
+ def magic_reload_kedro(line: str):
+ """
+ The `%reload_kedro` IPython line magic. See
+ https://kedro.readthedocs.io/en/stable/tools_integration/ipython.html for more.
+ """
+ args = parse_argstring(magic_reload_kedro, line)
+ reload_kedro(args.path, args.env, args.params)
- ipython.register_magic_function(reload_kedro, "line", "reload_kedro")
+ global default_project_path
+ ipython.register_magic_function(magic_reload_kedro, magic_name="reload_kedro")
default_project_path = _find_kedro_project(Path.cwd())
if default_project_path is None:
| {"golden_diff": "diff --git a/kedro/extras/extensions/ipython.py b/kedro/extras/extensions/ipython.py\n--- a/kedro/extras/extensions/ipython.py\n+++ b/kedro/extras/extensions/ipython.py\n@@ -8,6 +8,9 @@\n from pathlib import Path\n from typing import Any, Dict\n \n+from kedro.framework.cli.project import PARAMS_ARG_HELP\n+from kedro.framework.cli.utils import ENV_HELP, _split_params\n+\n logger = logging.getLogger(__name__)\n default_project_path = Path.cwd()\n \n@@ -84,12 +87,46 @@\n \n \n def load_ipython_extension(ipython):\n- \"\"\"Main entry point when %load_ext is executed\"\"\"\n+ \"\"\"\n+ Main entry point when %load_ext is executed.\n+ IPython will look for this function specifically.\n+ See https://ipython.readthedocs.io/en/stable/config/extensions/index.html\n \n- global default_project_path\n+ This function is called when users do `%load_ext kedro.extras.extensions.ipython`.\n+ When user use `kedro jupyter notebook` or `jupyter ipython`, this extension is\n+ loaded automatically.\n+ \"\"\"\n+ from IPython.core.magic_arguments import argument, magic_arguments, parse_argstring\n+\n+ @magic_arguments()\n+ @argument(\n+ \"path\",\n+ type=str,\n+ help=(\n+ \"Path to the project root directory. If not given, use the previously set\"\n+ \"project root.\"\n+ ),\n+ nargs=\"?\",\n+ default=None,\n+ )\n+ @argument(\"-e\", \"--env\", type=str, default=None, help=ENV_HELP)\n+ @argument(\n+ \"--params\",\n+ type=lambda value: _split_params(None, None, value),\n+ default=None,\n+ help=PARAMS_ARG_HELP,\n+ )\n+ def magic_reload_kedro(line: str):\n+ \"\"\"\n+ The `%reload_kedro` IPython line magic. See\n+ https://kedro.readthedocs.io/en/stable/tools_integration/ipython.html for more.\n+ \"\"\"\n+ args = parse_argstring(magic_reload_kedro, line)\n+ reload_kedro(args.path, args.env, args.params)\n \n- ipython.register_magic_function(reload_kedro, \"line\", \"reload_kedro\")\n+ global default_project_path\n \n+ ipython.register_magic_function(magic_reload_kedro, magic_name=\"reload_kedro\")\n default_project_path = _find_kedro_project(Path.cwd())\n \n if default_project_path is None:\n", "issue": "Add parameters to `%reload_kedro` line magic \n## Description\r\n\r\nCurrently you cannot pass things like `env` or `extra_params` via the line magic, but you can by importing the function.\r\n\r\nhttps://github.com/kedro-org/kedro/blob/5ae97cfb70e5b0d4490132847977d482f13c840f/kedro/extras/extensions/ipython.py#L38\r\n\r\nWhy don't we introduce feature parity here? 
\n", "before_files": [{"content": "# pylint: disable=import-outside-toplevel,global-statement,invalid-name,too-many-locals\n\"\"\"\nThis script creates an IPython extension to load Kedro-related variables in\nlocal scope.\n\"\"\"\nimport logging\nimport sys\nfrom pathlib import Path\nfrom typing import Any, Dict\n\nlogger = logging.getLogger(__name__)\ndefault_project_path = Path.cwd()\n\n\ndef _remove_cached_modules(package_name):\n to_remove = [mod for mod in sys.modules if mod.startswith(package_name)]\n # `del` is used instead of `reload()` because: If the new version of a module does not\n # define a name that was defined by the old version, the old definition remains.\n for module in to_remove:\n del sys.modules[module] # pragma: no cover\n\n\ndef _find_kedro_project(current_dir: Path): # pragma: no cover\n from kedro.framework.startup import _is_project\n\n while current_dir != current_dir.parent:\n if _is_project(current_dir):\n return current_dir\n current_dir = current_dir.parent\n\n return None\n\n\ndef reload_kedro(\n path: str = None, env: str = None, extra_params: Dict[str, Any] = None\n):\n \"\"\"Line magic which reloads all Kedro default variables.\n Setting the path will also make it default for subsequent calls.\n \"\"\"\n from IPython import get_ipython\n from IPython.core.magic import needs_local_scope, register_line_magic\n\n from kedro.framework.cli import load_entry_points\n from kedro.framework.project import LOGGING # noqa # pylint:disable=unused-import\n from kedro.framework.project import configure_project, pipelines\n from kedro.framework.session import KedroSession\n from kedro.framework.startup import bootstrap_project\n\n # If a path is provided, set it as default for subsequent calls\n global default_project_path\n if path:\n default_project_path = Path(path).expanduser().resolve()\n logger.info(\"Updated path to Kedro project: %s\", default_project_path)\n else:\n logger.info(\"No path argument was provided. Using: %s\", default_project_path)\n\n metadata = bootstrap_project(default_project_path)\n _remove_cached_modules(metadata.package_name)\n configure_project(metadata.package_name)\n\n session = KedroSession.create(\n metadata.package_name, default_project_path, env=env, extra_params=extra_params\n )\n context = session.load_context()\n catalog = context.catalog\n\n get_ipython().push(\n variables={\n \"context\": context,\n \"catalog\": catalog,\n \"session\": session,\n \"pipelines\": pipelines,\n }\n )\n\n logger.info(\"Kedro project %s\", str(metadata.project_name))\n logger.info(\n \"Defined global variable 'context', 'session', 'catalog' and 'pipelines'\"\n )\n\n for line_magic in load_entry_points(\"line_magic\"):\n register_line_magic(needs_local_scope(line_magic))\n logger.info(\"Registered line magic '%s'\", line_magic.__name__) # type: ignore\n\n\ndef load_ipython_extension(ipython):\n \"\"\"Main entry point when %load_ext is executed\"\"\"\n\n global default_project_path\n\n ipython.register_magic_function(reload_kedro, \"line\", \"reload_kedro\")\n\n default_project_path = _find_kedro_project(Path.cwd())\n\n if default_project_path is None:\n logger.warning(\n \"Kedro extension was registered but couldn't find a Kedro project. 
\"\n \"Make sure you run '%reload_kedro <project_root>'.\"\n )\n return\n\n reload_kedro(default_project_path)\n", "path": "kedro/extras/extensions/ipython.py"}], "after_files": [{"content": "# pylint: disable=import-outside-toplevel,global-statement,invalid-name,too-many-locals\n\"\"\"\nThis script creates an IPython extension to load Kedro-related variables in\nlocal scope.\n\"\"\"\nimport logging\nimport sys\nfrom pathlib import Path\nfrom typing import Any, Dict\n\nfrom kedro.framework.cli.project import PARAMS_ARG_HELP\nfrom kedro.framework.cli.utils import ENV_HELP, _split_params\n\nlogger = logging.getLogger(__name__)\ndefault_project_path = Path.cwd()\n\n\ndef _remove_cached_modules(package_name):\n to_remove = [mod for mod in sys.modules if mod.startswith(package_name)]\n # `del` is used instead of `reload()` because: If the new version of a module does not\n # define a name that was defined by the old version, the old definition remains.\n for module in to_remove:\n del sys.modules[module] # pragma: no cover\n\n\ndef _find_kedro_project(current_dir: Path): # pragma: no cover\n from kedro.framework.startup import _is_project\n\n while current_dir != current_dir.parent:\n if _is_project(current_dir):\n return current_dir\n current_dir = current_dir.parent\n\n return None\n\n\ndef reload_kedro(\n path: str = None, env: str = None, extra_params: Dict[str, Any] = None\n):\n \"\"\"Line magic which reloads all Kedro default variables.\n Setting the path will also make it default for subsequent calls.\n \"\"\"\n from IPython import get_ipython\n from IPython.core.magic import needs_local_scope, register_line_magic\n\n from kedro.framework.cli import load_entry_points\n from kedro.framework.project import LOGGING # noqa # pylint:disable=unused-import\n from kedro.framework.project import configure_project, pipelines\n from kedro.framework.session import KedroSession\n from kedro.framework.startup import bootstrap_project\n\n # If a path is provided, set it as default for subsequent calls\n global default_project_path\n if path:\n default_project_path = Path(path).expanduser().resolve()\n logger.info(\"Updated path to Kedro project: %s\", default_project_path)\n else:\n logger.info(\"No path argument was provided. 
Using: %s\", default_project_path)\n\n metadata = bootstrap_project(default_project_path)\n _remove_cached_modules(metadata.package_name)\n configure_project(metadata.package_name)\n\n session = KedroSession.create(\n metadata.package_name, default_project_path, env=env, extra_params=extra_params\n )\n context = session.load_context()\n catalog = context.catalog\n\n get_ipython().push(\n variables={\n \"context\": context,\n \"catalog\": catalog,\n \"session\": session,\n \"pipelines\": pipelines,\n }\n )\n\n logger.info(\"Kedro project %s\", str(metadata.project_name))\n logger.info(\n \"Defined global variable 'context', 'session', 'catalog' and 'pipelines'\"\n )\n\n for line_magic in load_entry_points(\"line_magic\"):\n register_line_magic(needs_local_scope(line_magic))\n logger.info(\"Registered line magic '%s'\", line_magic.__name__) # type: ignore\n\n\ndef load_ipython_extension(ipython):\n \"\"\"\n Main entry point when %load_ext is executed.\n IPython will look for this function specifically.\n See https://ipython.readthedocs.io/en/stable/config/extensions/index.html\n\n This function is called when users do `%load_ext kedro.extras.extensions.ipython`.\n When user use `kedro jupyter notebook` or `jupyter ipython`, this extension is\n loaded automatically.\n \"\"\"\n from IPython.core.magic_arguments import argument, magic_arguments, parse_argstring\n\n @magic_arguments()\n @argument(\n \"path\",\n type=str,\n help=(\n \"Path to the project root directory. If not given, use the previously set\"\n \"project root.\"\n ),\n nargs=\"?\",\n default=None,\n )\n @argument(\"-e\", \"--env\", type=str, default=None, help=ENV_HELP)\n @argument(\n \"--params\",\n type=lambda value: _split_params(None, None, value),\n default=None,\n help=PARAMS_ARG_HELP,\n )\n def magic_reload_kedro(line: str):\n \"\"\"\n The `%reload_kedro` IPython line magic. See\n https://kedro.readthedocs.io/en/stable/tools_integration/ipython.html for more.\n \"\"\"\n args = parse_argstring(magic_reload_kedro, line)\n reload_kedro(args.path, args.env, args.params)\n\n global default_project_path\n\n ipython.register_magic_function(magic_reload_kedro, magic_name=\"reload_kedro\")\n default_project_path = _find_kedro_project(Path.cwd())\n\n if default_project_path is None:\n logger.warning(\n \"Kedro extension was registered but couldn't find a Kedro project. \"\n \"Make sure you run '%reload_kedro <project_root>'.\"\n )\n return\n\n reload_kedro(default_project_path)\n", "path": "kedro/extras/extensions/ipython.py"}]} | 1,378 | 569 |
gh_patches_debug_7545 | rasdani/github-patches | git_diff | deeppavlov__DeepPavlov-861 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Python 3.7.0 support
DeepPavlov has scikit-learn version fixed to v0.19.1, but its c-extensions build fails on python 3.7.0 (at least on macOS), please see [scikit-learn issue](https://github.com/scikit-learn/scikit-learn/issues/11320).
This issue has been fixed in scikit-learn v0.19.2 release, so you have to up at least minor version to enable python 3.7.0 support.
I can try python 3.7.0 compatibility of other packages and prepare a pull-request, if you want.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `deeppavlov/__init__.py`
Content:
```
1 # Copyright 2017 Neural Networks and Deep Learning lab, MIPT
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import sys
16 from pathlib import Path
17
18 from .core.common.log import init_logger
19
20 try:
21 from .configs import configs
22 # noinspection PyUnresolvedReferences
23 from .core.commands.infer import build_model
24 # noinspection PyUnresolvedReferences
25 from .core.commands.train import train_evaluate_model_from_config
26 from .download import deep_download
27 from .core.common.chainer import Chainer
28
29 # TODO: make better
30 def train_model(config: [str, Path, dict], download: bool = False, recursive: bool = False) -> Chainer:
31 train_evaluate_model_from_config(config, download=download, recursive=recursive)
32 return build_model(config, load_trained=True)
33
34 def evaluate_model(config: [str, Path, dict], download: bool = False, recursive: bool = False) -> dict:
35 return train_evaluate_model_from_config(config, to_train=False, download=download, recursive=recursive)
36
37 except ImportError:
38 'Assuming that requirements are not yet installed'
39
40 __version__ = '0.4.0'
41 __author__ = 'Neural Networks and Deep Learning lab, MIPT'
42 __description__ = 'An open source library for building end-to-end dialog systems and training chatbots.'
43 __keywords__ = ['NLP', 'NER', 'SQUAD', 'Intents', 'Chatbot']
44 __license__ = 'Apache License, Version 2.0'
45 __email__ = '[email protected]'
46
47 # check version
48 assert sys.hexversion >= 0x3060000, 'Does not work in python3.5 or lower'
49
50 # resolve conflicts with previous DeepPavlov installations versioned up to 0.0.9
51 dot_dp_path = Path('~/.deeppavlov').expanduser().resolve()
52 if dot_dp_path.is_file():
53 dot_dp_path.unlink()
54
55 # initiate logging
56 init_logger()
57
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/deeppavlov/__init__.py b/deeppavlov/__init__.py
--- a/deeppavlov/__init__.py
+++ b/deeppavlov/__init__.py
@@ -37,7 +37,7 @@
except ImportError:
'Assuming that requirements are not yet installed'
-__version__ = '0.4.0'
+__version__ = '0.5.0'
__author__ = 'Neural Networks and Deep Learning lab, MIPT'
__description__ = 'An open source library for building end-to-end dialog systems and training chatbots.'
__keywords__ = ['NLP', 'NER', 'SQUAD', 'Intents', 'Chatbot']
| {"golden_diff": "diff --git a/deeppavlov/__init__.py b/deeppavlov/__init__.py\n--- a/deeppavlov/__init__.py\n+++ b/deeppavlov/__init__.py\n@@ -37,7 +37,7 @@\n except ImportError:\n 'Assuming that requirements are not yet installed'\n \n-__version__ = '0.4.0'\n+__version__ = '0.5.0'\n __author__ = 'Neural Networks and Deep Learning lab, MIPT'\n __description__ = 'An open source library for building end-to-end dialog systems and training chatbots.'\n __keywords__ = ['NLP', 'NER', 'SQUAD', 'Intents', 'Chatbot']\n", "issue": "Python 3.7.0 support\nDeepPavlov has scikit-learn version fixed to v0.19.1, but its c-extensions build fails on python 3.7.0 (at least on macOS), please see [scikit-learn issue](https://github.com/scikit-learn/scikit-learn/issues/11320).\r\n\r\nThis issue has been fixed in scikit-learn v0.19.2 release, so you have to up at least minor version to enable python 3.7.0 support.\r\n\r\nI can try python 3.7.0 compatibility of other packages and prepare a pull-request, if you want.\n", "before_files": [{"content": "# Copyright 2017 Neural Networks and Deep Learning lab, MIPT\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport sys\nfrom pathlib import Path\n\nfrom .core.common.log import init_logger\n\ntry:\n from .configs import configs\n # noinspection PyUnresolvedReferences\n from .core.commands.infer import build_model\n # noinspection PyUnresolvedReferences\n from .core.commands.train import train_evaluate_model_from_config\n from .download import deep_download\n from .core.common.chainer import Chainer\n\n # TODO: make better\n def train_model(config: [str, Path, dict], download: bool = False, recursive: bool = False) -> Chainer:\n train_evaluate_model_from_config(config, download=download, recursive=recursive)\n return build_model(config, load_trained=True)\n\n def evaluate_model(config: [str, Path, dict], download: bool = False, recursive: bool = False) -> dict:\n return train_evaluate_model_from_config(config, to_train=False, download=download, recursive=recursive)\n\nexcept ImportError:\n 'Assuming that requirements are not yet installed'\n\n__version__ = '0.4.0'\n__author__ = 'Neural Networks and Deep Learning lab, MIPT'\n__description__ = 'An open source library for building end-to-end dialog systems and training chatbots.'\n__keywords__ = ['NLP', 'NER', 'SQUAD', 'Intents', 'Chatbot']\n__license__ = 'Apache License, Version 2.0'\n__email__ = '[email protected]'\n\n# check version\nassert sys.hexversion >= 0x3060000, 'Does not work in python3.5 or lower'\n\n# resolve conflicts with previous DeepPavlov installations versioned up to 0.0.9\ndot_dp_path = Path('~/.deeppavlov').expanduser().resolve()\nif dot_dp_path.is_file():\n dot_dp_path.unlink()\n\n# initiate logging\ninit_logger()\n", "path": "deeppavlov/__init__.py"}], "after_files": [{"content": "# Copyright 2017 Neural Networks and Deep Learning lab, MIPT\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may 
obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport sys\nfrom pathlib import Path\n\nfrom .core.common.log import init_logger\n\ntry:\n from .configs import configs\n # noinspection PyUnresolvedReferences\n from .core.commands.infer import build_model\n # noinspection PyUnresolvedReferences\n from .core.commands.train import train_evaluate_model_from_config\n from .download import deep_download\n from .core.common.chainer import Chainer\n\n # TODO: make better\n def train_model(config: [str, Path, dict], download: bool = False, recursive: bool = False) -> Chainer:\n train_evaluate_model_from_config(config, download=download, recursive=recursive)\n return build_model(config, load_trained=True)\n\n def evaluate_model(config: [str, Path, dict], download: bool = False, recursive: bool = False) -> dict:\n return train_evaluate_model_from_config(config, to_train=False, download=download, recursive=recursive)\n\nexcept ImportError:\n 'Assuming that requirements are not yet installed'\n\n__version__ = '0.5.0'\n__author__ = 'Neural Networks and Deep Learning lab, MIPT'\n__description__ = 'An open source library for building end-to-end dialog systems and training chatbots.'\n__keywords__ = ['NLP', 'NER', 'SQUAD', 'Intents', 'Chatbot']\n__license__ = 'Apache License, Version 2.0'\n__email__ = '[email protected]'\n\n# check version\nassert sys.hexversion >= 0x3060000, 'Does not work in python3.5 or lower'\n\n# resolve conflicts with previous DeepPavlov installations versioned up to 0.0.9\ndot_dp_path = Path('~/.deeppavlov').expanduser().resolve()\nif dot_dp_path.is_file():\n dot_dp_path.unlink()\n\n# initiate logging\ninit_logger()\n", "path": "deeppavlov/__init__.py"}]} | 1,061 | 160 |
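One caveat on the DeepPavlov record above: the golden diff only bumps `__version__` to 0.5.0; the Python 3.7 enablement the issue asks for presumably lands in the project's requirements files (relaxing the scikit-learn pin to >=0.19.2), which are not shown here. The module's own interpreter guard, extracted as a runnable snippet:

```python
# The module's existing interpreter guard, extracted as a runnable snippet;
# 0x3060000 is a sys.hexversion threshold just below the Python 3.6.0 release.
import sys

assert sys.hexversion >= 0x3060000, "Does not work in python3.5 or lower"
print("running Python {}.{}.{}".format(*sys.version_info[:3]))
```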