problem_id (stringlengths 18-22) | source (stringclasses 1, value) | task_type (stringclasses 1, value) | in_source_id (stringlengths 13-58) | prompt (stringlengths 1.1k-10.2k) | golden_diff (stringlengths 151-4.94k) | verification_info (stringlengths 582-21k) | num_tokens (int64, 271-2.05k) | num_tokens_diff (int64, 47-1.02k) |
---|---|---|---|---|---|---|---|---|
gh_patches_debug_14398 | rasdani/github-patches | git_diff | akvo__akvo-rsr-3803 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
/related_project/ issue
Attempted to POST the following data:
{"relation":"1","project":"8012","related_iati_id":"23232"}
The response is 400:
`related_project: ["This field is required."]`
However, we want to allow a selection of external project:
<img width="1440" alt="Screenshot 2019-08-16 10 16 11" src="https://user-images.githubusercontent.com/1336477/63150230-effe3400-c00e-11e9-8aff-dc162686751a.png">
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `akvo/rest/serializers/related_project.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 # Akvo RSR is covered by the GNU Affero General Public License.
4 # See more details in the license.txt file located at the root folder of the Akvo RSR module.
5 # For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.
6
7
8 from akvo.rsr.models import RelatedProject
9
10 from .rsr_serializer import BaseRSRSerializer
11
12 from rest_framework import serializers
13
14
15 class RelatedProjectRawSerializer(BaseRSRSerializer):
16
17 class Meta:
18 model = RelatedProject
19 fields = '__all__'
20
21
22 class RelatedProjectSerializer(RelatedProjectRawSerializer):
23
24 related_project_name = serializers.ReadOnlyField(source='related_project.title')
25 related_project_show_link = serializers.ReadOnlyField()
26 relation_label = serializers.ReadOnlyField(source='iati_relation_unicode')
27
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/akvo/rest/serializers/related_project.py b/akvo/rest/serializers/related_project.py
--- a/akvo/rest/serializers/related_project.py
+++ b/akvo/rest/serializers/related_project.py
@@ -5,7 +5,7 @@
# For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.
-from akvo.rsr.models import RelatedProject
+from akvo.rsr.models import RelatedProject, Project
from .rsr_serializer import BaseRSRSerializer
@@ -14,6 +14,9 @@
class RelatedProjectRawSerializer(BaseRSRSerializer):
+ related_project = serializers.PrimaryKeyRelatedField(
+ allow_null=True, queryset=Project.objects.all(), required=False, default=None)
+
class Meta:
model = RelatedProject
fields = '__all__'
| {"golden_diff": "diff --git a/akvo/rest/serializers/related_project.py b/akvo/rest/serializers/related_project.py\n--- a/akvo/rest/serializers/related_project.py\n+++ b/akvo/rest/serializers/related_project.py\n@@ -5,7 +5,7 @@\n # For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.\n \n \n-from akvo.rsr.models import RelatedProject\n+from akvo.rsr.models import RelatedProject, Project\n \n from .rsr_serializer import BaseRSRSerializer\n \n@@ -14,6 +14,9 @@\n \n class RelatedProjectRawSerializer(BaseRSRSerializer):\n \n+ related_project = serializers.PrimaryKeyRelatedField(\n+ allow_null=True, queryset=Project.objects.all(), required=False, default=None)\n+\n class Meta:\n model = RelatedProject\n fields = '__all__'\n", "issue": "/related_project/ issue\nAttempted to POST the following data:\r\n{\"relation\":\"1\",\"project\":\"8012\",\"related_iati_id\":\"23232\"}\r\n\r\nThe response is 400:\r\n`related_project: [\"This field is required.\"]`\r\n\r\nHowever, we want to allow a selection of external project:\r\n<img width=\"1440\" alt=\"Screenshot 2019-08-16 10 16 11\" src=\"https://user-images.githubusercontent.com/1336477/63150230-effe3400-c00e-11e9-8aff-dc162686751a.png\">\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Akvo RSR is covered by the GNU Affero General Public License.\n# See more details in the license.txt file located at the root folder of the Akvo RSR module.\n# For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.\n\n\nfrom akvo.rsr.models import RelatedProject\n\nfrom .rsr_serializer import BaseRSRSerializer\n\nfrom rest_framework import serializers\n\n\nclass RelatedProjectRawSerializer(BaseRSRSerializer):\n\n class Meta:\n model = RelatedProject\n fields = '__all__'\n\n\nclass RelatedProjectSerializer(RelatedProjectRawSerializer):\n\n related_project_name = serializers.ReadOnlyField(source='related_project.title')\n related_project_show_link = serializers.ReadOnlyField()\n relation_label = serializers.ReadOnlyField(source='iati_relation_unicode')\n", "path": "akvo/rest/serializers/related_project.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Akvo RSR is covered by the GNU Affero General Public License.\n# See more details in the license.txt file located at the root folder of the Akvo RSR module.\n# For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.\n\n\nfrom akvo.rsr.models import RelatedProject, Project\n\nfrom .rsr_serializer import BaseRSRSerializer\n\nfrom rest_framework import serializers\n\n\nclass RelatedProjectRawSerializer(BaseRSRSerializer):\n\n related_project = serializers.PrimaryKeyRelatedField(\n allow_null=True, queryset=Project.objects.all(), required=False, default=None)\n\n class Meta:\n model = RelatedProject\n fields = '__all__'\n\n\nclass RelatedProjectSerializer(RelatedProjectRawSerializer):\n\n related_project_name = serializers.ReadOnlyField(source='related_project.title')\n related_project_show_link = serializers.ReadOnlyField()\n relation_label = serializers.ReadOnlyField(source='iati_relation_unicode')\n", "path": "akvo/rest/serializers/related_project.py"}]} | 651 | 193 |
gh_patches_debug_15838 | rasdani/github-patches | git_diff | wagtail__wagtail-4730 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
AppRegistryNotReady("Apps aren't loaded yet.")
### After updating wagtail from 2.1 to 2.2 I get an AppRegistryNotReady error when migrating.
Removing **from .sitemap_generator import Sitemap**
from **wagtail/contrib/sitemaps/__init__.py**
fixed the problem.
Maybe it has something to do with the User model; I use a custom one.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `wagtail/contrib/sitemaps/sitemap_generator.py`
Content:
```
1 import warnings
2
3 from django.contrib.sitemaps import Sitemap as DjangoSitemap
4
5 from wagtail.core.models import Site
6 from wagtail.core.utils import accepts_kwarg
7 from wagtail.utils.deprecation import RemovedInWagtail24Warning
8
9
10 class Sitemap(DjangoSitemap):
11
12 def __init__(self, request=None):
13 self.request = request
14
15 def location(self, obj):
16 return obj.get_full_url(self.request)
17
18 def lastmod(self, obj):
19 # fall back on latest_revision_created_at if last_published_at is null
20 # (for backwards compatibility from before last_published_at was added)
21 return (obj.last_published_at or obj.latest_revision_created_at)
22
23 def get_wagtail_site(self):
24 site = getattr(self.request, 'site', None)
25 if site is None:
26 return Site.objects.select_related(
27 'root_page'
28 ).get(is_default_site=True)
29 return site
30
31 def items(self):
32 return (
33 self.get_wagtail_site()
34 .root_page
35 .get_descendants(inclusive=True)
36 .live()
37 .public()
38 .order_by('path')
39 .specific())
40
41 def _urls(self, page, protocol, domain):
42 urls = []
43 last_mods = set()
44
45 for item in self.paginator.page(page).object_list:
46
47 if not accepts_kwarg(item.get_sitemap_urls, 'request'):
48 warnings.warn(
49 "%s.get_sitemap_urls() must be updated to accept an optional "
50 "'request' keyword argument" % type(item).__name__,
51 category=RemovedInWagtail24Warning)
52
53 url_info_items = item.get_sitemap_urls()
54 else:
55 url_info_items = item.get_sitemap_urls(self.request)
56
57 for url_info in url_info_items:
58 urls.append(url_info)
59 last_mods.add(url_info.get('lastmod'))
60
61 # last_mods might be empty if the whole site is private
62 if last_mods and None not in last_mods:
63 self.latest_lastmod = max(last_mods)
64 return urls
65
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/wagtail/contrib/sitemaps/sitemap_generator.py b/wagtail/contrib/sitemaps/sitemap_generator.py
--- a/wagtail/contrib/sitemaps/sitemap_generator.py
+++ b/wagtail/contrib/sitemaps/sitemap_generator.py
@@ -2,7 +2,6 @@
from django.contrib.sitemaps import Sitemap as DjangoSitemap
-from wagtail.core.models import Site
from wagtail.core.utils import accepts_kwarg
from wagtail.utils.deprecation import RemovedInWagtail24Warning
@@ -23,6 +22,7 @@
def get_wagtail_site(self):
site = getattr(self.request, 'site', None)
if site is None:
+ from wagtail.core.models import Site
return Site.objects.select_related(
'root_page'
).get(is_default_site=True)
| {"golden_diff": "diff --git a/wagtail/contrib/sitemaps/sitemap_generator.py b/wagtail/contrib/sitemaps/sitemap_generator.py\n--- a/wagtail/contrib/sitemaps/sitemap_generator.py\n+++ b/wagtail/contrib/sitemaps/sitemap_generator.py\n@@ -2,7 +2,6 @@\n \n from django.contrib.sitemaps import Sitemap as DjangoSitemap\n \n-from wagtail.core.models import Site\n from wagtail.core.utils import accepts_kwarg\n from wagtail.utils.deprecation import RemovedInWagtail24Warning\n \n@@ -23,6 +22,7 @@\n def get_wagtail_site(self):\n site = getattr(self.request, 'site', None)\n if site is None:\n+ from wagtail.core.models import Site\n return Site.objects.select_related(\n 'root_page'\n ).get(is_default_site=True)\n", "issue": "AppRegistryNotReady(\"Apps aren't loaded yet.\")\n### After updating wagtail from 2.1 to 2.2 I get an AppRegistryNotReady error when migrating.\r\n\r\nRemoving **from .sitemap_generator import Sitemap**\r\nfrom **wagtail/contrib/sitemaps/__init__.py**\r\nfixed the problem.\r\n\r\nMaybe it has something to do with the User model; I use a custom one.\n", "before_files": [{"content": "import warnings\n\nfrom django.contrib.sitemaps import Sitemap as DjangoSitemap\n\nfrom wagtail.core.models import Site\nfrom wagtail.core.utils import accepts_kwarg\nfrom wagtail.utils.deprecation import RemovedInWagtail24Warning\n\n\nclass Sitemap(DjangoSitemap):\n\n def __init__(self, request=None):\n self.request = request\n\n def location(self, obj):\n return obj.get_full_url(self.request)\n\n def lastmod(self, obj):\n # fall back on latest_revision_created_at if last_published_at is null\n # (for backwards compatibility from before last_published_at was added)\n return (obj.last_published_at or obj.latest_revision_created_at)\n\n def get_wagtail_site(self):\n site = getattr(self.request, 'site', None)\n if site is None:\n return Site.objects.select_related(\n 'root_page'\n ).get(is_default_site=True)\n return site\n\n def items(self):\n return (\n self.get_wagtail_site()\n .root_page\n .get_descendants(inclusive=True)\n .live()\n .public()\n .order_by('path')\n .specific())\n\n def _urls(self, page, protocol, domain):\n urls = []\n last_mods = set()\n\n for item in self.paginator.page(page).object_list:\n\n if not accepts_kwarg(item.get_sitemap_urls, 'request'):\n warnings.warn(\n \"%s.get_sitemap_urls() must be updated to accept an optional \"\n \"'request' keyword argument\" % type(item).__name__,\n category=RemovedInWagtail24Warning)\n\n url_info_items = item.get_sitemap_urls()\n else:\n url_info_items = item.get_sitemap_urls(self.request)\n\n for url_info in url_info_items:\n urls.append(url_info)\n last_mods.add(url_info.get('lastmod'))\n\n # last_mods might be empty if the whole site is private\n if last_mods and None not in last_mods:\n self.latest_lastmod = max(last_mods)\n return urls\n", "path": "wagtail/contrib/sitemaps/sitemap_generator.py"}], "after_files": [{"content": "import warnings\n\nfrom django.contrib.sitemaps import Sitemap as DjangoSitemap\n\nfrom wagtail.core.utils import accepts_kwarg\nfrom wagtail.utils.deprecation import RemovedInWagtail24Warning\n\n\nclass Sitemap(DjangoSitemap):\n\n def __init__(self, request=None):\n self.request = request\n\n def location(self, obj):\n return obj.get_full_url(self.request)\n\n def lastmod(self, obj):\n # fall back on latest_revision_created_at if last_published_at is null\n # (for backwards compatibility from before last_published_at was added)\n return (obj.last_published_at or obj.latest_revision_created_at)\n\n def 
get_wagtail_site(self):\n site = getattr(self.request, 'site', None)\n if site is None:\n from wagtail.core.models import Site\n return Site.objects.select_related(\n 'root_page'\n ).get(is_default_site=True)\n return site\n\n def items(self):\n return (\n self.get_wagtail_site()\n .root_page\n .get_descendants(inclusive=True)\n .live()\n .public()\n .order_by('path')\n .specific())\n\n def _urls(self, page, protocol, domain):\n urls = []\n last_mods = set()\n\n for item in self.paginator.page(page).object_list:\n\n if not accepts_kwarg(item.get_sitemap_urls, 'request'):\n warnings.warn(\n \"%s.get_sitemap_urls() must be updated to accept an optional \"\n \"'request' keyword argument\" % type(item).__name__,\n category=RemovedInWagtail24Warning)\n\n url_info_items = item.get_sitemap_urls()\n else:\n url_info_items = item.get_sitemap_urls(self.request)\n\n for url_info in url_info_items:\n urls.append(url_info)\n last_mods.add(url_info.get('lastmod'))\n\n # last_mods might be empty if the whole site is private\n if last_mods and None not in last_mods:\n self.latest_lastmod = max(last_mods)\n return urls\n", "path": "wagtail/contrib/sitemaps/sitemap_generator.py"}]} | 920 | 188 |
gh_patches_debug_14105 | rasdani/github-patches | git_diff | interactions-py__interactions.py-1169 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[BUG] Cannot pop from Cache during dispatch.
### Describe the bug.
When running functions leading to delete events, for example `channel.purge` the cache encounters a KeyError and kills the whole Process.
This is caused by the implementation of #482
### List the steps.
n/A
### What you expected.
n/A
### What you saw.
```
Websocket have raised an exception, closing.
Traceback (most recent call last):
File "C:\Users\\Desktop\PycharmProjects\library\interactions\client\bot.py", line 440, in _login
await self._websocket.run()
File "C:\Users\\Desktop\PycharmProjects\library\interactions\api\gateway\client.py", line 279, in run
await self._handle_stream(msg)
File "C:\Users\\Desktop\PycharmProjects\library\interactions\api\gateway\client.py", line 332, in _handle_stream
self._dispatch_event(event, data)
File "C:\Users\\Desktop\PycharmProjects\library\interactions\api\gateway\client.py", line 562, in _dispatch_event
_message_cache.pop(message_id)
File "C:\Users\\Desktop\PycharmProjects\library\interactions\api\cache.py", line 131, in pop
return self.values.pop(key, default)
File "C:\Users\\Desktop\PycharmProjects\library\interactions\utils\dict_caches.py", line 39, in __getitem__
self.move_to_end(key)
KeyError: Snowflake(1040316644695756912)
Process finished with exit code 0
```
### What version of the library did you use?
unstable
### Version specification
The unstable unstable version
### Code of Conduct
- [X] I agree to follow the contribution requirements.
[BUG] Cannot pop from Cache during dispatch.
### Describe the bug.
When running functions leading to delete events, for example `channel.purge` the cache encounters a KeyError and kills the whole Process.
This is caused by the implementation of #482
### List the steps.
n/A
### What you expected.
n/A
### What you saw.
```
Websocket have raised an exception, closing.
Traceback (most recent call last):
File "C:\Users\\Desktop\PycharmProjects\library\interactions\client\bot.py", line 440, in _login
await self._websocket.run()
File "C:\Users\\Desktop\PycharmProjects\library\interactions\api\gateway\client.py", line 279, in run
await self._handle_stream(msg)
File "C:\Users\\Desktop\PycharmProjects\library\interactions\api\gateway\client.py", line 332, in _handle_stream
self._dispatch_event(event, data)
File "C:\Users\\Desktop\PycharmProjects\library\interactions\api\gateway\client.py", line 562, in _dispatch_event
_message_cache.pop(message_id)
File "C:\Users\\Desktop\PycharmProjects\library\interactions\api\cache.py", line 131, in pop
return self.values.pop(key, default)
File "C:\Users\\Desktop\PycharmProjects\library\interactions\utils\dict_caches.py", line 39, in __getitem__
self.move_to_end(key)
KeyError: Snowflake(1040316644695756912)
Process finished with exit code 0
```
### What version of the library did you use?
unstable
### Version specification
The unstable unstable version
### Code of Conduct
- [X] I agree to follow the contribution requirements.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `interactions/utils/dict_caches.py`
Content:
```
1 from collections import OrderedDict
2 from typing import Generic, TypeVar
3
4 __all__ = ("FIFODict", "LRUDict")
5
6 _KT = TypeVar("_KT")
7 _VT = TypeVar("_VT")
8
9
10 class FIFODict(OrderedDict, Generic[_KT, _VT]):
11 """A dictionary that removes the old keys if over the item limit"""
12
13 def __init__(self, *args, max_items: int = float("inf"), **kwargs):
14 if max_items < 0:
15 raise RuntimeError("You cannot set max_items to negative numbers.")
16
17 super().__init__(*args, **kwargs)
18 self._max_items = max_items
19
20 def __setitem__(self, key: _KT, value: _VT):
21 super().__setitem__(key, value)
22
23 # Prevent buildup over time
24 while len(self) > self._max_items:
25 del self[next(iter(self))]
26
27
28 class LRUDict(OrderedDict, Generic[_KT, _VT]):
29 """A dictionary that removes the value that was the least recently used if over the item limit"""
30
31 def __init__(self, *args, max_items: int = float("inf"), **kwargs):
32 if max_items < 0:
33 raise RuntimeError("You cannot set max_items to negative numbers.")
34
35 super().__init__(*args, **kwargs)
36 self._max_items = max_items
37
38 def __getitem__(self, key: _KT) -> _VT:
39 self.move_to_end(key)
40 return super().__getitem__(key)
41
42 def __setitem__(self, key: _KT, value: _VT):
43 super().__setitem__(key, value)
44
45 # Prevent buildup over time
46 while len(self) > self._max_items:
47 del self[next(iter(self))]
48
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/interactions/utils/dict_caches.py b/interactions/utils/dict_caches.py
--- a/interactions/utils/dict_caches.py
+++ b/interactions/utils/dict_caches.py
@@ -1,6 +1,8 @@
from collections import OrderedDict
from typing import Generic, TypeVar
+from .missing import MISSING
+
__all__ = ("FIFODict", "LRUDict")
_KT = TypeVar("_KT")
@@ -45,3 +47,14 @@
# Prevent buildup over time
while len(self) > self._max_items:
del self[next(iter(self))]
+
+ __marker = object()
+
+ def pop(self, key: _KT, default: _VT = __marker) -> _VT:
+ if key in self:
+ result = self[key]
+ del self[key]
+ return result
+ if default is MISSING:
+ raise KeyError(key)
+ return default
| {"golden_diff": "diff --git a/interactions/utils/dict_caches.py b/interactions/utils/dict_caches.py\n--- a/interactions/utils/dict_caches.py\n+++ b/interactions/utils/dict_caches.py\n@@ -1,6 +1,8 @@\n from collections import OrderedDict\n from typing import Generic, TypeVar\n \n+from .missing import MISSING\n+\n __all__ = (\"FIFODict\", \"LRUDict\")\n \n _KT = TypeVar(\"_KT\")\n@@ -45,3 +47,14 @@\n # Prevent buildup over time\n while len(self) > self._max_items:\n del self[next(iter(self))]\n+\n+ __marker = object()\n+\n+ def pop(self, key: _KT, default: _VT = __marker) -> _VT:\n+ if key in self:\n+ result = self[key]\n+ del self[key]\n+ return result\n+ if default is MISSING:\n+ raise KeyError(key)\n+ return default\n", "issue": "[BUG] Cannot pop from Cache during dispatch.\n### Describe the bug.\r\n\r\nWhen running functions leading to delete events, for example `channel.purge` the cache encounters a KeyError and kills the whole Process.\r\n\r\nThis is caused by the implementation of #482\r\n\r\n### List the steps.\r\n\r\nn/A\r\n\r\n### What you expected.\r\n\r\nn/A\r\n\r\n### What you saw.\r\n```\r\nWebsocket have raised an exception, closing.\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\\\Desktop\\PycharmProjects\\library\\interactions\\client\\bot.py\", line 440, in _login\r\n await self._websocket.run()\r\n File \"C:\\Users\\\\Desktop\\PycharmProjects\\library\\interactions\\api\\gateway\\client.py\", line 279, in run\r\n await self._handle_stream(msg)\r\n File \"C:\\Users\\\\Desktop\\PycharmProjects\\library\\interactions\\api\\gateway\\client.py\", line 332, in _handle_stream\r\n self._dispatch_event(event, data)\r\n File \"C:\\Users\\\\Desktop\\PycharmProjects\\library\\interactions\\api\\gateway\\client.py\", line 562, in _dispatch_event\r\n _message_cache.pop(message_id)\r\n File \"C:\\Users\\\\Desktop\\PycharmProjects\\library\\interactions\\api\\cache.py\", line 131, in pop\r\n return self.values.pop(key, default)\r\n File \"C:\\Users\\\\Desktop\\PycharmProjects\\library\\interactions\\utils\\dict_caches.py\", line 39, in __getitem__\r\n self.move_to_end(key)\r\nKeyError: Snowflake(1040316644695756912)\r\n\r\nProcess finished with exit code 0\r\n```\r\n\r\n### What version of the library did you use?\r\n\r\nunstable\r\n\r\n### Version specification\r\n\r\nThe unstable unstable version\r\n\r\n### Code of Conduct\r\n\r\n- [X] I agree to follow the contribution requirements.\n[BUG] Cannot pop from Cache during dispatch.\n### Describe the bug.\r\n\r\nWhen running functions leading to delete events, for example `channel.purge` the cache encounters a KeyError and kills the whole Process.\r\n\r\nThis is caused by the implementation of #482\r\n\r\n### List the steps.\r\n\r\nn/A\r\n\r\n### What you expected.\r\n\r\nn/A\r\n\r\n### What you saw.\r\n```\r\nWebsocket have raised an exception, closing.\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\\\Desktop\\PycharmProjects\\library\\interactions\\client\\bot.py\", line 440, in _login\r\n await self._websocket.run()\r\n File \"C:\\Users\\\\Desktop\\PycharmProjects\\library\\interactions\\api\\gateway\\client.py\", line 279, in run\r\n await self._handle_stream(msg)\r\n File \"C:\\Users\\\\Desktop\\PycharmProjects\\library\\interactions\\api\\gateway\\client.py\", line 332, in _handle_stream\r\n self._dispatch_event(event, data)\r\n File \"C:\\Users\\\\Desktop\\PycharmProjects\\library\\interactions\\api\\gateway\\client.py\", line 562, in _dispatch_event\r\n _message_cache.pop(message_id)\r\n File 
\"C:\\Users\\\\Desktop\\PycharmProjects\\library\\interactions\\api\\cache.py\", line 131, in pop\r\n return self.values.pop(key, default)\r\n File \"C:\\Users\\\\Desktop\\PycharmProjects\\library\\interactions\\utils\\dict_caches.py\", line 39, in __getitem__\r\n self.move_to_end(key)\r\nKeyError: Snowflake(1040316644695756912)\r\n\r\nProcess finished with exit code 0\r\n```\r\n\r\n### What version of the library did you use?\r\n\r\nunstable\r\n\r\n### Version specification\r\n\r\nThe unstable unstable version\r\n\r\n### Code of Conduct\r\n\r\n- [X] I agree to follow the contribution requirements.\n", "before_files": [{"content": "from collections import OrderedDict\nfrom typing import Generic, TypeVar\n\n__all__ = (\"FIFODict\", \"LRUDict\")\n\n_KT = TypeVar(\"_KT\")\n_VT = TypeVar(\"_VT\")\n\n\nclass FIFODict(OrderedDict, Generic[_KT, _VT]):\n \"\"\"A dictionary that removes the old keys if over the item limit\"\"\"\n\n def __init__(self, *args, max_items: int = float(\"inf\"), **kwargs):\n if max_items < 0:\n raise RuntimeError(\"You cannot set max_items to negative numbers.\")\n\n super().__init__(*args, **kwargs)\n self._max_items = max_items\n\n def __setitem__(self, key: _KT, value: _VT):\n super().__setitem__(key, value)\n\n # Prevent buildup over time\n while len(self) > self._max_items:\n del self[next(iter(self))]\n\n\nclass LRUDict(OrderedDict, Generic[_KT, _VT]):\n \"\"\"A dictionary that removes the value that was the least recently used if over the item limit\"\"\"\n\n def __init__(self, *args, max_items: int = float(\"inf\"), **kwargs):\n if max_items < 0:\n raise RuntimeError(\"You cannot set max_items to negative numbers.\")\n\n super().__init__(*args, **kwargs)\n self._max_items = max_items\n\n def __getitem__(self, key: _KT) -> _VT:\n self.move_to_end(key)\n return super().__getitem__(key)\n\n def __setitem__(self, key: _KT, value: _VT):\n super().__setitem__(key, value)\n\n # Prevent buildup over time\n while len(self) > self._max_items:\n del self[next(iter(self))]\n", "path": "interactions/utils/dict_caches.py"}], "after_files": [{"content": "from collections import OrderedDict\nfrom typing import Generic, TypeVar\n\nfrom .missing import MISSING\n\n__all__ = (\"FIFODict\", \"LRUDict\")\n\n_KT = TypeVar(\"_KT\")\n_VT = TypeVar(\"_VT\")\n\n\nclass FIFODict(OrderedDict, Generic[_KT, _VT]):\n \"\"\"A dictionary that removes the old keys if over the item limit\"\"\"\n\n def __init__(self, *args, max_items: int = float(\"inf\"), **kwargs):\n if max_items < 0:\n raise RuntimeError(\"You cannot set max_items to negative numbers.\")\n\n super().__init__(*args, **kwargs)\n self._max_items = max_items\n\n def __setitem__(self, key: _KT, value: _VT):\n super().__setitem__(key, value)\n\n # Prevent buildup over time\n while len(self) > self._max_items:\n del self[next(iter(self))]\n\n\nclass LRUDict(OrderedDict, Generic[_KT, _VT]):\n \"\"\"A dictionary that removes the value that was the least recently used if over the item limit\"\"\"\n\n def __init__(self, *args, max_items: int = float(\"inf\"), **kwargs):\n if max_items < 0:\n raise RuntimeError(\"You cannot set max_items to negative numbers.\")\n\n super().__init__(*args, **kwargs)\n self._max_items = max_items\n\n def __getitem__(self, key: _KT) -> _VT:\n self.move_to_end(key)\n return super().__getitem__(key)\n\n def __setitem__(self, key: _KT, value: _VT):\n super().__setitem__(key, value)\n\n # Prevent buildup over time\n while len(self) > self._max_items:\n del self[next(iter(self))]\n\n __marker = object()\n\n def 
pop(self, key: _KT, default: _VT = __marker) -> _VT:\n if key in self:\n result = self[key]\n del self[key]\n return result\n if default is MISSING:\n raise KeyError(key)\n return default\n", "path": "interactions/utils/dict_caches.py"}]} | 1,579 | 216 |
gh_patches_debug_9077 | rasdani/github-patches | git_diff | keras-team__keras-nlp-760 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Simplify docstring example of `TokenAndPositionEmbedding`
The current [docstring example](https://github.com/keras-team/keras-nlp/blob/3a951a9f5442b77a0c474bb8b61406f65c7ef924/keras_nlp/layers/token_and_position_embedding.py#L45) of `TokenAndPositionEmbedding` is a bit more complicated than needed and should better match newer examples like those in [`keras_nlp.models.BertBackbone`](https://github.com/keras-team/keras-nlp/blob/3a951a9f5442b77a0c474bb8b61406f65c7ef924/keras_nlp/models/bert/bert_backbone.py#L67).
Make the following changes to the docstring example:
* Create a real input of the correct shape using `tf.ones`
* Inline constant values like `"seq_length"` directly into the function calls
Be sure to check that your example runs on colab or your local machine!
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `keras_nlp/layers/token_and_position_embedding.py`
Content:
```
1 # Copyright 2023 The KerasNLP Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # https://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Creates an Embedding Layer and adds Positional Embeddings"""
16
17 from tensorflow import keras
18
19 import keras_nlp.layers
20 from keras_nlp.utils.keras_utils import clone_initializer
21
22
23 @keras.utils.register_keras_serializable(package="keras_nlp")
24 class TokenAndPositionEmbedding(keras.layers.Layer):
25 """A layer which sums a token and position embedding.
26
27 Token and position embeddings are ways of representing words and their order
28 in a sentence. This layer creates a `keras.layers.Embedding` token embedding
29 and a `keras_nlp.layers.PositionEmbedding` position embedding and sums their
30 output when called. This layer assumes that the last dimension in the input
31 corresponds to the sequence dimension.
32
33 Args:
34 vocabulary_size: The size of the vocabulary.
35 sequence_length: The maximum length of input sequence
36 embedding_dim: The output dimension of the embedding layer
37 embeddings_initializer: The initializer to use for the Embedding
38 Layers
39 mask_zero: Boolean, whether or not the input value 0 is a special
40 "padding" value that should be masked out.
41 This is useful when using recurrent layers which may take variable
42 length input. If this is True, then all subsequent layers in the
43 model need to support masking or an exception will be raised.
44 If mask_zero` is set to True, as a consequence, index 0 cannot be
45 used in the vocabulary
46 (input_dim should equal size of vocabulary + 1).
47
48 Examples:
49 ```python
50 seq_length = 50
51 vocab_size = 5000
52 embed_dim = 128
53 inputs = keras.Input(shape=(seq_length,))
54 embedding_layer = keras_nlp.layers.TokenAndPositionEmbedding(
55 vocabulary_size=vocab_size,
56 sequence_length=seq_length,
57 embedding_dim=embed_dim,
58 )
59 outputs = embedding_layer(inputs)
60 ```
61 """
62
63 def __init__(
64 self,
65 vocabulary_size,
66 sequence_length,
67 embedding_dim,
68 embeddings_initializer="glorot_uniform",
69 mask_zero=False,
70 **kwargs
71 ):
72 super().__init__(**kwargs)
73 if vocabulary_size is None:
74 raise ValueError(
75 "`vocabulary_size` must be an Integer, received `None`."
76 )
77 if sequence_length is None:
78 raise ValueError(
79 "`sequence_length` must be an Integer, received `None`."
80 )
81 if embedding_dim is None:
82 raise ValueError(
83 "`embedding_dim` must be an Integer, received `None`."
84 )
85 self.vocabulary_size = int(vocabulary_size)
86 self.sequence_length = int(sequence_length)
87 self.embedding_dim = int(embedding_dim)
88 self.embeddings_initializer = keras.initializers.get(
89 embeddings_initializer
90 )
91 self.token_embedding = keras.layers.Embedding(
92 vocabulary_size,
93 embedding_dim,
94 embeddings_initializer=clone_initializer(
95 self.embeddings_initializer
96 ),
97 mask_zero=mask_zero,
98 name="token_embedding"
99 + str(keras.backend.get_uid("token_embedding")),
100 )
101 self.position_embedding = keras_nlp.layers.PositionEmbedding(
102 sequence_length=sequence_length,
103 initializer=clone_initializer(self.embeddings_initializer),
104 name="position_embedding"
105 + str(keras.backend.get_uid("position_embedding")),
106 )
107 self.supports_masking = self.token_embedding.supports_masking
108
109 def get_config(self):
110 config = super().get_config()
111 config.update(
112 {
113 "vocabulary_size": self.vocabulary_size,
114 "sequence_length": self.sequence_length,
115 "embedding_dim": self.embedding_dim,
116 "embeddings_initializer": keras.initializers.serialize(
117 self.embeddings_initializer
118 ),
119 "mask_zero": self.token_embedding.mask_zero,
120 },
121 )
122 return config
123
124 def call(self, inputs):
125 embedded_tokens = self.token_embedding(inputs)
126 embedded_positions = self.position_embedding(embedded_tokens)
127 outputs = embedded_tokens + embedded_positions
128 return outputs
129
130 def compute_mask(self, inputs, mask=None):
131 return self.token_embedding.compute_mask(inputs, mask=mask)
132
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/keras_nlp/layers/token_and_position_embedding.py b/keras_nlp/layers/token_and_position_embedding.py
--- a/keras_nlp/layers/token_and_position_embedding.py
+++ b/keras_nlp/layers/token_and_position_embedding.py
@@ -47,14 +47,11 @@
Examples:
```python
- seq_length = 50
- vocab_size = 5000
- embed_dim = 128
- inputs = keras.Input(shape=(seq_length,))
+ inputs = tf.ones(shape=(1, 50), dtype=tf.int64)
embedding_layer = keras_nlp.layers.TokenAndPositionEmbedding(
- vocabulary_size=vocab_size,
- sequence_length=seq_length,
- embedding_dim=embed_dim,
+ vocabulary_size=10_000,
+ sequence_length=50,
+ embedding_dim=128,
)
outputs = embedding_layer(inputs)
```
| {"golden_diff": "diff --git a/keras_nlp/layers/token_and_position_embedding.py b/keras_nlp/layers/token_and_position_embedding.py\n--- a/keras_nlp/layers/token_and_position_embedding.py\n+++ b/keras_nlp/layers/token_and_position_embedding.py\n@@ -47,14 +47,11 @@\n \n Examples:\n ```python\n- seq_length = 50\n- vocab_size = 5000\n- embed_dim = 128\n- inputs = keras.Input(shape=(seq_length,))\n+ inputs = tf.ones(shape=(1, 50), dtype=tf.int64)\n embedding_layer = keras_nlp.layers.TokenAndPositionEmbedding(\n- vocabulary_size=vocab_size,\n- sequence_length=seq_length,\n- embedding_dim=embed_dim,\n+ vocabulary_size=10_000,\n+ sequence_length=50,\n+ embedding_dim=128,\n )\n outputs = embedding_layer(inputs)\n ```\n", "issue": "Simplify docstring example of `TokenAndPositionEmbedding`\nThe current [docstring example](https://github.com/keras-team/keras-nlp/blob/3a951a9f5442b77a0c474bb8b61406f65c7ef924/keras_nlp/layers/token_and_position_embedding.py#L45) of `TokenAndPositionEmbedding` is a bit more complicated than needed and should better match newer examples like those in [`keras_nlp.models.BertBackbone`](https://github.com/keras-team/keras-nlp/blob/3a951a9f5442b77a0c474bb8b61406f65c7ef924/keras_nlp/models/bert/bert_backbone.py#L67).\r\n\r\nMake the following changes to the docstring example:\r\n* Create a real input of the correct shape using `tf.ones`\r\n* Inline constant values like `\"seq_length\"` directly into the function calls\r\n\r\nBe sure to check that your example runs on colab or your local machine!\n", "before_files": [{"content": "# Copyright 2023 The KerasNLP Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Creates an Embedding Layer and adds Positional Embeddings\"\"\"\n\nfrom tensorflow import keras\n\nimport keras_nlp.layers\nfrom keras_nlp.utils.keras_utils import clone_initializer\n\n\[email protected]_keras_serializable(package=\"keras_nlp\")\nclass TokenAndPositionEmbedding(keras.layers.Layer):\n \"\"\"A layer which sums a token and position embedding.\n\n Token and position embeddings are ways of representing words and their order\n in a sentence. This layer creates a `keras.layers.Embedding` token embedding\n and a `keras_nlp.layers.PositionEmbedding` position embedding and sums their\n output when called. This layer assumes that the last dimension in the input\n corresponds to the sequence dimension.\n\n Args:\n vocabulary_size: The size of the vocabulary.\n sequence_length: The maximum length of input sequence\n embedding_dim: The output dimension of the embedding layer\n embeddings_initializer: The initializer to use for the Embedding\n Layers\n mask_zero: Boolean, whether or not the input value 0 is a special\n \"padding\" value that should be masked out.\n This is useful when using recurrent layers which may take variable\n length input. 
If this is True, then all subsequent layers in the\n model need to support masking or an exception will be raised.\n If mask_zero` is set to True, as a consequence, index 0 cannot be\n used in the vocabulary\n (input_dim should equal size of vocabulary + 1).\n\n Examples:\n ```python\n seq_length = 50\n vocab_size = 5000\n embed_dim = 128\n inputs = keras.Input(shape=(seq_length,))\n embedding_layer = keras_nlp.layers.TokenAndPositionEmbedding(\n vocabulary_size=vocab_size,\n sequence_length=seq_length,\n embedding_dim=embed_dim,\n )\n outputs = embedding_layer(inputs)\n ```\n \"\"\"\n\n def __init__(\n self,\n vocabulary_size,\n sequence_length,\n embedding_dim,\n embeddings_initializer=\"glorot_uniform\",\n mask_zero=False,\n **kwargs\n ):\n super().__init__(**kwargs)\n if vocabulary_size is None:\n raise ValueError(\n \"`vocabulary_size` must be an Integer, received `None`.\"\n )\n if sequence_length is None:\n raise ValueError(\n \"`sequence_length` must be an Integer, received `None`.\"\n )\n if embedding_dim is None:\n raise ValueError(\n \"`embedding_dim` must be an Integer, received `None`.\"\n )\n self.vocabulary_size = int(vocabulary_size)\n self.sequence_length = int(sequence_length)\n self.embedding_dim = int(embedding_dim)\n self.embeddings_initializer = keras.initializers.get(\n embeddings_initializer\n )\n self.token_embedding = keras.layers.Embedding(\n vocabulary_size,\n embedding_dim,\n embeddings_initializer=clone_initializer(\n self.embeddings_initializer\n ),\n mask_zero=mask_zero,\n name=\"token_embedding\"\n + str(keras.backend.get_uid(\"token_embedding\")),\n )\n self.position_embedding = keras_nlp.layers.PositionEmbedding(\n sequence_length=sequence_length,\n initializer=clone_initializer(self.embeddings_initializer),\n name=\"position_embedding\"\n + str(keras.backend.get_uid(\"position_embedding\")),\n )\n self.supports_masking = self.token_embedding.supports_masking\n\n def get_config(self):\n config = super().get_config()\n config.update(\n {\n \"vocabulary_size\": self.vocabulary_size,\n \"sequence_length\": self.sequence_length,\n \"embedding_dim\": self.embedding_dim,\n \"embeddings_initializer\": keras.initializers.serialize(\n self.embeddings_initializer\n ),\n \"mask_zero\": self.token_embedding.mask_zero,\n },\n )\n return config\n\n def call(self, inputs):\n embedded_tokens = self.token_embedding(inputs)\n embedded_positions = self.position_embedding(embedded_tokens)\n outputs = embedded_tokens + embedded_positions\n return outputs\n\n def compute_mask(self, inputs, mask=None):\n return self.token_embedding.compute_mask(inputs, mask=mask)\n", "path": "keras_nlp/layers/token_and_position_embedding.py"}], "after_files": [{"content": "# Copyright 2023 The KerasNLP Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Creates an Embedding Layer and adds Positional Embeddings\"\"\"\n\nfrom tensorflow import keras\n\nimport keras_nlp.layers\nfrom keras_nlp.utils.keras_utils import clone_initializer\n\n\[email 
protected]_keras_serializable(package=\"keras_nlp\")\nclass TokenAndPositionEmbedding(keras.layers.Layer):\n \"\"\"A layer which sums a token and position embedding.\n\n Token and position embeddings are ways of representing words and their order\n in a sentence. This layer creates a `keras.layers.Embedding` token embedding\n and a `keras_nlp.layers.PositionEmbedding` position embedding and sums their\n output when called. This layer assumes that the last dimension in the input\n corresponds to the sequence dimension.\n\n Args:\n vocabulary_size: The size of the vocabulary.\n sequence_length: The maximum length of input sequence\n embedding_dim: The output dimension of the embedding layer\n embeddings_initializer: The initializer to use for the Embedding\n Layers\n mask_zero: Boolean, whether or not the input value 0 is a special\n \"padding\" value that should be masked out.\n This is useful when using recurrent layers which may take variable\n length input. If this is True, then all subsequent layers in the\n model need to support masking or an exception will be raised.\n If mask_zero` is set to True, as a consequence, index 0 cannot be\n used in the vocabulary\n (input_dim should equal size of vocabulary + 1).\n\n Examples:\n ```python\n inputs = tf.ones(shape=(1, 50), dtype=tf.int64)\n embedding_layer = keras_nlp.layers.TokenAndPositionEmbedding(\n vocabulary_size=10_000,\n sequence_length=50,\n embedding_dim=128,\n )\n outputs = embedding_layer(inputs)\n ```\n \"\"\"\n\n def __init__(\n self,\n vocabulary_size,\n sequence_length,\n embedding_dim,\n embeddings_initializer=\"glorot_uniform\",\n mask_zero=False,\n **kwargs\n ):\n super().__init__(**kwargs)\n if vocabulary_size is None:\n raise ValueError(\n \"`vocabulary_size` must be an Integer, received `None`.\"\n )\n if sequence_length is None:\n raise ValueError(\n \"`sequence_length` must be an Integer, received `None`.\"\n )\n if embedding_dim is None:\n raise ValueError(\n \"`embedding_dim` must be an Integer, received `None`.\"\n )\n self.vocabulary_size = int(vocabulary_size)\n self.sequence_length = int(sequence_length)\n self.embedding_dim = int(embedding_dim)\n self.embeddings_initializer = keras.initializers.get(\n embeddings_initializer\n )\n self.token_embedding = keras.layers.Embedding(\n vocabulary_size,\n embedding_dim,\n embeddings_initializer=clone_initializer(\n self.embeddings_initializer\n ),\n mask_zero=mask_zero,\n name=\"token_embedding\"\n + str(keras.backend.get_uid(\"token_embedding\")),\n )\n self.position_embedding = keras_nlp.layers.PositionEmbedding(\n sequence_length=sequence_length,\n initializer=clone_initializer(self.embeddings_initializer),\n name=\"position_embedding\"\n + str(keras.backend.get_uid(\"position_embedding\")),\n )\n self.supports_masking = self.token_embedding.supports_masking\n\n def get_config(self):\n config = super().get_config()\n config.update(\n {\n \"vocabulary_size\": self.vocabulary_size,\n \"sequence_length\": self.sequence_length,\n \"embedding_dim\": self.embedding_dim,\n \"embeddings_initializer\": keras.initializers.serialize(\n self.embeddings_initializer\n ),\n \"mask_zero\": self.token_embedding.mask_zero,\n },\n )\n return config\n\n def call(self, inputs):\n embedded_tokens = self.token_embedding(inputs)\n embedded_positions = self.position_embedding(embedded_tokens)\n outputs = embedded_tokens + embedded_positions\n return outputs\n\n def compute_mask(self, inputs, mask=None):\n return self.token_embedding.compute_mask(inputs, mask=mask)\n", "path": 
"keras_nlp/layers/token_and_position_embedding.py"}]} | 1,810 | 221 |
gh_patches_debug_21734 | rasdani/github-patches | git_diff | marshmallow-code__webargs-99 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Refactor tests
The tests can be DRY'd up significantly. I suggest the following:
- [ ] Implement the same HTTP API across all the supported frameworks
- [ ] Use webtest to test endpoints; create a base test class that tests common functionality
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `tasks.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 import os
3 import sys
4 import webbrowser
5
6 from invoke import task, run
7
8 docs_dir = 'docs'
9 build_dir = os.path.join(docs_dir, '_build')
10
11 @task
12 def test(coverage=False, browse=False):
13 flake()
14 import pytest
15 args = []
16 if coverage:
17 args.extend(['--cov=webargs', '--cov-report=term', '--cov-report=html'])
18
19 if sys.version_info < (3, 4, 1):
20 args.append('--ignore={0}'.format(os.path.join('tests', 'test_aiohttp')))
21 retcode = pytest.main(args)
22 if coverage and browse:
23 webbrowser.open_new_tab(os.path.join('htmlcov', 'index.html'))
24 sys.exit(retcode)
25
26 @task
27 def flake():
28 """Run flake8 on codebase."""
29 cmd = 'flake8 .'
30 if sys.version_info < (3, 4, 1):
31 excludes = [
32 os.path.join('tests', 'test_aiohttp'),
33 os.path.join('webargs', 'async.py'),
34 os.path.join('webargs', 'aiohttpparser.py'),
35 os.path.join('examples', 'annotations_example.py'),
36 'build',
37 ]
38 cmd += ' --exclude={0}'.format(','.join(excludes))
39 run(cmd, echo=True)
40
41 @task
42 def clean():
43 run("rm -rf build")
44 run("rm -rf dist")
45 run("rm -rf webargs.egg-info")
46 clean_docs()
47 print("Cleaned up.")
48
49 @task
50 def readme(browse=False):
51 run('rst2html.py README.rst > README.html')
52 if browse:
53 webbrowser.open_new_tab('README.html')
54
55 @task
56 def clean_docs():
57 run("rm -rf %s" % build_dir)
58
59 @task
60 def browse_docs():
61 path = os.path.join(build_dir, 'index.html')
62 webbrowser.open_new_tab(path)
63
64 @task
65 def docs(clean=False, browse=False, watch=False):
66 """Build the docs."""
67 if clean:
68 clean_docs()
69 run("sphinx-build %s %s" % (docs_dir, build_dir), echo=True)
70 if browse:
71 browse_docs()
72 if watch:
73 watch_docs()
74
75 @task
76 def watch_docs():
77 """Run build the docs when a file changes."""
78 try:
79 import sphinx_autobuild # noqa
80 except ImportError:
81 print('ERROR: watch task requires the sphinx_autobuild package.')
82 print('Install it with:')
83 print(' pip install sphinx-autobuild')
84 sys.exit(1)
85 run('sphinx-autobuild {0} {1} --watch {2}'.format(
86 docs_dir, build_dir, 'webargs'), echo=True, pty=True)
87
88 @task
89 def publish(test=False):
90 """Publish to the cheeseshop."""
91 clean()
92 if test:
93 run('python setup.py register -r test sdist bdist_wheel', echo=True)
94 run('twine upload dist/* -r test', echo=True)
95 else:
96 run('python setup.py register sdist bdist_wheel', echo=True)
97 run('twine upload dist/*', echo=True)
98
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/tasks.py b/tasks.py
--- a/tasks.py
+++ b/tasks.py
@@ -17,7 +17,7 @@
args.extend(['--cov=webargs', '--cov-report=term', '--cov-report=html'])
if sys.version_info < (3, 4, 1):
- args.append('--ignore={0}'.format(os.path.join('tests', 'test_aiohttp')))
+ args.append('--ignore={0}'.format(os.path.join('tests', 'test_aiohttpparser.py')))
retcode = pytest.main(args)
if coverage and browse:
webbrowser.open_new_tab(os.path.join('htmlcov', 'index.html'))
@@ -29,7 +29,8 @@
cmd = 'flake8 .'
if sys.version_info < (3, 4, 1):
excludes = [
- os.path.join('tests', 'test_aiohttp'),
+ os.path.join('tests', 'apps', 'aiohttp_app.py'),
+ os.path.join('tests', 'test_aiohttparser.py'),
os.path.join('webargs', 'async.py'),
os.path.join('webargs', 'aiohttpparser.py'),
os.path.join('examples', 'annotations_example.py'),
| {"golden_diff": "diff --git a/tasks.py b/tasks.py\n--- a/tasks.py\n+++ b/tasks.py\n@@ -17,7 +17,7 @@\n args.extend(['--cov=webargs', '--cov-report=term', '--cov-report=html'])\n \n if sys.version_info < (3, 4, 1):\n- args.append('--ignore={0}'.format(os.path.join('tests', 'test_aiohttp')))\n+ args.append('--ignore={0}'.format(os.path.join('tests', 'test_aiohttpparser.py')))\n retcode = pytest.main(args)\n if coverage and browse:\n webbrowser.open_new_tab(os.path.join('htmlcov', 'index.html'))\n@@ -29,7 +29,8 @@\n cmd = 'flake8 .'\n if sys.version_info < (3, 4, 1):\n excludes = [\n- os.path.join('tests', 'test_aiohttp'),\n+ os.path.join('tests', 'apps', 'aiohttp_app.py'),\n+ os.path.join('tests', 'test_aiohttparser.py'),\n os.path.join('webargs', 'async.py'),\n os.path.join('webargs', 'aiohttpparser.py'),\n os.path.join('examples', 'annotations_example.py'),\n", "issue": "Refactor tests\nThe tests can be DRY'd up significantly. I suggest the following:\n- [ ] Implement the same HTTP API across all the supported frameworks\n- [ ] Use webtest to test endpoints; create a base test class that tests common functionality\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nimport os\nimport sys\nimport webbrowser\n\nfrom invoke import task, run\n\ndocs_dir = 'docs'\nbuild_dir = os.path.join(docs_dir, '_build')\n\n@task\ndef test(coverage=False, browse=False):\n flake()\n import pytest\n args = []\n if coverage:\n args.extend(['--cov=webargs', '--cov-report=term', '--cov-report=html'])\n\n if sys.version_info < (3, 4, 1):\n args.append('--ignore={0}'.format(os.path.join('tests', 'test_aiohttp')))\n retcode = pytest.main(args)\n if coverage and browse:\n webbrowser.open_new_tab(os.path.join('htmlcov', 'index.html'))\n sys.exit(retcode)\n\n@task\ndef flake():\n \"\"\"Run flake8 on codebase.\"\"\"\n cmd = 'flake8 .'\n if sys.version_info < (3, 4, 1):\n excludes = [\n os.path.join('tests', 'test_aiohttp'),\n os.path.join('webargs', 'async.py'),\n os.path.join('webargs', 'aiohttpparser.py'),\n os.path.join('examples', 'annotations_example.py'),\n 'build',\n ]\n cmd += ' --exclude={0}'.format(','.join(excludes))\n run(cmd, echo=True)\n\n@task\ndef clean():\n run(\"rm -rf build\")\n run(\"rm -rf dist\")\n run(\"rm -rf webargs.egg-info\")\n clean_docs()\n print(\"Cleaned up.\")\n\n@task\ndef readme(browse=False):\n run('rst2html.py README.rst > README.html')\n if browse:\n webbrowser.open_new_tab('README.html')\n\n@task\ndef clean_docs():\n run(\"rm -rf %s\" % build_dir)\n\n@task\ndef browse_docs():\n path = os.path.join(build_dir, 'index.html')\n webbrowser.open_new_tab(path)\n\n@task\ndef docs(clean=False, browse=False, watch=False):\n \"\"\"Build the docs.\"\"\"\n if clean:\n clean_docs()\n run(\"sphinx-build %s %s\" % (docs_dir, build_dir), echo=True)\n if browse:\n browse_docs()\n if watch:\n watch_docs()\n\n@task\ndef watch_docs():\n \"\"\"Run build the docs when a file changes.\"\"\"\n try:\n import sphinx_autobuild # noqa\n except ImportError:\n print('ERROR: watch task requires the sphinx_autobuild package.')\n print('Install it with:')\n print(' pip install sphinx-autobuild')\n sys.exit(1)\n run('sphinx-autobuild {0} {1} --watch {2}'.format(\n docs_dir, build_dir, 'webargs'), echo=True, pty=True)\n\n@task\ndef publish(test=False):\n \"\"\"Publish to the cheeseshop.\"\"\"\n clean()\n if test:\n run('python setup.py register -r test sdist bdist_wheel', echo=True)\n run('twine upload dist/* -r test', echo=True)\n else:\n run('python setup.py register sdist bdist_wheel', echo=True)\n 
run('twine upload dist/*', echo=True)\n", "path": "tasks.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\nimport os\nimport sys\nimport webbrowser\n\nfrom invoke import task, run\n\ndocs_dir = 'docs'\nbuild_dir = os.path.join(docs_dir, '_build')\n\n@task\ndef test(coverage=False, browse=False):\n flake()\n import pytest\n args = []\n if coverage:\n args.extend(['--cov=webargs', '--cov-report=term', '--cov-report=html'])\n\n if sys.version_info < (3, 4, 1):\n args.append('--ignore={0}'.format(os.path.join('tests', 'test_aiohttpparser.py')))\n retcode = pytest.main(args)\n if coverage and browse:\n webbrowser.open_new_tab(os.path.join('htmlcov', 'index.html'))\n sys.exit(retcode)\n\n@task\ndef flake():\n \"\"\"Run flake8 on codebase.\"\"\"\n cmd = 'flake8 .'\n if sys.version_info < (3, 4, 1):\n excludes = [\n os.path.join('tests', 'apps', 'aiohttp_app.py'),\n os.path.join('tests', 'test_aiohttparser.py'),\n os.path.join('webargs', 'async.py'),\n os.path.join('webargs', 'aiohttpparser.py'),\n os.path.join('examples', 'annotations_example.py'),\n 'build',\n ]\n cmd += ' --exclude={0}'.format(','.join(excludes))\n run(cmd, echo=True)\n\n@task\ndef clean():\n run(\"rm -rf build\")\n run(\"rm -rf dist\")\n run(\"rm -rf webargs.egg-info\")\n clean_docs()\n print(\"Cleaned up.\")\n\n@task\ndef readme(browse=False):\n run('rst2html.py README.rst > README.html')\n if browse:\n webbrowser.open_new_tab('README.html')\n\n@task\ndef clean_docs():\n run(\"rm -rf %s\" % build_dir)\n\n@task\ndef browse_docs():\n path = os.path.join(build_dir, 'index.html')\n webbrowser.open_new_tab(path)\n\n@task\ndef docs(clean=False, browse=False, watch=False):\n \"\"\"Build the docs.\"\"\"\n if clean:\n clean_docs()\n run(\"sphinx-build %s %s\" % (docs_dir, build_dir), echo=True)\n if browse:\n browse_docs()\n if watch:\n watch_docs()\n\n@task\ndef watch_docs():\n \"\"\"Run build the docs when a file changes.\"\"\"\n try:\n import sphinx_autobuild # noqa\n except ImportError:\n print('ERROR: watch task requires the sphinx_autobuild package.')\n print('Install it with:')\n print(' pip install sphinx-autobuild')\n sys.exit(1)\n run('sphinx-autobuild {0} {1} --watch {2}'.format(\n docs_dir, build_dir, 'webargs'), echo=True, pty=True)\n\n@task\ndef publish(test=False):\n \"\"\"Publish to the cheeseshop.\"\"\"\n clean()\n if test:\n run('python setup.py register -r test sdist bdist_wheel', echo=True)\n run('twine upload dist/* -r test', echo=True)\n else:\n run('python setup.py register sdist bdist_wheel', echo=True)\n run('twine upload dist/*', echo=True)\n", "path": "tasks.py"}]} | 1,201 | 275 |
gh_patches_debug_12492 | rasdani/github-patches | git_diff | TheAlgorithms__Python-9161 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
issue with permute_recursive
### What would you like to share?
Your code looks mostly correct, but there's one issue in the `permute_recursive` function due to the modification of the `nums` list. Lists in Python are mutable, and when you use `nums.pop(0)`, it modifies the original `nums` list. This can lead to incorrect results and even an infinite loop.
To fix this, you should pass a copy of the `nums` list to the recursive function. Here's the corrected `permute_recursive` function:
```python
def permute_recursive(nums: list[int]) -> list[list[int]]:
"""
Return all permutations.
>>> permute_recursive([1, 2, 3])
[[3, 2, 1], [2, 3, 1], [1, 3, 2], [3, 1, 2], [2, 1, 3], [1, 2, 3]]
"""
result: list[list[int]] = []
if len(nums) == 0:
return [[]]
for _ in range(len(nums)):
n = nums.pop(0)
permutations = permute_recursive(nums[:]) # Make a copy of nums
for perm in permutations:
perm.append(n)
result.extend(permutations)
nums.append(n)
return result
```
With this modification, your code should work correctly for both `permute_recursive` and `permute_backtrack`.
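For illustration only (an editorial sketch, not part of the original report), here is a minimal demo of the aliasing problem the copy avoids — both names refer to the same list object, so mutations made for one recursive branch leak into the others:

```python
# Minimal aliasing demo: `alias` is the very same object as `nums`,
# so popping through one name shrinks the list seen through the other.
nums = [1, 2, 3]
alias = nums              # no copy is made
alias.pop(0)
print(nums)               # [2, 3] -- the "original" shrank too
print(nums is alias)      # True

copied = nums.copy()      # or nums[:]
copied.pop(0)
print(nums)               # [2, 3] -- unchanged by work on the copy
```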
### Additional information
_No response_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `data_structures/arrays/permutations.py`
Content:
```
1 def permute_recursive(nums: list[int]) -> list[list[int]]:
2 """
3 Return all permutations.
4
5 >>> permute_recursive([1, 2, 3])
6 [[3, 2, 1], [2, 3, 1], [1, 3, 2], [3, 1, 2], [2, 1, 3], [1, 2, 3]]
7 """
8 result: list[list[int]] = []
9 if len(nums) == 0:
10 return [[]]
11 for _ in range(len(nums)):
12 n = nums.pop(0)
13 permutations = permute_recursive(nums)
14 for perm in permutations:
15 perm.append(n)
16 result.extend(permutations)
17 nums.append(n)
18 return result
19
20
21 def permute_backtrack(nums: list[int]) -> list[list[int]]:
22 """
23 Return all permutations of the given list.
24
25 >>> permute_backtrack([1, 2, 3])
26 [[1, 2, 3], [1, 3, 2], [2, 1, 3], [2, 3, 1], [3, 2, 1], [3, 1, 2]]
27 """
28
29 def backtrack(start: int) -> None:
30 if start == len(nums) - 1:
31 output.append(nums[:])
32 else:
33 for i in range(start, len(nums)):
34 nums[start], nums[i] = nums[i], nums[start]
35 backtrack(start + 1)
36 nums[start], nums[i] = nums[i], nums[start] # backtrack
37
38 output: list[list[int]] = []
39 backtrack(0)
40 return output
41
42
43 if __name__ == "__main__":
44 import doctest
45
46 res = permute_backtrack([1, 2, 3])
47 print(res)
48 doctest.testmod()
49
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/data_structures/arrays/permutations.py b/data_structures/arrays/permutations.py
--- a/data_structures/arrays/permutations.py
+++ b/data_structures/arrays/permutations.py
@@ -10,7 +10,7 @@
return [[]]
for _ in range(len(nums)):
n = nums.pop(0)
- permutations = permute_recursive(nums)
+ permutations = permute_recursive(nums.copy())
for perm in permutations:
perm.append(n)
result.extend(permutations)
@@ -43,6 +43,6 @@
if __name__ == "__main__":
import doctest
- res = permute_backtrack([1, 2, 3])
- print(res)
+ result = permute_backtrack([1, 2, 3])
+ print(result)
doctest.testmod()
| {"golden_diff": "diff --git a/data_structures/arrays/permutations.py b/data_structures/arrays/permutations.py\n--- a/data_structures/arrays/permutations.py\n+++ b/data_structures/arrays/permutations.py\n@@ -10,7 +10,7 @@\n return [[]]\n for _ in range(len(nums)):\n n = nums.pop(0)\n- permutations = permute_recursive(nums)\n+ permutations = permute_recursive(nums.copy())\n for perm in permutations:\n perm.append(n)\n result.extend(permutations)\n@@ -43,6 +43,6 @@\n if __name__ == \"__main__\":\n import doctest\n \n- res = permute_backtrack([1, 2, 3])\n- print(res)\n+ result = permute_backtrack([1, 2, 3])\n+ print(result)\n doctest.testmod()\n", "issue": "issue with permute_recursive\n### What would you like to share?\n\nYour code looks mostly correct, but there's one issue in the `permute_recursive` function due to the modification of the `nums` list. Lists in Python are mutable, and when you use `nums.pop(0)`, it modifies the original `nums` list. This can lead to incorrect results and even an infinite loop.\r\n\r\nTo fix this, you should pass a copy of the `nums` list to the recursive function. Here's the corrected `permute_recursive` function:\r\n\r\n\r\ndef permute_recursive(nums: list[int]) -> list[list[int]]:\r\n \"\"\"\r\n Return all permutations.\r\n\r\n >>> permute_recursive([1, 2, 3])\r\n [[3, 2, 1], [2, 3, 1], [1, 3, 2], [3, 1, 2], [2, 1, 3], [1, 2, 3]]\r\n \"\"\"\r\n result: list[list[int]] = []\r\n if len(nums) == 0:\r\n return [[]]\r\n for _ in range(len(nums)):\r\n n = nums.pop(0)\r\n permutations = permute_recursive(nums[:]) # Make a copy of nums\r\n for perm in permutations:\r\n perm.append(n)\r\n result.extend(permutations)\r\n nums.append(n)\r\n return result\r\n```\r\n\r\nWith this modification, your code should work correctly for both `permute_recursive` and `permute_backtrack`.\n\n### Additional information\n\n_No response_\n", "before_files": [{"content": "def permute_recursive(nums: list[int]) -> list[list[int]]:\n \"\"\"\n Return all permutations.\n\n >>> permute_recursive([1, 2, 3])\n [[3, 2, 1], [2, 3, 1], [1, 3, 2], [3, 1, 2], [2, 1, 3], [1, 2, 3]]\n \"\"\"\n result: list[list[int]] = []\n if len(nums) == 0:\n return [[]]\n for _ in range(len(nums)):\n n = nums.pop(0)\n permutations = permute_recursive(nums)\n for perm in permutations:\n perm.append(n)\n result.extend(permutations)\n nums.append(n)\n return result\n\n\ndef permute_backtrack(nums: list[int]) -> list[list[int]]:\n \"\"\"\n Return all permutations of the given list.\n\n >>> permute_backtrack([1, 2, 3])\n [[1, 2, 3], [1, 3, 2], [2, 1, 3], [2, 3, 1], [3, 2, 1], [3, 1, 2]]\n \"\"\"\n\n def backtrack(start: int) -> None:\n if start == len(nums) - 1:\n output.append(nums[:])\n else:\n for i in range(start, len(nums)):\n nums[start], nums[i] = nums[i], nums[start]\n backtrack(start + 1)\n nums[start], nums[i] = nums[i], nums[start] # backtrack\n\n output: list[list[int]] = []\n backtrack(0)\n return output\n\n\nif __name__ == \"__main__\":\n import doctest\n\n res = permute_backtrack([1, 2, 3])\n print(res)\n doctest.testmod()\n", "path": "data_structures/arrays/permutations.py"}], "after_files": [{"content": "def permute_recursive(nums: list[int]) -> list[list[int]]:\n \"\"\"\n Return all permutations.\n\n >>> permute_recursive([1, 2, 3])\n [[3, 2, 1], [2, 3, 1], [1, 3, 2], [3, 1, 2], [2, 1, 3], [1, 2, 3]]\n \"\"\"\n result: list[list[int]] = []\n if len(nums) == 0:\n return [[]]\n for _ in range(len(nums)):\n n = nums.pop(0)\n permutations = permute_recursive(nums.copy())\n for perm in permutations:\n 
perm.append(n)\n result.extend(permutations)\n nums.append(n)\n return result\n\n\ndef permute_backtrack(nums: list[int]) -> list[list[int]]:\n \"\"\"\n Return all permutations of the given list.\n\n >>> permute_backtrack([1, 2, 3])\n [[1, 2, 3], [1, 3, 2], [2, 1, 3], [2, 3, 1], [3, 2, 1], [3, 1, 2]]\n \"\"\"\n\n def backtrack(start: int) -> None:\n if start == len(nums) - 1:\n output.append(nums[:])\n else:\n for i in range(start, len(nums)):\n nums[start], nums[i] = nums[i], nums[start]\n backtrack(start + 1)\n nums[start], nums[i] = nums[i], nums[start] # backtrack\n\n output: list[list[int]] = []\n backtrack(0)\n return output\n\n\nif __name__ == \"__main__\":\n import doctest\n\n result = permute_backtrack([1, 2, 3])\n print(result)\n doctest.testmod()\n", "path": "data_structures/arrays/permutations.py"}]} | 1,078 | 187 |
gh_patches_debug_7516 | rasdani/github-patches | git_diff | pre-commit__pre-commit-624 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
commit-msg stage does not work
Everything works as expected when running just `pre-commit install`, then hooks work.
But when running `pre-commit install -t commit-msg`, an `IOError` happens, since the template could not be found.
Here's the detailed information.
## Env
- `python2.7`
- `pipenv 7.3.7`
- `pre-commit 1.1.1`
Actually tested with both `python2` and `python3`.
## Configuration
```yaml
- repo: local
hooks:
- id: gitlint
name: gitlint
entry: "bash -c 'gitlint lint'"
language: system
stages: [commit-msg]
- id: pytest
name: pytest
entry: "bash -c 'python manage.py test'"
language: system
- id: safety
name: safety
entry: "bash -c 'safety check'"
language: system
```
## Output
```
» pre-commit install -t commit-msg
Running in migration mode with existing hooks at /Users/sobolev/Desktop/test/.git/hooks/commit-msg.legacy
Use -f to use only pre-commit.
An unexpected error has occurred: IOError: [Errno 2] No such file or directory: '/Users/sobolev/.virtualenvs/test-p4WySO70/lib/python2.7/site-packages/pre_commit/resources/commit-msg-tmpl'
Check the log at /Users/sobolev/.cache/pre-commit/pre-commit.log
```
When I do `ls /Users/sobolev/.virtualenvs/test-p4WySO70/lib/python2.7/site-packages/pre_commit/resources/commit-msg-tmpl` that's what I see:
```
(test-p4WySO70) ~/Desktop/test master ✗ ✚ 2 ⚡
» ls /Users/sobolev/.virtualenvs/test-p4WySO70/lib/python2.7/site-packages/pre_commit/resources
empty_template pre-push-tmpl ruby-build.tar.gz
hook-tmpl rbenv.tar.gz ruby-download.tar.gz
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 from setuptools import find_packages
2 from setuptools import setup
3
4
5 setup(
6 name='pre_commit',
7 description=(
8 'A framework for managing and maintaining multi-language pre-commit '
9 'hooks.'
10 ),
11 url='https://github.com/pre-commit/pre-commit',
12 version='1.1.1',
13
14 author='Anthony Sottile',
15 author_email='[email protected]',
16
17 platforms='linux',
18 classifiers=[
19 'License :: OSI Approved :: MIT License',
20 'Programming Language :: Python :: 2',
21 'Programming Language :: Python :: 2.7',
22 'Programming Language :: Python :: 3',
23 'Programming Language :: Python :: 3.5',
24 'Programming Language :: Python :: 3.6',
25 'Programming Language :: Python :: Implementation :: CPython',
26 'Programming Language :: Python :: Implementation :: PyPy',
27 ],
28
29 packages=find_packages(exclude=('tests*', 'testing*')),
30 package_data={
31 'pre_commit': [
32 'resources/hook-tmpl',
33 'resources/pre-push-tmpl',
34 'resources/rbenv.tar.gz',
35 'resources/ruby-build.tar.gz',
36 'resources/ruby-download.tar.gz',
37 'resources/empty_template/*',
38 'resources/empty_template/.npmignore',
39 ],
40 },
41 install_requires=[
42 'aspy.yaml',
43 'cached-property',
44 'identify>=1.0.0',
45 'nodeenv>=0.11.1',
46 'pyyaml',
47 'six',
48 'virtualenv',
49 ],
50 entry_points={
51 'console_scripts': [
52 'pre-commit = pre_commit.main:main',
53 'pre-commit-validate-config = pre_commit.clientlib:validate_config_main', # noqa
54 'pre-commit-validate-manifest = pre_commit.clientlib:validate_manifest_main', # noqa
55 ],
56 },
57 )
58
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -29,11 +29,7 @@
packages=find_packages(exclude=('tests*', 'testing*')),
package_data={
'pre_commit': [
- 'resources/hook-tmpl',
- 'resources/pre-push-tmpl',
- 'resources/rbenv.tar.gz',
- 'resources/ruby-build.tar.gz',
- 'resources/ruby-download.tar.gz',
+ 'resources/*',
'resources/empty_template/*',
'resources/empty_template/.npmignore',
],
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -29,11 +29,7 @@\n packages=find_packages(exclude=('tests*', 'testing*')),\n package_data={\n 'pre_commit': [\n- 'resources/hook-tmpl',\n- 'resources/pre-push-tmpl',\n- 'resources/rbenv.tar.gz',\n- 'resources/ruby-build.tar.gz',\n- 'resources/ruby-download.tar.gz',\n+ 'resources/*',\n 'resources/empty_template/*',\n 'resources/empty_template/.npmignore',\n ],\n", "issue": "commit-msg stage does not work\nEverything works as expected when running just `pre-commit install`, then hooks work.\r\nBut when running `pre-commit install -t commit-msg` `IOError` happens, since template could not be found.\r\n\r\nHere's the detailed information.\r\n\r\n## Env\r\n\r\n- `python2.7`\r\n- `pipenv 7.3.7`\r\n- `pre-commit 1.1.1`\r\n\r\nActually tested with both `python2` and `python3`.\r\n\r\n## Configuration\r\n\r\n```yaml\r\n- repo: local\r\n hooks:\r\n - id: gitlint\r\n name: gitlint\r\n entry: \"bash -c 'gitlint lint'\"\r\n language: system\r\n stages: [commit-msg]\r\n\r\n - id: pytest\r\n name: pytest\r\n entry: \"bash -c 'python manage.py test'\"\r\n language: system\r\n\r\n - id: safety\r\n name: safety\r\n entry: \"bash -c 'safety check'\"\r\n language: system\r\n```\r\n\r\n## Output\r\n\r\n```\r\n\u00bb pre-commit install -t commit-msg\r\nRunning in migration mode with existing hooks at /Users/sobolev/Desktop/test/.git/hooks/commit-msg.legacy\r\nUse -f to use only pre-commit.\r\nAn unexpected error has occurred: IOError: [Errno 2] No such file or directory: '/Users/sobolev/.virtualenvs/test-p4WySO70/lib/python2.7/site-packages/pre_commit/resources/commit-msg-tmpl'\r\nCheck the log at /Users/sobolev/.cache/pre-commit/pre-commit.log\r\n```\r\n\r\nWhen I do `ls /Users/sobolev/.virtualenvs/test-p4WySO70/lib/python2.7/site-packages/pre_commit/resources/commit-msg-tmpl` that's what is see:\r\n\r\n```\r\n(test-p4WySO70) ~/Desktop/test master \u2717 \u271a 2 \u26a1\r\n\u00bb ls /Users/sobolev/.virtualenvs/test-p4WySO70/lib/python2.7/site-packages/pre_commit/resources \r\nempty_template pre-push-tmpl ruby-build.tar.gz\r\nhook-tmpl rbenv.tar.gz ruby-download.tar.gz\r\n```\n", "before_files": [{"content": "from setuptools import find_packages\nfrom setuptools import setup\n\n\nsetup(\n name='pre_commit',\n description=(\n 'A framework for managing and maintaining multi-language pre-commit '\n 'hooks.'\n ),\n url='https://github.com/pre-commit/pre-commit',\n version='1.1.1',\n\n author='Anthony Sottile',\n author_email='[email protected]',\n\n platforms='linux',\n classifiers=[\n 'License :: OSI Approved :: MIT License',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: Implementation :: CPython',\n 'Programming Language :: Python :: Implementation :: PyPy',\n ],\n\n packages=find_packages(exclude=('tests*', 'testing*')),\n package_data={\n 'pre_commit': [\n 'resources/hook-tmpl',\n 'resources/pre-push-tmpl',\n 'resources/rbenv.tar.gz',\n 'resources/ruby-build.tar.gz',\n 'resources/ruby-download.tar.gz',\n 'resources/empty_template/*',\n 'resources/empty_template/.npmignore',\n ],\n },\n install_requires=[\n 'aspy.yaml',\n 'cached-property',\n 'identify>=1.0.0',\n 'nodeenv>=0.11.1',\n 'pyyaml',\n 'six',\n 'virtualenv',\n ],\n entry_points={\n 'console_scripts': [\n 'pre-commit = pre_commit.main:main',\n 
'pre-commit-validate-config = pre_commit.clientlib:validate_config_main', # noqa\n 'pre-commit-validate-manifest = pre_commit.clientlib:validate_manifest_main', # noqa\n ],\n },\n)\n", "path": "setup.py"}], "after_files": [{"content": "from setuptools import find_packages\nfrom setuptools import setup\n\n\nsetup(\n name='pre_commit',\n description=(\n 'A framework for managing and maintaining multi-language pre-commit '\n 'hooks.'\n ),\n url='https://github.com/pre-commit/pre-commit',\n version='1.1.1',\n\n author='Anthony Sottile',\n author_email='[email protected]',\n\n platforms='linux',\n classifiers=[\n 'License :: OSI Approved :: MIT License',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: Implementation :: CPython',\n 'Programming Language :: Python :: Implementation :: PyPy',\n ],\n\n packages=find_packages(exclude=('tests*', 'testing*')),\n package_data={\n 'pre_commit': [\n 'resources/*',\n 'resources/empty_template/*',\n 'resources/empty_template/.npmignore',\n ],\n },\n install_requires=[\n 'aspy.yaml',\n 'cached-property',\n 'identify>=1.0.0',\n 'nodeenv>=0.11.1',\n 'pyyaml',\n 'six',\n 'virtualenv',\n ],\n entry_points={\n 'console_scripts': [\n 'pre-commit = pre_commit.main:main',\n 'pre-commit-validate-config = pre_commit.clientlib:validate_config_main', # noqa\n 'pre-commit-validate-manifest = pre_commit.clientlib:validate_manifest_main', # noqa\n ],\n },\n)\n", "path": "setup.py"}]} | 1,223 | 130 |
gh_patches_debug_3561 | rasdani/github-patches | git_diff | deepchecks__deepchecks-1101 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[BUG] some randomness occurring in tabular sample
**Describe the bug**
The results are not the same even when setting random states
**To Reproduce**
Run model error / performance report a couple of times
**Expected behavior**
Same results
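As an editorial sketch (assumption: scikit-learn's iris data stands in for the report's actual setup), reproducing identical runs means pinning every source of randomness explicitly:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

np.random.seed(0)                                    # pin global NumPy randomness
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)                # pin the split
clf = RandomForestClassifier(random_state=0)         # pin the model's own RNG
clf.fit(X_train, y_train)
```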
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docs/source/tutorials/tabular/plot_quickstart_in_5_minutes.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 """
3 Quickstart in 5 minutes
4 ***********************
5 In order to run your first Deepchecks Suite all you need to have is the data
6 and model that you wish to validate. More specifically, you need:
7
8 * Your train and test data (in Pandas DataFrames or Numpy Arrays)
9 * (optional) A `supported model </user-guide/tabular/supported_models.html>`__ (including XGBoost,
10 scikit-learn models, and many more). Required for running checks that need the
11 model's predictions for running.
12
13 To run your first suite on your data and model, you need only a few lines of
14 code, that start here: `Define a Dataset Object <#define-a-dataset-object>`__.
15
16 # If you don't have deepchecks installed yet:
17
18 .. code:: python
19
20 # If you don't have deepchecks installed yet:
21 import sys
22 !{sys.executable} -m pip install deepchecks -U --quiet #--user
23
24 """
25
26 #%%
27 # Load Data, Split Train-Val, and Train a Simple Model
28 # ====================================================
29 # For the purpose of this guide we'll use the simple iris dataset and train a
30 # simple random forest model for multiclass classification:
31
32 # General imports
33 import pandas as pd
34 import numpy as np
35
36 from deepchecks.tabular.datasets.classification import iris
37 from sklearn.ensemble import RandomForestClassifier
38 from sklearn.model_selection import train_test_split
39
40 # Load Data
41 iris_df = iris.load_data(data_format='Dataframe', as_train_test=False)
42 label_col = 'target'
43 df_train, df_test = train_test_split(iris_df, stratify=iris_df[label_col], random_state=0)
44
45 # Train Model
46 rf_clf = RandomForestClassifier()
47 rf_clf.fit(df_train.drop(label_col, axis=1), df_train[label_col]);
48
49 #%%
50 # Define a Dataset Object
51 # =======================
52 # Initialize the Dataset object, stating the relevant metadata about the dataset
53 # (e.g. the name for the label column)
54 #
55 # Check out the Dataset's attributes to see which additional special columns can be
56 # declared and used (e.g. date column, index column).
57
58 from deepchecks.tabular import Dataset
59
60 # We explicitly state that this dataset has no categorical features, otherwise they will be automatically inferred
61 # If the dataset has categorical features, the best practice is to pass a list with their names
62
63 ds_train = Dataset(df_train, label=label_col, cat_features=[])
64 ds_test = Dataset(df_test, label=label_col, cat_features=[])
65
66 #%%
67 # Run a Deepchecks Suite
68 # ======================
69 # Run the full suite
70 # ------------------
71 # Use the ``full_suite`` that is a collection of (most of) the prebuilt checks.
72 #
73 # Check out the `when should you use </getting-started/when_should_you_use.html>`__
74 # deepchecks guide for some more info about the existing suites and when to use them.
75
76 from deepchecks.tabular.suites import full_suite
77
78 suite = full_suite()
79
80 #%%
81
82 suite.run(train_dataset=ds_train, test_dataset=ds_test, model=rf_clf)
83
84 #%%
85 # Run the integrity suite
86 # -----------------------
87 # If you still haven't started modeling and just have a single dataset, you
88 # can use the ``single_dataset_integrity``:
89
90 from deepchecks.tabular.suites import single_dataset_integrity
91
92 integ_suite = single_dataset_integrity()
93 integ_suite.run(ds_train)
94
95 #%%
96 # Run a Deepchecks Check
97 # ======================
98 # If you want to run a specific check, you can just import it and run it directly.
99 #
100 # Check out the `Check tabular examples </examples/index.html>`__ in
101 # the examples or the `API Reference </api/index.html>`__ for more info about the
102 # existing checks and their parameters.
103
104 from deepchecks.tabular.checks import TrainTestLabelDrift
105
106 #%%
107
108 check = TrainTestLabelDrift()
109 result = check.run(ds_train, ds_test)
110 result
111
112 #%%
113 # %%
114 # and also inspect the result value which has a check-dependant structure:
115
116 result.value
117
118 #%%
119 # Edit an Existing Suite
120 # ======================
121 # Inspect suite and remove condition
122 # ----------------------------------
123 # We can see that the single feature contribution failed, both for test and for
124 # train. Since this is a very simple dataset with few features and this behavior
125 # is not necessarily problematic, we will remove the existing conditions for the PPS
126
127 # Lets first print the suite to find the conditions that we want to change:
128
129 suite
130
131 #%%
132
133 # now we can use the check's index and the condition's number to remove it:
134 print(suite[6])
135 suite[6].remove_condition(0)
136
137 #%%
138
139 # print and see that the condition was removed
140 suite[6]
141
142 #%%
143 # If we now re-run the suite, all of the existing conditions will pass.
144 #
145 # *Note: the check we manipulated will still run as part of the Suite, however
146 # it won't appear in the Conditions Summary since it no longer has any
147 # conditions defined on it. You can still see its display results in the
148 # Additional Outputs section*
149 #
150 # **For more info about working with conditions, see the detailed configuring
151 # conditions guide.**
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/docs/source/tutorials/tabular/plot_quickstart_in_5_minutes.py b/docs/source/tutorials/tabular/plot_quickstart_in_5_minutes.py
--- a/docs/source/tutorials/tabular/plot_quickstart_in_5_minutes.py
+++ b/docs/source/tutorials/tabular/plot_quickstart_in_5_minutes.py
@@ -43,7 +43,7 @@
df_train, df_test = train_test_split(iris_df, stratify=iris_df[label_col], random_state=0)
# Train Model
-rf_clf = RandomForestClassifier()
+rf_clf = RandomForestClassifier(random_state=0)
rf_clf.fit(df_train.drop(label_col, axis=1), df_train[label_col]);
#%%
| {"golden_diff": "diff --git a/docs/source/tutorials/tabular/plot_quickstart_in_5_minutes.py b/docs/source/tutorials/tabular/plot_quickstart_in_5_minutes.py\n--- a/docs/source/tutorials/tabular/plot_quickstart_in_5_minutes.py\n+++ b/docs/source/tutorials/tabular/plot_quickstart_in_5_minutes.py\n@@ -43,7 +43,7 @@\n df_train, df_test = train_test_split(iris_df, stratify=iris_df[label_col], random_state=0)\n \n # Train Model\n-rf_clf = RandomForestClassifier()\n+rf_clf = RandomForestClassifier(random_state=0)\n rf_clf.fit(df_train.drop(label_col, axis=1), df_train[label_col]);\n \n #%%\n", "issue": "[BUG] some randomness occurring in tabular sample\n**Describe the bug**\r\nThe results are not the same even when setting random states\r\n\r\n**To Reproduce**\r\nRun model error / performance report a couple of times\r\n\r\n**Expected behavior**\r\nSame results\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\nQuickstart in 5 minutes\n***********************\nIn order to run your first Deepchecks Suite all you need to have is the data\nand model that you wish to validate. More specifically, you need:\n\n* Your train and test data (in Pandas DataFrames or Numpy Arrays)\n* (optional) A `supported model </user-guide/tabular/supported_models.html>`__ (including XGBoost,\n scikit-learn models, and many more). Required for running checks that need the\n model's predictions for running.\n\nTo run your first suite on your data and model, you need only a few lines of\ncode, that start here: `Define a Dataset Object <#define-a-dataset-object>`__.\n\n# If you don't have deepchecks installed yet:\n\n.. code:: python\n\n # If you don't have deepchecks installed yet:\n import sys\n !{sys.executable} -m pip install deepchecks -U --quiet #--user\n\n\"\"\"\n\n#%%\n# Load Data, Split Train-Val, and Train a Simple Model\n# ====================================================\n# For the purpose of this guide we'll use the simple iris dataset and train a\n# simple random forest model for multiclass classification:\n\n# General imports\nimport pandas as pd\nimport numpy as np\n\nfrom deepchecks.tabular.datasets.classification import iris\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.model_selection import train_test_split\n\n# Load Data\niris_df = iris.load_data(data_format='Dataframe', as_train_test=False)\nlabel_col = 'target'\ndf_train, df_test = train_test_split(iris_df, stratify=iris_df[label_col], random_state=0)\n\n# Train Model\nrf_clf = RandomForestClassifier()\nrf_clf.fit(df_train.drop(label_col, axis=1), df_train[label_col]);\n\n#%%\n# Define a Dataset Object\n# =======================\n# Initialize the Dataset object, stating the relevant metadata about the dataset\n# (e.g. the name for the label column)\n#\n# Check out the Dataset's attributes to see which additional special columns can be\n# declared and used (e.g. 
date column, index column).\n\nfrom deepchecks.tabular import Dataset\n\n# We explicitly state that this dataset has no categorical features, otherwise they will be automatically inferred\n# If the dataset has categorical features, the best practice is to pass a list with their names\n\nds_train = Dataset(df_train, label=label_col, cat_features=[])\nds_test = Dataset(df_test, label=label_col, cat_features=[])\n\n#%%\n# Run a Deepchecks Suite\n# ======================\n# Run the full suite\n# ------------------\n# Use the ``full_suite`` that is a collection of (most of) the prebuilt checks.\n#\n# Check out the `when should you use </getting-started/when_should_you_use.html>`__\n# deepchecks guide for some more info about the existing suites and when to use them.\n\nfrom deepchecks.tabular.suites import full_suite\n\nsuite = full_suite()\n\n#%%\n\nsuite.run(train_dataset=ds_train, test_dataset=ds_test, model=rf_clf)\n\n#%%\n# Run the integrity suite\n# -----------------------\n# If you still haven't started modeling and just have a single dataset, you\n# can use the ``single_dataset_integrity``:\n\nfrom deepchecks.tabular.suites import single_dataset_integrity\n\ninteg_suite = single_dataset_integrity()\ninteg_suite.run(ds_train)\n\n#%%\n# Run a Deepchecks Check\n# ======================\n# If you want to run a specific check, you can just import it and run it directly.\n#\n# Check out the `Check tabular examples </examples/index.html>`__ in\n# the examples or the `API Reference </api/index.html>`__ for more info about the\n# existing checks and their parameters.\n\nfrom deepchecks.tabular.checks import TrainTestLabelDrift\n\n#%%\n\ncheck = TrainTestLabelDrift()\nresult = check.run(ds_train, ds_test)\nresult\n\n#%%\n# %%\n# and also inspect the result value which has a check-dependant structure:\n\nresult.value\n\n#%%\n# Edit an Existing Suite\n# ======================\n# Inspect suite and remove condition\n# ----------------------------------\n# We can see that the single feature contribution failed, both for test and for\n# train. Since this is a very simple dataset with few features and this behavior\n# is not necessarily problematic, we will remove the existing conditions for the PPS\n\n# Lets first print the suite to find the conditions that we want to change:\n\nsuite\n\n#%%\n\n# now we can use the check's index and the condition's number to remove it:\nprint(suite[6])\nsuite[6].remove_condition(0)\n\n#%%\n\n# print and see that the condition was removed\nsuite[6]\n\n#%%\n# If we now re-run the suite, all of the existing conditions will pass.\n#\n# *Note: the check we manipulated will still run as part of the Suite, however\n# it won't appear in the Conditions Summary since it no longer has any\n# conditions defined on it. You can still see its display results in the \n# Additional Outputs section*\n#\n# **For more info about working with conditions, see the detailed configuring \n# conditions guide.**", "path": "docs/source/tutorials/tabular/plot_quickstart_in_5_minutes.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\nQuickstart in 5 minutes\n***********************\nIn order to run your first Deepchecks Suite all you need to have is the data\nand model that you wish to validate. More specifically, you need:\n\n* Your train and test data (in Pandas DataFrames or Numpy Arrays)\n* (optional) A `supported model </user-guide/tabular/supported_models.html>`__ (including XGBoost,\n scikit-learn models, and many more). 
Required for running checks that need the\n model's predictions for running.\n\nTo run your first suite on your data and model, you need only a few lines of\ncode, that start here: `Define a Dataset Object <#define-a-dataset-object>`__.\n\n# If you don't have deepchecks installed yet:\n\n.. code:: python\n\n # If you don't have deepchecks installed yet:\n import sys\n !{sys.executable} -m pip install deepchecks -U --quiet #--user\n\n\"\"\"\n\n#%%\n# Load Data, Split Train-Val, and Train a Simple Model\n# ====================================================\n# For the purpose of this guide we'll use the simple iris dataset and train a\n# simple random forest model for multiclass classification:\n\n# General imports\nimport pandas as pd\nimport numpy as np\n\nfrom deepchecks.tabular.datasets.classification import iris\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.model_selection import train_test_split\n\n# Load Data\niris_df = iris.load_data(data_format='Dataframe', as_train_test=False)\nlabel_col = 'target'\ndf_train, df_test = train_test_split(iris_df, stratify=iris_df[label_col], random_state=0)\n\n# Train Model\nrf_clf = RandomForestClassifier(random_state=0)\nrf_clf.fit(df_train.drop(label_col, axis=1), df_train[label_col]);\n\n#%%\n# Define a Dataset Object\n# =======================\n# Initialize the Dataset object, stating the relevant metadata about the dataset\n# (e.g. the name for the label column)\n#\n# Check out the Dataset's attributes to see which additional special columns can be\n# declared and used (e.g. date column, index column).\n\nfrom deepchecks.tabular import Dataset\n\n# We explicitly state that this dataset has no categorical features, otherwise they will be automatically inferred\n# If the dataset has categorical features, the best practice is to pass a list with their names\n\nds_train = Dataset(df_train, label=label_col, cat_features=[])\nds_test = Dataset(df_test, label=label_col, cat_features=[])\n\n#%%\n# Run a Deepchecks Suite\n# ======================\n# Run the full suite\n# ------------------\n# Use the ``full_suite`` that is a collection of (most of) the prebuilt checks.\n#\n# Check out the `when should you use </getting-started/when_should_you_use.html>`__\n# deepchecks guide for some more info about the existing suites and when to use them.\n\nfrom deepchecks.tabular.suites import full_suite\n\nsuite = full_suite()\n\n#%%\n\nsuite.run(train_dataset=ds_train, test_dataset=ds_test, model=rf_clf)\n\n#%%\n# Run the integrity suite\n# -----------------------\n# If you still haven't started modeling and just have a single dataset, you\n# can use the ``single_dataset_integrity``:\n\nfrom deepchecks.tabular.suites import single_dataset_integrity\n\ninteg_suite = single_dataset_integrity()\ninteg_suite.run(ds_train)\n\n#%%\n# Run a Deepchecks Check\n# ======================\n# If you want to run a specific check, you can just import it and run it directly.\n#\n# Check out the `Check tabular examples </examples/index.html>`__ in\n# the examples or the `API Reference </api/index.html>`__ for more info about the\n# existing checks and their parameters.\n\nfrom deepchecks.tabular.checks import TrainTestLabelDrift\n\n#%%\n\ncheck = TrainTestLabelDrift()\nresult = check.run(ds_train, ds_test)\nresult\n\n#%%\n# %%\n# and also inspect the result value which has a check-dependant structure:\n\nresult.value\n\n#%%\n# Edit an Existing Suite\n# ======================\n# Inspect suite and remove condition\n# ----------------------------------\n# We can 
see that the single feature contribution failed, both for test and for\n# train. Since this is a very simple dataset with few features and this behavior\n# is not necessarily problematic, we will remove the existing conditions for the PPS\n\n# Lets first print the suite to find the conditions that we want to change:\n\nsuite\n\n#%%\n\n# now we can use the check's index and the condition's number to remove it:\nprint(suite[6])\nsuite[6].remove_condition(0)\n\n#%%\n\n# print and see that the condition was removed\nsuite[6]\n\n#%%\n# If we now re-run the suite, all of the existing conditions will pass.\n#\n# *Note: the check we manipulated will still run as part of the Suite, however\n# it won't appear in the Conditions Summary since it no longer has any\n# conditions defined on it. You can still see its display results in the \n# Additional Outputs section*\n#\n# **For more info about working with conditions, see the detailed configuring \n# conditions guide.**", "path": "docs/source/tutorials/tabular/plot_quickstart_in_5_minutes.py"}]} | 1,794 | 157 |
gh_patches_debug_14064 | rasdani/github-patches | git_diff | Lightning-AI__pytorch-lightning-2255 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
CPU/GPU Template
## 🐛 Bug
The GPU and CPU templates do not currently run on master after changes including the setup hook.
```
python -m pl_examples.basic_examples.gpu_template --gpus 4 --distributed_backend ddp
python -m pl_examples.basic_examples.cpu_template
```
CPU Template Error:
```
Traceback (most recent call last):
File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/anthony/Downloads/pytorch-lightning/pl_examples/basic_examples/cpu_template.py", line 53, in <module>
main(args)
File "/home/anthony/Downloads/pytorch-lightning/pl_examples/basic_examples/cpu_template.py", line 34, in main
trainer.fit(model)
File "/home/anthony/Downloads/pytorch-lightning/pytorch_lightning/trainer/trainer.py", line 952, in fit
self.run_pretrain_routine(model)
File "/home/anthony/Downloads/pytorch-lightning/pytorch_lightning/trainer/trainer.py", line 1063, in run_pretrain_routine
self.reset_val_dataloader(ref_model)
File "/home/anthony/Downloads/pytorch-lightning/pytorch_lightning/trainer/data_loading.py", line 331, in reset_val_dataloader
self._reset_eval_dataloader(model, 'val')
File "/home/anthony/Downloads/pytorch-lightning/pytorch_lightning/trainer/data_loading.py", line 253, in _reset_eval_dataloader
dataloaders = self.request_dataloader(getattr(model, f'{mode}_dataloader'))
File "/home/anthony/Downloads/pytorch-lightning/pytorch_lightning/trainer/data_loading.py", line 352, in request_dataloader
dataloader = dataloader_fx()
File "/home/anthony/Downloads/pytorch-lightning/pl_examples/models/lightning_template.py", line 158, in val_dataloader
return DataLoader(self.mnist_test, batch_size=self.batch_size, num_workers=4)
File "/home/anthony/.cache/pypoetry/virtualenvs/robotics-zp-60jGk-py3.6/lib/python3.6/site-packages/torch/nn/modules/module.py", line 594, in __getattr__
type(self).__name__, name))
AttributeError: 'LightningTemplateModel' object has no attribute 'mnist_test'
```
GPU Template Error:
```
File "/home/anthony/Downloads/pytorch-lightning/pl_examples/models/lightning_template.py", line 64, in __init__
self.c_d1_drop = nn.Dropout(self.drop_prob)
File "/home/anthony/.cache/pypoetry/virtualenvs/robotics-zp-60jGk-py3.6/lib/python3.6/site-packages/torch/nn/modules/dropout.py", line 10, in __init__
if p < 0 or p > 1:
TypeError: '<' not supported between instances of 'Namespace' and 'int'
```
### Environment
* CUDA:
- GPU:
- GeForce RTX 2080 Ti
- GeForce RTX 2080 Ti
- GeForce RTX 2080 Ti
- GeForce RTX 2080 Ti
- available: True
- version: 10.2
* Packages:
- numpy: 1.18.4
- pyTorch_debug: False
- pyTorch_version: 1.5.0
- pytorch-lightning: 0.8.0
- tensorboard: 2.2.1
- tqdm: 4.46.0
* System:
- OS: Linux
- architecture:
- 64bit
- ELF
- processor: x86_64
- python: 3.6.8
- version: #44~18.04.2-Ubuntu SMP Thu Apr 23 14:27:18 UTC 2020
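
Editorial note (a hedged sketch, not from the original report): the GPU-template `TypeError` comes from handing an `argparse.Namespace` to code that expects a scalar hyperparameter, e.g.:

```python
from argparse import Namespace

import torch.nn as nn

hparams = Namespace(drop_prob=0.2)        # assumed minimal hyperparameters
# Passing the whole Namespace where a float is expected reproduces the error:
#   nn.Dropout(hparams)  ->  TypeError: '<' not supported between instances of 'Namespace' and 'int'
layer = nn.Dropout(hparams.drop_prob)     # reading the float attribute works
```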
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pl_examples/basic_examples/gpu_template.py`
Content:
```
1 """
2 Runs a model on a single node across multiple gpus.
3 """
4 import os
5 from argparse import ArgumentParser
6
7 import numpy as np
8 import torch
9
10 import pytorch_lightning as pl
11 from pl_examples.models.lightning_template import LightningTemplateModel
12
13 SEED = 2334
14 torch.manual_seed(SEED)
15 np.random.seed(SEED)
16
17
18 def main(hparams):
19 """
20 Main training routine specific for this project
21 :param hparams:
22 """
23 # ------------------------
24 # 1 INIT LIGHTNING MODEL
25 # ------------------------
26 model = LightningTemplateModel(hparams)
27
28 # ------------------------
29 # 2 INIT TRAINER
30 # ------------------------
31 trainer = pl.Trainer(
32 max_epochs=hparams.epochs,
33 gpus=hparams.gpus,
34 distributed_backend=hparams.distributed_backend,
35 precision=16 if hparams.use_16bit else 32,
36 )
37
38 # ------------------------
39 # 3 START TRAINING
40 # ------------------------
41 trainer.fit(model)
42
43
44 if __name__ == '__main__':
45 # ------------------------
46 # TRAINING ARGUMENTS
47 # ------------------------
48 # these are project-wide arguments
49
50 root_dir = os.path.dirname(os.path.realpath(__file__))
51 parent_parser = ArgumentParser(add_help=False)
52
53 # gpu args
54 parent_parser.add_argument(
55 '--gpus',
56 type=int,
57 default=2,
58 help='how many gpus'
59 )
60 parent_parser.add_argument(
61 '--distributed_backend',
62 type=str,
63 default='dp',
64 help='supports three options dp, ddp, ddp2'
65 )
66 parent_parser.add_argument(
67 '--use_16bit',
68 dest='use_16bit',
69 action='store_true',
70 help='if true uses 16 bit precision'
71 )
72
73 # each LightningModule defines arguments relevant to it
74 parser = LightningTemplateModel.add_model_specific_args(parent_parser, root_dir)
75 hyperparams = parser.parse_args()
76
77 # ---------------------
78 # RUN TRAINING
79 # ---------------------
80 main(hyperparams)
81
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pl_examples/basic_examples/gpu_template.py b/pl_examples/basic_examples/gpu_template.py
--- a/pl_examples/basic_examples/gpu_template.py
+++ b/pl_examples/basic_examples/gpu_template.py
@@ -23,7 +23,7 @@
# ------------------------
# 1 INIT LIGHTNING MODEL
# ------------------------
- model = LightningTemplateModel(hparams)
+ model = LightningTemplateModel(**vars(hparams))
# ------------------------
# 2 INIT TRAINER
@@ -61,7 +61,7 @@
'--distributed_backend',
type=str,
default='dp',
- help='supports three options dp, ddp, ddp2'
+ help='supports four options dp, ddp, ddp2, ddp_spawn'
)
parent_parser.add_argument(
'--use_16bit',
| {"golden_diff": "diff --git a/pl_examples/basic_examples/gpu_template.py b/pl_examples/basic_examples/gpu_template.py\n--- a/pl_examples/basic_examples/gpu_template.py\n+++ b/pl_examples/basic_examples/gpu_template.py\n@@ -23,7 +23,7 @@\n # ------------------------\n # 1 INIT LIGHTNING MODEL\n # ------------------------\n- model = LightningTemplateModel(hparams)\n+ model = LightningTemplateModel(**vars(hparams))\n \n # ------------------------\n # 2 INIT TRAINER\n@@ -61,7 +61,7 @@\n '--distributed_backend',\n type=str,\n default='dp',\n- help='supports three options dp, ddp, ddp2'\n+ help='supports four options dp, ddp, ddp2, ddp_spawn'\n )\n parent_parser.add_argument(\n '--use_16bit',\n", "issue": "CPU/GPU Template\n\r\n\r\n## \ud83d\udc1b Bug\r\n\r\nThe GPU or CPU template do not run currently on master after changes including the setup hook. \r\n\r\n```\r\npython -m pl_examples.basic_examples.gpu_template --gpus 4 --distributed_backend ddp\r\npython -m pl_examples.basic_examples.cpu_template\r\n```\r\n\r\nCPU Template Error:\r\n```\r\nTraceback (most recent call last):\r\n File \"/usr/lib/python3.6/runpy.py\", line 193, in _run_module_as_main\r\n \"__main__\", mod_spec)\r\n File \"/usr/lib/python3.6/runpy.py\", line 85, in _run_code\r\n exec(code, run_globals)\r\n File \"/home/anthony/Downloads/pytorch-lightning/pl_examples/basic_examples/cpu_template.py\", line 53, in <module>\r\n main(args)\r\n File \"/home/anthony/Downloads/pytorch-lightning/pl_examples/basic_examples/cpu_template.py\", line 34, in main\r\n trainer.fit(model)\r\n File \"/home/anthony/Downloads/pytorch-lightning/pytorch_lightning/trainer/trainer.py\", line 952, in fit\r\n self.run_pretrain_routine(model)\r\n File \"/home/anthony/Downloads/pytorch-lightning/pytorch_lightning/trainer/trainer.py\", line 1063, in run_pretrain_routine\r\n self.reset_val_dataloader(ref_model)\r\n File \"/home/anthony/Downloads/pytorch-lightning/pytorch_lightning/trainer/data_loading.py\", line 331, in reset_val_dataloader\r\n self._reset_eval_dataloader(model, 'val')\r\n File \"/home/anthony/Downloads/pytorch-lightning/pytorch_lightning/trainer/data_loading.py\", line 253, in _reset_eval_dataloader\r\n dataloaders = self.request_dataloader(getattr(model, f'{mode}_dataloader'))\r\n File \"/home/anthony/Downloads/pytorch-lightning/pytorch_lightning/trainer/data_loading.py\", line 352, in request_dataloader\r\n dataloader = dataloader_fx()\r\n File \"/home/anthony/Downloads/pytorch-lightning/pl_examples/models/lightning_template.py\", line 158, in val_dataloader\r\n return DataLoader(self.mnist_test, batch_size=self.batch_size, num_workers=4)\r\n File \"/home/anthony/.cache/pypoetry/virtualenvs/robotics-zp-60jGk-py3.6/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 594, in __getattr__\r\n type(self).__name__, name))\r\nAttributeError: 'LightningTemplateModel' object has no attribute 'mnist_test'\r\n```\r\nGPU Template Error:\r\n```\r\n File \"/home/anthony/Downloads/pytorch-lightning/pl_examples/models/lightning_template.py\", line 64, in __init__\r\n self.c_d1_drop = nn.Dropout(self.drop_prob)\r\n File \"/home/anthony/.cache/pypoetry/virtualenvs/robotics-zp-60jGk-py3.6/lib/python3.6/site-packages/torch/nn/modules/dropout.py\", line 10, in __init__\r\n if p < 0 or p > 1:\r\nTypeError: '<' not supported between instances of 'Namespace' and 'int'\r\n```\r\n\r\n### Environment\r\n\r\n\r\n* CUDA:\r\n - GPU:\r\n - GeForce RTX 2080 Ti\r\n - GeForce RTX 2080 Ti\r\n - GeForce RTX 2080 Ti\r\n - GeForce RTX 2080 Ti\r\n - available: 
True\r\n - version: 10.2\r\n* Packages:\r\n - numpy: 1.18.4\r\n - pyTorch_debug: False\r\n - pyTorch_version: 1.5.0\r\n - pytorch-lightning: 0.8.0\r\n - tensorboard: 2.2.1\r\n - tqdm: 4.46.0\r\n* System:\r\n - OS: Linux\r\n - architecture:\r\n - 64bit\r\n - ELF\r\n - processor: x86_64\r\n - python: 3.6.8\r\n - version: #44~18.04.2-Ubuntu SMP Thu Apr 23 14:27:18 UTC 2020\r\n\r\n\n", "before_files": [{"content": "\"\"\"\nRuns a model on a single node across multiple gpus.\n\"\"\"\nimport os\nfrom argparse import ArgumentParser\n\nimport numpy as np\nimport torch\n\nimport pytorch_lightning as pl\nfrom pl_examples.models.lightning_template import LightningTemplateModel\n\nSEED = 2334\ntorch.manual_seed(SEED)\nnp.random.seed(SEED)\n\n\ndef main(hparams):\n \"\"\"\n Main training routine specific for this project\n :param hparams:\n \"\"\"\n # ------------------------\n # 1 INIT LIGHTNING MODEL\n # ------------------------\n model = LightningTemplateModel(hparams)\n\n # ------------------------\n # 2 INIT TRAINER\n # ------------------------\n trainer = pl.Trainer(\n max_epochs=hparams.epochs,\n gpus=hparams.gpus,\n distributed_backend=hparams.distributed_backend,\n precision=16 if hparams.use_16bit else 32,\n )\n\n # ------------------------\n # 3 START TRAINING\n # ------------------------\n trainer.fit(model)\n\n\nif __name__ == '__main__':\n # ------------------------\n # TRAINING ARGUMENTS\n # ------------------------\n # these are project-wide arguments\n\n root_dir = os.path.dirname(os.path.realpath(__file__))\n parent_parser = ArgumentParser(add_help=False)\n\n # gpu args\n parent_parser.add_argument(\n '--gpus',\n type=int,\n default=2,\n help='how many gpus'\n )\n parent_parser.add_argument(\n '--distributed_backend',\n type=str,\n default='dp',\n help='supports three options dp, ddp, ddp2'\n )\n parent_parser.add_argument(\n '--use_16bit',\n dest='use_16bit',\n action='store_true',\n help='if true uses 16 bit precision'\n )\n\n # each LightningModule defines arguments relevant to it\n parser = LightningTemplateModel.add_model_specific_args(parent_parser, root_dir)\n hyperparams = parser.parse_args()\n\n # ---------------------\n # RUN TRAINING\n # ---------------------\n main(hyperparams)\n", "path": "pl_examples/basic_examples/gpu_template.py"}], "after_files": [{"content": "\"\"\"\nRuns a model on a single node across multiple gpus.\n\"\"\"\nimport os\nfrom argparse import ArgumentParser\n\nimport numpy as np\nimport torch\n\nimport pytorch_lightning as pl\nfrom pl_examples.models.lightning_template import LightningTemplateModel\n\nSEED = 2334\ntorch.manual_seed(SEED)\nnp.random.seed(SEED)\n\n\ndef main(hparams):\n \"\"\"\n Main training routine specific for this project\n :param hparams:\n \"\"\"\n # ------------------------\n # 1 INIT LIGHTNING MODEL\n # ------------------------\n model = LightningTemplateModel(**vars(hparams))\n\n # ------------------------\n # 2 INIT TRAINER\n # ------------------------\n trainer = pl.Trainer(\n max_epochs=hparams.epochs,\n gpus=hparams.gpus,\n distributed_backend=hparams.distributed_backend,\n precision=16 if hparams.use_16bit else 32,\n )\n\n # ------------------------\n # 3 START TRAINING\n # ------------------------\n trainer.fit(model)\n\n\nif __name__ == '__main__':\n # ------------------------\n # TRAINING ARGUMENTS\n # ------------------------\n # these are project-wide arguments\n\n root_dir = os.path.dirname(os.path.realpath(__file__))\n parent_parser = ArgumentParser(add_help=False)\n\n # gpu args\n parent_parser.add_argument(\n 
'--gpus',\n type=int,\n default=2,\n help='how many gpus'\n )\n parent_parser.add_argument(\n '--distributed_backend',\n type=str,\n default='dp',\n help='supports four options dp, ddp, ddp2, ddp_spawn'\n )\n parent_parser.add_argument(\n '--use_16bit',\n dest='use_16bit',\n action='store_true',\n help='if true uses 16 bit precision'\n )\n\n # each LightningModule defines arguments relevant to it\n parser = LightningTemplateModel.add_model_specific_args(parent_parser, root_dir)\n hyperparams = parser.parse_args()\n\n # ---------------------\n # RUN TRAINING\n # ---------------------\n main(hyperparams)\n", "path": "pl_examples/basic_examples/gpu_template.py"}]} | 1,831 | 186 |
gh_patches_debug_37403 | rasdani/github-patches | git_diff | streamlink__streamlink-1670 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[plugin issue] plugin.vidio.com / can't handle stream URLs anymore
- [x] This is a bug report.
- [ ] This is a feature request.
- [ ] This is a plugin (improvement) request.
- [ ] I have read the contribution guidelines.
### Description
plugin handling of https://www.vidio.com/live URLs is not working anymore due to changes in the provided stream structure
### Expected / Actual behavior
streamlink -l debug www.vidio.com/live/665-rcti-tv-stream
[cli][debug] OS: Windows 10
[cli][debug] Python: 3.5.2
[cli][debug] Streamlink: 0.12.1
[cli][debug] Requests(2.18.4), Socks(1.6.7), Websocket(0.47.0)
[cli][info] Found matching plugin vidio for URL www.vidio.com/live/665-rcti-tv-stream
[plugin.vidio][debug] HLS URL: https://kmklive-lh.akamaihd.net/i/rcti_ta_regular@94478/master.m3u8
error: Unable to open URL: https://kmklive-lh.akamaihd.net/i/rcti_ta_regular@94478/master.m3u8 (403 Client Error: Forbidden for url: https://kmklive-lh.akamaihd.net/i/rcti_ta_regular@94478/master.m3u8)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/streamlink/plugins/vidio.py`
Content:
```
1 '''
2 Plugin for vidio.com
3 - https://www.vidio.com/live/5075-dw-tv-stream
4 - https://www.vidio.com/watch/766861-5-rekor-fantastis-zidane-bersama-real-madrid
5 '''
6 import re
7
8 from streamlink.plugin import Plugin
9 from streamlink.plugin.api import http
10 from streamlink.stream import HLSStream
11
12 _url_re = re.compile(r"https?://(?:www\.)?vidio\.com/(?:en/)?(?P<type>live|watch)/(?P<id>\d+)-(?P<name>[^/?#&]+)")
13 _playlist_re = re.compile(r'''hls-url=["'](?P<url>[^"']+)["']''')
14
15
16 class Vidio(Plugin):
17 @classmethod
18 def can_handle_url(cls, url):
19 return _url_re.match(url)
20
21 def _get_streams(self):
22 res = http.get(self.url)
23
24 match = _playlist_re.search(res.text)
25 if match is None:
26 return
27
28 url = match.group('url')
29
30 if url:
31 self.logger.debug('HLS URL: {0}'.format(url))
32 for s in HLSStream.parse_variant_playlist(self.session, url).items():
33 yield s
34
35
36 __plugin__ = Vidio
37
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/streamlink/plugins/vidio.py b/src/streamlink/plugins/vidio.py
--- a/src/streamlink/plugins/vidio.py
+++ b/src/streamlink/plugins/vidio.py
@@ -1,36 +1,61 @@
-'''
+"""
Plugin for vidio.com
- https://www.vidio.com/live/5075-dw-tv-stream
- https://www.vidio.com/watch/766861-5-rekor-fantastis-zidane-bersama-real-madrid
-'''
+"""
import re
from streamlink.plugin import Plugin
-from streamlink.plugin.api import http
+from streamlink.plugin.api import http, useragents, validate
from streamlink.stream import HLSStream
-
-_url_re = re.compile(r"https?://(?:www\.)?vidio\.com/(?:en/)?(?P<type>live|watch)/(?P<id>\d+)-(?P<name>[^/?#&]+)")
-_playlist_re = re.compile(r'''hls-url=["'](?P<url>[^"']+)["']''')
+from streamlink.utils import parse_json
class Vidio(Plugin):
+ _url_re = re.compile(r"https?://(?:www\.)?vidio\.com/(?:en/)?(?P<type>live|watch)/(?P<id>\d+)-(?P<name>[^/?#&]+)")
+ _playlist_re = re.compile(r'''hls-url=["'](?P<url>[^"']+)["']''')
+ _data_id_re = re.compile(r'''meta\s+data-id=["'](?P<id>[^"']+)["']''')
+
+ csrf_tokens_url = "https://www.vidio.com/csrf_tokens"
+ tokens_url = "https://www.vidio.com/live/{id}/tokens"
+ token_schema = validate.Schema(validate.transform(parse_json),
+ {"token": str},
+ validate.get("token"))
+
@classmethod
def can_handle_url(cls, url):
- return _url_re.match(url)
+ return cls._url_re.match(url)
+
+ def get_csrf_tokens(self):
+ return http.get(self.csrf_tokens_url,
+ schema=self.token_schema)
+
+ def get_url_tokens(self, stream_id):
+ self.logger.debug("Getting stream tokens")
+ csrf_token = self.get_csrf_tokens()
+ return http.post(self.tokens_url.format(id=stream_id),
+ files={"authenticity_token": (None, csrf_token)},
+ headers={"User-Agent": useragents.CHROME,
+ "Referer": self.url},
+ schema=self.token_schema)
def _get_streams(self):
res = http.get(self.url)
- match = _playlist_re.search(res.text)
- if match is None:
- return
+ plmatch = self._playlist_re.search(res.text)
+ idmatch = self._data_id_re.search(res.text)
+
+ hls_url = plmatch and plmatch.group("url")
+ stream_id = idmatch and idmatch.group("id")
- url = match.group('url')
+ tokens = self.get_url_tokens(stream_id)
- if url:
- self.logger.debug('HLS URL: {0}'.format(url))
- for s in HLSStream.parse_variant_playlist(self.session, url).items():
- yield s
+ if hls_url:
+ self.logger.debug("HLS URL: {0}".format(hls_url))
+ self.logger.debug("Tokens: {0}".format(tokens))
+ return HLSStream.parse_variant_playlist(self.session, hls_url+"?"+tokens,
+ headers={"User-Agent": useragents.CHROME,
+ "Referer": self.url})
__plugin__ = Vidio
| {"golden_diff": "diff --git a/src/streamlink/plugins/vidio.py b/src/streamlink/plugins/vidio.py\n--- a/src/streamlink/plugins/vidio.py\n+++ b/src/streamlink/plugins/vidio.py\n@@ -1,36 +1,61 @@\n-'''\n+\"\"\"\n Plugin for vidio.com\n - https://www.vidio.com/live/5075-dw-tv-stream\n - https://www.vidio.com/watch/766861-5-rekor-fantastis-zidane-bersama-real-madrid\n-'''\n+\"\"\"\n import re\n \n from streamlink.plugin import Plugin\n-from streamlink.plugin.api import http\n+from streamlink.plugin.api import http, useragents, validate\n from streamlink.stream import HLSStream\n-\n-_url_re = re.compile(r\"https?://(?:www\\.)?vidio\\.com/(?:en/)?(?P<type>live|watch)/(?P<id>\\d+)-(?P<name>[^/?#&]+)\")\n-_playlist_re = re.compile(r'''hls-url=[\"'](?P<url>[^\"']+)[\"']''')\n+from streamlink.utils import parse_json\n \n \n class Vidio(Plugin):\n+ _url_re = re.compile(r\"https?://(?:www\\.)?vidio\\.com/(?:en/)?(?P<type>live|watch)/(?P<id>\\d+)-(?P<name>[^/?#&]+)\")\n+ _playlist_re = re.compile(r'''hls-url=[\"'](?P<url>[^\"']+)[\"']''')\n+ _data_id_re = re.compile(r'''meta\\s+data-id=[\"'](?P<id>[^\"']+)[\"']''')\n+\n+ csrf_tokens_url = \"https://www.vidio.com/csrf_tokens\"\n+ tokens_url = \"https://www.vidio.com/live/{id}/tokens\"\n+ token_schema = validate.Schema(validate.transform(parse_json),\n+ {\"token\": str},\n+ validate.get(\"token\"))\n+\n @classmethod\n def can_handle_url(cls, url):\n- return _url_re.match(url)\n+ return cls._url_re.match(url)\n+\n+ def get_csrf_tokens(self):\n+ return http.get(self.csrf_tokens_url,\n+ schema=self.token_schema)\n+\n+ def get_url_tokens(self, stream_id):\n+ self.logger.debug(\"Getting stream tokens\")\n+ csrf_token = self.get_csrf_tokens()\n+ return http.post(self.tokens_url.format(id=stream_id),\n+ files={\"authenticity_token\": (None, csrf_token)},\n+ headers={\"User-Agent\": useragents.CHROME,\n+ \"Referer\": self.url},\n+ schema=self.token_schema)\n \n def _get_streams(self):\n res = http.get(self.url)\n \n- match = _playlist_re.search(res.text)\n- if match is None:\n- return\n+ plmatch = self._playlist_re.search(res.text)\n+ idmatch = self._data_id_re.search(res.text)\n+\n+ hls_url = plmatch and plmatch.group(\"url\")\n+ stream_id = idmatch and idmatch.group(\"id\")\n \n- url = match.group('url')\n+ tokens = self.get_url_tokens(stream_id)\n \n- if url:\n- self.logger.debug('HLS URL: {0}'.format(url))\n- for s in HLSStream.parse_variant_playlist(self.session, url).items():\n- yield s\n+ if hls_url:\n+ self.logger.debug(\"HLS URL: {0}\".format(hls_url))\n+ self.logger.debug(\"Tokens: {0}\".format(tokens))\n+ return HLSStream.parse_variant_playlist(self.session, hls_url+\"?\"+tokens,\n+ headers={\"User-Agent\": useragents.CHROME,\n+ \"Referer\": self.url})\n \n \n __plugin__ = Vidio\n", "issue": "[plugin issue] plugin.vidio.com / can't handle stream URLs anymore\n- [x] This is a bug report.\r\n- [ ] This is a feature request.\r\n- [ ] This is a plugin (improvement) request.\r\n- [ ] I have read the contribution guidelines.\r\n\r\n### Description\r\nplugin handling of https://www.vidio.com/live URls not working anymore due to changes at provided stream structure\r\n\r\n### Expected / Actual behavior\r\nstreamlink -l debug www.vidio.com/live/665-rcti-tv-stream\r\n[cli][debug] OS: Windows 10\r\n[cli][debug] Python: 3.5.2\r\n[cli][debug] Streamlink: 0.12.1\r\n[cli][debug] Requests(2.18.4), Socks(1.6.7), Websocket(0.47.0)\r\n[cli][info] Found matching plugin vidio for URL www.vidio.com/live/665-rcti-tv-stream\r\n[plugin.vidio][debug] HLS URL: 
https://kmklive-lh.akamaihd.net/i/rcti_ta_regular@94478/master.m3u8\r\nerror: Unable to open URL: https://kmklive-lh.akamaihd.net/i/rcti_ta_regular@94478/master.m3u8 (403 Client Error: Forbidden for url: https://kmklive-lh.akamaihd.net/i/rcti_ta_regular@94478/master.m3u8)\r\n\n", "before_files": [{"content": "'''\nPlugin for vidio.com\n- https://www.vidio.com/live/5075-dw-tv-stream\n- https://www.vidio.com/watch/766861-5-rekor-fantastis-zidane-bersama-real-madrid\n'''\nimport re\n\nfrom streamlink.plugin import Plugin\nfrom streamlink.plugin.api import http\nfrom streamlink.stream import HLSStream\n\n_url_re = re.compile(r\"https?://(?:www\\.)?vidio\\.com/(?:en/)?(?P<type>live|watch)/(?P<id>\\d+)-(?P<name>[^/?#&]+)\")\n_playlist_re = re.compile(r'''hls-url=[\"'](?P<url>[^\"']+)[\"']''')\n\n\nclass Vidio(Plugin):\n @classmethod\n def can_handle_url(cls, url):\n return _url_re.match(url)\n\n def _get_streams(self):\n res = http.get(self.url)\n\n match = _playlist_re.search(res.text)\n if match is None:\n return\n\n url = match.group('url')\n\n if url:\n self.logger.debug('HLS URL: {0}'.format(url))\n for s in HLSStream.parse_variant_playlist(self.session, url).items():\n yield s\n\n\n__plugin__ = Vidio\n", "path": "src/streamlink/plugins/vidio.py"}], "after_files": [{"content": "\"\"\"\nPlugin for vidio.com\n- https://www.vidio.com/live/5075-dw-tv-stream\n- https://www.vidio.com/watch/766861-5-rekor-fantastis-zidane-bersama-real-madrid\n\"\"\"\nimport re\n\nfrom streamlink.plugin import Plugin\nfrom streamlink.plugin.api import http, useragents, validate\nfrom streamlink.stream import HLSStream\nfrom streamlink.utils import parse_json\n\n\nclass Vidio(Plugin):\n _url_re = re.compile(r\"https?://(?:www\\.)?vidio\\.com/(?:en/)?(?P<type>live|watch)/(?P<id>\\d+)-(?P<name>[^/?#&]+)\")\n _playlist_re = re.compile(r'''hls-url=[\"'](?P<url>[^\"']+)[\"']''')\n _data_id_re = re.compile(r'''meta\\s+data-id=[\"'](?P<id>[^\"']+)[\"']''')\n\n csrf_tokens_url = \"https://www.vidio.com/csrf_tokens\"\n tokens_url = \"https://www.vidio.com/live/{id}/tokens\"\n token_schema = validate.Schema(validate.transform(parse_json),\n {\"token\": str},\n validate.get(\"token\"))\n\n @classmethod\n def can_handle_url(cls, url):\n return cls._url_re.match(url)\n\n def get_csrf_tokens(self):\n return http.get(self.csrf_tokens_url,\n schema=self.token_schema)\n\n def get_url_tokens(self, stream_id):\n self.logger.debug(\"Getting stream tokens\")\n csrf_token = self.get_csrf_tokens()\n return http.post(self.tokens_url.format(id=stream_id),\n files={\"authenticity_token\": (None, csrf_token)},\n headers={\"User-Agent\": useragents.CHROME,\n \"Referer\": self.url},\n schema=self.token_schema)\n\n def _get_streams(self):\n res = http.get(self.url)\n\n plmatch = self._playlist_re.search(res.text)\n idmatch = self._data_id_re.search(res.text)\n\n hls_url = plmatch and plmatch.group(\"url\")\n stream_id = idmatch and idmatch.group(\"id\")\n\n tokens = self.get_url_tokens(stream_id)\n\n if hls_url:\n self.logger.debug(\"HLS URL: {0}\".format(hls_url))\n self.logger.debug(\"Tokens: {0}\".format(tokens))\n return HLSStream.parse_variant_playlist(self.session, hls_url+\"?\"+tokens,\n headers={\"User-Agent\": useragents.CHROME,\n \"Referer\": self.url})\n\n\n__plugin__ = Vidio\n", "path": "src/streamlink/plugins/vidio.py"}]} | 961 | 835 |
gh_patches_debug_7782 | rasdani/github-patches | git_diff | microsoft__ptvsd-818 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
test_path_names_uppercase_enabled fails
```
2018-09-18T18:50:20.6706273Z ======================================================================
2018-09-18T18:50:20.6706627Z FAIL [0.001s]: test_path_names_uppercase_enabled (tests.ptvsd.test_pathutils.PathUtilTests)
2018-09-18T18:50:20.6706979Z ----------------------------------------------------------------------
2018-09-18T18:50:20.6707253Z Traceback (most recent call last):
2018-09-18T18:50:20.6707620Z File "D:\a\1\s\tests\ptvsd\test_pathutils.py", line 78, in test_path_names_uppercase_enabled
2018-09-18T18:50:20.6708077Z self.assertEqual(result, ACTUAL)
2018-09-18T18:50:20.6708307Z AssertionError: 'D:\\A\\1\\S\\TESTS\\PTVSD\\test_pathutils.py' != 'D:\\a\\1\\s\\tests\\ptvsd\\test_pathutils.py'
2018-09-18T18:50:20.6708746Z - D:\A\1\S\TESTS\PTVSD\test_pathutils.py
2018-09-18T18:50:20.6708945Z + D:\a\1\s\tests\ptvsd\test_pathutils.py
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ptvsd/pathutils.py`
Content:
```
1 # Copyright (c) Microsoft Corporation. All rights reserved.
2 # Licensed under the MIT License. See LICENSE in the project root
3 # for license information.
4
5 from __future__ import print_function, with_statement, absolute_import
6
7 from glob import glob
8 import os.path
9 import platform
10
11
12 MAX_FILES_TO_CACHE = 1000
13
14
15 class PathUnNormcase(object):
16 """Ensures path names of files are returned as they exist on the fs."""
17
18 def __init__(self):
19 self._dict = {}
20 self._enabled = False
21
22 def enable(self):
23 self._enabled = platform.system() == 'Windows'
24
25 def un_normcase(self, file_path):
26 if not self._enabled or len(file_path) == 0:
27 return file_path
28 if file_path in self._dict:
29 return self._dict[file_path]
30 file_path_to_return = self._get_actual_filename(file_path)
31 self.track_file_path_case(file_path_to_return)
32 return file_path_to_return
33
34 def track_file_path_case(self, file_path):
35 if not self._enabled:
36 return
37 if len(self._dict) > MAX_FILES_TO_CACHE:
38 self._dict.clear()
39 self._dict[file_path] = file_path
40
41 def _get_actual_filename(self, name):
42 """
43 Use glob to search for a file by building a regex.
44 Original source from https://stackoverflow.com/a/30374360/4443457
45 (Modified to match file name as well).
46 """
47
48 sep = os.path.sep
49 parts = os.path.normpath(name).split(sep)
50 dirs = parts[0:-1]
51 filename = '{}[{}]'.format(parts[-1][:-1], parts[-1][-1:])
52 path_fragment1 = dirs[0].upper()
53 if dirs[0] == os.path.splitdrive(name)[0]:
54 fragments = [path_fragment1]
55 else:
56 dir_names = os.listdir(os.getcwd())
57 fragments = list(filter(
58 lambda x: x.upper() == path_fragment1, dir_names))
59 fragments += list(d for d in dirs[1:] if d)
60 path = glob(sep.join(fragments))
61 if not path:
62 return name
63 res = glob(sep.join((path[0], filename)))
64 if not res:
65 return name
66 return res[0]
67
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ptvsd/pathutils.py b/ptvsd/pathutils.py
--- a/ptvsd/pathutils.py
+++ b/ptvsd/pathutils.py
@@ -56,7 +56,8 @@
dir_names = os.listdir(os.getcwd())
fragments = list(filter(
lambda x: x.upper() == path_fragment1, dir_names))
- fragments += list(d for d in dirs[1:] if d)
+ fragments += list('{}[{}]'.format(d[:-1], d[-1])
+ for d in dirs[1:] if d)
path = glob(sep.join(fragments))
if not path:
return name
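The fix above hinges on a small glob trick: wrapping the last character of each path component in brackets ("TESTS" becomes "TEST[S]") turns the literal into a pattern, so `glob()` consults the directory listing and returns the entry with its on-disk casing (globbing is case-insensitive on Windows). The original code only did this for the drive fragment and the file name, which is why the middle directories kept their uppercase spelling in the failing test. A standalone sketch, with made-up path components, assuming it is run from a checkout root on a case-insensitive filesystem:

```python
# Illustrative only; the real logic lives in PathUnNormcase._get_actual_filename.
import os
from glob import glob

def as_pattern(component):
    # "TESTS" -> "TEST[S]": the brackets make glob treat this as a pattern,
    # so it matches real directory entries instead of echoing the literal back.
    return '{}[{}]'.format(component[:-1], component[-1])

parts = ['TESTS', 'PTVSD', 'test_pathutils.py']
pattern = os.path.sep.join(as_pattern(p) for p in parts)

# On Windows/NTFS this prints the matched path with its actual casing,
# e.g. ['tests\\ptvsd\\test_pathutils.py'].
print(glob(pattern))
```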
| {"golden_diff": "diff --git a/ptvsd/pathutils.py b/ptvsd/pathutils.py\n--- a/ptvsd/pathutils.py\n+++ b/ptvsd/pathutils.py\n@@ -56,7 +56,8 @@\n dir_names = os.listdir(os.getcwd())\n fragments = list(filter(\n lambda x: x.upper() == path_fragment1, dir_names))\n- fragments += list(d for d in dirs[1:] if d)\n+ fragments += list('{}[{}]'.format(d[:-1], d[-1])\n+ for d in dirs[1:] if d)\n path = glob(sep.join(fragments))\n if not path:\n return name\n", "issue": "test_path_names_uppercase_enabled fails\n```\r\n2018-09-18T18:50:20.6706273Z ======================================================================\r\n2018-09-18T18:50:20.6706627Z FAIL [0.001s]: test_path_names_uppercase_enabled (tests.ptvsd.test_pathutils.PathUtilTests)\r\n2018-09-18T18:50:20.6706979Z ----------------------------------------------------------------------\r\n2018-09-18T18:50:20.6707253Z Traceback (most recent call last):\r\n2018-09-18T18:50:20.6707620Z File \"D:\\a\\1\\s\\tests\\ptvsd\\test_pathutils.py\", line 78, in test_path_names_uppercase_enabled\r\n2018-09-18T18:50:20.6708077Z self.assertEqual(result, ACTUAL)\r\n2018-09-18T18:50:20.6708307Z AssertionError: 'D:\\\\A\\\\1\\\\S\\\\TESTS\\\\PTVSD\\\\test_pathutils.py' != 'D:\\\\a\\\\1\\\\s\\\\tests\\\\ptvsd\\\\test_pathutils.py'\r\n2018-09-18T18:50:20.6708746Z - D:\\A\\1\\S\\TESTS\\PTVSD\\test_pathutils.py\r\n2018-09-18T18:50:20.6708945Z + D:\\a\\1\\s\\tests\\ptvsd\\test_pathutils.py\r\n```\r\n\r\n\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License. See LICENSE in the project root\n# for license information.\n\nfrom __future__ import print_function, with_statement, absolute_import\n\nfrom glob import glob\nimport os.path\nimport platform\n\n\nMAX_FILES_TO_CACHE = 1000\n\n\nclass PathUnNormcase(object):\n \"\"\"Ensures path names of files are returned as they exist on the fs.\"\"\"\n\n def __init__(self):\n self._dict = {}\n self._enabled = False\n\n def enable(self):\n self._enabled = platform.system() == 'Windows'\n\n def un_normcase(self, file_path):\n if not self._enabled or len(file_path) == 0:\n return file_path\n if file_path in self._dict:\n return self._dict[file_path]\n file_path_to_return = self._get_actual_filename(file_path)\n self.track_file_path_case(file_path_to_return)\n return file_path_to_return\n\n def track_file_path_case(self, file_path):\n if not self._enabled:\n return\n if len(self._dict) > MAX_FILES_TO_CACHE:\n self._dict.clear()\n self._dict[file_path] = file_path\n\n def _get_actual_filename(self, name):\n \"\"\"\n Use glob to search for a file by building a regex.\n Original source from https://stackoverflow.com/a/30374360/4443457\n (Modified to match file name as well).\n \"\"\"\n\n sep = os.path.sep\n parts = os.path.normpath(name).split(sep)\n dirs = parts[0:-1]\n filename = '{}[{}]'.format(parts[-1][:-1], parts[-1][-1:])\n path_fragment1 = dirs[0].upper()\n if dirs[0] == os.path.splitdrive(name)[0]:\n fragments = [path_fragment1]\n else:\n dir_names = os.listdir(os.getcwd())\n fragments = list(filter(\n lambda x: x.upper() == path_fragment1, dir_names))\n fragments += list(d for d in dirs[1:] if d)\n path = glob(sep.join(fragments))\n if not path:\n return name\n res = glob(sep.join((path[0], filename)))\n if not res:\n return name\n return res[0]\n", "path": "ptvsd/pathutils.py"}], "after_files": [{"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License. 
See LICENSE in the project root\n# for license information.\n\nfrom __future__ import print_function, with_statement, absolute_import\n\nfrom glob import glob\nimport os.path\nimport platform\n\n\nMAX_FILES_TO_CACHE = 1000\n\n\nclass PathUnNormcase(object):\n \"\"\"Ensures path names of files are returned as they exist on the fs.\"\"\"\n\n def __init__(self):\n self._dict = {}\n self._enabled = False\n\n def enable(self):\n self._enabled = platform.system() == 'Windows'\n\n def un_normcase(self, file_path):\n if not self._enabled or len(file_path) == 0:\n return file_path\n if file_path in self._dict:\n return self._dict[file_path]\n file_path_to_return = self._get_actual_filename(file_path)\n self.track_file_path_case(file_path_to_return)\n return file_path_to_return\n\n def track_file_path_case(self, file_path):\n if not self._enabled:\n return\n if len(self._dict) > MAX_FILES_TO_CACHE:\n self._dict.clear()\n self._dict[file_path] = file_path\n\n def _get_actual_filename(self, name):\n \"\"\"\n Use glob to search for a file by building a regex.\n Original source from https://stackoverflow.com/a/30374360/4443457\n (Modified to match file name as well).\n \"\"\"\n\n sep = os.path.sep\n parts = os.path.normpath(name).split(sep)\n dirs = parts[0:-1]\n filename = '{}[{}]'.format(parts[-1][:-1], parts[-1][-1:])\n path_fragment1 = dirs[0].upper()\n if dirs[0] == os.path.splitdrive(name)[0]:\n fragments = [path_fragment1]\n else:\n dir_names = os.listdir(os.getcwd())\n fragments = list(filter(\n lambda x: x.upper() == path_fragment1, dir_names))\n fragments += list('{}[{}]'.format(d[:-1], d[-1])\n for d in dirs[1:] if d)\n path = glob(sep.join(fragments))\n if not path:\n return name\n res = glob(sep.join((path[0], filename)))\n if not res:\n return name\n return res[0]\n", "path": "ptvsd/pathutils.py"}]} | 1,332 | 147 |
gh_patches_debug_8485 | rasdani/github-patches | git_diff | SigmaHQ__sigma-1026 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Error while pushing sigma to misp using sigma2misp script
(venv) hydra@Kaushals-MacBook-Air session2/sigma (master %) » tools/sigma2misp @misp.conf --insecure --same-event --info "Test Event" -r rules/windows/sysmon 2 ↵
Traceback (most recent call last):
File "tools/sigma2misp", line 57, in <module>
eventid = create_new_event()
File "tools/sigma2misp", line 14, in create_new_event
event = misp.MISPEvent()
AttributeError: 'PyMISP' object has no attribute 'MISPEvent'
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `tools/sigma/sigma2misp.py`
Content:
```
1 #!/usr/bin/env python3
2 # Import given Sigma rules to MISP
3
4 import argparse
5 import pathlib
6 import urllib3
7 urllib3.disable_warnings()
8 from pymisp import PyMISP
9
10 def create_new_event(args, misp):
11 if hasattr(misp, "new_event"):
12 return misp.new_event(info=args.info)["Event"]["id"]
13
14 event = misp.MISPEvent()
15 event.info = args.info
16 return misp.add_event(event)["Event"]["id"]
17
18
19 class MISPImportArgumentParser(argparse.ArgumentParser):
20 def __init__(self, *args, **kwargs):
21 super().__init__(
22 description="Import Sigma rules into MISP events",
23 epilog="Parameters can be read from a file by a @filename parameter. The file should contain one parameter per line. Dashes may be omitted.",
24 fromfile_prefix_chars="@",
25 )
26
27 def convert_arg_line_to_args(self, line : str):
28 return ("--" + line.lstrip("--")).split()
29
30 def main():
31 argparser = MISPImportArgumentParser()
32 argparser.add_argument("--url", "-u", default="https://localhost", help="URL of MISP instance")
33 argparser.add_argument("--key", "-k", required=True, help="API key")
34 argparser.add_argument("--insecure", "-I", action="store_false", help="Disable TLS certifcate validation.")
35 argparser.add_argument("--event", "-e", type=int, help="Add Sigma rule to event with this ID. If not set, create new event.")
36 argparser.add_argument("--same-event", "-s", action="store_true", help="Import all Sigma rules to the same event, if no event is set.")
37 argparser.add_argument("--info", "-i", default="Sigma import", help="Event Information field for newly created MISP event.")
38 argparser.add_argument("--recursive", "-r", action="store_true", help="Recursive traversal of directory")
39 argparser.add_argument("sigma", nargs="+", help="Sigma rule file that should be imported")
40 args = argparser.parse_args()
41
42 if args.recursive:
43 paths = [ p for pathname in args.sigma for p in pathlib.Path(pathname).glob("**/*") if p.is_file() ]
44 else:
45 paths = [ pathlib.Path(sigma) for sigma in args.sigma ]
46
47 misp = PyMISP(args.url, args.key, args.insecure)
48 if args.event:
49 if hasattr(misp, "get"):
50 eventid = misp.get(args.event)["Event"]["id"]
51 else:
52 eventid = misp.get_event(args.event)["Event"]["id"]
53
54 first = True
55
56 for sigma in paths:
57 if not args.event and (first or not args.same_event):
58 eventid = create_new_event(args, misp)
59 print("Importing Sigma rule {} into MISP event {}...".format(sigma, eventid, end=""))
60 f = sigma.open("rt")
61
62 if hasattr(misp, "add_named_attribute"):
63 misp.add_named_attribute(eventid, "sigma", f.read())
64 else:
65 event = misp.get_event(eventid, pythonify=True)
66 event.add_attribute("sigma", f.read())
67 misp.update_event(event)
68
69 f.close()
70 first = False
71
72 if __name__ == "__main__":
73 main()
74
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/tools/sigma/sigma2misp.py b/tools/sigma/sigma2misp.py
--- a/tools/sigma/sigma2misp.py
+++ b/tools/sigma/sigma2misp.py
@@ -5,13 +5,13 @@
import pathlib
import urllib3
urllib3.disable_warnings()
-from pymisp import PyMISP
+from pymisp import PyMISP, MISPEvent
def create_new_event(args, misp):
if hasattr(misp, "new_event"):
return misp.new_event(info=args.info)["Event"]["id"]
- event = misp.MISPEvent()
+ event = MISPEvent()
event.info = args.info
return misp.add_event(event)["Event"]["id"]
| {"golden_diff": "diff --git a/tools/sigma/sigma2misp.py b/tools/sigma/sigma2misp.py\n--- a/tools/sigma/sigma2misp.py\n+++ b/tools/sigma/sigma2misp.py\n@@ -5,13 +5,13 @@\n import pathlib\n import urllib3\n urllib3.disable_warnings()\n-from pymisp import PyMISP\n+from pymisp import PyMISP, MISPEvent\n \n def create_new_event(args, misp):\n if hasattr(misp, \"new_event\"):\n return misp.new_event(info=args.info)[\"Event\"][\"id\"]\n \n- event = misp.MISPEvent()\n+ event = MISPEvent()\n event.info = args.info\n return misp.add_event(event)[\"Event\"][\"id\"]\n", "issue": "Error while pushing sigma to misp using sigma2misp script\n(venv) hydra@Kaushals-MacBook-Air session2/sigma (master %) \u00bb tools/sigma2misp @misp.conf --insecure --same-event --info \"Test Event\" -r rules/windows/sysmon 2 \u21b5\r\nTraceback (most recent call last):\r\n File \"tools/sigma2misp\", line 57, in <module>\r\n eventid = create_new_event()\r\n File \"tools/sigma2misp\", line 14, in create_new_event\r\n event = misp.MISPEvent()\r\nAttributeError: 'PyMISP' object has no attribute 'MISPEvent'\n", "before_files": [{"content": "#!/usr/bin/env python3\n# Import given Sigma rules to MISP\n\nimport argparse\nimport pathlib\nimport urllib3\nurllib3.disable_warnings()\nfrom pymisp import PyMISP\n\ndef create_new_event(args, misp):\n if hasattr(misp, \"new_event\"):\n return misp.new_event(info=args.info)[\"Event\"][\"id\"]\n \n event = misp.MISPEvent()\n event.info = args.info\n return misp.add_event(event)[\"Event\"][\"id\"]\n\n\nclass MISPImportArgumentParser(argparse.ArgumentParser):\n def __init__(self, *args, **kwargs):\n super().__init__(\n description=\"Import Sigma rules into MISP events\",\n epilog=\"Parameters can be read from a file by a @filename parameter. The file should contain one parameter per line. Dashes may be omitted.\",\n fromfile_prefix_chars=\"@\",\n )\n\n def convert_arg_line_to_args(self, line : str):\n return (\"--\" + line.lstrip(\"--\")).split()\n\ndef main():\n argparser = MISPImportArgumentParser()\n argparser.add_argument(\"--url\", \"-u\", default=\"https://localhost\", help=\"URL of MISP instance\")\n argparser.add_argument(\"--key\", \"-k\", required=True, help=\"API key\")\n argparser.add_argument(\"--insecure\", \"-I\", action=\"store_false\", help=\"Disable TLS certifcate validation.\")\n argparser.add_argument(\"--event\", \"-e\", type=int, help=\"Add Sigma rule to event with this ID. 
If not set, create new event.\")\n argparser.add_argument(\"--same-event\", \"-s\", action=\"store_true\", help=\"Import all Sigma rules to the same event, if no event is set.\")\n argparser.add_argument(\"--info\", \"-i\", default=\"Sigma import\", help=\"Event Information field for newly created MISP event.\")\n argparser.add_argument(\"--recursive\", \"-r\", action=\"store_true\", help=\"Recursive traversal of directory\")\n argparser.add_argument(\"sigma\", nargs=\"+\", help=\"Sigma rule file that should be imported\")\n args = argparser.parse_args()\n\n if args.recursive:\n paths = [ p for pathname in args.sigma for p in pathlib.Path(pathname).glob(\"**/*\") if p.is_file() ]\n else:\n paths = [ pathlib.Path(sigma) for sigma in args.sigma ]\n\n misp = PyMISP(args.url, args.key, args.insecure)\n if args.event:\n if hasattr(misp, \"get\"):\n eventid = misp.get(args.event)[\"Event\"][\"id\"]\n else:\n eventid = misp.get_event(args.event)[\"Event\"][\"id\"]\n\n first = True\n\n for sigma in paths:\n if not args.event and (first or not args.same_event):\n eventid = create_new_event(args, misp)\n print(\"Importing Sigma rule {} into MISP event {}...\".format(sigma, eventid, end=\"\"))\n f = sigma.open(\"rt\")\n\n if hasattr(misp, \"add_named_attribute\"):\n misp.add_named_attribute(eventid, \"sigma\", f.read())\n else:\n event = misp.get_event(eventid, pythonify=True)\n event.add_attribute(\"sigma\", f.read())\n misp.update_event(event)\n\n f.close()\n first = False\n\nif __name__ == \"__main__\":\n main()\n", "path": "tools/sigma/sigma2misp.py"}], "after_files": [{"content": "#!/usr/bin/env python3\n# Import given Sigma rules to MISP\n\nimport argparse\nimport pathlib\nimport urllib3\nurllib3.disable_warnings()\nfrom pymisp import PyMISP, MISPEvent\n\ndef create_new_event(args, misp):\n if hasattr(misp, \"new_event\"):\n return misp.new_event(info=args.info)[\"Event\"][\"id\"]\n \n event = MISPEvent()\n event.info = args.info\n return misp.add_event(event)[\"Event\"][\"id\"]\n\n\nclass MISPImportArgumentParser(argparse.ArgumentParser):\n def __init__(self, *args, **kwargs):\n super().__init__(\n description=\"Import Sigma rules into MISP events\",\n epilog=\"Parameters can be read from a file by a @filename parameter. The file should contain one parameter per line. Dashes may be omitted.\",\n fromfile_prefix_chars=\"@\",\n )\n\n def convert_arg_line_to_args(self, line : str):\n return (\"--\" + line.lstrip(\"--\")).split()\n\ndef main():\n argparser = MISPImportArgumentParser()\n argparser.add_argument(\"--url\", \"-u\", default=\"https://localhost\", help=\"URL of MISP instance\")\n argparser.add_argument(\"--key\", \"-k\", required=True, help=\"API key\")\n argparser.add_argument(\"--insecure\", \"-I\", action=\"store_false\", help=\"Disable TLS certifcate validation.\")\n argparser.add_argument(\"--event\", \"-e\", type=int, help=\"Add Sigma rule to event with this ID. 
If not set, create new event.\")\n argparser.add_argument(\"--same-event\", \"-s\", action=\"store_true\", help=\"Import all Sigma rules to the same event, if no event is set.\")\n argparser.add_argument(\"--info\", \"-i\", default=\"Sigma import\", help=\"Event Information field for newly created MISP event.\")\n argparser.add_argument(\"--recursive\", \"-r\", action=\"store_true\", help=\"Recursive traversal of directory\")\n argparser.add_argument(\"sigma\", nargs=\"+\", help=\"Sigma rule file that should be imported\")\n args = argparser.parse_args()\n\n if args.recursive:\n paths = [ p for pathname in args.sigma for p in pathlib.Path(pathname).glob(\"**/*\") if p.is_file() ]\n else:\n paths = [ pathlib.Path(sigma) for sigma in args.sigma ]\n\n misp = PyMISP(args.url, args.key, args.insecure)\n if args.event:\n if hasattr(misp, \"get\"):\n eventid = misp.get(args.event)[\"Event\"][\"id\"]\n else:\n eventid = misp.get_event(args.event)[\"Event\"][\"id\"]\n\n first = True\n\n for sigma in paths:\n if not args.event and (first or not args.same_event):\n eventid = create_new_event(args, misp)\n print(\"Importing Sigma rule {} into MISP event {}...\".format(sigma, eventid, end=\"\"))\n f = sigma.open(\"rt\")\n\n if hasattr(misp, \"add_named_attribute\"):\n misp.add_named_attribute(eventid, \"sigma\", f.read())\n else:\n event = misp.get_event(eventid, pythonify=True)\n event.add_attribute(\"sigma\", f.read())\n misp.update_event(event)\n\n f.close()\n first = False\n\nif __name__ == \"__main__\":\n main()\n", "path": "tools/sigma/sigma2misp.py"}]} | 1,269 | 164 |
gh_patches_debug_5382 | rasdani/github-patches | git_diff | DataDog__dd-trace-py-1268 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
ddtrace not working with new aiobotocore library
### Which version of dd-trace-py are you using?
0.34.0
### Which version of the libraries are you using?
```
aiobotocore==0.12.0
aiohttp==3.6.2
aioitertools==0.5.1
async-timeout==3.0.1
attrs==19.3.0
botocore==1.15.15
chardet==3.0.4
ddtrace==0.34.0
docutils==0.15.2
idna==2.9
jmespath==0.9.5
msgpack==1.0.0
multidict==4.7.5
python-dateutil==2.8.1
six==1.14.0
urllib3==1.25.8
wrapt==1.12.0
yarl==1.4.2
```
### How can we reproduce your problem?
```python
from ddtrace import patch
patch(aiobotocore=True)
```
### What is the result that you get?
```
ImportError: cannot import name 'ClientResponseContentProxy' from 'aiobotocore.endpoint'
raise PatchException('integration not available')
ddtrace.monkey.PatchException: integration not available
```
### What is the result that you expected?
Successfully patched `aiobotocore` library
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ddtrace/contrib/aiobotocore/patch.py`
Content:
```
1 import asyncio
2 from ddtrace.vendor import wrapt
3 from ddtrace import config
4 import aiobotocore.client
5
6 from aiobotocore.endpoint import ClientResponseContentProxy
7
8 from ...constants import ANALYTICS_SAMPLE_RATE_KEY, SPAN_MEASURED_KEY
9 from ...pin import Pin
10 from ...ext import SpanTypes, http, aws
11 from ...compat import PYTHON_VERSION_INFO
12 from ...utils.formats import deep_getattr
13 from ...utils.wrappers import unwrap
14
15
16 ARGS_NAME = ('action', 'params', 'path', 'verb')
17 TRACED_ARGS = ['params', 'path', 'verb']
18
19
20 def patch():
21 if getattr(aiobotocore.client, '_datadog_patch', False):
22 return
23 setattr(aiobotocore.client, '_datadog_patch', True)
24
25 wrapt.wrap_function_wrapper('aiobotocore.client', 'AioBaseClient._make_api_call', _wrapped_api_call)
26 Pin(service='aws', app='aws').onto(aiobotocore.client.AioBaseClient)
27
28
29 def unpatch():
30 if getattr(aiobotocore.client, '_datadog_patch', False):
31 setattr(aiobotocore.client, '_datadog_patch', False)
32 unwrap(aiobotocore.client.AioBaseClient, '_make_api_call')
33
34
35 class WrappedClientResponseContentProxy(wrapt.ObjectProxy):
36 def __init__(self, body, pin, parent_span):
37 super(WrappedClientResponseContentProxy, self).__init__(body)
38 self._self_pin = pin
39 self._self_parent_span = parent_span
40
41 @asyncio.coroutine
42 def read(self, *args, **kwargs):
43 # async read that must be child of the parent span operation
44 operation_name = '{}.read'.format(self._self_parent_span.name)
45
46 with self._self_pin.tracer.start_span(operation_name, child_of=self._self_parent_span) as span:
47 # inherit parent attributes
48 span.resource = self._self_parent_span.resource
49 span.span_type = self._self_parent_span.span_type
50 span.meta = dict(self._self_parent_span.meta)
51 span.metrics = dict(self._self_parent_span.metrics)
52
53 result = yield from self.__wrapped__.read(*args, **kwargs)
54 span.set_tag('Length', len(result))
55
56 return result
57
58 # wrapt doesn't proxy `async with` context managers
59 if PYTHON_VERSION_INFO >= (3, 5, 0):
60 @asyncio.coroutine
61 def __aenter__(self):
62 # call the wrapped method but return the object proxy
63 yield from self.__wrapped__.__aenter__()
64 return self
65
66 @asyncio.coroutine
67 def __aexit__(self, *args, **kwargs):
68 response = yield from self.__wrapped__.__aexit__(*args, **kwargs)
69 return response
70
71
72 @asyncio.coroutine
73 def _wrapped_api_call(original_func, instance, args, kwargs):
74 pin = Pin.get_from(instance)
75 if not pin or not pin.enabled():
76 result = yield from original_func(*args, **kwargs)
77 return result
78
79 endpoint_name = deep_getattr(instance, '_endpoint._endpoint_prefix')
80
81 with pin.tracer.trace('{}.command'.format(endpoint_name),
82 service='{}.{}'.format(pin.service, endpoint_name),
83 span_type=SpanTypes.HTTP) as span:
84 span.set_tag(SPAN_MEASURED_KEY)
85
86 if len(args) > 0:
87 operation = args[0]
88 span.resource = '{}.{}'.format(endpoint_name, operation.lower())
89 else:
90 operation = None
91 span.resource = endpoint_name
92
93 aws.add_span_arg_tags(span, endpoint_name, args, ARGS_NAME, TRACED_ARGS)
94
95 region_name = deep_getattr(instance, 'meta.region_name')
96
97 meta = {
98 'aws.agent': 'aiobotocore',
99 'aws.operation': operation,
100 'aws.region': region_name,
101 }
102 span.set_tags(meta)
103
104 result = yield from original_func(*args, **kwargs)
105
106 body = result.get('Body')
107 if isinstance(body, ClientResponseContentProxy):
108 result['Body'] = WrappedClientResponseContentProxy(body, pin, span)
109
110 response_meta = result['ResponseMetadata']
111 response_headers = response_meta['HTTPHeaders']
112
113 span.set_tag(http.STATUS_CODE, response_meta['HTTPStatusCode'])
114 span.set_tag('retry_attempts', response_meta['RetryAttempts'])
115
116 request_id = response_meta.get('RequestId')
117 if request_id:
118 span.set_tag('aws.requestid', request_id)
119
120 request_id2 = response_headers.get('x-amz-id-2')
121 if request_id2:
122 span.set_tag('aws.requestid2', request_id2)
123
124 # set analytics sample rate
125 span.set_tag(
126 ANALYTICS_SAMPLE_RATE_KEY,
127 config.aiobotocore.get_analytics_sample_rate()
128 )
129
130 return result
131
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ddtrace/contrib/aiobotocore/patch.py b/ddtrace/contrib/aiobotocore/patch.py
--- a/ddtrace/contrib/aiobotocore/patch.py
+++ b/ddtrace/contrib/aiobotocore/patch.py
@@ -3,7 +3,11 @@
from ddtrace import config
import aiobotocore.client
-from aiobotocore.endpoint import ClientResponseContentProxy
+try:
+ from aiobotocore.endpoint import ClientResponseContentProxy
+except ImportError:
+ # aiobotocore>=0.11.0
+ from aiobotocore._endpoint_helpers import ClientResponseContentProxy
from ...constants import ANALYTICS_SAMPLE_RATE_KEY, SPAN_MEASURED_KEY
from ...pin import Pin
| {"golden_diff": "diff --git a/ddtrace/contrib/aiobotocore/patch.py b/ddtrace/contrib/aiobotocore/patch.py\n--- a/ddtrace/contrib/aiobotocore/patch.py\n+++ b/ddtrace/contrib/aiobotocore/patch.py\n@@ -3,7 +3,11 @@\n from ddtrace import config\n import aiobotocore.client\n \n-from aiobotocore.endpoint import ClientResponseContentProxy\n+try:\n+ from aiobotocore.endpoint import ClientResponseContentProxy\n+except ImportError:\n+ # aiobotocore>=0.11.0\n+ from aiobotocore._endpoint_helpers import ClientResponseContentProxy\n \n from ...constants import ANALYTICS_SAMPLE_RATE_KEY, SPAN_MEASURED_KEY\n from ...pin import Pin\n", "issue": "ddtrace not working with new aiobotocore library\n\r\n### Which version of dd-trace-py are you using?\r\n0.34.0\r\n\r\n### Which version of the libraries are you using?\r\n\r\n```\r\naiobotocore==0.12.0\r\naiohttp==3.6.2\r\naioitertools==0.5.1\r\nasync-timeout==3.0.1\r\nattrs==19.3.0\r\nbotocore==1.15.15\r\nchardet==3.0.4\r\nddtrace==0.34.0\r\ndocutils==0.15.2\r\nidna==2.9\r\njmespath==0.9.5\r\nmsgpack==1.0.0\r\nmultidict==4.7.5\r\npython-dateutil==2.8.1\r\nsix==1.14.0\r\nurllib3==1.25.8\r\nwrapt==1.12.0\r\nyarl==1.4.2\r\n```\r\n\r\n### How can we reproduce your problem?\r\n```python\r\n from ddtrace import patch\r\n\r\n patch(aiobotocore=True)\r\n```\r\n\r\n### What is the result that you get?\r\n```\r\nImportError: cannot import name 'ClientResponseContentProxy' from 'aiobotocore.endpoint' \r\n raise PatchException('integration not available')\r\nddtrace.monkey.PatchException: integration not available\r\n```\r\n\r\n### What is result that you expected?\r\nSuccessfully patched `aiobotocore` library\n", "before_files": [{"content": "import asyncio\nfrom ddtrace.vendor import wrapt\nfrom ddtrace import config\nimport aiobotocore.client\n\nfrom aiobotocore.endpoint import ClientResponseContentProxy\n\nfrom ...constants import ANALYTICS_SAMPLE_RATE_KEY, SPAN_MEASURED_KEY\nfrom ...pin import Pin\nfrom ...ext import SpanTypes, http, aws\nfrom ...compat import PYTHON_VERSION_INFO\nfrom ...utils.formats import deep_getattr\nfrom ...utils.wrappers import unwrap\n\n\nARGS_NAME = ('action', 'params', 'path', 'verb')\nTRACED_ARGS = ['params', 'path', 'verb']\n\n\ndef patch():\n if getattr(aiobotocore.client, '_datadog_patch', False):\n return\n setattr(aiobotocore.client, '_datadog_patch', True)\n\n wrapt.wrap_function_wrapper('aiobotocore.client', 'AioBaseClient._make_api_call', _wrapped_api_call)\n Pin(service='aws', app='aws').onto(aiobotocore.client.AioBaseClient)\n\n\ndef unpatch():\n if getattr(aiobotocore.client, '_datadog_patch', False):\n setattr(aiobotocore.client, '_datadog_patch', False)\n unwrap(aiobotocore.client.AioBaseClient, '_make_api_call')\n\n\nclass WrappedClientResponseContentProxy(wrapt.ObjectProxy):\n def __init__(self, body, pin, parent_span):\n super(WrappedClientResponseContentProxy, self).__init__(body)\n self._self_pin = pin\n self._self_parent_span = parent_span\n\n @asyncio.coroutine\n def read(self, *args, **kwargs):\n # async read that must be child of the parent span operation\n operation_name = '{}.read'.format(self._self_parent_span.name)\n\n with self._self_pin.tracer.start_span(operation_name, child_of=self._self_parent_span) as span:\n # inherit parent attributes\n span.resource = self._self_parent_span.resource\n span.span_type = self._self_parent_span.span_type\n span.meta = dict(self._self_parent_span.meta)\n span.metrics = dict(self._self_parent_span.metrics)\n\n result = yield from self.__wrapped__.read(*args, **kwargs)\n 
span.set_tag('Length', len(result))\n\n return result\n\n # wrapt doesn't proxy `async with` context managers\n if PYTHON_VERSION_INFO >= (3, 5, 0):\n @asyncio.coroutine\n def __aenter__(self):\n # call the wrapped method but return the object proxy\n yield from self.__wrapped__.__aenter__()\n return self\n\n @asyncio.coroutine\n def __aexit__(self, *args, **kwargs):\n response = yield from self.__wrapped__.__aexit__(*args, **kwargs)\n return response\n\n\[email protected]\ndef _wrapped_api_call(original_func, instance, args, kwargs):\n pin = Pin.get_from(instance)\n if not pin or not pin.enabled():\n result = yield from original_func(*args, **kwargs)\n return result\n\n endpoint_name = deep_getattr(instance, '_endpoint._endpoint_prefix')\n\n with pin.tracer.trace('{}.command'.format(endpoint_name),\n service='{}.{}'.format(pin.service, endpoint_name),\n span_type=SpanTypes.HTTP) as span:\n span.set_tag(SPAN_MEASURED_KEY)\n\n if len(args) > 0:\n operation = args[0]\n span.resource = '{}.{}'.format(endpoint_name, operation.lower())\n else:\n operation = None\n span.resource = endpoint_name\n\n aws.add_span_arg_tags(span, endpoint_name, args, ARGS_NAME, TRACED_ARGS)\n\n region_name = deep_getattr(instance, 'meta.region_name')\n\n meta = {\n 'aws.agent': 'aiobotocore',\n 'aws.operation': operation,\n 'aws.region': region_name,\n }\n span.set_tags(meta)\n\n result = yield from original_func(*args, **kwargs)\n\n body = result.get('Body')\n if isinstance(body, ClientResponseContentProxy):\n result['Body'] = WrappedClientResponseContentProxy(body, pin, span)\n\n response_meta = result['ResponseMetadata']\n response_headers = response_meta['HTTPHeaders']\n\n span.set_tag(http.STATUS_CODE, response_meta['HTTPStatusCode'])\n span.set_tag('retry_attempts', response_meta['RetryAttempts'])\n\n request_id = response_meta.get('RequestId')\n if request_id:\n span.set_tag('aws.requestid', request_id)\n\n request_id2 = response_headers.get('x-amz-id-2')\n if request_id2:\n span.set_tag('aws.requestid2', request_id2)\n\n # set analytics sample rate\n span.set_tag(\n ANALYTICS_SAMPLE_RATE_KEY,\n config.aiobotocore.get_analytics_sample_rate()\n )\n\n return result\n", "path": "ddtrace/contrib/aiobotocore/patch.py"}], "after_files": [{"content": "import asyncio\nfrom ddtrace.vendor import wrapt\nfrom ddtrace import config\nimport aiobotocore.client\n\ntry:\n from aiobotocore.endpoint import ClientResponseContentProxy\nexcept ImportError:\n # aiobotocore>=0.11.0\n from aiobotocore._endpoint_helpers import ClientResponseContentProxy\n\nfrom ...constants import ANALYTICS_SAMPLE_RATE_KEY, SPAN_MEASURED_KEY\nfrom ...pin import Pin\nfrom ...ext import SpanTypes, http, aws\nfrom ...compat import PYTHON_VERSION_INFO\nfrom ...utils.formats import deep_getattr\nfrom ...utils.wrappers import unwrap\n\n\nARGS_NAME = ('action', 'params', 'path', 'verb')\nTRACED_ARGS = ['params', 'path', 'verb']\n\n\ndef patch():\n if getattr(aiobotocore.client, '_datadog_patch', False):\n return\n setattr(aiobotocore.client, '_datadog_patch', True)\n\n wrapt.wrap_function_wrapper('aiobotocore.client', 'AioBaseClient._make_api_call', _wrapped_api_call)\n Pin(service='aws', app='aws').onto(aiobotocore.client.AioBaseClient)\n\n\ndef unpatch():\n if getattr(aiobotocore.client, '_datadog_patch', False):\n setattr(aiobotocore.client, '_datadog_patch', False)\n unwrap(aiobotocore.client.AioBaseClient, '_make_api_call')\n\n\nclass WrappedClientResponseContentProxy(wrapt.ObjectProxy):\n def __init__(self, body, pin, parent_span):\n 
super(WrappedClientResponseContentProxy, self).__init__(body)\n self._self_pin = pin\n self._self_parent_span = parent_span\n\n @asyncio.coroutine\n def read(self, *args, **kwargs):\n # async read that must be child of the parent span operation\n operation_name = '{}.read'.format(self._self_parent_span.name)\n\n with self._self_pin.tracer.start_span(operation_name, child_of=self._self_parent_span) as span:\n # inherit parent attributes\n span.resource = self._self_parent_span.resource\n span.span_type = self._self_parent_span.span_type\n span.meta = dict(self._self_parent_span.meta)\n span.metrics = dict(self._self_parent_span.metrics)\n\n result = yield from self.__wrapped__.read(*args, **kwargs)\n span.set_tag('Length', len(result))\n\n return result\n\n # wrapt doesn't proxy `async with` context managers\n if PYTHON_VERSION_INFO >= (3, 5, 0):\n @asyncio.coroutine\n def __aenter__(self):\n # call the wrapped method but return the object proxy\n yield from self.__wrapped__.__aenter__()\n return self\n\n @asyncio.coroutine\n def __aexit__(self, *args, **kwargs):\n response = yield from self.__wrapped__.__aexit__(*args, **kwargs)\n return response\n\n\[email protected]\ndef _wrapped_api_call(original_func, instance, args, kwargs):\n pin = Pin.get_from(instance)\n if not pin or not pin.enabled():\n result = yield from original_func(*args, **kwargs)\n return result\n\n endpoint_name = deep_getattr(instance, '_endpoint._endpoint_prefix')\n\n with pin.tracer.trace('{}.command'.format(endpoint_name),\n service='{}.{}'.format(pin.service, endpoint_name),\n span_type=SpanTypes.HTTP) as span:\n span.set_tag(SPAN_MEASURED_KEY)\n\n if len(args) > 0:\n operation = args[0]\n span.resource = '{}.{}'.format(endpoint_name, operation.lower())\n else:\n operation = None\n span.resource = endpoint_name\n\n aws.add_span_arg_tags(span, endpoint_name, args, ARGS_NAME, TRACED_ARGS)\n\n region_name = deep_getattr(instance, 'meta.region_name')\n\n meta = {\n 'aws.agent': 'aiobotocore',\n 'aws.operation': operation,\n 'aws.region': region_name,\n }\n span.set_tags(meta)\n\n result = yield from original_func(*args, **kwargs)\n\n body = result.get('Body')\n if isinstance(body, ClientResponseContentProxy):\n result['Body'] = WrappedClientResponseContentProxy(body, pin, span)\n\n response_meta = result['ResponseMetadata']\n response_headers = response_meta['HTTPHeaders']\n\n span.set_tag(http.STATUS_CODE, response_meta['HTTPStatusCode'])\n span.set_tag('retry_attempts', response_meta['RetryAttempts'])\n\n request_id = response_meta.get('RequestId')\n if request_id:\n span.set_tag('aws.requestid', request_id)\n\n request_id2 = response_headers.get('x-amz-id-2')\n if request_id2:\n span.set_tag('aws.requestid2', request_id2)\n\n # set analytics sample rate\n span.set_tag(\n ANALYTICS_SAMPLE_RATE_KEY,\n config.aiobotocore.get_analytics_sample_rate()\n )\n\n return result\n", "path": "ddtrace/contrib/aiobotocore/patch.py"}]} | 1,938 | 172 |
gh_patches_debug_39363 | rasdani/github-patches | git_diff | deepset-ai__haystack-6564 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
feat: Add embed_meta_fields support to rankers like in Haystack v1
**Is your feature request related to a problem? Please describe.**
We have found that `embed_meta_fields` greatly improves ranking when metadata is relevant to the search. This is a feature we use a lot in v1, and it would be great to have it in v2 as well.
**Describe the solution you'd like**
Add support for `embed_meta_fields` as an init parameter to rankers like `TransformersSimilarityRanker`.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `haystack/components/rankers/transformers_similarity.py`
Content:
```
1 import logging
2 from pathlib import Path
3 from typing import List, Union, Dict, Any, Optional
4
5 from haystack import ComponentError, Document, component, default_to_dict
6 from haystack.lazy_imports import LazyImport
7
8 logger = logging.getLogger(__name__)
9
10
11 with LazyImport(message="Run 'pip install transformers[torch,sentencepiece]'") as torch_and_transformers_import:
12 import torch
13 from transformers import AutoModelForSequenceClassification, AutoTokenizer
14
15
16 @component
17 class TransformersSimilarityRanker:
18 """
19 Ranks Documents based on their similarity to the query.
20 It uses a pre-trained cross-encoder model (from the Hugging Face Hub) to embed the query and the Documents.
21
22 Usage example:
23 ```
24 from haystack import Document
25 from haystack.components.rankers import TransformersSimilarityRanker
26
27 ranker = TransformersSimilarityRanker()
28 docs = [Document(content="Paris"), Document(content="Berlin")]
29 query = "City in Germany"
30 output = ranker.run(query=query, documents=docs)
31 docs = output["documents"]
32 assert len(docs) == 2
33 assert docs[0].content == "Berlin"
34 ```
35 """
36
37 def __init__(
38 self,
39 model_name_or_path: Union[str, Path] = "cross-encoder/ms-marco-MiniLM-L-6-v2",
40 device: str = "cpu",
41 token: Union[bool, str, None] = None,
42 top_k: int = 10,
43 ):
44 """
45 Creates an instance of TransformersSimilarityRanker.
46
47 :param model_name_or_path: The name or path of a pre-trained cross-encoder model
48 from the Hugging Face Hub.
49 :param device: The torch device (for example, cuda:0, cpu, mps) to which you want to limit model inference.
50 :param token: The API token used to download private models from Hugging Face.
51 If this parameter is set to `True`, the token generated when running
52 `transformers-cli login` (stored in ~/.huggingface) is used.
53 :param top_k: The maximum number of Documents to return per query.
54 """
55 torch_and_transformers_import.check()
56
57 self.model_name_or_path = model_name_or_path
58 if top_k <= 0:
59 raise ValueError(f"top_k must be > 0, but got {top_k}")
60 self.top_k = top_k
61 self.device = device
62 self.token = token
63 self.model = None
64 self.tokenizer = None
65
66 def _get_telemetry_data(self) -> Dict[str, Any]:
67 """
68 Data that is sent to Posthog for usage analytics.
69 """
70 return {"model": str(self.model_name_or_path)}
71
72 def warm_up(self):
73 """
74 Warm up the model and tokenizer used for scoring the Documents.
75 """
76 if self.model_name_or_path and not self.model:
77 self.model = AutoModelForSequenceClassification.from_pretrained(self.model_name_or_path, token=self.token)
78 self.model = self.model.to(self.device)
79 self.model.eval()
80 self.tokenizer = AutoTokenizer.from_pretrained(self.model_name_or_path, token=self.token)
81
82 def to_dict(self) -> Dict[str, Any]:
83 """
84 Serialize this component to a dictionary.
85 """
86 return default_to_dict(
87 self,
88 device=self.device,
89 model_name_or_path=self.model_name_or_path,
90 token=self.token if not isinstance(self.token, str) else None, # don't serialize valid tokens
91 top_k=self.top_k,
92 )
93
94 @component.output_types(documents=List[Document])
95 def run(self, query: str, documents: List[Document], top_k: Optional[int] = None):
96 """
97 Returns a list of Documents ranked by their similarity to the given query.
98
99 :param query: Query string.
100 :param documents: List of Documents.
101 :param top_k: The maximum number of Documents you want the Ranker to return.
102 :return: List of Documents sorted by their similarity to the query with the most similar Documents appearing first.
103 """
104 if not documents:
105 return {"documents": []}
106
107 if top_k is None:
108 top_k = self.top_k
109
110 elif top_k <= 0:
111 raise ValueError(f"top_k must be > 0, but got {top_k}")
112
113 # If a model path is provided but the model isn't loaded
114 if self.model_name_or_path and not self.model:
115 raise ComponentError(
116 f"The component {self.__class__.__name__} wasn't warmed up. Run 'warm_up()' before calling 'run()'."
117 )
118
119 query_doc_pairs = [[query, doc.content] for doc in documents]
120 features = self.tokenizer(
121 query_doc_pairs, padding=True, truncation=True, return_tensors="pt"
122 ).to( # type: ignore
123 self.device
124 )
125 with torch.inference_mode():
126 similarity_scores = self.model(**features).logits.squeeze(dim=1) # type: ignore
127
128 _, sorted_indices = torch.sort(similarity_scores, descending=True)
129 ranked_docs = []
130 for sorted_index_tensor in sorted_indices:
131 i = sorted_index_tensor.item()
132 documents[i].score = similarity_scores[i].item()
133 ranked_docs.append(documents[i])
134 return {"documents": ranked_docs[:top_k]}
135
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/haystack/components/rankers/transformers_similarity.py b/haystack/components/rankers/transformers_similarity.py
--- a/haystack/components/rankers/transformers_similarity.py
+++ b/haystack/components/rankers/transformers_similarity.py
@@ -40,6 +40,8 @@
device: str = "cpu",
token: Union[bool, str, None] = None,
top_k: int = 10,
+ meta_fields_to_embed: Optional[List[str]] = None,
+ embedding_separator: str = "\n",
):
"""
Creates an instance of TransformersSimilarityRanker.
@@ -51,6 +53,8 @@
If this parameter is set to `True`, the token generated when running
`transformers-cli login` (stored in ~/.huggingface) is used.
:param top_k: The maximum number of Documents to return per query.
+ :param meta_fields_to_embed: List of meta fields that should be embedded along with the Document content.
+ :param embedding_separator: Separator used to concatenate the meta fields to the Document content.
"""
torch_and_transformers_import.check()
@@ -62,6 +66,8 @@
self.token = token
self.model = None
self.tokenizer = None
+ self.meta_fields_to_embed = meta_fields_to_embed or []
+ self.embedding_separator = embedding_separator
def _get_telemetry_data(self) -> Dict[str, Any]:
"""
@@ -89,6 +95,8 @@
model_name_or_path=self.model_name_or_path,
token=self.token if not isinstance(self.token, str) else None, # don't serialize valid tokens
top_k=self.top_k,
+ meta_fields_to_embed=self.meta_fields_to_embed,
+ embedding_separator=self.embedding_separator,
)
@component.output_types(documents=List[Document])
@@ -116,7 +124,14 @@
f"The component {self.__class__.__name__} wasn't warmed up. Run 'warm_up()' before calling 'run()'."
)
- query_doc_pairs = [[query, doc.content] for doc in documents]
+ query_doc_pairs = []
+ for doc in documents:
+ meta_values_to_embed = [
+ str(doc.meta[key]) for key in self.meta_fields_to_embed if key in doc.meta and doc.meta[key]
+ ]
+ text_to_embed = self.embedding_separator.join(meta_values_to_embed + [doc.content or ""])
+ query_doc_pairs.append([query, text_to_embed])
+
features = self.tokenizer(
query_doc_pairs, padding=True, truncation=True, return_tensors="pt"
).to( # type: ignore
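With the patch above, selected metadata is concatenated in front of the document content before the cross-encoder scores it. The sketch below builds on the class's own docstring example; the `"title"` meta field name is invented for illustration, and the ranker still needs `warm_up()` before `run()`.

```python
# Hypothetical usage of the new parameters; "title" is a made-up meta field.
from haystack import Document
from haystack.components.rankers import TransformersSimilarityRanker

ranker = TransformersSimilarityRanker(
    meta_fields_to_embed=["title"],
    embedding_separator="\n",
)
ranker.warm_up()

docs = [
    Document(content="Paris", meta={"title": "France"}),
    Document(content="Berlin", meta={"title": "Germany"}),
]
output = ranker.run(query="City in Germany", documents=docs)

# Each document is scored against "<meta values>\n<content>", e.g. "Germany\nBerlin",
# so relevant metadata now influences the ranking.
print([d.content for d in output["documents"]])
```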
| {"golden_diff": "diff --git a/haystack/components/rankers/transformers_similarity.py b/haystack/components/rankers/transformers_similarity.py\n--- a/haystack/components/rankers/transformers_similarity.py\n+++ b/haystack/components/rankers/transformers_similarity.py\n@@ -40,6 +40,8 @@\n device: str = \"cpu\",\n token: Union[bool, str, None] = None,\n top_k: int = 10,\n+ meta_fields_to_embed: Optional[List[str]] = None,\n+ embedding_separator: str = \"\\n\",\n ):\n \"\"\"\n Creates an instance of TransformersSimilarityRanker.\n@@ -51,6 +53,8 @@\n If this parameter is set to `True`, the token generated when running\n `transformers-cli login` (stored in ~/.huggingface) is used.\n :param top_k: The maximum number of Documents to return per query.\n+ :param meta_fields_to_embed: List of meta fields that should be embedded along with the Document content.\n+ :param embedding_separator: Separator used to concatenate the meta fields to the Document content.\n \"\"\"\n torch_and_transformers_import.check()\n \n@@ -62,6 +66,8 @@\n self.token = token\n self.model = None\n self.tokenizer = None\n+ self.meta_fields_to_embed = meta_fields_to_embed or []\n+ self.embedding_separator = embedding_separator\n \n def _get_telemetry_data(self) -> Dict[str, Any]:\n \"\"\"\n@@ -89,6 +95,8 @@\n model_name_or_path=self.model_name_or_path,\n token=self.token if not isinstance(self.token, str) else None, # don't serialize valid tokens\n top_k=self.top_k,\n+ meta_fields_to_embed=self.meta_fields_to_embed,\n+ embedding_separator=self.embedding_separator,\n )\n \n @component.output_types(documents=List[Document])\n@@ -116,7 +124,14 @@\n f\"The component {self.__class__.__name__} wasn't warmed up. Run 'warm_up()' before calling 'run()'.\"\n )\n \n- query_doc_pairs = [[query, doc.content] for doc in documents]\n+ query_doc_pairs = []\n+ for doc in documents:\n+ meta_values_to_embed = [\n+ str(doc.meta[key]) for key in self.meta_fields_to_embed if key in doc.meta and doc.meta[key]\n+ ]\n+ text_to_embed = self.embedding_separator.join(meta_values_to_embed + [doc.content or \"\"])\n+ query_doc_pairs.append([query, text_to_embed])\n+\n features = self.tokenizer(\n query_doc_pairs, padding=True, truncation=True, return_tensors=\"pt\"\n ).to( # type: ignore\n", "issue": "feat: Add embed_meta_fields support to rankers like in Haystack v1\n**Is your feature request related to a problem? Please describe.**\r\nWe have found that `embed_meta_fields` greatly improves ranking when metadata is relevant while searching. 
This is a feature that we use a lot from v1 that would be great to also have in v2.\r\n\r\n**Describe the solution you'd like**\r\nAdd support for `embed_meta_fields` as an init parameter to rankers like `TransformersSimilarityRanker`.\r\n\n", "before_files": [{"content": "import logging\nfrom pathlib import Path\nfrom typing import List, Union, Dict, Any, Optional\n\nfrom haystack import ComponentError, Document, component, default_to_dict\nfrom haystack.lazy_imports import LazyImport\n\nlogger = logging.getLogger(__name__)\n\n\nwith LazyImport(message=\"Run 'pip install transformers[torch,sentencepiece]'\") as torch_and_transformers_import:\n import torch\n from transformers import AutoModelForSequenceClassification, AutoTokenizer\n\n\n@component\nclass TransformersSimilarityRanker:\n \"\"\"\n Ranks Documents based on their similarity to the query.\n It uses a pre-trained cross-encoder model (from the Hugging Face Hub) to embed the query and the Documents.\n\n Usage example:\n ```\n from haystack import Document\n from haystack.components.rankers import TransformersSimilarityRanker\n\n ranker = TransformersSimilarityRanker()\n docs = [Document(content=\"Paris\"), Document(content=\"Berlin\")]\n query = \"City in Germany\"\n output = ranker.run(query=query, documents=docs)\n docs = output[\"documents\"]\n assert len(docs) == 2\n assert docs[0].content == \"Berlin\"\n ```\n \"\"\"\n\n def __init__(\n self,\n model_name_or_path: Union[str, Path] = \"cross-encoder/ms-marco-MiniLM-L-6-v2\",\n device: str = \"cpu\",\n token: Union[bool, str, None] = None,\n top_k: int = 10,\n ):\n \"\"\"\n Creates an instance of TransformersSimilarityRanker.\n\n :param model_name_or_path: The name or path of a pre-trained cross-encoder model\n from the Hugging Face Hub.\n :param device: The torch device (for example, cuda:0, cpu, mps) to which you want to limit model inference.\n :param token: The API token used to download private models from Hugging Face.\n If this parameter is set to `True`, the token generated when running\n `transformers-cli login` (stored in ~/.huggingface) is used.\n :param top_k: The maximum number of Documents to return per query.\n \"\"\"\n torch_and_transformers_import.check()\n\n self.model_name_or_path = model_name_or_path\n if top_k <= 0:\n raise ValueError(f\"top_k must be > 0, but got {top_k}\")\n self.top_k = top_k\n self.device = device\n self.token = token\n self.model = None\n self.tokenizer = None\n\n def _get_telemetry_data(self) -> Dict[str, Any]:\n \"\"\"\n Data that is sent to Posthog for usage analytics.\n \"\"\"\n return {\"model\": str(self.model_name_or_path)}\n\n def warm_up(self):\n \"\"\"\n Warm up the model and tokenizer used for scoring the Documents.\n \"\"\"\n if self.model_name_or_path and not self.model:\n self.model = AutoModelForSequenceClassification.from_pretrained(self.model_name_or_path, token=self.token)\n self.model = self.model.to(self.device)\n self.model.eval()\n self.tokenizer = AutoTokenizer.from_pretrained(self.model_name_or_path, token=self.token)\n\n def to_dict(self) -> Dict[str, Any]:\n \"\"\"\n Serialize this component to a dictionary.\n \"\"\"\n return default_to_dict(\n self,\n device=self.device,\n model_name_or_path=self.model_name_or_path,\n token=self.token if not isinstance(self.token, str) else None, # don't serialize valid tokens\n top_k=self.top_k,\n )\n\n @component.output_types(documents=List[Document])\n def run(self, query: str, documents: List[Document], top_k: Optional[int] = None):\n \"\"\"\n Returns a list of Documents 
ranked by their similarity to the given query.\n\n :param query: Query string.\n :param documents: List of Documents.\n :param top_k: The maximum number of Documents you want the Ranker to return.\n :return: List of Documents sorted by their similarity to the query with the most similar Documents appearing first.\n \"\"\"\n if not documents:\n return {\"documents\": []}\n\n if top_k is None:\n top_k = self.top_k\n\n elif top_k <= 0:\n raise ValueError(f\"top_k must be > 0, but got {top_k}\")\n\n # If a model path is provided but the model isn't loaded\n if self.model_name_or_path and not self.model:\n raise ComponentError(\n f\"The component {self.__class__.__name__} wasn't warmed up. Run 'warm_up()' before calling 'run()'.\"\n )\n\n query_doc_pairs = [[query, doc.content] for doc in documents]\n features = self.tokenizer(\n query_doc_pairs, padding=True, truncation=True, return_tensors=\"pt\"\n ).to( # type: ignore\n self.device\n )\n with torch.inference_mode():\n similarity_scores = self.model(**features).logits.squeeze(dim=1) # type: ignore\n\n _, sorted_indices = torch.sort(similarity_scores, descending=True)\n ranked_docs = []\n for sorted_index_tensor in sorted_indices:\n i = sorted_index_tensor.item()\n documents[i].score = similarity_scores[i].item()\n ranked_docs.append(documents[i])\n return {\"documents\": ranked_docs[:top_k]}\n", "path": "haystack/components/rankers/transformers_similarity.py"}], "after_files": [{"content": "import logging\nfrom pathlib import Path\nfrom typing import List, Union, Dict, Any, Optional\n\nfrom haystack import ComponentError, Document, component, default_to_dict\nfrom haystack.lazy_imports import LazyImport\n\nlogger = logging.getLogger(__name__)\n\n\nwith LazyImport(message=\"Run 'pip install transformers[torch,sentencepiece]'\") as torch_and_transformers_import:\n import torch\n from transformers import AutoModelForSequenceClassification, AutoTokenizer\n\n\n@component\nclass TransformersSimilarityRanker:\n \"\"\"\n Ranks Documents based on their similarity to the query.\n It uses a pre-trained cross-encoder model (from the Hugging Face Hub) to embed the query and the Documents.\n\n Usage example:\n ```\n from haystack import Document\n from haystack.components.rankers import TransformersSimilarityRanker\n\n ranker = TransformersSimilarityRanker()\n docs = [Document(content=\"Paris\"), Document(content=\"Berlin\")]\n query = \"City in Germany\"\n output = ranker.run(query=query, documents=docs)\n docs = output[\"documents\"]\n assert len(docs) == 2\n assert docs[0].content == \"Berlin\"\n ```\n \"\"\"\n\n def __init__(\n self,\n model_name_or_path: Union[str, Path] = \"cross-encoder/ms-marco-MiniLM-L-6-v2\",\n device: str = \"cpu\",\n token: Union[bool, str, None] = None,\n top_k: int = 10,\n meta_fields_to_embed: Optional[List[str]] = None,\n embedding_separator: str = \"\\n\",\n ):\n \"\"\"\n Creates an instance of TransformersSimilarityRanker.\n\n :param model_name_or_path: The name or path of a pre-trained cross-encoder model\n from the Hugging Face Hub.\n :param device: The torch device (for example, cuda:0, cpu, mps) to which you want to limit model inference.\n :param token: The API token used to download private models from Hugging Face.\n If this parameter is set to `True`, the token generated when running\n `transformers-cli login` (stored in ~/.huggingface) is used.\n :param top_k: The maximum number of Documents to return per query.\n :param meta_fields_to_embed: List of meta fields that should be embedded along with the Document 
content.\n :param embedding_separator: Separator used to concatenate the meta fields to the Document content.\n \"\"\"\n torch_and_transformers_import.check()\n\n self.model_name_or_path = model_name_or_path\n if top_k <= 0:\n raise ValueError(f\"top_k must be > 0, but got {top_k}\")\n self.top_k = top_k\n self.device = device\n self.token = token\n self.model = None\n self.tokenizer = None\n self.meta_fields_to_embed = meta_fields_to_embed or []\n self.embedding_separator = embedding_separator\n\n def _get_telemetry_data(self) -> Dict[str, Any]:\n \"\"\"\n Data that is sent to Posthog for usage analytics.\n \"\"\"\n return {\"model\": str(self.model_name_or_path)}\n\n def warm_up(self):\n \"\"\"\n Warm up the model and tokenizer used for scoring the Documents.\n \"\"\"\n if self.model_name_or_path and not self.model:\n self.model = AutoModelForSequenceClassification.from_pretrained(self.model_name_or_path, token=self.token)\n self.model = self.model.to(self.device)\n self.model.eval()\n self.tokenizer = AutoTokenizer.from_pretrained(self.model_name_or_path, token=self.token)\n\n def to_dict(self) -> Dict[str, Any]:\n \"\"\"\n Serialize this component to a dictionary.\n \"\"\"\n return default_to_dict(\n self,\n device=self.device,\n model_name_or_path=self.model_name_or_path,\n token=self.token if not isinstance(self.token, str) else None, # don't serialize valid tokens\n top_k=self.top_k,\n meta_fields_to_embed=self.meta_fields_to_embed,\n embedding_separator=self.embedding_separator,\n )\n\n @component.output_types(documents=List[Document])\n def run(self, query: str, documents: List[Document], top_k: Optional[int] = None):\n \"\"\"\n Returns a list of Documents ranked by their similarity to the given query.\n\n :param query: Query string.\n :param documents: List of Documents.\n :param top_k: The maximum number of Documents you want the Ranker to return.\n :return: List of Documents sorted by their similarity to the query with the most similar Documents appearing first.\n \"\"\"\n if not documents:\n return {\"documents\": []}\n\n if top_k is None:\n top_k = self.top_k\n\n elif top_k <= 0:\n raise ValueError(f\"top_k must be > 0, but got {top_k}\")\n\n # If a model path is provided but the model isn't loaded\n if self.model_name_or_path and not self.model:\n raise ComponentError(\n f\"The component {self.__class__.__name__} wasn't warmed up. Run 'warm_up()' before calling 'run()'.\"\n )\n\n query_doc_pairs = []\n for doc in documents:\n meta_values_to_embed = [\n str(doc.meta[key]) for key in self.meta_fields_to_embed if key in doc.meta and doc.meta[key]\n ]\n text_to_embed = self.embedding_separator.join(meta_values_to_embed + [doc.content or \"\"])\n query_doc_pairs.append([query, text_to_embed])\n\n features = self.tokenizer(\n query_doc_pairs, padding=True, truncation=True, return_tensors=\"pt\"\n ).to( # type: ignore\n self.device\n )\n with torch.inference_mode():\n similarity_scores = self.model(**features).logits.squeeze(dim=1) # type: ignore\n\n _, sorted_indices = torch.sort(similarity_scores, descending=True)\n ranked_docs = []\n for sorted_index_tensor in sorted_indices:\n i = sorted_index_tensor.item()\n documents[i].score = similarity_scores[i].item()\n ranked_docs.append(documents[i])\n return {\"documents\": ranked_docs[:top_k]}\n", "path": "haystack/components/rankers/transformers_similarity.py"}]} | 1,824 | 601 |
gh_patches_debug_13449 | rasdani/github-patches | git_diff | cloudtools__troposphere-178 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Cloudwatch Alarm Threshold Type
The parameter type 'Threshold' within Cloudwatch Alarms is currently of type 'integer' whereas the AWS documentations notes this should be a String.
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-cw-alarm.html#cfn-cloudwatch-alarms-threshold
I am hitting an issue when using alarms to check instance health - to monitor StatusCheckFailed I have an implementation which sets Threshold to 0.5 to evaluate a healthcheck of sorts. This works in Cloudformation but fails when I try to use it in the troposphere code.
I think the line 'Threshold': (integer, True), should be 'Threshold': (basestring, True), within cloudwatch.py
Any thoughts?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `troposphere/cloudwatch.py`
Content:
```
1 # Copyright (c) 2013, Mark Peek <[email protected]>
2 # All rights reserved.
3 #
4 # See LICENSE file for full license.
5
6 from . import AWSObject, AWSProperty, Ref
7 from .validators import integer, positive_integer, boolean
8
9
10 class MetricDimension(AWSProperty):
11 props = {
12 'Name': (basestring, True),
13 'Value': (basestring, True),
14 }
15
16
17 class Alarm(AWSObject):
18 resource_type = "AWS::CloudWatch::Alarm"
19
20 props = {
21 'ActionsEnabled': (boolean, False),
22 'AlarmActions': ([basestring, Ref], False),
23 'AlarmDescription': (basestring, False),
24 'AlarmName': (basestring, False),
25 'ComparisonOperator': (basestring, True),
26 'Dimensions': ([MetricDimension], False),
27 'EvaluationPeriods': (positive_integer, True),
28 'InsufficientDataActions': ([basestring, Ref], False),
29 'MetricName': (basestring, True),
30 'Namespace': (basestring, True),
31 'OKActions': ([basestring, Ref], False),
32 'Period': (positive_integer, True),
33 'Statistic': (basestring, True),
34 'Threshold': (integer, True),
35 'Unit': (basestring, False),
36 }
37
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/troposphere/cloudwatch.py b/troposphere/cloudwatch.py
--- a/troposphere/cloudwatch.py
+++ b/troposphere/cloudwatch.py
@@ -4,7 +4,7 @@
# See LICENSE file for full license.
from . import AWSObject, AWSProperty, Ref
-from .validators import integer, positive_integer, boolean
+from .validators import positive_integer, boolean
class MetricDimension(AWSProperty):
@@ -31,6 +31,6 @@
'OKActions': ([basestring, Ref], False),
'Period': (positive_integer, True),
'Statistic': (basestring, True),
- 'Threshold': (integer, True),
+ 'Threshold': (basestring, True),
'Unit': (basestring, False),
}
| {"golden_diff": "diff --git a/troposphere/cloudwatch.py b/troposphere/cloudwatch.py\n--- a/troposphere/cloudwatch.py\n+++ b/troposphere/cloudwatch.py\n@@ -4,7 +4,7 @@\n # See LICENSE file for full license.\n \n from . import AWSObject, AWSProperty, Ref\n-from .validators import integer, positive_integer, boolean\n+from .validators import positive_integer, boolean\n \n \n class MetricDimension(AWSProperty):\n@@ -31,6 +31,6 @@\n 'OKActions': ([basestring, Ref], False),\n 'Period': (positive_integer, True),\n 'Statistic': (basestring, True),\n- 'Threshold': (integer, True),\n+ 'Threshold': (basestring, True),\n 'Unit': (basestring, False),\n }\n", "issue": "Cloudwatch Alarm Threshold Type\nThe parameter type 'Threshold' within Cloudwatch Alarms is currently of type 'integer' whereas the AWS documentations notes this should be a String.\n\nhttp://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-cw-alarm.html#cfn-cloudwatch-alarms-threshold\n\nI am hitting an issue when using alarms to check instance health - to monitor StatusCheckFailed I have an implementation which sets Threshold to 0.5 to evaluate a healthcheck of sorts. This works in Cloudformation but fails when I try to use it in the troposphere code.\n\nI think the line 'Threshold': (integer, True), should be 'Threshold': (basestring, True), within cloudwatch.py\n\nAny thoughts?\n\n", "before_files": [{"content": "# Copyright (c) 2013, Mark Peek <[email protected]>\n# All rights reserved.\n#\n# See LICENSE file for full license.\n\nfrom . import AWSObject, AWSProperty, Ref\nfrom .validators import integer, positive_integer, boolean\n\n\nclass MetricDimension(AWSProperty):\n props = {\n 'Name': (basestring, True),\n 'Value': (basestring, True),\n }\n\n\nclass Alarm(AWSObject):\n resource_type = \"AWS::CloudWatch::Alarm\"\n\n props = {\n 'ActionsEnabled': (boolean, False),\n 'AlarmActions': ([basestring, Ref], False),\n 'AlarmDescription': (basestring, False),\n 'AlarmName': (basestring, False),\n 'ComparisonOperator': (basestring, True),\n 'Dimensions': ([MetricDimension], False),\n 'EvaluationPeriods': (positive_integer, True),\n 'InsufficientDataActions': ([basestring, Ref], False),\n 'MetricName': (basestring, True),\n 'Namespace': (basestring, True),\n 'OKActions': ([basestring, Ref], False),\n 'Period': (positive_integer, True),\n 'Statistic': (basestring, True),\n 'Threshold': (integer, True),\n 'Unit': (basestring, False),\n }\n", "path": "troposphere/cloudwatch.py"}], "after_files": [{"content": "# Copyright (c) 2013, Mark Peek <[email protected]>\n# All rights reserved.\n#\n# See LICENSE file for full license.\n\nfrom . 
import AWSObject, AWSProperty, Ref\nfrom .validators import positive_integer, boolean\n\n\nclass MetricDimension(AWSProperty):\n props = {\n 'Name': (basestring, True),\n 'Value': (basestring, True),\n }\n\n\nclass Alarm(AWSObject):\n resource_type = \"AWS::CloudWatch::Alarm\"\n\n props = {\n 'ActionsEnabled': (boolean, False),\n 'AlarmActions': ([basestring, Ref], False),\n 'AlarmDescription': (basestring, False),\n 'AlarmName': (basestring, False),\n 'ComparisonOperator': (basestring, True),\n 'Dimensions': ([MetricDimension], False),\n 'EvaluationPeriods': (positive_integer, True),\n 'InsufficientDataActions': ([basestring, Ref], False),\n 'MetricName': (basestring, True),\n 'Namespace': (basestring, True),\n 'OKActions': ([basestring, Ref], False),\n 'Period': (positive_integer, True),\n 'Statistic': (basestring, True),\n 'Threshold': (basestring, True),\n 'Unit': (basestring, False),\n }\n", "path": "troposphere/cloudwatch.py"}]} | 763 | 173 |
gh_patches_debug_14449 | rasdani/github-patches | git_diff | scrapy__scrapy-602 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Allow sending HTML emails with scrapy.mail.MailSender
I've patched this locally by changing the `send` method:
```
def send(self, to, subject, body, cc=None, attachs=(), mime='text/plain', _callback=None):
if attachs:
msg = MIMEMultipart()
else:
msg = MIMENonMultipart(*mime.split('/'))
```
But it seems fragile. Any thoughts?
Allow sending HTML emails with scrapy.mail.MailSender
I've patched this locally by changing the `send` method:
```
def send(self, to, subject, body, cc=None, attachs=(), mime='text/plain', _callback=None):
if attachs:
msg = MIMEMultipart()
else:
msg = MIMENonMultipart(*mime.split('/'))
```
But it seems fragile. Any thoughts?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scrapy/mail.py`
Content:
```
1 """
2 Mail sending helpers
3
4 See documentation in docs/topics/email.rst
5 """
6 from cStringIO import StringIO
7 from email.MIMEMultipart import MIMEMultipart
8 from email.MIMENonMultipart import MIMENonMultipart
9 from email.MIMEBase import MIMEBase
10 from email.MIMEText import MIMEText
11 from email.Utils import COMMASPACE, formatdate
12 from email import Encoders
13
14 from twisted.internet import defer, reactor, ssl
15 from twisted.mail.smtp import ESMTPSenderFactory
16
17 from scrapy import log
18
19 class MailSender(object):
20
21 def __init__(self, smtphost='localhost', mailfrom='scrapy@localhost',
22 smtpuser=None, smtppass=None, smtpport=25, smtptls=False, smtpssl=False, debug=False):
23 self.smtphost = smtphost
24 self.smtpport = smtpport
25 self.smtpuser = smtpuser
26 self.smtppass = smtppass
27 self.smtptls = smtptls
28 self.smtpssl = smtpssl
29 self.mailfrom = mailfrom
30 self.debug = debug
31
32 @classmethod
33 def from_settings(cls, settings):
34 return cls(settings['MAIL_HOST'], settings['MAIL_FROM'], settings['MAIL_USER'],
35 settings['MAIL_PASS'], settings.getint('MAIL_PORT'),
36 settings.getbool('MAIL_TLS'), settings.getbool('MAIL_SSL'))
37
38 def send(self, to, subject, body, cc=None, attachs=(), _callback=None):
39 if attachs:
40 msg = MIMEMultipart()
41 else:
42 msg = MIMENonMultipart('text', 'plain')
43 msg['From'] = self.mailfrom
44 msg['To'] = COMMASPACE.join(to)
45 msg['Date'] = formatdate(localtime=True)
46 msg['Subject'] = subject
47 rcpts = to[:]
48 if cc:
49 rcpts.extend(cc)
50 msg['Cc'] = COMMASPACE.join(cc)
51
52 if attachs:
53 msg.attach(MIMEText(body))
54 for attach_name, mimetype, f in attachs:
55 part = MIMEBase(*mimetype.split('/'))
56 part.set_payload(f.read())
57 Encoders.encode_base64(part)
58 part.add_header('Content-Disposition', 'attachment; filename="%s"' \
59 % attach_name)
60 msg.attach(part)
61 else:
62 msg.set_payload(body)
63
64 if _callback:
65 _callback(to=to, subject=subject, body=body, cc=cc, attach=attachs, msg=msg)
66
67 if self.debug:
68 log.msg(format='Debug mail sent OK: To=%(mailto)s Cc=%(mailcc)s Subject="%(mailsubject)s" Attachs=%(mailattachs)d',
69 level=log.DEBUG, mailto=to, mailcc=cc, mailsubject=subject, mailattachs=len(attachs))
70 return
71
72 dfd = self._sendmail(rcpts, msg.as_string())
73 dfd.addCallbacks(self._sent_ok, self._sent_failed,
74 callbackArgs=[to, cc, subject, len(attachs)],
75 errbackArgs=[to, cc, subject, len(attachs)])
76 reactor.addSystemEventTrigger('before', 'shutdown', lambda: dfd)
77 return dfd
78
79 def _sent_ok(self, result, to, cc, subject, nattachs):
80 log.msg(format='Mail sent OK: To=%(mailto)s Cc=%(mailcc)s '
81 'Subject="%(mailsubject)s" Attachs=%(mailattachs)d',
82 mailto=to, mailcc=cc, mailsubject=subject, mailattachs=nattachs)
83
84 def _sent_failed(self, failure, to, cc, subject, nattachs):
85 errstr = str(failure.value)
86 log.msg(format='Unable to send mail: To=%(mailto)s Cc=%(mailcc)s '
87 'Subject="%(mailsubject)s" Attachs=%(mailattachs)d'
88 '- %(mailerr)s',
89 level=log.ERROR, mailto=to, mailcc=cc, mailsubject=subject,
90 mailattachs=nattachs, mailerr=errstr)
91
92 def _sendmail(self, to_addrs, msg):
93 msg = StringIO(msg)
94 d = defer.Deferred()
95 factory = ESMTPSenderFactory(self.smtpuser, self.smtppass, self.mailfrom, \
96 to_addrs, msg, d, heloFallback=True, requireAuthentication=False, \
97 requireTransportSecurity=self.smtptls)
98 factory.noisy = False
99
100 if self.smtpssl:
101 reactor.connectSSL(self.smtphost, self.smtpport, factory, ssl.ClientContextFactory())
102 else:
103 reactor.connectTCP(self.smtphost, self.smtpport, factory)
104
105 return d
106
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/scrapy/mail.py b/scrapy/mail.py
--- a/scrapy/mail.py
+++ b/scrapy/mail.py
@@ -35,11 +35,11 @@
settings['MAIL_PASS'], settings.getint('MAIL_PORT'),
settings.getbool('MAIL_TLS'), settings.getbool('MAIL_SSL'))
- def send(self, to, subject, body, cc=None, attachs=(), _callback=None):
+ def send(self, to, subject, body, cc=None, attachs=(), mimetype='text/plain', _callback=None):
if attachs:
msg = MIMEMultipart()
else:
- msg = MIMENonMultipart('text', 'plain')
+ msg = MIMENonMultipart(*mimetype.split('/', 1))
msg['From'] = self.mailfrom
msg['To'] = COMMASPACE.join(to)
msg['Date'] = formatdate(localtime=True)
| {"golden_diff": "diff --git a/scrapy/mail.py b/scrapy/mail.py\n--- a/scrapy/mail.py\n+++ b/scrapy/mail.py\n@@ -35,11 +35,11 @@\n settings['MAIL_PASS'], settings.getint('MAIL_PORT'),\n settings.getbool('MAIL_TLS'), settings.getbool('MAIL_SSL'))\n \n- def send(self, to, subject, body, cc=None, attachs=(), _callback=None):\n+ def send(self, to, subject, body, cc=None, attachs=(), mimetype='text/plain', _callback=None):\n if attachs:\n msg = MIMEMultipart()\n else:\n- msg = MIMENonMultipart('text', 'plain')\n+ msg = MIMENonMultipart(*mimetype.split('/', 1))\n msg['From'] = self.mailfrom\n msg['To'] = COMMASPACE.join(to)\n msg['Date'] = formatdate(localtime=True)\n", "issue": "Allow sending HTML emails with scrapy.mail.MailSender\nI've patched this locally by changing the `send` method:\n\n```\ndef send(self, to, subject, body, cc=None, attachs=(), mime='text/plain', _callback=None):\n if attachs:\n msg = MIMEMultipart()\n else:\n msg = MIMENonMultipart(*mime.split('/'))\n```\n\nBut it seems fragile. Any thoughts?\n\nAllow sending HTML emails with scrapy.mail.MailSender\nI've patched this locally by changing the `send` method:\n\n```\ndef send(self, to, subject, body, cc=None, attachs=(), mime='text/plain', _callback=None):\n if attachs:\n msg = MIMEMultipart()\n else:\n msg = MIMENonMultipart(*mime.split('/'))\n```\n\nBut it seems fragile. Any thoughts?\n\n", "before_files": [{"content": "\"\"\"\nMail sending helpers\n\nSee documentation in docs/topics/email.rst\n\"\"\"\nfrom cStringIO import StringIO\nfrom email.MIMEMultipart import MIMEMultipart\nfrom email.MIMENonMultipart import MIMENonMultipart\nfrom email.MIMEBase import MIMEBase\nfrom email.MIMEText import MIMEText\nfrom email.Utils import COMMASPACE, formatdate\nfrom email import Encoders\n\nfrom twisted.internet import defer, reactor, ssl\nfrom twisted.mail.smtp import ESMTPSenderFactory\n\nfrom scrapy import log\n\nclass MailSender(object):\n\n def __init__(self, smtphost='localhost', mailfrom='scrapy@localhost',\n smtpuser=None, smtppass=None, smtpport=25, smtptls=False, smtpssl=False, debug=False):\n self.smtphost = smtphost\n self.smtpport = smtpport\n self.smtpuser = smtpuser\n self.smtppass = smtppass\n self.smtptls = smtptls\n self.smtpssl = smtpssl\n self.mailfrom = mailfrom\n self.debug = debug\n\n @classmethod\n def from_settings(cls, settings):\n return cls(settings['MAIL_HOST'], settings['MAIL_FROM'], settings['MAIL_USER'],\n settings['MAIL_PASS'], settings.getint('MAIL_PORT'),\n settings.getbool('MAIL_TLS'), settings.getbool('MAIL_SSL'))\n\n def send(self, to, subject, body, cc=None, attachs=(), _callback=None):\n if attachs:\n msg = MIMEMultipart()\n else:\n msg = MIMENonMultipart('text', 'plain')\n msg['From'] = self.mailfrom\n msg['To'] = COMMASPACE.join(to)\n msg['Date'] = formatdate(localtime=True)\n msg['Subject'] = subject\n rcpts = to[:]\n if cc:\n rcpts.extend(cc)\n msg['Cc'] = COMMASPACE.join(cc)\n\n if attachs:\n msg.attach(MIMEText(body))\n for attach_name, mimetype, f in attachs:\n part = MIMEBase(*mimetype.split('/'))\n part.set_payload(f.read())\n Encoders.encode_base64(part)\n part.add_header('Content-Disposition', 'attachment; filename=\"%s\"' \\\n % attach_name)\n msg.attach(part)\n else:\n msg.set_payload(body)\n\n if _callback:\n _callback(to=to, subject=subject, body=body, cc=cc, attach=attachs, msg=msg)\n\n if self.debug:\n log.msg(format='Debug mail sent OK: To=%(mailto)s Cc=%(mailcc)s Subject=\"%(mailsubject)s\" Attachs=%(mailattachs)d',\n level=log.DEBUG, mailto=to, mailcc=cc, 
mailsubject=subject, mailattachs=len(attachs))\n return\n\n dfd = self._sendmail(rcpts, msg.as_string())\n dfd.addCallbacks(self._sent_ok, self._sent_failed,\n callbackArgs=[to, cc, subject, len(attachs)],\n errbackArgs=[to, cc, subject, len(attachs)])\n reactor.addSystemEventTrigger('before', 'shutdown', lambda: dfd)\n return dfd\n\n def _sent_ok(self, result, to, cc, subject, nattachs):\n log.msg(format='Mail sent OK: To=%(mailto)s Cc=%(mailcc)s '\n 'Subject=\"%(mailsubject)s\" Attachs=%(mailattachs)d',\n mailto=to, mailcc=cc, mailsubject=subject, mailattachs=nattachs)\n\n def _sent_failed(self, failure, to, cc, subject, nattachs):\n errstr = str(failure.value)\n log.msg(format='Unable to send mail: To=%(mailto)s Cc=%(mailcc)s '\n 'Subject=\"%(mailsubject)s\" Attachs=%(mailattachs)d'\n '- %(mailerr)s',\n level=log.ERROR, mailto=to, mailcc=cc, mailsubject=subject,\n mailattachs=nattachs, mailerr=errstr)\n\n def _sendmail(self, to_addrs, msg):\n msg = StringIO(msg)\n d = defer.Deferred()\n factory = ESMTPSenderFactory(self.smtpuser, self.smtppass, self.mailfrom, \\\n to_addrs, msg, d, heloFallback=True, requireAuthentication=False, \\\n requireTransportSecurity=self.smtptls)\n factory.noisy = False\n\n if self.smtpssl:\n reactor.connectSSL(self.smtphost, self.smtpport, factory, ssl.ClientContextFactory())\n else:\n reactor.connectTCP(self.smtphost, self.smtpport, factory)\n\n return d\n", "path": "scrapy/mail.py"}], "after_files": [{"content": "\"\"\"\nMail sending helpers\n\nSee documentation in docs/topics/email.rst\n\"\"\"\nfrom cStringIO import StringIO\nfrom email.MIMEMultipart import MIMEMultipart\nfrom email.MIMENonMultipart import MIMENonMultipart\nfrom email.MIMEBase import MIMEBase\nfrom email.MIMEText import MIMEText\nfrom email.Utils import COMMASPACE, formatdate\nfrom email import Encoders\n\nfrom twisted.internet import defer, reactor, ssl\nfrom twisted.mail.smtp import ESMTPSenderFactory\n\nfrom scrapy import log\n\nclass MailSender(object):\n\n def __init__(self, smtphost='localhost', mailfrom='scrapy@localhost',\n smtpuser=None, smtppass=None, smtpport=25, smtptls=False, smtpssl=False, debug=False):\n self.smtphost = smtphost\n self.smtpport = smtpport\n self.smtpuser = smtpuser\n self.smtppass = smtppass\n self.smtptls = smtptls\n self.smtpssl = smtpssl\n self.mailfrom = mailfrom\n self.debug = debug\n\n @classmethod\n def from_settings(cls, settings):\n return cls(settings['MAIL_HOST'], settings['MAIL_FROM'], settings['MAIL_USER'],\n settings['MAIL_PASS'], settings.getint('MAIL_PORT'),\n settings.getbool('MAIL_TLS'), settings.getbool('MAIL_SSL'))\n\n def send(self, to, subject, body, cc=None, attachs=(), mimetype='text/plain', _callback=None):\n if attachs:\n msg = MIMEMultipart()\n else:\n msg = MIMENonMultipart(*mimetype.split('/', 1))\n msg['From'] = self.mailfrom\n msg['To'] = COMMASPACE.join(to)\n msg['Date'] = formatdate(localtime=True)\n msg['Subject'] = subject\n rcpts = to[:]\n if cc:\n rcpts.extend(cc)\n msg['Cc'] = COMMASPACE.join(cc)\n\n if attachs:\n msg.attach(MIMEText(body))\n for attach_name, mimetype, f in attachs:\n part = MIMEBase(*mimetype.split('/'))\n part.set_payload(f.read())\n Encoders.encode_base64(part)\n part.add_header('Content-Disposition', 'attachment; filename=\"%s\"' \\\n % attach_name)\n msg.attach(part)\n else:\n msg.set_payload(body)\n\n if _callback:\n _callback(to=to, subject=subject, body=body, cc=cc, attach=attachs, msg=msg)\n\n if self.debug:\n log.msg(format='Debug mail sent OK: To=%(mailto)s Cc=%(mailcc)s 
Subject=\"%(mailsubject)s\" Attachs=%(mailattachs)d',\n level=log.DEBUG, mailto=to, mailcc=cc, mailsubject=subject, mailattachs=len(attachs))\n return\n\n dfd = self._sendmail(rcpts, msg.as_string())\n dfd.addCallbacks(self._sent_ok, self._sent_failed,\n callbackArgs=[to, cc, subject, len(attachs)],\n errbackArgs=[to, cc, subject, len(attachs)])\n reactor.addSystemEventTrigger('before', 'shutdown', lambda: dfd)\n return dfd\n\n def _sent_ok(self, result, to, cc, subject, nattachs):\n log.msg(format='Mail sent OK: To=%(mailto)s Cc=%(mailcc)s '\n 'Subject=\"%(mailsubject)s\" Attachs=%(mailattachs)d',\n mailto=to, mailcc=cc, mailsubject=subject, mailattachs=nattachs)\n\n def _sent_failed(self, failure, to, cc, subject, nattachs):\n errstr = str(failure.value)\n log.msg(format='Unable to send mail: To=%(mailto)s Cc=%(mailcc)s '\n 'Subject=\"%(mailsubject)s\" Attachs=%(mailattachs)d'\n '- %(mailerr)s',\n level=log.ERROR, mailto=to, mailcc=cc, mailsubject=subject,\n mailattachs=nattachs, mailerr=errstr)\n\n def _sendmail(self, to_addrs, msg):\n msg = StringIO(msg)\n d = defer.Deferred()\n factory = ESMTPSenderFactory(self.smtpuser, self.smtppass, self.mailfrom, \\\n to_addrs, msg, d, heloFallback=True, requireAuthentication=False, \\\n requireTransportSecurity=self.smtptls)\n factory.noisy = False\n\n if self.smtpssl:\n reactor.connectSSL(self.smtphost, self.smtpport, factory, ssl.ClientContextFactory())\n else:\n reactor.connectTCP(self.smtphost, self.smtpport, factory)\n\n return d\n", "path": "scrapy/mail.py"}]} | 1,690 | 206 |
gh_patches_debug_27014 | rasdani/github-patches | git_diff | chainer__chainer-1106 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
BatchNormalization does not switch the current GPU automatically
As reported in the forum, the current implementation of BatchNormalization link does not switch the current GPU appropriately during computing the running statistics of batches.
It causes confusing error message like `ValueError: Array device must be same as the current device: array device = 1 while current = 0`.
See for example: https://groups.google.com/forum/#!topic/chainer/T-6s3KD-X-U
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `chainer/links/normalization/batch_normalization.py`
Content:
```
1 import numpy
2
3 from chainer.functions.normalization import batch_normalization
4 from chainer import link
5 from chainer import variable
6
7
8 class BatchNormalization(link.Link):
9
10 """Batch normalization layer on outputs of linear or convolution functions.
11
12 This link wraps the :func:`~chainer.functions.batch_normalization` and
13 :func:`~chainer.functions.fixed_batch_normalization` functions.
14
15 It runs in three modes: training mode, fine-tuning mode, and testing mode.
16
17 In training mode, it normalizes the input by *batch statistics*. It also
18 maintains approximated population statistics by moving averages, which can
19 be used for instant evaluation in testing mode.
20
21 In fine-tuning mode, it accumulates the input to compute *population
22 statistics*. In order to correctly compute the population statistics, a
23 user must use this mode to feed mini batches running through whole training
24 dataset.
25
26 In testing mode, it uses pre-computed population statistics to normalize
27 the input variable. The population statistics is approximated if it is
28 computed by training mode, or accurate if it is correctly computed by
29 fine-tuning mode.
30
31 Args:
32 size (int or tuple of ints): Size (or shape) of channel
33 dimensions.
34 decay (float): Decay rate of moving average. It is used on training.
35 eps (float): Epsilon value for numerical stability.
36 dtype (numpy.dtype): Type to use in computing.
37
38 See: `Batch Normalization: Accelerating Deep Network Training by Reducing\
39 Internal Covariate Shift <http://arxiv.org/abs/1502.03167>`_
40
41 .. seealso::
42 :func:`~chainer.functions.batch_normalization`,
43 :func:`~chainer.functions.fixed_batch_normalization`
44
45 Attributes:
46 gamma (~chainer.Variable): Scaling parameter.
47 beta (~chainer.Variable): Shifting parameter.
48 avg_mean (~chainer.Variable): Population mean.
49 avg_var (~chainer.Variable): Population variance.
50 N (int): Count of batches given for fine-tuning.
51 decay (float): Decay rate of moving average. It is used on training.
52 eps (float): Epsilon value for numerical stability. This value is added
53 to the batch variances.
54
55 """
56 def __init__(self, size, decay=0.9, eps=1e-5, dtype=numpy.float32):
57 super(BatchNormalization, self).__init__()
58 self.add_param('gamma', size, dtype=dtype)
59 self.gamma.data.fill(1)
60 self.add_param('beta', size, dtype=dtype)
61 self.beta.data.fill(0)
62 self.add_persistent('avg_mean', numpy.zeros(size, dtype=dtype))
63 self.add_persistent('avg_var', numpy.zeros(size, dtype=dtype))
64 self.add_persistent('N', 0)
65 self.decay = decay
66 self.eps = eps
67
68 def __call__(self, x, test=False, finetune=False):
69 """Invokes the forward propagation of BatchNormalization.
70
71 BatchNormalization accepts additional arguments, which controls three
72 different running mode.
73
74 Args:
75 x (Variable): An input variable.
76 test (bool): If ``True``, BatchNormalization runs in testing mode;
77 it normalizes the input using pre-computed statistics.
78 finetune (bool): If ``True``, BatchNormalization runs in
79 fine-tuning mode; it accumulates the input array to compute
80 population statistics for normalization, and normalizes the
81 input using batch statistics.
82
83 If ``test`` and ``finetune`` are both ``False``, then
84 BatchNormalization runs in training mode; it computes moving averages
85 of mean and variance for evaluation during training, and normalizes the
86 input using batch statistics.
87
88 """
89 use_batch_mean = not test or finetune
90
91 if use_batch_mean:
92 func = batch_normalization.BatchNormalizationFunction(self.eps)
93 ret = func(x, self.gamma, self.beta)
94
95 if finetune:
96 self.N += 1
97 decay = 1. - 1. / self.N
98 else:
99 decay = self.decay
100
101 m = x.data.size // self.gamma.data.size
102 adjust = m / max(m - 1., 1.) # unbiased estimation
103 self.avg_mean *= decay
104 func.mean *= 1 - decay # reuse buffer as a temporary
105 self.avg_mean += func.mean
106 del func.mean
107 self.avg_var *= decay
108 func.var *= (1 - decay) * adjust # reuse buffer as a temporary
109 self.avg_var += func.var
110 del func.var
111 else:
112 mean = variable.Variable(self.avg_mean, volatile='auto')
113 var = variable.Variable(self.avg_var, volatile='auto')
114 ret = batch_normalization.fixed_batch_normalization(
115 x, self.gamma, self.beta, mean, var, self.eps)
116 return ret
117
118 def start_finetuning(self):
119 """Resets the population count for collecting population statistics.
120
121 This method can be skipped if it is the first time to use the
122 fine-tuning mode. Otherwise, this method should be called before
123 starting the fine-tuning mode again.
124
125 """
126 self.N = 0
127
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/chainer/links/normalization/batch_normalization.py b/chainer/links/normalization/batch_normalization.py
--- a/chainer/links/normalization/batch_normalization.py
+++ b/chainer/links/normalization/batch_normalization.py
@@ -1,5 +1,6 @@
import numpy
+from chainer import cuda
from chainer.functions.normalization import batch_normalization
from chainer import link
from chainer import variable
@@ -98,16 +99,17 @@
else:
decay = self.decay
- m = x.data.size // self.gamma.data.size
- adjust = m / max(m - 1., 1.) # unbiased estimation
- self.avg_mean *= decay
- func.mean *= 1 - decay # reuse buffer as a temporary
- self.avg_mean += func.mean
- del func.mean
- self.avg_var *= decay
- func.var *= (1 - decay) * adjust # reuse buffer as a temporary
- self.avg_var += func.var
- del func.var
+ with cuda.get_device(x.data):
+ m = x.data.size // self.gamma.data.size
+ adjust = m / max(m - 1., 1.) # unbiased estimation
+ self.avg_mean *= decay
+ func.mean *= 1 - decay # reuse buffer as a temporary
+ self.avg_mean += func.mean
+ del func.mean
+ self.avg_var *= decay
+ func.var *= (1 - decay) * adjust # reuse buffer as a temporary
+ self.avg_var += func.var
+ del func.var
else:
mean = variable.Variable(self.avg_mean, volatile='auto')
var = variable.Variable(self.avg_var, volatile='auto')
| {"golden_diff": "diff --git a/chainer/links/normalization/batch_normalization.py b/chainer/links/normalization/batch_normalization.py\n--- a/chainer/links/normalization/batch_normalization.py\n+++ b/chainer/links/normalization/batch_normalization.py\n@@ -1,5 +1,6 @@\n import numpy\n \n+from chainer import cuda\n from chainer.functions.normalization import batch_normalization\n from chainer import link\n from chainer import variable\n@@ -98,16 +99,17 @@\n else:\n decay = self.decay\n \n- m = x.data.size // self.gamma.data.size\n- adjust = m / max(m - 1., 1.) # unbiased estimation\n- self.avg_mean *= decay\n- func.mean *= 1 - decay # reuse buffer as a temporary\n- self.avg_mean += func.mean\n- del func.mean\n- self.avg_var *= decay\n- func.var *= (1 - decay) * adjust # reuse buffer as a temporary\n- self.avg_var += func.var\n- del func.var\n+ with cuda.get_device(x.data):\n+ m = x.data.size // self.gamma.data.size\n+ adjust = m / max(m - 1., 1.) # unbiased estimation\n+ self.avg_mean *= decay\n+ func.mean *= 1 - decay # reuse buffer as a temporary\n+ self.avg_mean += func.mean\n+ del func.mean\n+ self.avg_var *= decay\n+ func.var *= (1 - decay) * adjust # reuse buffer as a temporary\n+ self.avg_var += func.var\n+ del func.var\n else:\n mean = variable.Variable(self.avg_mean, volatile='auto')\n var = variable.Variable(self.avg_var, volatile='auto')\n", "issue": "BatchNormalization does not switch the current GPU automatically\nAs reported in the forum, the current implementation of BatchNormalization link does not switch the current GPU appropriately during computing the running statistics of batches.\nIt causes confusing error message like `ValueError: Array device must be same as the current device: array device = 1 while current = 0`.\nSee for example: https://groups.google.com/forum/#!topic/chainer/T-6s3KD-X-U\n\n", "before_files": [{"content": "import numpy\n\nfrom chainer.functions.normalization import batch_normalization\nfrom chainer import link\nfrom chainer import variable\n\n\nclass BatchNormalization(link.Link):\n\n \"\"\"Batch normalization layer on outputs of linear or convolution functions.\n\n This link wraps the :func:`~chainer.functions.batch_normalization` and\n :func:`~chainer.functions.fixed_batch_normalization` functions.\n\n It runs in three modes: training mode, fine-tuning mode, and testing mode.\n\n In training mode, it normalizes the input by *batch statistics*. It also\n maintains approximated population statistics by moving averages, which can\n be used for instant evaluation in testing mode.\n\n In fine-tuning mode, it accumulates the input to compute *population\n statistics*. In order to correctly compute the population statistics, a\n user must use this mode to feed mini batches running through whole training\n dataset.\n\n In testing mode, it uses pre-computed population statistics to normalize\n the input variable. The population statistics is approximated if it is\n computed by training mode, or accurate if it is correctly computed by\n fine-tuning mode.\n\n Args:\n size (int or tuple of ints): Size (or shape) of channel\n dimensions.\n decay (float): Decay rate of moving average. It is used on training.\n eps (float): Epsilon value for numerical stability.\n dtype (numpy.dtype): Type to use in computing.\n\n See: `Batch Normalization: Accelerating Deep Network Training by Reducing\\\n Internal Covariate Shift <http://arxiv.org/abs/1502.03167>`_\n\n .. 
seealso::\n :func:`~chainer.functions.batch_normalization`,\n :func:`~chainer.functions.fixed_batch_normalization`\n\n Attributes:\n gamma (~chainer.Variable): Scaling parameter.\n beta (~chainer.Variable): Shifting parameter.\n avg_mean (~chainer.Variable): Population mean.\n avg_var (~chainer.Variable): Population variance.\n N (int): Count of batches given for fine-tuning.\n decay (float): Decay rate of moving average. It is used on training.\n eps (float): Epsilon value for numerical stability. This value is added\n to the batch variances.\n\n \"\"\"\n def __init__(self, size, decay=0.9, eps=1e-5, dtype=numpy.float32):\n super(BatchNormalization, self).__init__()\n self.add_param('gamma', size, dtype=dtype)\n self.gamma.data.fill(1)\n self.add_param('beta', size, dtype=dtype)\n self.beta.data.fill(0)\n self.add_persistent('avg_mean', numpy.zeros(size, dtype=dtype))\n self.add_persistent('avg_var', numpy.zeros(size, dtype=dtype))\n self.add_persistent('N', 0)\n self.decay = decay\n self.eps = eps\n\n def __call__(self, x, test=False, finetune=False):\n \"\"\"Invokes the forward propagation of BatchNormalization.\n\n BatchNormalization accepts additional arguments, which controls three\n different running mode.\n\n Args:\n x (Variable): An input variable.\n test (bool): If ``True``, BatchNormalization runs in testing mode;\n it normalizes the input using pre-computed statistics.\n finetune (bool): If ``True``, BatchNormalization runs in\n fine-tuning mode; it accumulates the input array to compute\n population statistics for normalization, and normalizes the\n input using batch statistics.\n\n If ``test`` and ``finetune`` are both ``False``, then\n BatchNormalization runs in training mode; it computes moving averages\n of mean and variance for evaluation during training, and normalizes the\n input using batch statistics.\n\n \"\"\"\n use_batch_mean = not test or finetune\n\n if use_batch_mean:\n func = batch_normalization.BatchNormalizationFunction(self.eps)\n ret = func(x, self.gamma, self.beta)\n\n if finetune:\n self.N += 1\n decay = 1. - 1. / self.N\n else:\n decay = self.decay\n\n m = x.data.size // self.gamma.data.size\n adjust = m / max(m - 1., 1.) # unbiased estimation\n self.avg_mean *= decay\n func.mean *= 1 - decay # reuse buffer as a temporary\n self.avg_mean += func.mean\n del func.mean\n self.avg_var *= decay\n func.var *= (1 - decay) * adjust # reuse buffer as a temporary\n self.avg_var += func.var\n del func.var\n else:\n mean = variable.Variable(self.avg_mean, volatile='auto')\n var = variable.Variable(self.avg_var, volatile='auto')\n ret = batch_normalization.fixed_batch_normalization(\n x, self.gamma, self.beta, mean, var, self.eps)\n return ret\n\n def start_finetuning(self):\n \"\"\"Resets the population count for collecting population statistics.\n\n This method can be skipped if it is the first time to use the\n fine-tuning mode. 
Otherwise, this method should be called before\n starting the fine-tuning mode again.\n\n \"\"\"\n self.N = 0\n", "path": "chainer/links/normalization/batch_normalization.py"}], "after_files": [{"content": "import numpy\n\nfrom chainer import cuda\nfrom chainer.functions.normalization import batch_normalization\nfrom chainer import link\nfrom chainer import variable\n\n\nclass BatchNormalization(link.Link):\n\n \"\"\"Batch normalization layer on outputs of linear or convolution functions.\n\n This link wraps the :func:`~chainer.functions.batch_normalization` and\n :func:`~chainer.functions.fixed_batch_normalization` functions.\n\n It runs in three modes: training mode, fine-tuning mode, and testing mode.\n\n In training mode, it normalizes the input by *batch statistics*. It also\n maintains approximated population statistics by moving averages, which can\n be used for instant evaluation in testing mode.\n\n In fine-tuning mode, it accumulates the input to compute *population\n statistics*. In order to correctly compute the population statistics, a\n user must use this mode to feed mini batches running through whole training\n dataset.\n\n In testing mode, it uses pre-computed population statistics to normalize\n the input variable. The population statistics is approximated if it is\n computed by training mode, or accurate if it is correctly computed by\n fine-tuning mode.\n\n Args:\n size (int or tuple of ints): Size (or shape) of channel\n dimensions.\n decay (float): Decay rate of moving average. It is used on training.\n eps (float): Epsilon value for numerical stability.\n dtype (numpy.dtype): Type to use in computing.\n\n See: `Batch Normalization: Accelerating Deep Network Training by Reducing\\\n Internal Covariate Shift <http://arxiv.org/abs/1502.03167>`_\n\n .. seealso::\n :func:`~chainer.functions.batch_normalization`,\n :func:`~chainer.functions.fixed_batch_normalization`\n\n Attributes:\n gamma (~chainer.Variable): Scaling parameter.\n beta (~chainer.Variable): Shifting parameter.\n avg_mean (~chainer.Variable): Population mean.\n avg_var (~chainer.Variable): Population variance.\n N (int): Count of batches given for fine-tuning.\n decay (float): Decay rate of moving average. It is used on training.\n eps (float): Epsilon value for numerical stability. 
This value is added\n to the batch variances.\n\n \"\"\"\n def __init__(self, size, decay=0.9, eps=1e-5, dtype=numpy.float32):\n super(BatchNormalization, self).__init__()\n self.add_param('gamma', size, dtype=dtype)\n self.gamma.data.fill(1)\n self.add_param('beta', size, dtype=dtype)\n self.beta.data.fill(0)\n self.add_persistent('avg_mean', numpy.zeros(size, dtype=dtype))\n self.add_persistent('avg_var', numpy.zeros(size, dtype=dtype))\n self.add_persistent('N', 0)\n self.decay = decay\n self.eps = eps\n\n def __call__(self, x, test=False, finetune=False):\n \"\"\"Invokes the forward propagation of BatchNormalization.\n\n BatchNormalization accepts additional arguments, which controls three\n different running mode.\n\n Args:\n x (Variable): An input variable.\n test (bool): If ``True``, BatchNormalization runs in testing mode;\n it normalizes the input using pre-computed statistics.\n finetune (bool): If ``True``, BatchNormalization runs in\n fine-tuning mode; it accumulates the input array to compute\n population statistics for normalization, and normalizes the\n input using batch statistics.\n\n If ``test`` and ``finetune`` are both ``False``, then\n BatchNormalization runs in training mode; it computes moving averages\n of mean and variance for evaluation during training, and normalizes the\n input using batch statistics.\n\n \"\"\"\n use_batch_mean = not test or finetune\n\n if use_batch_mean:\n func = batch_normalization.BatchNormalizationFunction(self.eps)\n ret = func(x, self.gamma, self.beta)\n\n if finetune:\n self.N += 1\n decay = 1. - 1. / self.N\n else:\n decay = self.decay\n\n with cuda.get_device(x.data):\n m = x.data.size // self.gamma.data.size\n adjust = m / max(m - 1., 1.) # unbiased estimation\n self.avg_mean *= decay\n func.mean *= 1 - decay # reuse buffer as a temporary\n self.avg_mean += func.mean\n del func.mean\n self.avg_var *= decay\n func.var *= (1 - decay) * adjust # reuse buffer as a temporary\n self.avg_var += func.var\n del func.var\n else:\n mean = variable.Variable(self.avg_mean, volatile='auto')\n var = variable.Variable(self.avg_var, volatile='auto')\n ret = batch_normalization.fixed_batch_normalization(\n x, self.gamma, self.beta, mean, var, self.eps)\n return ret\n\n def start_finetuning(self):\n \"\"\"Resets the population count for collecting population statistics.\n\n This method can be skipped if it is the first time to use the\n fine-tuning mode. Otherwise, this method should be called before\n starting the fine-tuning mode again.\n\n \"\"\"\n self.N = 0\n", "path": "chainer/links/normalization/batch_normalization.py"}]} | 1,770 | 397 |
gh_patches_debug_31987 | rasdani/github-patches | git_diff | vas3k__vas3k.club-142 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Visualize battle results
<img width="1113" alt="image" src="https://user-images.githubusercontent.com/19980512/81127819-1f135780-8f48-11ea-83bc-7c56e6e849e4.png">
It would be great to somehow visualize the bar according to the battle results, so that you can see the outcome at a glance instead of counting which side has more arguments and upvotes
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `posts/templatetags/battle.py`
Content:
```
1 from django import template
2 from django.template import loader
3
4 register = template.Library()
5
6
7 battle_stats_template = loader.get_template("posts/widgets/battle_stats.html")
8
9
10 def _is_argument_for_side(comment, side):
11 for_side = comment.metadata and comment.metadata.get("battle", {}).get("side") == side
12
13 return not comment.is_deleted and not comment.reply_to_id and for_side
14
15
16 @register.simple_tag()
17 def battle_stats(post, comments):
18 arguments_for_a = [c for c in comments if _is_argument_for_side(c, "a")]
19 arguments_for_b = [c for c in comments if _is_argument_for_side(c, "b")]
20
21 total_votes_a = sum(c.upvotes for c in arguments_for_a)
22 total_votes_b = sum(c.upvotes for c in arguments_for_b)
23 return battle_stats_template.render({
24 "total_arguments": {
25 "a": len(arguments_for_a),
26 "b": len(arguments_for_b),
27 },
28 "total_votes": {
29 "a": total_votes_a,
30 "b": total_votes_b,
31 },
32 "battle": post,
33 })
34
35
36 @register.filter()
37 def side_name(battle, side_code):
38 if battle and battle.metadata and battle.metadata.get("battle"):
39 return battle.metadata["battle"]["sides"][side_code]["name"]
40 return ""
41
42
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/posts/templatetags/battle.py b/posts/templatetags/battle.py
--- a/posts/templatetags/battle.py
+++ b/posts/templatetags/battle.py
@@ -3,7 +3,6 @@
register = template.Library()
-
battle_stats_template = loader.get_template("posts/widgets/battle_stats.html")
@@ -20,6 +19,7 @@
total_votes_a = sum(c.upvotes for c in arguments_for_a)
total_votes_b = sum(c.upvotes for c in arguments_for_b)
+
return battle_stats_template.render({
"total_arguments": {
"a": len(arguments_for_a),
@@ -29,6 +29,7 @@
"a": total_votes_a,
"b": total_votes_b,
},
+ "graph": graph_percentages(len(arguments_for_a), len(arguments_for_b), total_votes_a, total_votes_b),
"battle": post,
})
@@ -39,3 +40,27 @@
return battle.metadata["battle"]["sides"][side_code]["name"]
return ""
+
+def graph_percentages(a_arguments: int, b_arguments: int, a_votes: int, b_votes: int):
+ """Counts percentages for battle graph
+
+ Percentage for a side is a rounded up arithmetic average of side's argument and upvote percentages
+
+ For each side: (argument % of total arguments amount + vote % of total votes amount ) / 2
+ """
+ percent_a = 0
+ percent_b = 0
+ total_arguments = a_arguments + b_arguments
+ total_upvotes = a_votes + b_votes
+ if total_arguments > 0:
+ argument_percent = 100 / total_arguments
+ percent_a = a_arguments * argument_percent
+ percent_b = b_arguments * argument_percent
+ if total_upvotes > 0:
+ upvote_percent = 100 / total_upvotes
+ percent_a = (percent_a + a_votes * upvote_percent) / 2
+ percent_b = (percent_b + b_votes * upvote_percent) / 2
+ return {
+ "percent_a": round(percent_a),
+ "percent_b": round(percent_b)
+ }
| {"golden_diff": "diff --git a/posts/templatetags/battle.py b/posts/templatetags/battle.py\n--- a/posts/templatetags/battle.py\n+++ b/posts/templatetags/battle.py\n@@ -3,7 +3,6 @@\n \n register = template.Library()\n \n-\n battle_stats_template = loader.get_template(\"posts/widgets/battle_stats.html\")\n \n \n@@ -20,6 +19,7 @@\n \n total_votes_a = sum(c.upvotes for c in arguments_for_a)\n total_votes_b = sum(c.upvotes for c in arguments_for_b)\n+\n return battle_stats_template.render({\n \"total_arguments\": {\n \"a\": len(arguments_for_a),\n@@ -29,6 +29,7 @@\n \"a\": total_votes_a,\n \"b\": total_votes_b,\n },\n+ \"graph\": graph_percentages(len(arguments_for_a), len(arguments_for_b), total_votes_a, total_votes_b),\n \"battle\": post,\n })\n \n@@ -39,3 +40,27 @@\n return battle.metadata[\"battle\"][\"sides\"][side_code][\"name\"]\n return \"\"\n \n+\n+def graph_percentages(a_arguments: int, b_arguments: int, a_votes: int, b_votes: int):\n+ \"\"\"Counts percentages for battle graph\n+\n+ Percentage for a side is a rounded up arithmetic average of side's argument and upvote percentages\n+\n+ For each side: (argument % of total arguments amount + vote % of total votes amount ) / 2\n+ \"\"\"\n+ percent_a = 0\n+ percent_b = 0\n+ total_arguments = a_arguments + b_arguments\n+ total_upvotes = a_votes + b_votes\n+ if total_arguments > 0:\n+ argument_percent = 100 / total_arguments\n+ percent_a = a_arguments * argument_percent\n+ percent_b = b_arguments * argument_percent\n+ if total_upvotes > 0:\n+ upvote_percent = 100 / total_upvotes\n+ percent_a = (percent_a + a_votes * upvote_percent) / 2\n+ percent_b = (percent_b + b_votes * upvote_percent) / 2\n+ return {\n+ \"percent_a\": round(percent_a),\n+ \"percent_b\": round(percent_b)\n+ }\n", "issue": "\u0412\u0438\u0437\u0443\u0430\u043b\u0438\u0437\u0438\u0440\u043e\u0432\u0430\u0442\u044c \u0440\u0435\u0437\u0443\u043b\u044c\u0442\u0430\u0442\u044b \u0431\u0430\u0442\u043b\u043e\u0432\n<img width=\"1113\" alt=\"image\" src=\"https://user-images.githubusercontent.com/19980512/81127819-1f135780-8f48-11ea-83bc-7c56e6e849e4.png\">\r\n\r\n\u0411\u044b\u043b\u043e \u0431\u044b \u043a\u0440\u0443\u0442\u043e \u043a\u0430\u043a-\u0442\u043e \u0432\u0438\u0437\u0443\u0430\u043b\u0438\u0437\u0438\u0440\u043e\u0432\u0430\u0442\u044c \u043f\u043e\u043b\u043e\u0441\u043a\u0443 \u0432 \u0437\u0430\u0432\u0438\u0441\u0438\u043c\u043e\u0441\u0442\u0438 \u043e\u0442 \u0440\u0435\u0437\u0443\u043b\u044c\u0442\u0430\u0442\u043e\u0432 \u0431\u0430\u0442\u0442\u043b\u0430. 
\u0427\u0442\u043e\u0431\u044b \u0442\u0430\u043a\u043e\u0439 \u0440\u0430\u0437 \u2014 \u0438 \u0443\u0432\u0438\u0434\u0435\u043b \u0440\u0435\u0437\u0443\u043b\u044c\u0442\u0430\u0442, \u0430 \u043d\u0435 \u043f\u043e\u0434\u0441\u0447\u0438\u0442\u044b\u0432\u0430\u043b, \u0433\u0434\u0435 \u0431\u043e\u043b\u044c\u0448\u0435 \u0430\u0440\u0433\u0443\u043c\u0435\u043d\u0442\u043e\u0432 \u0438 \u043f\u043b\u044e\u0441\u043e\u0432\n", "before_files": [{"content": "from django import template\nfrom django.template import loader\n\nregister = template.Library()\n\n\nbattle_stats_template = loader.get_template(\"posts/widgets/battle_stats.html\")\n\n\ndef _is_argument_for_side(comment, side):\n for_side = comment.metadata and comment.metadata.get(\"battle\", {}).get(\"side\") == side\n\n return not comment.is_deleted and not comment.reply_to_id and for_side\n\n\[email protected]_tag()\ndef battle_stats(post, comments):\n arguments_for_a = [c for c in comments if _is_argument_for_side(c, \"a\")]\n arguments_for_b = [c for c in comments if _is_argument_for_side(c, \"b\")]\n\n total_votes_a = sum(c.upvotes for c in arguments_for_a)\n total_votes_b = sum(c.upvotes for c in arguments_for_b)\n return battle_stats_template.render({\n \"total_arguments\": {\n \"a\": len(arguments_for_a),\n \"b\": len(arguments_for_b),\n },\n \"total_votes\": {\n \"a\": total_votes_a,\n \"b\": total_votes_b,\n },\n \"battle\": post,\n })\n\n\[email protected]()\ndef side_name(battle, side_code):\n if battle and battle.metadata and battle.metadata.get(\"battle\"):\n return battle.metadata[\"battle\"][\"sides\"][side_code][\"name\"]\n return \"\"\n\n", "path": "posts/templatetags/battle.py"}], "after_files": [{"content": "from django import template\nfrom django.template import loader\n\nregister = template.Library()\n\nbattle_stats_template = loader.get_template(\"posts/widgets/battle_stats.html\")\n\n\ndef _is_argument_for_side(comment, side):\n for_side = comment.metadata and comment.metadata.get(\"battle\", {}).get(\"side\") == side\n\n return not comment.is_deleted and not comment.reply_to_id and for_side\n\n\[email protected]_tag()\ndef battle_stats(post, comments):\n arguments_for_a = [c for c in comments if _is_argument_for_side(c, \"a\")]\n arguments_for_b = [c for c in comments if _is_argument_for_side(c, \"b\")]\n\n total_votes_a = sum(c.upvotes for c in arguments_for_a)\n total_votes_b = sum(c.upvotes for c in arguments_for_b)\n\n return battle_stats_template.render({\n \"total_arguments\": {\n \"a\": len(arguments_for_a),\n \"b\": len(arguments_for_b),\n },\n \"total_votes\": {\n \"a\": total_votes_a,\n \"b\": total_votes_b,\n },\n \"graph\": graph_percentages(len(arguments_for_a), len(arguments_for_b), total_votes_a, total_votes_b),\n \"battle\": post,\n })\n\n\[email protected]()\ndef side_name(battle, side_code):\n if battle and battle.metadata and battle.metadata.get(\"battle\"):\n return battle.metadata[\"battle\"][\"sides\"][side_code][\"name\"]\n return \"\"\n\n\ndef graph_percentages(a_arguments: int, b_arguments: int, a_votes: int, b_votes: int):\n \"\"\"Counts percentages for battle graph\n\n Percentage for a side is a rounded up arithmetic average of side's argument and upvote percentages\n\n For each side: (argument % of total arguments amount + vote % of total votes amount ) / 2\n \"\"\"\n percent_a = 0\n percent_b = 0\n total_arguments = a_arguments + b_arguments\n total_upvotes = a_votes + b_votes\n if total_arguments > 0:\n argument_percent = 100 / total_arguments\n percent_a = a_arguments 
* argument_percent\n percent_b = b_arguments * argument_percent\n if total_upvotes > 0:\n upvote_percent = 100 / total_upvotes\n percent_a = (percent_a + a_votes * upvote_percent) / 2\n percent_b = (percent_b + b_votes * upvote_percent) / 2\n return {\n \"percent_a\": round(percent_a),\n \"percent_b\": round(percent_b)\n }\n", "path": "posts/templatetags/battle.py"}]} | 767 | 508 |
gh_patches_debug_2773 | rasdani/github-patches | git_diff | Netflix__lemur-3166 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
DNS Providers list doesn't show type
In the DNS Providers list, there is a column for the provider type, but it's always empty.
Looking at the code, and the API requests, the issue seems to be with the dns_providers API call, which returns the list of all providers.
There should be a providerType value in the JSON, but it's not there.
A quick glance at the `DnsProvidersNestedOutputSchema` shows that the value is called `providerType`, but in the database the field is called `provider_type` similar to `api_endpoint` which is called `api_endpoint` in the OutputSchema, so I guess, it's probably just mislabeled in the OutputSchema, and needs to be adjusted there, and maybe in the angular template.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lemur/dns_providers/schemas.py`
Content:
```
1 from marshmallow import fields
2
3 from lemur.common.fields import ArrowDateTime
4 from lemur.common.schema import LemurInputSchema, LemurOutputSchema
5
6
7 class DnsProvidersNestedOutputSchema(LemurOutputSchema):
8 __envelope__ = False
9 id = fields.Integer()
10 name = fields.String()
11 providerType = fields.String()
12 description = fields.String()
13 credentials = fields.String()
14 api_endpoint = fields.String()
15 date_created = ArrowDateTime()
16
17
18 class DnsProvidersNestedInputSchema(LemurInputSchema):
19 __envelope__ = False
20 name = fields.String()
21 description = fields.String()
22 provider_type = fields.Dict()
23
24
25 dns_provider_output_schema = DnsProvidersNestedOutputSchema()
26
27 dns_provider_input_schema = DnsProvidersNestedInputSchema()
28
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/lemur/dns_providers/schemas.py b/lemur/dns_providers/schemas.py
--- a/lemur/dns_providers/schemas.py
+++ b/lemur/dns_providers/schemas.py
@@ -8,7 +8,7 @@
__envelope__ = False
id = fields.Integer()
name = fields.String()
- providerType = fields.String()
+ provider_type = fields.String()
description = fields.String()
credentials = fields.String()
api_endpoint = fields.String()
| {"golden_diff": "diff --git a/lemur/dns_providers/schemas.py b/lemur/dns_providers/schemas.py\n--- a/lemur/dns_providers/schemas.py\n+++ b/lemur/dns_providers/schemas.py\n@@ -8,7 +8,7 @@\n __envelope__ = False\n id = fields.Integer()\n name = fields.String()\n- providerType = fields.String()\n+ provider_type = fields.String()\n description = fields.String()\n credentials = fields.String()\n api_endpoint = fields.String()\n", "issue": "DNS Providers list doesn't show type\nIn the DNS Providers list, there is a column for the provider type, but it's always empty.\r\n\r\nLooking at the code, and the API requests, the issue seems to be with the dns_providers API call, which returns the list of all providers.\r\n\r\nThere should be a providerType value in the JSON, but it's not there. \r\n\r\nA quick glance at the `DnsProvidersNestedOutputSchema` shows that the value is called `providerType`, but in the database the field is called `provider_type` similar to `api_endpoint` which is called `api_endpoint` in the OutputSchema, so I guess, it's probably just mislabeled in the OutputSchema, and needs to be adjusted there, and maybe in the angular template.\n", "before_files": [{"content": "from marshmallow import fields\n\nfrom lemur.common.fields import ArrowDateTime\nfrom lemur.common.schema import LemurInputSchema, LemurOutputSchema\n\n\nclass DnsProvidersNestedOutputSchema(LemurOutputSchema):\n __envelope__ = False\n id = fields.Integer()\n name = fields.String()\n providerType = fields.String()\n description = fields.String()\n credentials = fields.String()\n api_endpoint = fields.String()\n date_created = ArrowDateTime()\n\n\nclass DnsProvidersNestedInputSchema(LemurInputSchema):\n __envelope__ = False\n name = fields.String()\n description = fields.String()\n provider_type = fields.Dict()\n\n\ndns_provider_output_schema = DnsProvidersNestedOutputSchema()\n\ndns_provider_input_schema = DnsProvidersNestedInputSchema()\n", "path": "lemur/dns_providers/schemas.py"}], "after_files": [{"content": "from marshmallow import fields\n\nfrom lemur.common.fields import ArrowDateTime\nfrom lemur.common.schema import LemurInputSchema, LemurOutputSchema\n\n\nclass DnsProvidersNestedOutputSchema(LemurOutputSchema):\n __envelope__ = False\n id = fields.Integer()\n name = fields.String()\n provider_type = fields.String()\n description = fields.String()\n credentials = fields.String()\n api_endpoint = fields.String()\n date_created = ArrowDateTime()\n\n\nclass DnsProvidersNestedInputSchema(LemurInputSchema):\n __envelope__ = False\n name = fields.String()\n description = fields.String()\n provider_type = fields.Dict()\n\n\ndns_provider_output_schema = DnsProvidersNestedOutputSchema()\n\ndns_provider_input_schema = DnsProvidersNestedInputSchema()\n", "path": "lemur/dns_providers/schemas.py"}]} | 641 | 117 |
gh_patches_debug_26623 | rasdani/github-patches | git_diff | e-valuation__EvaP-1291 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Django 2.1 upgrade
https://docs.djangoproject.com/en/2.1/releases/2.1/
There is a guide for upgrading: https://docs.djangoproject.com/en/2.1/howto/upgrade-version/
Basically
* Read the release notes
* update dependencies
* run tests with `python -Wa` and solve deprecation warnings
* put the new django into the requirements
* run tests, fix failures if any
* run tests with `python -Wa` and solve deprecation warnings again
* if there was any new feature in the release notes that might help us, use it
Also, we need to check the installed Python version on production; Django 2.1 supports Python 3.5 and newer.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `evap/evaluation/migrations/0062_replace_textanswer_id_with_uuid.py`
Content:
```
1 # Generated by Django 1.11.3 on 2017-07-03 18:31
2
3 from django.db import migrations, models
4 import uuid
5
6
7 def fill_textanswer_uuid(apps, schema_editor):
8 db_alias = schema_editor.connection.alias
9 TextAnswer = apps.get_model('evaluation', 'TextAnswer')
10 for obj in TextAnswer.objects.using(db_alias).all():
11 obj.uuid = uuid.uuid4()
12 obj.save()
13
14
15 class Migration(migrations.Migration):
16
17 dependencies = [
18 ('evaluation', '0061_editor_review_reminder_template'),
19 ]
20
21 # Based on
22 # https://gist.github.com/smcoll/8bb867dc631433c01fd0
23
24 operations = [
25 migrations.AddField(
26 model_name='textanswer',
27 name='uuid',
28 field=models.UUIDField(null=True),
29 ),
30 migrations.RunPython(fill_textanswer_uuid, migrations.RunPython.noop),
31 migrations.AlterField(
32 model_name='textanswer',
33 name='uuid',
34 field=models.UUIDField(primary_key=False, default=uuid.uuid4, serialize=False, editable=False),
35 ),
36 # rename the old id field before deleting it at the end of the
37 # migration for compatibility with the sqlite driver
38 migrations.RenameField(
39 model_name='textanswer',
40 old_name='id',
41 new_name='old_id'
42 ),
43 migrations.RenameField(
44 model_name='textanswer',
45 old_name='uuid',
46 new_name='id'
47 ),
48 migrations.AlterField(
49 model_name='textanswer',
50 name='id',
51 field=models.UUIDField(primary_key=True, default=uuid.uuid4, serialize=False, editable=False),
52 ),
53 migrations.AlterModelOptions(
54 name='textanswer',
55 options={'ordering': ['id'], 'verbose_name': 'text answer', 'verbose_name_plural': 'text answers'},
56 ),
57 migrations.RemoveField(model_name='textanswer', name='old_id'),
58 ]
59
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/evap/evaluation/migrations/0062_replace_textanswer_id_with_uuid.py b/evap/evaluation/migrations/0062_replace_textanswer_id_with_uuid.py
--- a/evap/evaluation/migrations/0062_replace_textanswer_id_with_uuid.py
+++ b/evap/evaluation/migrations/0062_replace_textanswer_id_with_uuid.py
@@ -33,13 +33,12 @@
name='uuid',
field=models.UUIDField(primary_key=False, default=uuid.uuid4, serialize=False, editable=False),
),
- # rename the old id field before deleting it at the end of the
- # migration for compatibility with the sqlite driver
- migrations.RenameField(
- model_name='textanswer',
- old_name='id',
- new_name='old_id'
- ),
+ # this causes trouble with sqlite. We have two open bug reports with django for this, see
+ # https://code.djangoproject.com/ticket/29790 and https://code.djangoproject.com/ticket/28541
+ # We can not get this to work with sqlite and postgres right now and we want django2.1, we only
+ # support postgres here. For sqlite, you need to rename the field here and move the RemoveField to
+ # the end.
+ migrations.RemoveField(model_name='textanswer', name='id'),
migrations.RenameField(
model_name='textanswer',
old_name='uuid',
@@ -54,5 +53,4 @@
name='textanswer',
options={'ordering': ['id'], 'verbose_name': 'text answer', 'verbose_name_plural': 'text answers'},
),
- migrations.RemoveField(model_name='textanswer', name='old_id'),
]
| {"golden_diff": "diff --git a/evap/evaluation/migrations/0062_replace_textanswer_id_with_uuid.py b/evap/evaluation/migrations/0062_replace_textanswer_id_with_uuid.py\n--- a/evap/evaluation/migrations/0062_replace_textanswer_id_with_uuid.py\n+++ b/evap/evaluation/migrations/0062_replace_textanswer_id_with_uuid.py\n@@ -33,13 +33,12 @@\n name='uuid',\n field=models.UUIDField(primary_key=False, default=uuid.uuid4, serialize=False, editable=False),\n ),\n- # rename the old id field before deleting it at the end of the\n- # migration for compatibility with the sqlite driver\n- migrations.RenameField(\n- model_name='textanswer',\n- old_name='id',\n- new_name='old_id'\n- ),\n+ # this causes trouble with sqlite. We have two open bug reports with django for this, see\n+ # https://code.djangoproject.com/ticket/29790 and https://code.djangoproject.com/ticket/28541\n+ # We can not get this to work with sqlite and postgres right now and we want django2.1, we only\n+ # support postgres here. For sqlite, you need to rename the field here and move the RemoveField to\n+ # the end.\n+ migrations.RemoveField(model_name='textanswer', name='id'),\n migrations.RenameField(\n model_name='textanswer',\n old_name='uuid',\n@@ -54,5 +53,4 @@\n name='textanswer',\n options={'ordering': ['id'], 'verbose_name': 'text answer', 'verbose_name_plural': 'text answers'},\n ),\n- migrations.RemoveField(model_name='textanswer', name='old_id'),\n ]\n", "issue": "Django 2.1 upgrade\nhttps://docs.djangoproject.com/en/2.1/releases/2.1/\r\n\r\nThere is a guide for upgrading: https://docs.djangoproject.com/en/2.1/howto/upgrade-version/\r\n\r\nBasically\r\n* Read the release notes\r\n* update dependencies\r\n* run tests with `python -Wa` and solve deprecation warnings\r\n* put the new django into the requirements\r\n* run tests, fix failures if any\r\n* run tests with `python -Wa` and solve deprecation warnings again\r\n* if there was any new feature in the release notes that might help us, use it\r\n\r\nalso, we need to check the installed python version on production, django 2.1 supports python 3.5 and newer.\n", "before_files": [{"content": "# Generated by Django 1.11.3 on 2017-07-03 18:31\n\nfrom django.db import migrations, models\nimport uuid\n\n\ndef fill_textanswer_uuid(apps, schema_editor):\n db_alias = schema_editor.connection.alias\n TextAnswer = apps.get_model('evaluation', 'TextAnswer')\n for obj in TextAnswer.objects.using(db_alias).all():\n obj.uuid = uuid.uuid4()\n obj.save()\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('evaluation', '0061_editor_review_reminder_template'),\n ]\n\n # Based on\n # https://gist.github.com/smcoll/8bb867dc631433c01fd0\n\n operations = [\n migrations.AddField(\n model_name='textanswer',\n name='uuid',\n field=models.UUIDField(null=True),\n ),\n migrations.RunPython(fill_textanswer_uuid, migrations.RunPython.noop),\n migrations.AlterField(\n model_name='textanswer',\n name='uuid',\n field=models.UUIDField(primary_key=False, default=uuid.uuid4, serialize=False, editable=False),\n ),\n # rename the old id field before deleting it at the end of the\n # migration for compatibility with the sqlite driver\n migrations.RenameField(\n model_name='textanswer',\n old_name='id',\n new_name='old_id'\n ),\n migrations.RenameField(\n model_name='textanswer',\n old_name='uuid',\n new_name='id'\n ),\n migrations.AlterField(\n model_name='textanswer',\n name='id',\n field=models.UUIDField(primary_key=True, default=uuid.uuid4, serialize=False, editable=False),\n ),\n 
migrations.AlterModelOptions(\n name='textanswer',\n options={'ordering': ['id'], 'verbose_name': 'text answer', 'verbose_name_plural': 'text answers'},\n ),\n migrations.RemoveField(model_name='textanswer', name='old_id'),\n ]\n", "path": "evap/evaluation/migrations/0062_replace_textanswer_id_with_uuid.py"}], "after_files": [{"content": "# Generated by Django 1.11.3 on 2017-07-03 18:31\n\nfrom django.db import migrations, models\nimport uuid\n\n\ndef fill_textanswer_uuid(apps, schema_editor):\n db_alias = schema_editor.connection.alias\n TextAnswer = apps.get_model('evaluation', 'TextAnswer')\n for obj in TextAnswer.objects.using(db_alias).all():\n obj.uuid = uuid.uuid4()\n obj.save()\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('evaluation', '0061_editor_review_reminder_template'),\n ]\n\n # Based on\n # https://gist.github.com/smcoll/8bb867dc631433c01fd0\n\n operations = [\n migrations.AddField(\n model_name='textanswer',\n name='uuid',\n field=models.UUIDField(null=True),\n ),\n migrations.RunPython(fill_textanswer_uuid, migrations.RunPython.noop),\n migrations.AlterField(\n model_name='textanswer',\n name='uuid',\n field=models.UUIDField(primary_key=False, default=uuid.uuid4, serialize=False, editable=False),\n ),\n # this causes trouble with sqlite. We have two open bug reports with django for this, see\n # https://code.djangoproject.com/ticket/29790 and https://code.djangoproject.com/ticket/28541\n # We can not get this to work with sqlite and postgres right now and we want django2.1, we only\n # support postgres here. For sqlite, you need to rename the field here and move the RemoveField to\n # the end.\n migrations.RemoveField(model_name='textanswer', name='id'),\n migrations.RenameField(\n model_name='textanswer',\n old_name='uuid',\n new_name='id'\n ),\n migrations.AlterField(\n model_name='textanswer',\n name='id',\n field=models.UUIDField(primary_key=True, default=uuid.uuid4, serialize=False, editable=False),\n ),\n migrations.AlterModelOptions(\n name='textanswer',\n options={'ordering': ['id'], 'verbose_name': 'text answer', 'verbose_name_plural': 'text answers'},\n ),\n ]\n", "path": "evap/evaluation/migrations/0062_replace_textanswer_id_with_uuid.py"}]} | 968 | 398 |
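For reference, the `python -Wa` step in the upgrade checklist above can also be approximated from inside a test run; the snippet below is an illustrative sketch of that idea, not code from EvaP:

```python
import warnings

# Roughly what `python -Wa` does: show every warning, including the
# DeprecationWarnings hidden by default, so the checklist can act on them.
warnings.simplefilter("always")
```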
gh_patches_debug_2234 | rasdani/github-patches | git_diff | redis__redis-py-2674 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Canceling async Redis command leaves connection open, in unsafe state for future commands
**Version**: 4.5.3
**Platform**: Python 3.8 on Ubuntu / Generic
**Description**: Canceling async Redis command leaves connection open, in unsafe state for future commands
This is a reincarnation of #2624, which was closed with an incomplete fix and a possibly unreliable test case. This is the same issue that recently got a lot of attention due to the ChatGPT outage, and that remains only partially fixed. The cancellation shielding introduced in #2641 addressed only the cancellation of Redis pipeline operations, but non-pipelined ops are still vulnerable.
This time I am attaching a script that reproduces the issue reliably without relying on an external, slow Redis server. This is achieved by inserting a small TCP socket proxy between the Redis client and local Redis server, with the proxy introducing a 0.1 second delay when sending data in either direction.
Running this script with a Redis server running locally on port 6379 produces the following output:
```
$ python redis_cancel.py
managed to cancel the task, connection is left open with unread response
bar: b'foo'
ping: False
foo: b'PONG'
```
```python
import asyncio
from redis.asyncio import Redis
async def pipe(reader: asyncio.StreamReader, writer: asyncio.StreamWriter, delay: float, name=''):
while data := await reader.read(1000):
# print(name, 'received:', data)
await asyncio.sleep(delay)
writer.write(data)
await writer.drain()
class DelayProxy:
def __init__(self, addr, redis_addr, delay: float):
self.addr = addr
self.redis_addr = redis_addr
self.delay = delay
async def start(self):
server = await asyncio.start_server(self.handle, *self.addr)
asyncio.create_task(server.serve_forever())
async def handle(self, reader, writer):
# establish connection to redis
redis_reader, redis_writer = await asyncio.open_connection(*self.redis_addr)
pipe1 = asyncio.create_task(pipe(reader, redis_writer, self.delay, 'to redis:'))
pipe2 = asyncio.create_task(pipe(redis_reader, writer, self.delay, 'from redis:'))
await asyncio.gather(pipe1, pipe2)
async def main():
# create a tcp socket proxy that relays data to Redis and back, inserting 0.1 seconds of delay
dp = DelayProxy(addr=('localhost', 6380), redis_addr=('localhost', 6379), delay=0.1)
await dp.start()
# note that we connect to proxy, rather than to Redis directly
async with Redis(host='localhost', port=6380) as r:
await r.set('foo', 'foo')
await r.set('bar', 'bar')
t = asyncio.create_task(r.get('foo'))
await asyncio.sleep(0.050)
t.cancel()
try:
await t
print('try again, we did not cancel the task in time')
except asyncio.CancelledError:
print('managed to cancel the task, connection is left open with unread response')
print('bar:', await r.get('bar'))
print('ping:', await r.ping())
print('foo:', await r.get('foo'))
if __name__ == '__main__':
asyncio.run(main())
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2 from setuptools import find_packages, setup
3
4 setup(
5 name="redis",
6 description="Python client for Redis database and key-value store",
7 long_description=open("README.md").read().strip(),
8 long_description_content_type="text/markdown",
9 keywords=["Redis", "key-value store", "database"],
10 license="MIT",
11 version="4.5.3",
12 packages=find_packages(
13 include=[
14 "redis",
15 "redis.asyncio",
16 "redis.commands",
17 "redis.commands.bf",
18 "redis.commands.json",
19 "redis.commands.search",
20 "redis.commands.timeseries",
21 "redis.commands.graph",
22 ]
23 ),
24 url="https://github.com/redis/redis-py",
25 project_urls={
26 "Documentation": "https://redis.readthedocs.io/en/latest/",
27 "Changes": "https://github.com/redis/redis-py/releases",
28 "Code": "https://github.com/redis/redis-py",
29 "Issue tracker": "https://github.com/redis/redis-py/issues",
30 },
31 author="Redis Inc.",
32 author_email="[email protected]",
33 python_requires=">=3.7",
34 install_requires=[
35 'importlib-metadata >= 1.0; python_version < "3.8"',
36 'typing-extensions; python_version<"3.8"',
37 'async-timeout>=4.0.2; python_version<="3.11.2"',
38 ],
39 classifiers=[
40 "Development Status :: 5 - Production/Stable",
41 "Environment :: Console",
42 "Intended Audience :: Developers",
43 "License :: OSI Approved :: MIT License",
44 "Operating System :: OS Independent",
45 "Programming Language :: Python",
46 "Programming Language :: Python :: 3",
47 "Programming Language :: Python :: 3 :: Only",
48 "Programming Language :: Python :: 3.7",
49 "Programming Language :: Python :: 3.8",
50 "Programming Language :: Python :: 3.9",
51 "Programming Language :: Python :: 3.10",
52 "Programming Language :: Python :: 3.11",
53 "Programming Language :: Python :: Implementation :: CPython",
54 "Programming Language :: Python :: Implementation :: PyPy",
55 ],
56 extras_require={
57 "hiredis": ["hiredis>=1.0.0"],
58 "ocsp": ["cryptography>=36.0.1", "pyopenssl==20.0.1", "requests>=2.26.0"],
59 },
60 )
61
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -8,7 +8,7 @@
long_description_content_type="text/markdown",
keywords=["Redis", "key-value store", "database"],
license="MIT",
- version="4.5.3",
+ version="4.5.4",
packages=find_packages(
include=[
"redis",
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -8,7 +8,7 @@\n long_description_content_type=\"text/markdown\",\n keywords=[\"Redis\", \"key-value store\", \"database\"],\n license=\"MIT\",\n- version=\"4.5.3\",\n+ version=\"4.5.4\",\n packages=find_packages(\n include=[\n \"redis\",\n", "issue": "Canceling async Redis command leaves connection open, in unsafe state for future commands\n\r\n**Version**: 4.5.3\r\n\r\n**Platform**: Python 3.8 on Ubuntu / Generic\r\n\r\n**Description**: Canceling async Redis command leaves connection open, in unsafe state for future commands\r\n\r\nThis is a reincarnation of #2624, which was closed with an incomplete fix and a possibly unreliable test case. This is the same issue that recently got a lot of attention due to ChatGPT outage, and that remains only partially fixed. The cancellation shielding introduced in #2641 addressed only the cancellation of Redis pipeline operation, but non-pipelined ops are still vulnerable.\r\n\r\nThis time I am attaching a script that reproduces the issue reliably without relying on an external, slow Redis server. This is achieved by inserting a small TCP socket proxy between the Redis client and local Redis server, with the proxy introducing a 0.1 second delay when sending data in either direction. \r\n\r\nRunning this script with a Redis server running locally on port 6379 produces the following output:\r\n```\r\n$ python redis_cancel.py \r\nmanaged to cancel the task, connection is left open with unread response\r\nbar: b'foo'\r\nping: False\r\nfoo: b'PONG'\r\n```\r\n\r\n```python\r\nimport asyncio\r\n\r\nfrom redis.asyncio import Redis\r\n\r\n\r\nasync def pipe(reader: asyncio.StreamReader, writer: asyncio.StreamWriter, delay: float, name=''):\r\n while data := await reader.read(1000):\r\n # print(name, 'received:', data)\r\n await asyncio.sleep(delay)\r\n writer.write(data)\r\n await writer.drain()\r\n\r\n\r\nclass DelayProxy:\r\n\r\n def __init__(self, addr, redis_addr, delay: float):\r\n self.addr = addr\r\n self.redis_addr = redis_addr\r\n self.delay = delay\r\n\r\n async def start(self):\r\n server = await asyncio.start_server(self.handle, *self.addr)\r\n asyncio.create_task(server.serve_forever())\r\n\r\n async def handle(self, reader, writer):\r\n # establish connection to redis\r\n redis_reader, redis_writer = await asyncio.open_connection(*self.redis_addr)\r\n pipe1 = asyncio.create_task(pipe(reader, redis_writer, self.delay, 'to redis:'))\r\n pipe2 = asyncio.create_task(pipe(redis_reader, writer, self.delay, 'from redis:'))\r\n await asyncio.gather(pipe1, pipe2)\r\n\r\n\r\nasync def main():\r\n\r\n # create a tcp socket proxy that relays data to Redis and back, inserting 0.1 seconds of delay\r\n dp = DelayProxy(addr=('localhost', 6380), redis_addr=('localhost', 6379), delay=0.1)\r\n await dp.start()\r\n\r\n # note that we connect to proxy, rather than to Redis directly\r\n async with Redis(host='localhost', port=6380) as r:\r\n\r\n await r.set('foo', 'foo')\r\n await r.set('bar', 'bar')\r\n\r\n t = asyncio.create_task(r.get('foo'))\r\n await asyncio.sleep(0.050)\r\n t.cancel()\r\n try:\r\n await t\r\n print('try again, we did not cancel the task in time')\r\n except asyncio.CancelledError:\r\n print('managed to cancel the task, connection is left open with unread response')\r\n\r\n print('bar:', await r.get('bar'))\r\n print('ping:', await r.ping())\r\n print('foo:', await r.get('foo'))\r\n\r\nif __name__ == '__main__':\r\n asyncio.run(main())\r\n```\r\n\n", 
"before_files": [{"content": "#!/usr/bin/env python\nfrom setuptools import find_packages, setup\n\nsetup(\n name=\"redis\",\n description=\"Python client for Redis database and key-value store\",\n long_description=open(\"README.md\").read().strip(),\n long_description_content_type=\"text/markdown\",\n keywords=[\"Redis\", \"key-value store\", \"database\"],\n license=\"MIT\",\n version=\"4.5.3\",\n packages=find_packages(\n include=[\n \"redis\",\n \"redis.asyncio\",\n \"redis.commands\",\n \"redis.commands.bf\",\n \"redis.commands.json\",\n \"redis.commands.search\",\n \"redis.commands.timeseries\",\n \"redis.commands.graph\",\n ]\n ),\n url=\"https://github.com/redis/redis-py\",\n project_urls={\n \"Documentation\": \"https://redis.readthedocs.io/en/latest/\",\n \"Changes\": \"https://github.com/redis/redis-py/releases\",\n \"Code\": \"https://github.com/redis/redis-py\",\n \"Issue tracker\": \"https://github.com/redis/redis-py/issues\",\n },\n author=\"Redis Inc.\",\n author_email=\"[email protected]\",\n python_requires=\">=3.7\",\n install_requires=[\n 'importlib-metadata >= 1.0; python_version < \"3.8\"',\n 'typing-extensions; python_version<\"3.8\"',\n 'async-timeout>=4.0.2; python_version<=\"3.11.2\"',\n ],\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Environment :: Console\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: MIT License\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3 :: Only\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Programming Language :: Python :: 3.11\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Programming Language :: Python :: Implementation :: PyPy\",\n ],\n extras_require={\n \"hiredis\": [\"hiredis>=1.0.0\"],\n \"ocsp\": [\"cryptography>=36.0.1\", \"pyopenssl==20.0.1\", \"requests>=2.26.0\"],\n },\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\nfrom setuptools import find_packages, setup\n\nsetup(\n name=\"redis\",\n description=\"Python client for Redis database and key-value store\",\n long_description=open(\"README.md\").read().strip(),\n long_description_content_type=\"text/markdown\",\n keywords=[\"Redis\", \"key-value store\", \"database\"],\n license=\"MIT\",\n version=\"4.5.4\",\n packages=find_packages(\n include=[\n \"redis\",\n \"redis.asyncio\",\n \"redis.commands\",\n \"redis.commands.bf\",\n \"redis.commands.json\",\n \"redis.commands.search\",\n \"redis.commands.timeseries\",\n \"redis.commands.graph\",\n ]\n ),\n url=\"https://github.com/redis/redis-py\",\n project_urls={\n \"Documentation\": \"https://redis.readthedocs.io/en/latest/\",\n \"Changes\": \"https://github.com/redis/redis-py/releases\",\n \"Code\": \"https://github.com/redis/redis-py\",\n \"Issue tracker\": \"https://github.com/redis/redis-py/issues\",\n },\n author=\"Redis Inc.\",\n author_email=\"[email protected]\",\n python_requires=\">=3.7\",\n install_requires=[\n 'importlib-metadata >= 1.0; python_version < \"3.8\"',\n 'typing-extensions; python_version<\"3.8\"',\n 'async-timeout>=4.0.2; python_version<=\"3.11.2\"',\n ],\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Environment :: Console\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: MIT License\",\n 
\"Operating System :: OS Independent\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3 :: Only\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Programming Language :: Python :: 3.11\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Programming Language :: Python :: Implementation :: PyPy\",\n ],\n extras_require={\n \"hiredis\": [\"hiredis>=1.0.0\"],\n \"ocsp\": [\"cryptography>=36.0.1\", \"pyopenssl==20.0.1\", \"requests>=2.26.0\"],\n },\n)\n", "path": "setup.py"}]} | 1,644 | 90 |
gh_patches_debug_3335 | rasdani/github-patches | git_diff | mkdocs__mkdocs-413 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`mkdocs new` will overwrite an existing index.md file without warning.
If you run the command `mkdocs new /path/to/dir` and `/path/to/dir/docs/index.md` already exists, it will be replaced with our placeholder.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mkdocs/new.py`
Content:
```
1 # coding: utf-8
2 from __future__ import print_function
3 import os
4
5 config_text = 'site_name: My Docs\n'
6 index_text = """# Welcome to MkDocs
7
8 For full documentation visit [mkdocs.org](http://mkdocs.org).
9
10 ## Commands
11
12 * `mkdocs new [dir-name]` - Create a new project.
13 * `mkdocs serve` - Start the live-reloading docs server.
14 * `mkdocs build` - Build the documentation site.
15 * `mkdocs help` - Print this help message.
16
17 ## Project layout
18
19 mkdocs.yml # The configuration file.
20 docs/
21 index.md # The documentation homepage.
22 ... # Other markdown pages, images and other files.
23 """
24
25
26 def new(args, options):
27 if len(args) != 1:
28 print("Usage 'mkdocs new [directory-name]'")
29 return
30
31 output_dir = args[0]
32
33 docs_dir = os.path.join(output_dir, 'docs')
34 config_path = os.path.join(output_dir, 'mkdocs.yml')
35 index_path = os.path.join(docs_dir, 'index.md')
36
37 if os.path.exists(config_path):
38 print('Project already exists.')
39 return
40
41 if not os.path.exists(output_dir):
42 print('Creating project directory: %s' % output_dir)
43 os.mkdir(output_dir)
44
45 print('Writing config file: %s' % config_path)
46 open(config_path, 'w').write(config_text)
47
48 print('Writing initial docs: %s' % index_path)
49 if not os.path.exists(docs_dir):
50 os.mkdir(docs_dir)
51 open(index_path, 'w').write(index_text)
52
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mkdocs/new.py b/mkdocs/new.py
--- a/mkdocs/new.py
+++ b/mkdocs/new.py
@@ -45,6 +45,9 @@
print('Writing config file: %s' % config_path)
open(config_path, 'w').write(config_text)
+ if os.path.exists(index_path):
+ return
+
print('Writing initial docs: %s' % index_path)
if not os.path.exists(docs_dir):
os.mkdir(docs_dir)
| {"golden_diff": "diff --git a/mkdocs/new.py b/mkdocs/new.py\n--- a/mkdocs/new.py\n+++ b/mkdocs/new.py\n@@ -45,6 +45,9 @@\n print('Writing config file: %s' % config_path)\n open(config_path, 'w').write(config_text)\n \n+ if os.path.exists(index_path):\n+ return\n+\n print('Writing initial docs: %s' % index_path)\n if not os.path.exists(docs_dir):\n os.mkdir(docs_dir)\n", "issue": "`mkdocs new` will overwrite an existing index.md file without warning.\nIf you run the command: `mkdocs new /path/to/dir` and `/path/to/dir/docs/index.md` already exists it will be replaced with out placeholder.\n\n", "before_files": [{"content": "# coding: utf-8\nfrom __future__ import print_function\nimport os\n\nconfig_text = 'site_name: My Docs\\n'\nindex_text = \"\"\"# Welcome to MkDocs\n\nFor full documentation visit [mkdocs.org](http://mkdocs.org).\n\n## Commands\n\n* `mkdocs new [dir-name]` - Create a new project.\n* `mkdocs serve` - Start the live-reloading docs server.\n* `mkdocs build` - Build the documentation site.\n* `mkdocs help` - Print this help message.\n\n## Project layout\n\n mkdocs.yml # The configuration file.\n docs/\n index.md # The documentation homepage.\n ... # Other markdown pages, images and other files.\n\"\"\"\n\n\ndef new(args, options):\n if len(args) != 1:\n print(\"Usage 'mkdocs new [directory-name]'\")\n return\n\n output_dir = args[0]\n\n docs_dir = os.path.join(output_dir, 'docs')\n config_path = os.path.join(output_dir, 'mkdocs.yml')\n index_path = os.path.join(docs_dir, 'index.md')\n\n if os.path.exists(config_path):\n print('Project already exists.')\n return\n\n if not os.path.exists(output_dir):\n print('Creating project directory: %s' % output_dir)\n os.mkdir(output_dir)\n\n print('Writing config file: %s' % config_path)\n open(config_path, 'w').write(config_text)\n\n print('Writing initial docs: %s' % index_path)\n if not os.path.exists(docs_dir):\n os.mkdir(docs_dir)\n open(index_path, 'w').write(index_text)\n", "path": "mkdocs/new.py"}], "after_files": [{"content": "# coding: utf-8\nfrom __future__ import print_function\nimport os\n\nconfig_text = 'site_name: My Docs\\n'\nindex_text = \"\"\"# Welcome to MkDocs\n\nFor full documentation visit [mkdocs.org](http://mkdocs.org).\n\n## Commands\n\n* `mkdocs new [dir-name]` - Create a new project.\n* `mkdocs serve` - Start the live-reloading docs server.\n* `mkdocs build` - Build the documentation site.\n* `mkdocs help` - Print this help message.\n\n## Project layout\n\n mkdocs.yml # The configuration file.\n docs/\n index.md # The documentation homepage.\n ... # Other markdown pages, images and other files.\n\"\"\"\n\n\ndef new(args, options):\n if len(args) != 1:\n print(\"Usage 'mkdocs new [directory-name]'\")\n return\n\n output_dir = args[0]\n\n docs_dir = os.path.join(output_dir, 'docs')\n config_path = os.path.join(output_dir, 'mkdocs.yml')\n index_path = os.path.join(docs_dir, 'index.md')\n\n if os.path.exists(config_path):\n print('Project already exists.')\n return\n\n if not os.path.exists(output_dir):\n print('Creating project directory: %s' % output_dir)\n os.mkdir(output_dir)\n\n print('Writing config file: %s' % config_path)\n open(config_path, 'w').write(config_text)\n\n if os.path.exists(index_path):\n return\n\n print('Writing initial docs: %s' % index_path)\n if not os.path.exists(docs_dir):\n os.mkdir(docs_dir)\n open(index_path, 'w').write(index_text)\n", "path": "mkdocs/new.py"}]} | 763 | 114 |
gh_patches_debug_25234 | rasdani/github-patches | git_diff | cupy__cupy-1947 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`cupy.allclose` does not support comparison of complex-number arrays
As title. The reason is that in this line of the ufunc helper
https://github.com/cupy/cupy/blob/bb99716ffee178368ec71c875ace0553053cadc2/cupy/logic/comparison.py#L6
only `float16` (`e`), `float32` (`f`), and `float64` (`d`) arrays are included. Note that the NumPy counterpart does support comparing complex arrays using the same comparison logic, and I believe this can be easily patched by adding another ufunc helper for `complex64` and `complex128` arrays. PR to follow.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `cupy/logic/comparison.py`
Content:
```
1 from cupy import core
2
3
4 _is_close = core.create_ufunc(
5 'cupy_is_close',
6 ('eeee?->?', 'ffff?->?', 'dddd?->?'),
7 '''
8 bool equal_nan = in4;
9 if (isfinite(in0) && isfinite(in1)) {
10 out0 = fabs(in0 - in1) <= in3 + in2 * fabs(in1);
11 } else if (equal_nan) {
12 out0 = (in0 == in1) || (isnan(in0) && isnan(in1));
13 } else {
14 out0 = (in0 == in1);
15 }
16 '''
17 )
18
19
20 def allclose(a, b, rtol=1.e-5, atol=1.e-8, equal_nan=False):
21 """Returns True if two arrays are element-wise equal within a tolerance.
22
23 Two values in ``a`` and ``b`` are considiered equal when the following
24 equation is satisfied.
25
26 .. math::
27
28 |a - b| \\le \\mathrm{atol} + \\mathrm{rtol} |b|
29
30 Args:
31 a (cupy.ndarray): Input array to compare.
32 b (cupy.ndarray): Input array to compare.
33 rtol (float): The relative tolerance.
34 atol (float): The absolute tolerance.
35 equal_nan (bool): If ``True``, NaN's in ``a`` will be considered equal
36 to NaN's in ``b``.
37
38 Returns:
39 bool: if ``True``, two arrays are element-wise equal within a
40 tolerance.
41
42 .. seealso:: :func:`numpy.allclose`
43
44 """
45 res = isclose(a, b, rtol=rtol, atol=atol, equal_nan=equal_nan).all()
46 return bool(res)
47
48
49 def isclose(a, b, rtol=1.e-5, atol=1.e-8, equal_nan=False):
50 """Returns a boolean array where two arrays are equal within a tolerance.
51
52 Two values in ``a`` and ``b`` are considiered equal when the following
53 equation is satisfied.
54
55 .. math::
56
57 |a - b| \\le \\mathrm{atol} + \\mathrm{rtol} |b|
58
59 Args:
60 a (cupy.ndarray): Input array to compare.
61 b (cupy.ndarray): Input array to compare.
62 rtol (float): The relative tolerance.
63 atol (float): The absolute tolerance.
64 equal_nan (bool): If ``True``, NaN's in ``a`` will be considered equal
65 to NaN's in ``b``.
66
67 Returns:
68 cupy.ndarray: A boolean array storing where ``a`` and ``b`` are equal.
69
70 .. seealso:: :func:`numpy.isclose`
71
72 """
73 return _is_close(a, b, rtol, atol, equal_nan)
74
75
76 # TODO(okuta): Implement array_equal
77
78
79 # TODO(okuta): Implement array_equiv
80
81
82 greater = core.greater
83
84
85 greater_equal = core.greater_equal
86
87
88 less = core.less
89
90
91 less_equal = core.less_equal
92
93
94 equal = core.equal
95
96
97 not_equal = core.not_equal
98
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/cupy/logic/comparison.py b/cupy/logic/comparison.py
--- a/cupy/logic/comparison.py
+++ b/cupy/logic/comparison.py
@@ -1,3 +1,6 @@
+import numpy
+
+import cupy
from cupy import core
@@ -16,6 +19,24 @@
'''
)
+# Note that in cupy/core/include/cupy/complex.cuh, we already got isfinite and
+# isnan working for complex numbers, so just replace fabs above by abs (from
+# thrust) and we are ready to go
+_is_close_complex = core.create_ufunc(
+ 'cupy_is_close_complex',
+ ('FFff?->?', 'DDdd?->?'),
+ '''
+ bool equal_nan = in4;
+ if (isfinite(in0) && isfinite(in1)) {
+ out0 = abs(in0 - in1) <= in3 + in2 * abs(in1);
+ } else if (equal_nan) {
+ out0 = (in0 == in1) || (isnan(in0) && isnan(in1));
+ } else {
+ out0 = (in0 == in1);
+ }
+ '''
+)
+
def allclose(a, b, rtol=1.e-5, atol=1.e-8, equal_nan=False):
"""Returns True if two arrays are element-wise equal within a tolerance.
@@ -70,7 +91,13 @@
.. seealso:: :func:`numpy.isclose`
"""
- return _is_close(a, b, rtol, atol, equal_nan)
+ a = cupy.asanyarray(a)
+ b = cupy.asanyarray(b)
+ if (a.dtype in [numpy.complex64, numpy.complex128]) or \
+ (b.dtype in [numpy.complex64, numpy.complex128]):
+ return _is_close_complex(a, b, rtol, atol, equal_nan)
+ else:
+ return _is_close(a, b, rtol, atol, equal_nan)
# TODO(okuta): Implement array_equal
| {"golden_diff": "diff --git a/cupy/logic/comparison.py b/cupy/logic/comparison.py\n--- a/cupy/logic/comparison.py\n+++ b/cupy/logic/comparison.py\n@@ -1,3 +1,6 @@\n+import numpy\n+\n+import cupy\n from cupy import core\n \n \n@@ -16,6 +19,24 @@\n '''\n )\n \n+# Note that in cupy/core/include/cupy/complex.cuh, we already got isfinite and\n+# isnan working for complex numbers, so just replace fabs above by abs (from\n+# thrust) and we are ready to go\n+_is_close_complex = core.create_ufunc(\n+ 'cupy_is_close_complex',\n+ ('FFff?->?', 'DDdd?->?'),\n+ '''\n+ bool equal_nan = in4;\n+ if (isfinite(in0) && isfinite(in1)) {\n+ out0 = abs(in0 - in1) <= in3 + in2 * abs(in1);\n+ } else if (equal_nan) {\n+ out0 = (in0 == in1) || (isnan(in0) && isnan(in1));\n+ } else {\n+ out0 = (in0 == in1);\n+ }\n+ '''\n+)\n+\n \n def allclose(a, b, rtol=1.e-5, atol=1.e-8, equal_nan=False):\n \"\"\"Returns True if two arrays are element-wise equal within a tolerance.\n@@ -70,7 +91,13 @@\n .. seealso:: :func:`numpy.isclose`\n \n \"\"\"\n- return _is_close(a, b, rtol, atol, equal_nan)\n+ a = cupy.asanyarray(a)\n+ b = cupy.asanyarray(b)\n+ if (a.dtype in [numpy.complex64, numpy.complex128]) or \\\n+ (b.dtype in [numpy.complex64, numpy.complex128]):\n+ return _is_close_complex(a, b, rtol, atol, equal_nan)\n+ else:\n+ return _is_close(a, b, rtol, atol, equal_nan)\n \n \n # TODO(okuta): Implement array_equal\n", "issue": "`cupy.allclose` does not support comparison of complex-number arrays\nAs title. The reason is that in this line of the ufunc helper \r\nhttps://github.com/cupy/cupy/blob/bb99716ffee178368ec71c875ace0553053cadc2/cupy/logic/comparison.py#L6\r\nonly `float16` (`e`), `float32` (`f`), and `float64` (`d`) arrays are included. Note that the NumPy counterpart does support comparing complex arrays using the same comparison logic, and I believe this can be easily patched by adding another ufunc helper for `complex64` and `complex128` arrays. PR to follow.\n", "before_files": [{"content": "from cupy import core\n\n\n_is_close = core.create_ufunc(\n 'cupy_is_close',\n ('eeee?->?', 'ffff?->?', 'dddd?->?'),\n '''\n bool equal_nan = in4;\n if (isfinite(in0) && isfinite(in1)) {\n out0 = fabs(in0 - in1) <= in3 + in2 * fabs(in1);\n } else if (equal_nan) {\n out0 = (in0 == in1) || (isnan(in0) && isnan(in1));\n } else {\n out0 = (in0 == in1);\n }\n '''\n)\n\n\ndef allclose(a, b, rtol=1.e-5, atol=1.e-8, equal_nan=False):\n \"\"\"Returns True if two arrays are element-wise equal within a tolerance.\n\n Two values in ``a`` and ``b`` are considiered equal when the following\n equation is satisfied.\n\n .. math::\n\n |a - b| \\\\le \\\\mathrm{atol} + \\\\mathrm{rtol} |b|\n\n Args:\n a (cupy.ndarray): Input array to compare.\n b (cupy.ndarray): Input array to compare.\n rtol (float): The relative tolerance.\n atol (float): The absolute tolerance.\n equal_nan (bool): If ``True``, NaN's in ``a`` will be considered equal\n to NaN's in ``b``.\n\n Returns:\n bool: if ``True``, two arrays are element-wise equal within a\n tolerance.\n\n .. seealso:: :func:`numpy.allclose`\n\n \"\"\"\n res = isclose(a, b, rtol=rtol, atol=atol, equal_nan=equal_nan).all()\n return bool(res)\n\n\ndef isclose(a, b, rtol=1.e-5, atol=1.e-8, equal_nan=False):\n \"\"\"Returns a boolean array where two arrays are equal within a tolerance.\n\n Two values in ``a`` and ``b`` are considiered equal when the following\n equation is satisfied.\n\n .. 
math::\n\n |a - b| \\\\le \\\\mathrm{atol} + \\\\mathrm{rtol} |b|\n\n Args:\n a (cupy.ndarray): Input array to compare.\n b (cupy.ndarray): Input array to compare.\n rtol (float): The relative tolerance.\n atol (float): The absolute tolerance.\n equal_nan (bool): If ``True``, NaN's in ``a`` will be considered equal\n to NaN's in ``b``.\n\n Returns:\n cupy.ndarray: A boolean array storing where ``a`` and ``b`` are equal.\n\n .. seealso:: :func:`numpy.isclose`\n\n \"\"\"\n return _is_close(a, b, rtol, atol, equal_nan)\n\n\n# TODO(okuta): Implement array_equal\n\n\n# TODO(okuta): Implement array_equiv\n\n\ngreater = core.greater\n\n\ngreater_equal = core.greater_equal\n\n\nless = core.less\n\n\nless_equal = core.less_equal\n\n\nequal = core.equal\n\n\nnot_equal = core.not_equal\n", "path": "cupy/logic/comparison.py"}], "after_files": [{"content": "import numpy\n\nimport cupy\nfrom cupy import core\n\n\n_is_close = core.create_ufunc(\n 'cupy_is_close',\n ('eeee?->?', 'ffff?->?', 'dddd?->?'),\n '''\n bool equal_nan = in4;\n if (isfinite(in0) && isfinite(in1)) {\n out0 = fabs(in0 - in1) <= in3 + in2 * fabs(in1);\n } else if (equal_nan) {\n out0 = (in0 == in1) || (isnan(in0) && isnan(in1));\n } else {\n out0 = (in0 == in1);\n }\n '''\n)\n\n# Note that in cupy/core/include/cupy/complex.cuh, we already got isfinite and\n# isnan working for complex numbers, so just replace fabs above by abs (from\n# thrust) and we are ready to go\n_is_close_complex = core.create_ufunc(\n 'cupy_is_close_complex',\n ('FFff?->?', 'DDdd?->?'),\n '''\n bool equal_nan = in4;\n if (isfinite(in0) && isfinite(in1)) {\n out0 = abs(in0 - in1) <= in3 + in2 * abs(in1);\n } else if (equal_nan) {\n out0 = (in0 == in1) || (isnan(in0) && isnan(in1));\n } else {\n out0 = (in0 == in1);\n }\n '''\n)\n\n\ndef allclose(a, b, rtol=1.e-5, atol=1.e-8, equal_nan=False):\n \"\"\"Returns True if two arrays are element-wise equal within a tolerance.\n\n Two values in ``a`` and ``b`` are considiered equal when the following\n equation is satisfied.\n\n .. math::\n\n |a - b| \\\\le \\\\mathrm{atol} + \\\\mathrm{rtol} |b|\n\n Args:\n a (cupy.ndarray): Input array to compare.\n b (cupy.ndarray): Input array to compare.\n rtol (float): The relative tolerance.\n atol (float): The absolute tolerance.\n equal_nan (bool): If ``True``, NaN's in ``a`` will be considered equal\n to NaN's in ``b``.\n\n Returns:\n bool: if ``True``, two arrays are element-wise equal within a\n tolerance.\n\n .. seealso:: :func:`numpy.allclose`\n\n \"\"\"\n res = isclose(a, b, rtol=rtol, atol=atol, equal_nan=equal_nan).all()\n return bool(res)\n\n\ndef isclose(a, b, rtol=1.e-5, atol=1.e-8, equal_nan=False):\n \"\"\"Returns a boolean array where two arrays are equal within a tolerance.\n\n Two values in ``a`` and ``b`` are considiered equal when the following\n equation is satisfied.\n\n .. math::\n\n |a - b| \\\\le \\\\mathrm{atol} + \\\\mathrm{rtol} |b|\n\n Args:\n a (cupy.ndarray): Input array to compare.\n b (cupy.ndarray): Input array to compare.\n rtol (float): The relative tolerance.\n atol (float): The absolute tolerance.\n equal_nan (bool): If ``True``, NaN's in ``a`` will be considered equal\n to NaN's in ``b``.\n\n Returns:\n cupy.ndarray: A boolean array storing where ``a`` and ``b`` are equal.\n\n .. 
seealso:: :func:`numpy.isclose`\n\n \"\"\"\n a = cupy.asanyarray(a)\n b = cupy.asanyarray(b)\n if (a.dtype in [numpy.complex64, numpy.complex128]) or \\\n (b.dtype in [numpy.complex64, numpy.complex128]):\n return _is_close_complex(a, b, rtol, atol, equal_nan)\n else:\n return _is_close(a, b, rtol, atol, equal_nan)\n\n\n# TODO(okuta): Implement array_equal\n\n\n# TODO(okuta): Implement array_equiv\n\n\ngreater = core.greater\n\n\ngreater_equal = core.greater_equal\n\n\nless = core.less\n\n\nless_equal = core.less_equal\n\n\nequal = core.equal\n\n\nnot_equal = core.not_equal\n", "path": "cupy/logic/comparison.py"}]} | 1,309 | 484 |
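For reference, a minimal usage sketch of what the complex-number helper above enables (assuming a CuPy build that already includes the patch; the values in the comments are illustrative):

```python
import cupy

a = cupy.array([1 + 1j, 2 + 0j], dtype=cupy.complex64)
b = cupy.array([1 + 1j, 2 + 1e-9j], dtype=cupy.complex64)

# Both calls now apply the same |a - b| <= atol + rtol * |b| rule as NumPy.
print(cupy.isclose(a, b))   # [ True  True]
print(cupy.allclose(a, b))  # True
```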
gh_patches_debug_5142 | rasdani/github-patches | git_diff | microsoft__qlib-1246 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Misleading Error "Please install necessary libs for CatBoostModel."
## 🐛 Bug Description
Qlib does not require the installation of packages like `CatBoostModel`
But the output looks a little misleading.
## To Reproduce
Run `examples/workflow_by_code.ipynb` in jupyter notebook.
## Expected Behavior
Successfully run the script without installing CatBoostModel and without the warning.
## Screenshot

<!-- A screenshot of the error message or anything shouldn't appear-->
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `qlib/contrib/model/__init__.py`
Content:
```
1 # Copyright (c) Microsoft Corporation.
2 # Licensed under the MIT License.
3 try:
4 from .catboost_model import CatBoostModel
5 except ModuleNotFoundError:
6 CatBoostModel = None
7 print("Please install necessary libs for CatBoostModel.")
8 try:
9 from .double_ensemble import DEnsembleModel
10 from .gbdt import LGBModel
11 except ModuleNotFoundError:
12 DEnsembleModel, LGBModel = None, None
13 print(
14 "ModuleNotFoundError. DEnsembleModel and LGBModel are skipped. (optional: maybe installing lightgbm can fix it.)"
15 )
16 try:
17 from .xgboost import XGBModel
18 except ModuleNotFoundError:
19 XGBModel = None
20 print("ModuleNotFoundError. XGBModel is skipped(optional: maybe installing xgboost can fix it).")
21 try:
22 from .linear import LinearModel
23 except ModuleNotFoundError:
24 LinearModel = None
25 print("ModuleNotFoundError. LinearModel is skipped(optional: maybe installing scipy and sklearn can fix it).")
26 # import pytorch models
27 try:
28 from .pytorch_alstm import ALSTM
29 from .pytorch_gats import GATs
30 from .pytorch_gru import GRU
31 from .pytorch_lstm import LSTM
32 from .pytorch_nn import DNNModelPytorch
33 from .pytorch_tabnet import TabnetModel
34 from .pytorch_sfm import SFM_Model
35 from .pytorch_tcn import TCN
36 from .pytorch_add import ADD
37
38 pytorch_classes = (ALSTM, GATs, GRU, LSTM, DNNModelPytorch, TabnetModel, SFM_Model, TCN, ADD)
39 except ModuleNotFoundError:
40 pytorch_classes = ()
41 print("ModuleNotFoundError. PyTorch models are skipped (optional: maybe installing pytorch can fix it).")
42
43 all_model_classes = (CatBoostModel, DEnsembleModel, LGBModel, XGBModel, LinearModel) + pytorch_classes
44
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/qlib/contrib/model/__init__.py b/qlib/contrib/model/__init__.py
--- a/qlib/contrib/model/__init__.py
+++ b/qlib/contrib/model/__init__.py
@@ -4,7 +4,7 @@
from .catboost_model import CatBoostModel
except ModuleNotFoundError:
CatBoostModel = None
- print("Please install necessary libs for CatBoostModel.")
+ print("ModuleNotFoundError. CatBoostModel are skipped. (optional: maybe installing CatBoostModel can fix it.)")
try:
from .double_ensemble import DEnsembleModel
from .gbdt import LGBModel
| {"golden_diff": "diff --git a/qlib/contrib/model/__init__.py b/qlib/contrib/model/__init__.py\n--- a/qlib/contrib/model/__init__.py\n+++ b/qlib/contrib/model/__init__.py\n@@ -4,7 +4,7 @@\n from .catboost_model import CatBoostModel\n except ModuleNotFoundError:\n CatBoostModel = None\n- print(\"Please install necessary libs for CatBoostModel.\")\n+ print(\"ModuleNotFoundError. CatBoostModel are skipped. (optional: maybe installing CatBoostModel can fix it.)\")\n try:\n from .double_ensemble import DEnsembleModel\n from .gbdt import LGBModel\n", "issue": "Misleading Error \"Please install necessary libs for CatBoostModel.\"\n## \ud83d\udc1b Bug Description\r\n\r\nQlib does not require the installation of packages like `CatBoostModel`\r\n\r\nBut the output looks a little misleading.\r\n\r\n\r\n## To Reproduce\r\nRun `examples/workflow_by_code.ipynb` in jupyter notebook.\r\n\r\n## Expected Behavior\r\n\r\nSuccessfully run the script without installing CatBoostModel and warning.\r\n\r\n## Screenshot\r\n\r\n\r\n<!-- A screenshot of the error message or anything shouldn't appear-->\r\n\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation.\n# Licensed under the MIT License.\ntry:\n from .catboost_model import CatBoostModel\nexcept ModuleNotFoundError:\n CatBoostModel = None\n print(\"Please install necessary libs for CatBoostModel.\")\ntry:\n from .double_ensemble import DEnsembleModel\n from .gbdt import LGBModel\nexcept ModuleNotFoundError:\n DEnsembleModel, LGBModel = None, None\n print(\n \"ModuleNotFoundError. DEnsembleModel and LGBModel are skipped. (optional: maybe installing lightgbm can fix it.)\"\n )\ntry:\n from .xgboost import XGBModel\nexcept ModuleNotFoundError:\n XGBModel = None\n print(\"ModuleNotFoundError. XGBModel is skipped(optional: maybe installing xgboost can fix it).\")\ntry:\n from .linear import LinearModel\nexcept ModuleNotFoundError:\n LinearModel = None\n print(\"ModuleNotFoundError. LinearModel is skipped(optional: maybe installing scipy and sklearn can fix it).\")\n# import pytorch models\ntry:\n from .pytorch_alstm import ALSTM\n from .pytorch_gats import GATs\n from .pytorch_gru import GRU\n from .pytorch_lstm import LSTM\n from .pytorch_nn import DNNModelPytorch\n from .pytorch_tabnet import TabnetModel\n from .pytorch_sfm import SFM_Model\n from .pytorch_tcn import TCN\n from .pytorch_add import ADD\n\n pytorch_classes = (ALSTM, GATs, GRU, LSTM, DNNModelPytorch, TabnetModel, SFM_Model, TCN, ADD)\nexcept ModuleNotFoundError:\n pytorch_classes = ()\n print(\"ModuleNotFoundError. PyTorch models are skipped (optional: maybe installing pytorch can fix it).\")\n\nall_model_classes = (CatBoostModel, DEnsembleModel, LGBModel, XGBModel, LinearModel) + pytorch_classes\n", "path": "qlib/contrib/model/__init__.py"}], "after_files": [{"content": "# Copyright (c) Microsoft Corporation.\n# Licensed under the MIT License.\ntry:\n from .catboost_model import CatBoostModel\nexcept ModuleNotFoundError:\n CatBoostModel = None\n print(\"ModuleNotFoundError. CatBoostModel are skipped. (optional: maybe installing CatBoostModel can fix it.)\")\ntry:\n from .double_ensemble import DEnsembleModel\n from .gbdt import LGBModel\nexcept ModuleNotFoundError:\n DEnsembleModel, LGBModel = None, None\n print(\n \"ModuleNotFoundError. DEnsembleModel and LGBModel are skipped. (optional: maybe installing lightgbm can fix it.)\"\n )\ntry:\n from .xgboost import XGBModel\nexcept ModuleNotFoundError:\n XGBModel = None\n print(\"ModuleNotFoundError. 
XGBModel is skipped(optional: maybe installing xgboost can fix it).\")\ntry:\n from .linear import LinearModel\nexcept ModuleNotFoundError:\n LinearModel = None\n print(\"ModuleNotFoundError. LinearModel is skipped(optional: maybe installing scipy and sklearn can fix it).\")\n# import pytorch models\ntry:\n from .pytorch_alstm import ALSTM\n from .pytorch_gats import GATs\n from .pytorch_gru import GRU\n from .pytorch_lstm import LSTM\n from .pytorch_nn import DNNModelPytorch\n from .pytorch_tabnet import TabnetModel\n from .pytorch_sfm import SFM_Model\n from .pytorch_tcn import TCN\n from .pytorch_add import ADD\n\n pytorch_classes = (ALSTM, GATs, GRU, LSTM, DNNModelPytorch, TabnetModel, SFM_Model, TCN, ADD)\nexcept ModuleNotFoundError:\n pytorch_classes = ()\n print(\"ModuleNotFoundError. PyTorch models are skipped (optional: maybe installing pytorch can fix it).\")\n\nall_model_classes = (CatBoostModel, DEnsembleModel, LGBModel, XGBModel, LinearModel) + pytorch_classes\n", "path": "qlib/contrib/model/__init__.py"}]} | 934 | 147 |
gh_patches_debug_41207 | rasdani/github-patches | git_diff | svthalia__concrexit-2399 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add sales order API
### Describe the solution you'd like
- An API endpoint `api/v2/sales/order/<uuid>/` that returns the order information that is shown on the website when you pay via the URL from a QR code.
- It should be possible to pay for the order through `api/v2/payments/sales/order/<uuid>`. This might already be possible; I haven't checked.
### Motivation
This way people will be able to pay from the app, instead of through the website where they may even need to log in. I think this is an obvious case where the ease of use of an app is very welcome.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `website/sales/api/v2/views.py`
Content:
```
1 from django.db.models import Q
2 from oauth2_provider.contrib.rest_framework import IsAuthenticatedOrTokenHasScope
3 from rest_framework.exceptions import PermissionDenied
4 from rest_framework.generics import (
5 ListAPIView,
6 RetrieveAPIView,
7 CreateAPIView,
8 UpdateAPIView,
9 DestroyAPIView,
10 )
11 from rest_framework.permissions import DjangoModelPermissionsOrAnonReadOnly
12
13 from sales.api.v2.admin.serializers.order import OrderSerializer, OrderListSerializer
14 from sales.api.v2.admin.views import (
15 OrderListView,
16 OrderDetailView,
17 ShiftDetailView,
18 ShiftListView,
19 )
20 from sales.api.v2.serializers.user_order import UserOrderSerializer
21 from sales.api.v2.serializers.user_shift import UserShiftSerializer
22 from sales.models.shift import SelfOrderPeriod, Shift
23 from thaliawebsite.api.v2.permissions import IsAuthenticatedOrTokenHasScopeForMethod
24
25
26 class UserShiftListView(ShiftListView):
27 serializer_class = UserShiftSerializer
28 # queryset = SelfOrderPeriod.objects.all()
29 permission_classes = [
30 IsAuthenticatedOrTokenHasScope,
31 DjangoModelPermissionsOrAnonReadOnly,
32 ]
33 required_scopes = ["sales:read"]
34
35
36 class UserShiftDetailView(ShiftDetailView):
37 serializer_class = UserShiftSerializer
38 # queryset = SelfOrderPeriod.objects.all()
39 permission_classes = [
40 IsAuthenticatedOrTokenHasScope,
41 DjangoModelPermissionsOrAnonReadOnly,
42 ]
43 required_scopes = ["sales:read"]
44
45
46 class UserOrderListView(OrderListView):
47 permission_classes = [
48 IsAuthenticatedOrTokenHasScopeForMethod,
49 ]
50 required_scopes_per_method = {
51 "GET": ["sales:read"],
52 "POST": ["sales:order"],
53 }
54 method_serializer_classes = {
55 ("GET",): OrderListSerializer,
56 ("POST",): UserOrderSerializer,
57 }
58
59 def create(self, request, *args, **kwargs):
60 shift = Shift.objects.get(pk=kwargs["pk"])
61 if not shift.user_orders_allowed:
62 raise PermissionDenied
63 return super(UserOrderListView, self).create(request, *args, **kwargs)
64
65 def perform_create(self, serializer):
66 serializer.save(
67 payer_id=self.request.member.pk, created_by_id=self.request.member.pk
68 )
69
70 def get_queryset(self):
71 queryset = super(UserOrderListView, self).get_queryset()
72 return queryset.filter(
73 Q(payer=self.request.member) | Q(created_by=self.request.member)
74 )
75
76
77 class UserOrderDetailView(OrderDetailView):
78 serializer_class = UserOrderSerializer
79 permission_classes = [
80 IsAuthenticatedOrTokenHasScopeForMethod,
81 ]
82 required_scopes_per_method = {
83 "GET": ["sales:read"],
84 "PATCH": ["sales:order"],
85 "PUT": ["sales:order"],
86 "DELETE": ["sales:order"],
87 }
88
89 def get_queryset(self):
90 queryset = super(UserOrderDetailView, self).get_queryset()
91 return queryset.filter(
92 Q(payer=self.request.member) | Q(created_by=self.request.member)
93 )
94
95 def update(self, request, *args, **kwargs):
96 if not self.get_object().shift.user_orders_allowed:
97 raise PermissionDenied
98 if self.get_object().payment:
99 raise PermissionDenied
100 return super(UserOrderDetailView, self).update(request, *args, **kwargs)
101
102 def partial_update(self, request, *args, **kwargs):
103 if not self.get_object().shift.user_orders_allowed:
104 raise PermissionDenied
105 if self.get_object().payment:
106 raise PermissionDenied
107 return super(UserOrderDetailView, self).partial_update(request, *args, **kwargs)
108
109 def destroy(self, request, *args, **kwargs):
110 if not self.get_object().shift.user_orders_allowed:
111 raise PermissionDenied
112 if self.get_object().payment:
113 raise PermissionDenied
114 return super(UserOrderDetailView, self).destroy(request, *args, **kwargs)
115
```
Path: `website/sales/api/v2/urls.py`
Content:
```
1 from django.urls import path
2
3 from sales.api.v2.views import (
4 UserShiftListView,
5 UserShiftDetailView,
6 UserOrderListView,
7 UserOrderDetailView,
8 )
9
10 app_name = "sales"
11
12 urlpatterns = [
13 path("sales/shifts/", UserShiftListView.as_view(), name="user-shift-list"),
14 path(
15 "sales/shifts/<int:pk>/",
16 UserShiftDetailView.as_view(),
17 name="user-shift-detail",
18 ),
19 path(
20 "sales/shifts/<int:pk>/orders/",
21 UserOrderListView.as_view(),
22 name="user-order-list",
23 ),
24 path(
25 "sales/orders/<uuid:pk>/",
26 UserOrderDetailView.as_view(),
27 name="user-order-detail",
28 ),
29 ]
30
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/website/sales/api/v2/urls.py b/website/sales/api/v2/urls.py
--- a/website/sales/api/v2/urls.py
+++ b/website/sales/api/v2/urls.py
@@ -1,6 +1,7 @@
from django.urls import path
from sales.api.v2.views import (
+ OrderClaimView,
UserShiftListView,
UserShiftDetailView,
UserOrderListView,
@@ -10,6 +11,7 @@
app_name = "sales"
urlpatterns = [
+ path("sales/order/<uuid:pk>/claim/", OrderClaimView.as_view(), name="order-claim"),
path("sales/shifts/", UserShiftListView.as_view(), name="user-shift-list"),
path(
"sales/shifts/<int:pk>/",
diff --git a/website/sales/api/v2/views.py b/website/sales/api/v2/views.py
--- a/website/sales/api/v2/views.py
+++ b/website/sales/api/v2/views.py
@@ -1,25 +1,23 @@
from django.db.models import Q
from oauth2_provider.contrib.rest_framework import IsAuthenticatedOrTokenHasScope
from rest_framework.exceptions import PermissionDenied
-from rest_framework.generics import (
- ListAPIView,
- RetrieveAPIView,
- CreateAPIView,
- UpdateAPIView,
- DestroyAPIView,
-)
+from rest_framework.generics import GenericAPIView
from rest_framework.permissions import DjangoModelPermissionsOrAnonReadOnly
+from rest_framework.response import Response
+from rest_framework.schemas.openapi import AutoSchema
-from sales.api.v2.admin.serializers.order import OrderSerializer, OrderListSerializer
+from sales.api.v2.admin.serializers.order import OrderListSerializer
from sales.api.v2.admin.views import (
- OrderListView,
OrderDetailView,
+ OrderListView,
ShiftDetailView,
ShiftListView,
)
+from sales import services
from sales.api.v2.serializers.user_order import UserOrderSerializer
from sales.api.v2.serializers.user_shift import UserShiftSerializer
-from sales.models.shift import SelfOrderPeriod, Shift
+from sales.models.shift import Shift
+from sales.models.order import Order
from thaliawebsite.api.v2.permissions import IsAuthenticatedOrTokenHasScopeForMethod
@@ -111,4 +109,40 @@
raise PermissionDenied
if self.get_object().payment:
raise PermissionDenied
- return super(UserOrderDetailView, self).destroy(request, *args, **kwargs)
+
+
+class OrderClaimView(GenericAPIView):
+ """Claims an order to be paid by the current user."""
+
+ class OrderClaimViewSchema(AutoSchema):
+ def get_request_serializer(self, path, method):
+ # This endpoint does not expect any content in the request body.
+ return None
+
+ queryset = Order.objects.all()
+ serializer_class = UserOrderSerializer
+ schema = OrderClaimViewSchema(operation_id_base="claimOrder")
+ permission_classes = [IsAuthenticatedOrTokenHasScope]
+ required_scopes = ["sales:order"]
+
+ def patch(self, request, *args, **kwargs):
+ if request.member is None:
+ raise PermissionDenied("You need to be a member to pay for an order.")
+
+ order = self.get_object()
+ if order.payment:
+ raise PermissionDenied(detail="This order was already paid for.")
+
+ if order.payer is not None and order.payer != request.member:
+ raise PermissionDenied(detail="This order is not yours.")
+
+ order.payer = request.member
+ order.save()
+
+ if order.age_restricted and not services.is_adult(request.member):
+ raise PermissionDenied(
+ "The age restrictions on this order do not allow you to pay for this order."
+ )
+
+ serializer = self.get_serializer(order)
+ return Response(serializer.data)
| {"golden_diff": "diff --git a/website/sales/api/v2/urls.py b/website/sales/api/v2/urls.py\n--- a/website/sales/api/v2/urls.py\n+++ b/website/sales/api/v2/urls.py\n@@ -1,6 +1,7 @@\n from django.urls import path\n \n from sales.api.v2.views import (\n+ OrderClaimView,\n UserShiftListView,\n UserShiftDetailView,\n UserOrderListView,\n@@ -10,6 +11,7 @@\n app_name = \"sales\"\n \n urlpatterns = [\n+ path(\"sales/order/<uuid:pk>/claim/\", OrderClaimView.as_view(), name=\"order-claim\"),\n path(\"sales/shifts/\", UserShiftListView.as_view(), name=\"user-shift-list\"),\n path(\n \"sales/shifts/<int:pk>/\",\ndiff --git a/website/sales/api/v2/views.py b/website/sales/api/v2/views.py\n--- a/website/sales/api/v2/views.py\n+++ b/website/sales/api/v2/views.py\n@@ -1,25 +1,23 @@\n from django.db.models import Q\n from oauth2_provider.contrib.rest_framework import IsAuthenticatedOrTokenHasScope\n from rest_framework.exceptions import PermissionDenied\n-from rest_framework.generics import (\n- ListAPIView,\n- RetrieveAPIView,\n- CreateAPIView,\n- UpdateAPIView,\n- DestroyAPIView,\n-)\n+from rest_framework.generics import GenericAPIView\n from rest_framework.permissions import DjangoModelPermissionsOrAnonReadOnly\n+from rest_framework.response import Response\n+from rest_framework.schemas.openapi import AutoSchema\n \n-from sales.api.v2.admin.serializers.order import OrderSerializer, OrderListSerializer\n+from sales.api.v2.admin.serializers.order import OrderListSerializer\n from sales.api.v2.admin.views import (\n- OrderListView,\n OrderDetailView,\n+ OrderListView,\n ShiftDetailView,\n ShiftListView,\n )\n+from sales import services\n from sales.api.v2.serializers.user_order import UserOrderSerializer\n from sales.api.v2.serializers.user_shift import UserShiftSerializer\n-from sales.models.shift import SelfOrderPeriod, Shift\n+from sales.models.shift import Shift\n+from sales.models.order import Order\n from thaliawebsite.api.v2.permissions import IsAuthenticatedOrTokenHasScopeForMethod\n \n \n@@ -111,4 +109,40 @@\n raise PermissionDenied\n if self.get_object().payment:\n raise PermissionDenied\n- return super(UserOrderDetailView, self).destroy(request, *args, **kwargs)\n+\n+\n+class OrderClaimView(GenericAPIView):\n+ \"\"\"Claims an order to be paid by the current user.\"\"\"\n+\n+ class OrderClaimViewSchema(AutoSchema):\n+ def get_request_serializer(self, path, method):\n+ # This endpoint does not expect any content in the request body.\n+ return None\n+\n+ queryset = Order.objects.all()\n+ serializer_class = UserOrderSerializer\n+ schema = OrderClaimViewSchema(operation_id_base=\"claimOrder\")\n+ permission_classes = [IsAuthenticatedOrTokenHasScope]\n+ required_scopes = [\"sales:order\"]\n+\n+ def patch(self, request, *args, **kwargs):\n+ if request.member is None:\n+ raise PermissionDenied(\"You need to be a member to pay for an order.\")\n+\n+ order = self.get_object()\n+ if order.payment:\n+ raise PermissionDenied(detail=\"This order was already paid for.\")\n+\n+ if order.payer is not None and order.payer != request.member:\n+ raise PermissionDenied(detail=\"This order is not yours.\")\n+\n+ order.payer = request.member\n+ order.save()\n+\n+ if order.age_restricted and not services.is_adult(request.member):\n+ raise PermissionDenied(\n+ \"The age restrictions on this order do not allow you to pay for this order.\"\n+ )\n+\n+ serializer = self.get_serializer(order)\n+ return Response(serializer.data)\n", "issue": "Add sales order API\n### Describe the solution you'd like\r\n- An api endpoint 
`api/v2/sales/order/<uuid>/` that returns the order information that is shown on the website when you pay for the url from a qr code.\r\n- It should be possible to pay the order through `api/v2/payments/sales/order/<uuid>`. This might already be possible, I haven't checked.\r\n\r\n### Motivation\r\nThis way people will be able to pay from the app, instead of through the website where they may even need to log in. I think this is an obvious case where the ease of use of an app is very welcome.\r\n\r\n\n", "before_files": [{"content": "from django.db.models import Q\nfrom oauth2_provider.contrib.rest_framework import IsAuthenticatedOrTokenHasScope\nfrom rest_framework.exceptions import PermissionDenied\nfrom rest_framework.generics import (\n ListAPIView,\n RetrieveAPIView,\n CreateAPIView,\n UpdateAPIView,\n DestroyAPIView,\n)\nfrom rest_framework.permissions import DjangoModelPermissionsOrAnonReadOnly\n\nfrom sales.api.v2.admin.serializers.order import OrderSerializer, OrderListSerializer\nfrom sales.api.v2.admin.views import (\n OrderListView,\n OrderDetailView,\n ShiftDetailView,\n ShiftListView,\n)\nfrom sales.api.v2.serializers.user_order import UserOrderSerializer\nfrom sales.api.v2.serializers.user_shift import UserShiftSerializer\nfrom sales.models.shift import SelfOrderPeriod, Shift\nfrom thaliawebsite.api.v2.permissions import IsAuthenticatedOrTokenHasScopeForMethod\n\n\nclass UserShiftListView(ShiftListView):\n serializer_class = UserShiftSerializer\n # queryset = SelfOrderPeriod.objects.all()\n permission_classes = [\n IsAuthenticatedOrTokenHasScope,\n DjangoModelPermissionsOrAnonReadOnly,\n ]\n required_scopes = [\"sales:read\"]\n\n\nclass UserShiftDetailView(ShiftDetailView):\n serializer_class = UserShiftSerializer\n # queryset = SelfOrderPeriod.objects.all()\n permission_classes = [\n IsAuthenticatedOrTokenHasScope,\n DjangoModelPermissionsOrAnonReadOnly,\n ]\n required_scopes = [\"sales:read\"]\n\n\nclass UserOrderListView(OrderListView):\n permission_classes = [\n IsAuthenticatedOrTokenHasScopeForMethod,\n ]\n required_scopes_per_method = {\n \"GET\": [\"sales:read\"],\n \"POST\": [\"sales:order\"],\n }\n method_serializer_classes = {\n (\"GET\",): OrderListSerializer,\n (\"POST\",): UserOrderSerializer,\n }\n\n def create(self, request, *args, **kwargs):\n shift = Shift.objects.get(pk=kwargs[\"pk\"])\n if not shift.user_orders_allowed:\n raise PermissionDenied\n return super(UserOrderListView, self).create(request, *args, **kwargs)\n\n def perform_create(self, serializer):\n serializer.save(\n payer_id=self.request.member.pk, created_by_id=self.request.member.pk\n )\n\n def get_queryset(self):\n queryset = super(UserOrderListView, self).get_queryset()\n return queryset.filter(\n Q(payer=self.request.member) | Q(created_by=self.request.member)\n )\n\n\nclass UserOrderDetailView(OrderDetailView):\n serializer_class = UserOrderSerializer\n permission_classes = [\n IsAuthenticatedOrTokenHasScopeForMethod,\n ]\n required_scopes_per_method = {\n \"GET\": [\"sales:read\"],\n \"PATCH\": [\"sales:order\"],\n \"PUT\": [\"sales:order\"],\n \"DELETE\": [\"sales:order\"],\n }\n\n def get_queryset(self):\n queryset = super(UserOrderDetailView, self).get_queryset()\n return queryset.filter(\n Q(payer=self.request.member) | Q(created_by=self.request.member)\n )\n\n def update(self, request, *args, **kwargs):\n if not self.get_object().shift.user_orders_allowed:\n raise PermissionDenied\n if self.get_object().payment:\n raise PermissionDenied\n return super(UserOrderDetailView, 
self).update(request, *args, **kwargs)\n\n def partial_update(self, request, *args, **kwargs):\n if not self.get_object().shift.user_orders_allowed:\n raise PermissionDenied\n if self.get_object().payment:\n raise PermissionDenied\n return super(UserOrderDetailView, self).partial_update(request, *args, **kwargs)\n\n def destroy(self, request, *args, **kwargs):\n if not self.get_object().shift.user_orders_allowed:\n raise PermissionDenied\n if self.get_object().payment:\n raise PermissionDenied\n return super(UserOrderDetailView, self).destroy(request, *args, **kwargs)\n", "path": "website/sales/api/v2/views.py"}, {"content": "from django.urls import path\n\nfrom sales.api.v2.views import (\n UserShiftListView,\n UserShiftDetailView,\n UserOrderListView,\n UserOrderDetailView,\n)\n\napp_name = \"sales\"\n\nurlpatterns = [\n path(\"sales/shifts/\", UserShiftListView.as_view(), name=\"user-shift-list\"),\n path(\n \"sales/shifts/<int:pk>/\",\n UserShiftDetailView.as_view(),\n name=\"user-shift-detail\",\n ),\n path(\n \"sales/shifts/<int:pk>/orders/\",\n UserOrderListView.as_view(),\n name=\"user-order-list\",\n ),\n path(\n \"sales/orders/<uuid:pk>/\",\n UserOrderDetailView.as_view(),\n name=\"user-order-detail\",\n ),\n]\n", "path": "website/sales/api/v2/urls.py"}], "after_files": [{"content": "from django.db.models import Q\nfrom oauth2_provider.contrib.rest_framework import IsAuthenticatedOrTokenHasScope\nfrom rest_framework.exceptions import PermissionDenied\nfrom rest_framework.generics import GenericAPIView\nfrom rest_framework.permissions import DjangoModelPermissionsOrAnonReadOnly\nfrom rest_framework.response import Response\nfrom rest_framework.schemas.openapi import AutoSchema\n\nfrom sales.api.v2.admin.serializers.order import OrderListSerializer\nfrom sales.api.v2.admin.views import (\n OrderDetailView,\n OrderListView,\n ShiftDetailView,\n ShiftListView,\n)\nfrom sales import services\nfrom sales.api.v2.serializers.user_order import UserOrderSerializer\nfrom sales.api.v2.serializers.user_shift import UserShiftSerializer\nfrom sales.models.shift import Shift\nfrom sales.models.order import Order\nfrom thaliawebsite.api.v2.permissions import IsAuthenticatedOrTokenHasScopeForMethod\n\n\nclass UserShiftListView(ShiftListView):\n serializer_class = UserShiftSerializer\n # queryset = SelfOrderPeriod.objects.all()\n permission_classes = [\n IsAuthenticatedOrTokenHasScope,\n DjangoModelPermissionsOrAnonReadOnly,\n ]\n required_scopes = [\"sales:read\"]\n\n\nclass UserShiftDetailView(ShiftDetailView):\n serializer_class = UserShiftSerializer\n # queryset = SelfOrderPeriod.objects.all()\n permission_classes = [\n IsAuthenticatedOrTokenHasScope,\n DjangoModelPermissionsOrAnonReadOnly,\n ]\n required_scopes = [\"sales:read\"]\n\n\nclass UserOrderListView(OrderListView):\n permission_classes = [\n IsAuthenticatedOrTokenHasScopeForMethod,\n ]\n required_scopes_per_method = {\n \"GET\": [\"sales:read\"],\n \"POST\": [\"sales:order\"],\n }\n method_serializer_classes = {\n (\"GET\",): OrderListSerializer,\n (\"POST\",): UserOrderSerializer,\n }\n\n def create(self, request, *args, **kwargs):\n shift = Shift.objects.get(pk=kwargs[\"pk\"])\n if not shift.user_orders_allowed:\n raise PermissionDenied\n return super(UserOrderListView, self).create(request, *args, **kwargs)\n\n def perform_create(self, serializer):\n serializer.save(\n payer_id=self.request.member.pk, created_by_id=self.request.member.pk\n )\n\n def get_queryset(self):\n queryset = super(UserOrderListView, self).get_queryset()\n 
return queryset.filter(\n Q(payer=self.request.member) | Q(created_by=self.request.member)\n )\n\n\nclass UserOrderDetailView(OrderDetailView):\n serializer_class = UserOrderSerializer\n permission_classes = [\n IsAuthenticatedOrTokenHasScopeForMethod,\n ]\n required_scopes_per_method = {\n \"GET\": [\"sales:read\"],\n \"PATCH\": [\"sales:order\"],\n \"PUT\": [\"sales:order\"],\n \"DELETE\": [\"sales:order\"],\n }\n\n def get_queryset(self):\n queryset = super(UserOrderDetailView, self).get_queryset()\n return queryset.filter(\n Q(payer=self.request.member) | Q(created_by=self.request.member)\n )\n\n def update(self, request, *args, **kwargs):\n if not self.get_object().shift.user_orders_allowed:\n raise PermissionDenied\n if self.get_object().payment:\n raise PermissionDenied\n return super(UserOrderDetailView, self).update(request, *args, **kwargs)\n\n def partial_update(self, request, *args, **kwargs):\n if not self.get_object().shift.user_orders_allowed:\n raise PermissionDenied\n if self.get_object().payment:\n raise PermissionDenied\n return super(UserOrderDetailView, self).partial_update(request, *args, **kwargs)\n\n def destroy(self, request, *args, **kwargs):\n if not self.get_object().shift.user_orders_allowed:\n raise PermissionDenied\n if self.get_object().payment:\n raise PermissionDenied\n\n\nclass OrderClaimView(GenericAPIView):\n \"\"\"Claims an order to be paid by the current user.\"\"\"\n\n class OrderClaimViewSchema(AutoSchema):\n def get_request_serializer(self, path, method):\n # This endpoint does not expect any content in the request body.\n return None\n\n queryset = Order.objects.all()\n serializer_class = UserOrderSerializer\n schema = OrderClaimViewSchema(operation_id_base=\"claimOrder\")\n permission_classes = [IsAuthenticatedOrTokenHasScope]\n required_scopes = [\"sales:order\"]\n\n def patch(self, request, *args, **kwargs):\n if request.member is None:\n raise PermissionDenied(\"You need to be a member to pay for an order.\")\n\n order = self.get_object()\n if order.payment:\n raise PermissionDenied(detail=\"This order was already paid for.\")\n\n if order.payer is not None and order.payer != request.member:\n raise PermissionDenied(detail=\"This order is not yours.\")\n\n order.payer = request.member\n order.save()\n\n if order.age_restricted and not services.is_adult(request.member):\n raise PermissionDenied(\n \"The age restrictions on this order do not allow you to pay for this order.\"\n )\n\n serializer = self.get_serializer(order)\n return Response(serializer.data)\n", "path": "website/sales/api/v2/views.py"}, {"content": "from django.urls import path\n\nfrom sales.api.v2.views import (\n OrderClaimView,\n UserShiftListView,\n UserShiftDetailView,\n UserOrderListView,\n UserOrderDetailView,\n)\n\napp_name = \"sales\"\n\nurlpatterns = [\n path(\"sales/order/<uuid:pk>/claim/\", OrderClaimView.as_view(), name=\"order-claim\"),\n path(\"sales/shifts/\", UserShiftListView.as_view(), name=\"user-shift-list\"),\n path(\n \"sales/shifts/<int:pk>/\",\n UserShiftDetailView.as_view(),\n name=\"user-shift-detail\",\n ),\n path(\n \"sales/shifts/<int:pk>/orders/\",\n UserOrderListView.as_view(),\n name=\"user-order-list\",\n ),\n path(\n \"sales/orders/<uuid:pk>/\",\n UserOrderDetailView.as_view(),\n name=\"user-order-detail\",\n ),\n]\n", "path": "website/sales/api/v2/urls.py"}]} | 1,681 | 844 |
gh_patches_debug_37530 | rasdani/github-patches | git_diff | keras-team__autokeras-286 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add a class to generate an MLP
<!---
Please label your issue with `new_task_module`.
-->
### Suggested Name
<!---
-->
MlpGenerator
### Task Description
<!---
A clear and concise description of the machine learning task to be added, its problem statement and learning outcome.
-->
Add a class named MlpGenerator. Create a superclass that would be inherited by CnnGenerator and MlpGenerator.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `autokeras/constant.py`
Content:
```
1 class Constant:
2 # Data
3
4 VALIDATION_SET_SIZE = 0.08333
5
6 # Searcher
7
8 MAX_MODEL_NUM = 1000
9 BETA = 2.576
10 KERNEL_LAMBDA = 0.1
11 T_MIN = 0.0001
12 N_NEIGHBOURS = 8
13 MAX_MODEL_SIZE = (1 << 25)
14 MAX_LAYER_WIDTH = 4096
15 MAX_LAYERS = 100
16
17 # Model Defaults
18
19 DENSE_DROPOUT_RATE = 0.5
20 CONV_DROPOUT_RATE = 0.25
21 CONV_BLOCK_DISTANCE = 2
22 DENSE_BLOCK_DISTANCE = 1
23 MODEL_LEN = 3
24 MODEL_WIDTH = 64
25
26 # ModelTrainer
27
28 DATA_AUGMENTATION = True
29 MAX_ITER_NUM = 200
30 MIN_LOSS_DEC = 1e-4
31 MAX_NO_IMPROVEMENT_NUM = 5
32 MAX_BATCH_SIZE = 128
33 LIMIT_MEMORY = False
34 SEARCH_MAX_ITER = 200
35
36 # text preprocessor
37
38 EMBEDDING_DIM = 100
39 MAX_SEQUENCE_LENGTH = 400
40 MAX_NB_WORDS = 5000
41 EXTRACT_PATH = "glove/"
42 # Download file name
43 FILE_PATH = "glove.zip"
44 PRE_TRAIN_FILE_LINK = "http://nlp.stanford.edu/data/glove.6B.zip"
45 PRE_TRAIN_FILE_NAME = "glove.6B.100d.txt"
46
```
Path: `autokeras/nn/generator.py`
Content:
```
1 from autokeras.constant import Constant
2 from autokeras.nn.graph import Graph
3 from autokeras.nn.layers import StubBatchNormalization, StubConv, StubDropout, StubPooling, StubDense, StubFlatten, \
4 StubReLU, StubGlobalPooling
5
6
7 class CnnGenerator:
8 def __init__(self, n_output_node, input_shape):
9 self.n_output_node = n_output_node
10 self.input_shape = input_shape
11 if len(self.input_shape) > 4:
12 raise ValueError('The input dimension is too high.')
13 if len(self.input_shape) < 2:
14 raise ValueError('The input dimension is too low.')
15
16 def generate(self, model_len=Constant.MODEL_LEN, model_width=Constant.MODEL_WIDTH):
17 pooling_len = int(model_len / 4)
18 graph = Graph(self.input_shape, False)
19 temp_input_channel = self.input_shape[-1]
20 output_node_id = 0
21 for i in range(model_len):
22 output_node_id = graph.add_layer(StubReLU(), output_node_id)
23 output_node_id = graph.add_layer(StubConv(temp_input_channel, model_width, kernel_size=3), output_node_id)
24 output_node_id = graph.add_layer(StubBatchNormalization(model_width), output_node_id)
25 temp_input_channel = model_width
26 if pooling_len == 0 or ((i + 1) % pooling_len == 0 and i != model_len - 1):
27 output_node_id = graph.add_layer(StubPooling(), output_node_id)
28
29 output_node_id = graph.add_layer(StubGlobalPooling(), output_node_id)
30 output_node_id = graph.add_layer(StubDropout(Constant.CONV_DROPOUT_RATE), output_node_id)
31 output_node_id = graph.add_layer(StubDense(graph.node_list[output_node_id].shape[0], model_width),
32 output_node_id)
33 output_node_id = graph.add_layer(StubReLU(), output_node_id)
34 graph.add_layer(StubDense(model_width, self.n_output_node), output_node_id)
35 return graph
36
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/autokeras/constant.py b/autokeras/constant.py
--- a/autokeras/constant.py
+++ b/autokeras/constant.py
@@ -18,9 +18,12 @@
DENSE_DROPOUT_RATE = 0.5
CONV_DROPOUT_RATE = 0.25
+ MLP_DROPOUT_RATE = 0.25
CONV_BLOCK_DISTANCE = 2
DENSE_BLOCK_DISTANCE = 1
MODEL_LEN = 3
+ MLP_MODEL_LEN = 3
+ MLP_MODEL_WIDTH = 5
MODEL_WIDTH = 64
# ModelTrainer
diff --git a/autokeras/nn/generator.py b/autokeras/nn/generator.py
--- a/autokeras/nn/generator.py
+++ b/autokeras/nn/generator.py
@@ -2,12 +2,22 @@
from autokeras.nn.graph import Graph
from autokeras.nn.layers import StubBatchNormalization, StubConv, StubDropout, StubPooling, StubDense, StubFlatten, \
StubReLU, StubGlobalPooling
+from abc import abstractmethod
-class CnnGenerator:
+class NetworkGenerator:
def __init__(self, n_output_node, input_shape):
self.n_output_node = n_output_node
self.input_shape = input_shape
+
+ @abstractmethod
+ def generate(self, model_len, model_width):
+ pass
+
+
+class CnnGenerator(NetworkGenerator):
+ def __init__(self, n_output_node, input_shape):
+ super(CnnGenerator, self).__init__(n_output_node, input_shape)
if len(self.input_shape) > 4:
raise ValueError('The input dimension is too high.')
if len(self.input_shape) < 2:
@@ -33,3 +43,28 @@
output_node_id = graph.add_layer(StubReLU(), output_node_id)
graph.add_layer(StubDense(model_width, self.n_output_node), output_node_id)
return graph
+
+
+class MlpGenerator(NetworkGenerator):
+ def __init__(self, n_output_node, input_shape):
+ super(MlpGenerator, self).__init__(n_output_node, input_shape)
+ if len(self.input_shape) > 1:
+ raise ValueError('The input dimension is too high.')
+
+ def generate(self, model_len=Constant.MLP_MODEL_LEN, model_width=Constant.MLP_MODEL_WIDTH):
+ if type(model_width) is list and not len(model_width) == model_len:
+ raise ValueError('The length of \'model_width\' does not match \'model_len\'')
+ elif type(model_width) is int:
+ model_width = [model_width] * model_len
+
+ graph = Graph(self.input_shape[0], False)
+ output_node_id = 0
+ n_nodes_prev_layer = self.input_shape[0]
+ for width in model_width:
+ output_node_id = graph.add_layer(StubDense(n_nodes_prev_layer, width), output_node_id)
+ output_node_id = graph.add_layer(StubDropout(Constant.MLP_DROPOUT_RATE), output_node_id)
+ output_node_id = graph.add_layer(StubReLU(), output_node_id)
+ n_nodes_prev_layer = width
+
+ graph.add_layer(StubDense(n_nodes_prev_layer, self.n_output_node), output_node_id)
+ return graph
| {"golden_diff": "diff --git a/autokeras/constant.py b/autokeras/constant.py\n--- a/autokeras/constant.py\n+++ b/autokeras/constant.py\n@@ -18,9 +18,12 @@\n \n DENSE_DROPOUT_RATE = 0.5\n CONV_DROPOUT_RATE = 0.25\n+ MLP_DROPOUT_RATE = 0.25\n CONV_BLOCK_DISTANCE = 2\n DENSE_BLOCK_DISTANCE = 1\n MODEL_LEN = 3\n+ MLP_MODEL_LEN = 3\n+ MLP_MODEL_WIDTH = 5\n MODEL_WIDTH = 64\n \n # ModelTrainer\ndiff --git a/autokeras/nn/generator.py b/autokeras/nn/generator.py\n--- a/autokeras/nn/generator.py\n+++ b/autokeras/nn/generator.py\n@@ -2,12 +2,22 @@\n from autokeras.nn.graph import Graph\n from autokeras.nn.layers import StubBatchNormalization, StubConv, StubDropout, StubPooling, StubDense, StubFlatten, \\\n StubReLU, StubGlobalPooling\n+from abc import abstractmethod\n \n \n-class CnnGenerator:\n+class NetworkGenerator:\n def __init__(self, n_output_node, input_shape):\n self.n_output_node = n_output_node\n self.input_shape = input_shape\n+\n+ @abstractmethod\n+ def generate(self, model_len, model_width):\n+ pass\n+\n+\n+class CnnGenerator(NetworkGenerator):\n+ def __init__(self, n_output_node, input_shape):\n+ super(CnnGenerator, self).__init__(n_output_node, input_shape)\n if len(self.input_shape) > 4:\n raise ValueError('The input dimension is too high.')\n if len(self.input_shape) < 2:\n@@ -33,3 +43,28 @@\n output_node_id = graph.add_layer(StubReLU(), output_node_id)\n graph.add_layer(StubDense(model_width, self.n_output_node), output_node_id)\n return graph\n+\n+\n+class MlpGenerator(NetworkGenerator):\n+ def __init__(self, n_output_node, input_shape):\n+ super(MlpGenerator, self).__init__(n_output_node, input_shape)\n+ if len(self.input_shape) > 1:\n+ raise ValueError('The input dimension is too high.')\n+\n+ def generate(self, model_len=Constant.MLP_MODEL_LEN, model_width=Constant.MLP_MODEL_WIDTH):\n+ if type(model_width) is list and not len(model_width) == model_len:\n+ raise ValueError('The length of \\'model_width\\' does not match \\'model_len\\'')\n+ elif type(model_width) is int:\n+ model_width = [model_width] * model_len\n+\n+ graph = Graph(self.input_shape[0], False)\n+ output_node_id = 0\n+ n_nodes_prev_layer = self.input_shape[0]\n+ for width in model_width:\n+ output_node_id = graph.add_layer(StubDense(n_nodes_prev_layer, width), output_node_id)\n+ output_node_id = graph.add_layer(StubDropout(Constant.MLP_DROPOUT_RATE), output_node_id)\n+ output_node_id = graph.add_layer(StubReLU(), output_node_id)\n+ n_nodes_prev_layer = width\n+\n+ graph.add_layer(StubDense(n_nodes_prev_layer, self.n_output_node), output_node_id)\n+ return graph\n", "issue": "Add a class to generate an MLP\n<!---\r\nPlease label your issue with `new_task_module`.\r\n-->\r\n\r\n### Suggested Name\r\n<!---\r\n-->\r\nMlpGenerator\r\n\r\n### Task Description\r\n<!---\r\nA clear and concise description of the machine learning task to be added, its problem statement and learning outcome.\r\n-->\r\nAdd a class named MlpGenerator. 
Create a superclass that would be inherited by CnnGenerator and MlpGenerator.\r\n\n", "before_files": [{"content": "class Constant:\n # Data\n\n VALIDATION_SET_SIZE = 0.08333\n\n # Searcher\n\n MAX_MODEL_NUM = 1000\n BETA = 2.576\n KERNEL_LAMBDA = 0.1\n T_MIN = 0.0001\n N_NEIGHBOURS = 8\n MAX_MODEL_SIZE = (1 << 25)\n MAX_LAYER_WIDTH = 4096\n MAX_LAYERS = 100\n\n # Model Defaults\n\n DENSE_DROPOUT_RATE = 0.5\n CONV_DROPOUT_RATE = 0.25\n CONV_BLOCK_DISTANCE = 2\n DENSE_BLOCK_DISTANCE = 1\n MODEL_LEN = 3\n MODEL_WIDTH = 64\n\n # ModelTrainer\n\n DATA_AUGMENTATION = True\n MAX_ITER_NUM = 200\n MIN_LOSS_DEC = 1e-4\n MAX_NO_IMPROVEMENT_NUM = 5\n MAX_BATCH_SIZE = 128\n LIMIT_MEMORY = False\n SEARCH_MAX_ITER = 200\n\n # text preprocessor\n\n EMBEDDING_DIM = 100\n MAX_SEQUENCE_LENGTH = 400\n MAX_NB_WORDS = 5000\n EXTRACT_PATH = \"glove/\"\n # Download file name\n FILE_PATH = \"glove.zip\"\n PRE_TRAIN_FILE_LINK = \"http://nlp.stanford.edu/data/glove.6B.zip\"\n PRE_TRAIN_FILE_NAME = \"glove.6B.100d.txt\"\n", "path": "autokeras/constant.py"}, {"content": "from autokeras.constant import Constant\nfrom autokeras.nn.graph import Graph\nfrom autokeras.nn.layers import StubBatchNormalization, StubConv, StubDropout, StubPooling, StubDense, StubFlatten, \\\n StubReLU, StubGlobalPooling\n\n\nclass CnnGenerator:\n def __init__(self, n_output_node, input_shape):\n self.n_output_node = n_output_node\n self.input_shape = input_shape\n if len(self.input_shape) > 4:\n raise ValueError('The input dimension is too high.')\n if len(self.input_shape) < 2:\n raise ValueError('The input dimension is too low.')\n\n def generate(self, model_len=Constant.MODEL_LEN, model_width=Constant.MODEL_WIDTH):\n pooling_len = int(model_len / 4)\n graph = Graph(self.input_shape, False)\n temp_input_channel = self.input_shape[-1]\n output_node_id = 0\n for i in range(model_len):\n output_node_id = graph.add_layer(StubReLU(), output_node_id)\n output_node_id = graph.add_layer(StubConv(temp_input_channel, model_width, kernel_size=3), output_node_id)\n output_node_id = graph.add_layer(StubBatchNormalization(model_width), output_node_id)\n temp_input_channel = model_width\n if pooling_len == 0 or ((i + 1) % pooling_len == 0 and i != model_len - 1):\n output_node_id = graph.add_layer(StubPooling(), output_node_id)\n\n output_node_id = graph.add_layer(StubGlobalPooling(), output_node_id)\n output_node_id = graph.add_layer(StubDropout(Constant.CONV_DROPOUT_RATE), output_node_id)\n output_node_id = graph.add_layer(StubDense(graph.node_list[output_node_id].shape[0], model_width),\n output_node_id)\n output_node_id = graph.add_layer(StubReLU(), output_node_id)\n graph.add_layer(StubDense(model_width, self.n_output_node), output_node_id)\n return graph\n", "path": "autokeras/nn/generator.py"}], "after_files": [{"content": "class Constant:\n # Data\n\n VALIDATION_SET_SIZE = 0.08333\n\n # Searcher\n\n MAX_MODEL_NUM = 1000\n BETA = 2.576\n KERNEL_LAMBDA = 0.1\n T_MIN = 0.0001\n N_NEIGHBOURS = 8\n MAX_MODEL_SIZE = (1 << 25)\n MAX_LAYER_WIDTH = 4096\n MAX_LAYERS = 100\n\n # Model Defaults\n\n DENSE_DROPOUT_RATE = 0.5\n CONV_DROPOUT_RATE = 0.25\n MLP_DROPOUT_RATE = 0.25\n CONV_BLOCK_DISTANCE = 2\n DENSE_BLOCK_DISTANCE = 1\n MODEL_LEN = 3\n MLP_MODEL_LEN = 3\n MLP_MODEL_WIDTH = 5\n MODEL_WIDTH = 64\n\n # ModelTrainer\n\n DATA_AUGMENTATION = True\n MAX_ITER_NUM = 200\n MIN_LOSS_DEC = 1e-4\n MAX_NO_IMPROVEMENT_NUM = 5\n MAX_BATCH_SIZE = 128\n LIMIT_MEMORY = False\n SEARCH_MAX_ITER = 200\n\n # text preprocessor\n\n EMBEDDING_DIM = 100\n MAX_SEQUENCE_LENGTH 
= 400\n MAX_NB_WORDS = 5000\n EXTRACT_PATH = \"glove/\"\n # Download file name\n FILE_PATH = \"glove.zip\"\n PRE_TRAIN_FILE_LINK = \"http://nlp.stanford.edu/data/glove.6B.zip\"\n PRE_TRAIN_FILE_NAME = \"glove.6B.100d.txt\"\n", "path": "autokeras/constant.py"}, {"content": "from autokeras.constant import Constant\nfrom autokeras.nn.graph import Graph\nfrom autokeras.nn.layers import StubBatchNormalization, StubConv, StubDropout, StubPooling, StubDense, StubFlatten, \\\n StubReLU, StubGlobalPooling\nfrom abc import abstractmethod\n\n\nclass NetworkGenerator:\n def __init__(self, n_output_node, input_shape):\n self.n_output_node = n_output_node\n self.input_shape = input_shape\n\n @abstractmethod\n def generate(self, model_len, model_width):\n pass\n\n\nclass CnnGenerator(NetworkGenerator):\n def __init__(self, n_output_node, input_shape):\n super(CnnGenerator, self).__init__(n_output_node, input_shape)\n if len(self.input_shape) > 4:\n raise ValueError('The input dimension is too high.')\n if len(self.input_shape) < 2:\n raise ValueError('The input dimension is too low.')\n\n def generate(self, model_len=Constant.MODEL_LEN, model_width=Constant.MODEL_WIDTH):\n pooling_len = int(model_len / 4)\n graph = Graph(self.input_shape, False)\n temp_input_channel = self.input_shape[-1]\n output_node_id = 0\n for i in range(model_len):\n output_node_id = graph.add_layer(StubReLU(), output_node_id)\n output_node_id = graph.add_layer(StubConv(temp_input_channel, model_width, kernel_size=3), output_node_id)\n output_node_id = graph.add_layer(StubBatchNormalization(model_width), output_node_id)\n temp_input_channel = model_width\n if pooling_len == 0 or ((i + 1) % pooling_len == 0 and i != model_len - 1):\n output_node_id = graph.add_layer(StubPooling(), output_node_id)\n\n output_node_id = graph.add_layer(StubGlobalPooling(), output_node_id)\n output_node_id = graph.add_layer(StubDropout(Constant.CONV_DROPOUT_RATE), output_node_id)\n output_node_id = graph.add_layer(StubDense(graph.node_list[output_node_id].shape[0], model_width),\n output_node_id)\n output_node_id = graph.add_layer(StubReLU(), output_node_id)\n graph.add_layer(StubDense(model_width, self.n_output_node), output_node_id)\n return graph\n\n\nclass MlpGenerator(NetworkGenerator):\n def __init__(self, n_output_node, input_shape):\n super(MlpGenerator, self).__init__(n_output_node, input_shape)\n if len(self.input_shape) > 1:\n raise ValueError('The input dimension is too high.')\n\n def generate(self, model_len=Constant.MLP_MODEL_LEN, model_width=Constant.MLP_MODEL_WIDTH):\n if type(model_width) is list and not len(model_width) == model_len:\n raise ValueError('The length of \\'model_width\\' does not match \\'model_len\\'')\n elif type(model_width) is int:\n model_width = [model_width] * model_len\n\n graph = Graph(self.input_shape[0], False)\n output_node_id = 0\n n_nodes_prev_layer = self.input_shape[0]\n for width in model_width:\n output_node_id = graph.add_layer(StubDense(n_nodes_prev_layer, width), output_node_id)\n output_node_id = graph.add_layer(StubDropout(Constant.MLP_DROPOUT_RATE), output_node_id)\n output_node_id = graph.add_layer(StubReLU(), output_node_id)\n n_nodes_prev_layer = width\n\n graph.add_layer(StubDense(n_nodes_prev_layer, self.n_output_node), output_node_id)\n return graph\n", "path": "autokeras/nn/generator.py"}]} | 1,322 | 766 |
gh_patches_debug_9892 | rasdani/github-patches | git_diff | docker__docker-py-2795 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
version requirements for cryptography should be consistent
Hi
It seems that the version requirements for cryptography in setup.py and requirements.txt are not consistent.
In setup.py, it is cryptography>=1.3.4
In requirements.txt, it is cryptography==3.2
Note that on PyPI, the version of cryptography is always updating (now 3.4.6). Inconsistent version requirements will result in installing different versions of cryptography depending on how it is installed.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2 from __future__ import print_function
3
4 import codecs
5 import os
6
7 from setuptools import find_packages
8 from setuptools import setup
9
10 ROOT_DIR = os.path.dirname(__file__)
11 SOURCE_DIR = os.path.join(ROOT_DIR)
12
13 requirements = [
14 'websocket-client >= 0.32.0',
15 'requests >= 2.14.2, != 2.18.0',
16 ]
17
18 extras_require = {
19 # win32 APIs if on Windows (required for npipe support)
20 ':sys_platform == "win32"': 'pywin32==227',
21
22 # If using docker-py over TLS, highly recommend this option is
23 # pip-installed or pinned.
24
25 # TODO: if pip installing both "requests" and "requests[security]", the
26 # extra package from the "security" option are not installed (see
27 # https://github.com/pypa/pip/issues/4391). Once that's fixed, instead of
28 # installing the extra dependencies, install the following instead:
29 # 'requests[security] >= 2.5.2, != 2.11.0, != 2.12.2'
30 'tls': ['pyOpenSSL>=17.5.0', 'cryptography>=1.3.4', 'idna>=2.0.0'],
31
32 # Only required when connecting using the ssh:// protocol
33 'ssh': ['paramiko>=2.4.2'],
34
35 }
36
37 version = None
38 exec(open('docker/version.py').read())
39
40 with open('./test-requirements.txt') as test_reqs_txt:
41 test_requirements = [line for line in test_reqs_txt]
42
43
44 long_description = ''
45 with codecs.open('./README.md', encoding='utf-8') as readme_md:
46 long_description = readme_md.read()
47
48 setup(
49 name="docker",
50 version=version,
51 description="A Python library for the Docker Engine API.",
52 long_description=long_description,
53 long_description_content_type='text/markdown',
54 url='https://github.com/docker/docker-py',
55 project_urls={
56 'Documentation': 'https://docker-py.readthedocs.io',
57 'Changelog': 'https://docker-py.readthedocs.io/en/stable/change-log.html', # noqa: E501
58 'Source': 'https://github.com/docker/docker-py',
59 'Tracker': 'https://github.com/docker/docker-py/issues',
60 },
61 packages=find_packages(exclude=["tests.*", "tests"]),
62 install_requires=requirements,
63 tests_require=test_requirements,
64 extras_require=extras_require,
65 python_requires='>=3.6',
66 zip_safe=False,
67 test_suite='tests',
68 classifiers=[
69 'Development Status :: 5 - Production/Stable',
70 'Environment :: Other Environment',
71 'Intended Audience :: Developers',
72 'Operating System :: OS Independent',
73 'Programming Language :: Python',
74 'Programming Language :: Python :: 3',
75 'Programming Language :: Python :: 3.6',
76 'Programming Language :: Python :: 3.7',
77 'Programming Language :: Python :: 3.8',
78 'Programming Language :: Python :: 3.9',
79 'Topic :: Software Development',
80 'Topic :: Utilities',
81 'License :: OSI Approved :: Apache Software License',
82 ],
83 maintainer='Joffrey F',
84 maintainer_email='[email protected]',
85 )
86
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -27,7 +27,7 @@
# https://github.com/pypa/pip/issues/4391). Once that's fixed, instead of
# installing the extra dependencies, install the following instead:
# 'requests[security] >= 2.5.2, != 2.11.0, != 2.12.2'
- 'tls': ['pyOpenSSL>=17.5.0', 'cryptography>=1.3.4', 'idna>=2.0.0'],
+ 'tls': ['pyOpenSSL>=17.5.0', 'cryptography>=3.4.7', 'idna>=2.0.0'],
# Only required when connecting using the ssh:// protocol
'ssh': ['paramiko>=2.4.2'],
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -27,7 +27,7 @@\n # https://github.com/pypa/pip/issues/4391). Once that's fixed, instead of\n # installing the extra dependencies, install the following instead:\n # 'requests[security] >= 2.5.2, != 2.11.0, != 2.12.2'\n- 'tls': ['pyOpenSSL>=17.5.0', 'cryptography>=1.3.4', 'idna>=2.0.0'],\n+ 'tls': ['pyOpenSSL>=17.5.0', 'cryptography>=3.4.7', 'idna>=2.0.0'],\n \n # Only required when connecting using the ssh:// protocol\n 'ssh': ['paramiko>=2.4.2'],\n", "issue": "version requirements for cryptography should be consistent\nHi\r\nIt seems that version requirements for cryptography in setup.py and requirements.txt are not consistent\r\nIn setup.py, it is cryptography>=1.3.4\r\nIn requirements.txt, it is cryptography==3.2\r\nNote that in pypi, the version of cryptography is always updating( now 3.4.6). Inconsistent version requirements will result in installing different version for cryptography if I use different ways of installation. \n", "before_files": [{"content": "#!/usr/bin/env python\nfrom __future__ import print_function\n\nimport codecs\nimport os\n\nfrom setuptools import find_packages\nfrom setuptools import setup\n\nROOT_DIR = os.path.dirname(__file__)\nSOURCE_DIR = os.path.join(ROOT_DIR)\n\nrequirements = [\n 'websocket-client >= 0.32.0',\n 'requests >= 2.14.2, != 2.18.0',\n]\n\nextras_require = {\n # win32 APIs if on Windows (required for npipe support)\n ':sys_platform == \"win32\"': 'pywin32==227',\n\n # If using docker-py over TLS, highly recommend this option is\n # pip-installed or pinned.\n\n # TODO: if pip installing both \"requests\" and \"requests[security]\", the\n # extra package from the \"security\" option are not installed (see\n # https://github.com/pypa/pip/issues/4391). 
Once that's fixed, instead of\n # installing the extra dependencies, install the following instead:\n # 'requests[security] >= 2.5.2, != 2.11.0, != 2.12.2'\n 'tls': ['pyOpenSSL>=17.5.0', 'cryptography>=1.3.4', 'idna>=2.0.0'],\n\n # Only required when connecting using the ssh:// protocol\n 'ssh': ['paramiko>=2.4.2'],\n\n}\n\nversion = None\nexec(open('docker/version.py').read())\n\nwith open('./test-requirements.txt') as test_reqs_txt:\n test_requirements = [line for line in test_reqs_txt]\n\n\nlong_description = ''\nwith codecs.open('./README.md', encoding='utf-8') as readme_md:\n long_description = readme_md.read()\n\nsetup(\n name=\"docker\",\n version=version,\n description=\"A Python library for the Docker Engine API.\",\n long_description=long_description,\n long_description_content_type='text/markdown',\n url='https://github.com/docker/docker-py',\n project_urls={\n 'Documentation': 'https://docker-py.readthedocs.io',\n 'Changelog': 'https://docker-py.readthedocs.io/en/stable/change-log.html', # noqa: E501\n 'Source': 'https://github.com/docker/docker-py',\n 'Tracker': 'https://github.com/docker/docker-py/issues',\n },\n packages=find_packages(exclude=[\"tests.*\", \"tests\"]),\n install_requires=requirements,\n tests_require=test_requirements,\n extras_require=extras_require,\n python_requires='>=3.6',\n zip_safe=False,\n test_suite='tests',\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Environment :: Other Environment',\n 'Intended Audience :: Developers',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n 'Programming Language :: Python :: 3.9',\n 'Topic :: Software Development',\n 'Topic :: Utilities',\n 'License :: OSI Approved :: Apache Software License',\n ],\n maintainer='Joffrey F',\n maintainer_email='[email protected]',\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\nfrom __future__ import print_function\n\nimport codecs\nimport os\n\nfrom setuptools import find_packages\nfrom setuptools import setup\n\nROOT_DIR = os.path.dirname(__file__)\nSOURCE_DIR = os.path.join(ROOT_DIR)\n\nrequirements = [\n 'websocket-client >= 0.32.0',\n 'requests >= 2.14.2, != 2.18.0',\n]\n\nextras_require = {\n # win32 APIs if on Windows (required for npipe support)\n ':sys_platform == \"win32\"': 'pywin32==227',\n\n # If using docker-py over TLS, highly recommend this option is\n # pip-installed or pinned.\n\n # TODO: if pip installing both \"requests\" and \"requests[security]\", the\n # extra package from the \"security\" option are not installed (see\n # https://github.com/pypa/pip/issues/4391). 
Once that's fixed, instead of\n # installing the extra dependencies, install the following instead:\n # 'requests[security] >= 2.5.2, != 2.11.0, != 2.12.2'\n 'tls': ['pyOpenSSL>=17.5.0', 'cryptography>=3.4.7', 'idna>=2.0.0'],\n\n # Only required when connecting using the ssh:// protocol\n 'ssh': ['paramiko>=2.4.2'],\n\n}\n\nversion = None\nexec(open('docker/version.py').read())\n\nwith open('./test-requirements.txt') as test_reqs_txt:\n test_requirements = [line for line in test_reqs_txt]\n\n\nlong_description = ''\nwith codecs.open('./README.md', encoding='utf-8') as readme_md:\n long_description = readme_md.read()\n\nsetup(\n name=\"docker\",\n version=version,\n description=\"A Python library for the Docker Engine API.\",\n long_description=long_description,\n long_description_content_type='text/markdown',\n url='https://github.com/docker/docker-py',\n project_urls={\n 'Documentation': 'https://docker-py.readthedocs.io',\n 'Changelog': 'https://docker-py.readthedocs.io/en/stable/change-log.html', # noqa: E501\n 'Source': 'https://github.com/docker/docker-py',\n 'Tracker': 'https://github.com/docker/docker-py/issues',\n },\n packages=find_packages(exclude=[\"tests.*\", \"tests\"]),\n install_requires=requirements,\n tests_require=test_requirements,\n extras_require=extras_require,\n python_requires='>=3.6',\n zip_safe=False,\n test_suite='tests',\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Environment :: Other Environment',\n 'Intended Audience :: Developers',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n 'Programming Language :: Python :: 3.9',\n 'Topic :: Software Development',\n 'Topic :: Utilities',\n 'License :: OSI Approved :: Apache Software License',\n ],\n maintainer='Joffrey F',\n maintainer_email='[email protected]',\n)\n", "path": "setup.py"}]} | 1,260 | 205 |
gh_patches_debug_3267 | rasdani/github-patches | git_diff | cupy__cupy-4711 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Python 3.9 support
- CI
- cupy-release-tools
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2
3 import glob
4 import os
5 from setuptools import setup, find_packages
6 import sys
7
8 import cupy_setup_build
9
10
11 for submodule in ('cupy/core/include/cupy/cub/',
12 'cupy/core/include/cupy/jitify'):
13 if len(os.listdir(submodule)) == 0:
14 msg = '''
15 The folder %s is a git submodule but is
16 currently empty. Please use the command
17
18 git submodule update --init
19
20 to populate the folder before building from source.
21 ''' % submodule
22 print(msg, file=sys.stderr)
23 sys.exit(1)
24
25
26 requirements = {
27 'setup': [
28 'fastrlock>=0.5',
29 ],
30 'install': [
31 'numpy>=1.17',
32 'fastrlock>=0.5',
33 ],
34 'all': [
35 'scipy>=1.4',
36 'optuna>=2.0',
37 ],
38
39 'stylecheck': [
40 'autopep8==1.4.4',
41 'flake8==3.7.9',
42 'pbr==4.0.4',
43 'pycodestyle==2.5.0',
44 ],
45 'test': [
46 # 4.2 <= pytest < 6.2 is slow collecting tests and times out on CI.
47 'pytest>=6.2',
48 ],
49 'appveyor': [
50 '-r test',
51 ],
52 'jenkins': [
53 '-r test',
54 'pytest-timeout',
55 'pytest-cov',
56 'coveralls',
57 'codecov',
58 'coverage<5', # Otherwise, Python must be built with sqlite
59 ],
60 }
61
62
63 def reduce_requirements(key):
64 # Resolve recursive requirements notation (-r)
65 reqs = requirements[key]
66 resolved_reqs = []
67 for req in reqs:
68 if req.startswith('-r'):
69 depend_key = req[2:].lstrip()
70 reduce_requirements(depend_key)
71 resolved_reqs += requirements[depend_key]
72 else:
73 resolved_reqs.append(req)
74 requirements[key] = resolved_reqs
75
76
77 for k in requirements.keys():
78 reduce_requirements(k)
79
80
81 extras_require = {k: v for k, v in requirements.items() if k != 'install'}
82
83
84 setup_requires = requirements['setup']
85 install_requires = requirements['install']
86 tests_require = requirements['test']
87
88 # List of files that needs to be in the distribution (sdist/wheel).
89 # Notes:
90 # - Files only needed in sdist should be added to `MANIFEST.in`.
91 # - The following glob (`**`) ignores items starting with `.`.
92 cupy_package_data = [
93 'cupy/cuda/cupy_thrust.cu',
94 'cupy/cuda/cupy_cub.cu',
95 'cupy/cuda/cupy_cufftXt.cu', # for cuFFT callback
96 'cupy/cuda/cupy_cufftXt.h', # for cuFFT callback
97 'cupy/cuda/cupy_cufft.h', # for cuFFT callback
98 'cupy/cuda/cufft.pxd', # for cuFFT callback
99 'cupy/cuda/cufft.pyx', # for cuFFT callback
100 'cupy/random/cupy_distributions.cu',
101 'cupy/random/cupy_distributions.cuh',
102 ] + [
103 x for x in glob.glob('cupy/core/include/cupy/**', recursive=True)
104 if os.path.isfile(x)
105 ]
106
107 package_data = {
108 'cupy': [
109 os.path.relpath(x, 'cupy') for x in cupy_package_data
110 ],
111 }
112
113 package_data['cupy'] += cupy_setup_build.prepare_wheel_libs()
114
115 package_name = cupy_setup_build.get_package_name()
116 long_description = cupy_setup_build.get_long_description()
117 ext_modules = cupy_setup_build.get_ext_modules()
118 build_ext = cupy_setup_build.custom_build_ext
119 sdist = cupy_setup_build.sdist_with_cython
120
121 here = os.path.abspath(os.path.dirname(__file__))
122 # Get __version__ variable
123 with open(os.path.join(here, 'cupy', '_version.py')) as f:
124 exec(f.read())
125
126 CLASSIFIERS = """\
127 Development Status :: 5 - Production/Stable
128 Intended Audience :: Science/Research
129 Intended Audience :: Developers
130 License :: OSI Approved :: MIT License
131 Programming Language :: Python
132 Programming Language :: Python :: 3
133 Programming Language :: Python :: 3.6
134 Programming Language :: Python :: 3.7
135 Programming Language :: Python :: 3.8
136 Programming Language :: Python :: 3 :: Only
137 Programming Language :: Cython
138 Topic :: Software Development
139 Topic :: Scientific/Engineering
140 Operating System :: POSIX
141 Operating System :: Microsoft :: Windows
142 """
143
144
145 setup(
146 name=package_name,
147 version=__version__, # NOQA
148 description='CuPy: A NumPy-compatible array library accelerated by CUDA',
149 long_description=long_description,
150 author='Seiya Tokui',
151 author_email='[email protected]',
152 url='https://cupy.dev/',
153 license='MIT License',
154 project_urls={
155 "Bug Tracker": "https://github.com/cupy/cupy/issues",
156 "Documentation": "https://docs.cupy.dev/",
157 "Source Code": "https://github.com/cupy/cupy",
158 },
159 classifiers=[_f for _f in CLASSIFIERS.split('\n') if _f],
160 packages=find_packages(exclude=['install', 'tests']),
161 package_data=package_data,
162 zip_safe=False,
163 python_requires='>=3.6.0',
164 setup_requires=setup_requires,
165 install_requires=install_requires,
166 tests_require=tests_require,
167 extras_require=extras_require,
168 ext_modules=ext_modules,
169 cmdclass={'build_ext': build_ext,
170 'sdist': sdist},
171 )
172
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -133,6 +133,7 @@
Programming Language :: Python :: 3.6
Programming Language :: Python :: 3.7
Programming Language :: Python :: 3.8
+Programming Language :: Python :: 3.9
Programming Language :: Python :: 3 :: Only
Programming Language :: Cython
Topic :: Software Development
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -133,6 +133,7 @@\n Programming Language :: Python :: 3.6\n Programming Language :: Python :: 3.7\n Programming Language :: Python :: 3.8\n+Programming Language :: Python :: 3.9\n Programming Language :: Python :: 3 :: Only\n Programming Language :: Cython\n Topic :: Software Development\n", "issue": "Python 3.9 support\n- CI\r\n- cupy-release-tools\n", "before_files": [{"content": "#!/usr/bin/env python\n\nimport glob\nimport os\nfrom setuptools import setup, find_packages\nimport sys\n\nimport cupy_setup_build\n\n\nfor submodule in ('cupy/core/include/cupy/cub/',\n 'cupy/core/include/cupy/jitify'):\n if len(os.listdir(submodule)) == 0:\n msg = '''\n The folder %s is a git submodule but is\n currently empty. Please use the command\n\n git submodule update --init\n\n to populate the folder before building from source.\n ''' % submodule\n print(msg, file=sys.stderr)\n sys.exit(1)\n\n\nrequirements = {\n 'setup': [\n 'fastrlock>=0.5',\n ],\n 'install': [\n 'numpy>=1.17',\n 'fastrlock>=0.5',\n ],\n 'all': [\n 'scipy>=1.4',\n 'optuna>=2.0',\n ],\n\n 'stylecheck': [\n 'autopep8==1.4.4',\n 'flake8==3.7.9',\n 'pbr==4.0.4',\n 'pycodestyle==2.5.0',\n ],\n 'test': [\n # 4.2 <= pytest < 6.2 is slow collecting tests and times out on CI.\n 'pytest>=6.2',\n ],\n 'appveyor': [\n '-r test',\n ],\n 'jenkins': [\n '-r test',\n 'pytest-timeout',\n 'pytest-cov',\n 'coveralls',\n 'codecov',\n 'coverage<5', # Otherwise, Python must be built with sqlite\n ],\n}\n\n\ndef reduce_requirements(key):\n # Resolve recursive requirements notation (-r)\n reqs = requirements[key]\n resolved_reqs = []\n for req in reqs:\n if req.startswith('-r'):\n depend_key = req[2:].lstrip()\n reduce_requirements(depend_key)\n resolved_reqs += requirements[depend_key]\n else:\n resolved_reqs.append(req)\n requirements[key] = resolved_reqs\n\n\nfor k in requirements.keys():\n reduce_requirements(k)\n\n\nextras_require = {k: v for k, v in requirements.items() if k != 'install'}\n\n\nsetup_requires = requirements['setup']\ninstall_requires = requirements['install']\ntests_require = requirements['test']\n\n# List of files that needs to be in the distribution (sdist/wheel).\n# Notes:\n# - Files only needed in sdist should be added to `MANIFEST.in`.\n# - The following glob (`**`) ignores items starting with `.`.\ncupy_package_data = [\n 'cupy/cuda/cupy_thrust.cu',\n 'cupy/cuda/cupy_cub.cu',\n 'cupy/cuda/cupy_cufftXt.cu', # for cuFFT callback\n 'cupy/cuda/cupy_cufftXt.h', # for cuFFT callback\n 'cupy/cuda/cupy_cufft.h', # for cuFFT callback\n 'cupy/cuda/cufft.pxd', # for cuFFT callback\n 'cupy/cuda/cufft.pyx', # for cuFFT callback\n 'cupy/random/cupy_distributions.cu',\n 'cupy/random/cupy_distributions.cuh',\n] + [\n x for x in glob.glob('cupy/core/include/cupy/**', recursive=True)\n if os.path.isfile(x)\n]\n\npackage_data = {\n 'cupy': [\n os.path.relpath(x, 'cupy') for x in cupy_package_data\n ],\n}\n\npackage_data['cupy'] += cupy_setup_build.prepare_wheel_libs()\n\npackage_name = cupy_setup_build.get_package_name()\nlong_description = cupy_setup_build.get_long_description()\next_modules = cupy_setup_build.get_ext_modules()\nbuild_ext = cupy_setup_build.custom_build_ext\nsdist = cupy_setup_build.sdist_with_cython\n\nhere = os.path.abspath(os.path.dirname(__file__))\n# Get __version__ variable\nwith open(os.path.join(here, 'cupy', '_version.py')) as f:\n exec(f.read())\n\nCLASSIFIERS = \"\"\"\\\nDevelopment Status :: 5 - Production/Stable\nIntended Audience 
:: Science/Research\nIntended Audience :: Developers\nLicense :: OSI Approved :: MIT License\nProgramming Language :: Python\nProgramming Language :: Python :: 3\nProgramming Language :: Python :: 3.6\nProgramming Language :: Python :: 3.7\nProgramming Language :: Python :: 3.8\nProgramming Language :: Python :: 3 :: Only\nProgramming Language :: Cython\nTopic :: Software Development\nTopic :: Scientific/Engineering\nOperating System :: POSIX\nOperating System :: Microsoft :: Windows\n\"\"\"\n\n\nsetup(\n name=package_name,\n version=__version__, # NOQA\n description='CuPy: A NumPy-compatible array library accelerated by CUDA',\n long_description=long_description,\n author='Seiya Tokui',\n author_email='[email protected]',\n url='https://cupy.dev/',\n license='MIT License',\n project_urls={\n \"Bug Tracker\": \"https://github.com/cupy/cupy/issues\",\n \"Documentation\": \"https://docs.cupy.dev/\",\n \"Source Code\": \"https://github.com/cupy/cupy\",\n },\n classifiers=[_f for _f in CLASSIFIERS.split('\\n') if _f],\n packages=find_packages(exclude=['install', 'tests']),\n package_data=package_data,\n zip_safe=False,\n python_requires='>=3.6.0',\n setup_requires=setup_requires,\n install_requires=install_requires,\n tests_require=tests_require,\n extras_require=extras_require,\n ext_modules=ext_modules,\n cmdclass={'build_ext': build_ext,\n 'sdist': sdist},\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\n\nimport glob\nimport os\nfrom setuptools import setup, find_packages\nimport sys\n\nimport cupy_setup_build\n\n\nfor submodule in ('cupy/core/include/cupy/cub/',\n 'cupy/core/include/cupy/jitify'):\n if len(os.listdir(submodule)) == 0:\n msg = '''\n The folder %s is a git submodule but is\n currently empty. Please use the command\n\n git submodule update --init\n\n to populate the folder before building from source.\n ''' % submodule\n print(msg, file=sys.stderr)\n sys.exit(1)\n\n\nrequirements = {\n 'setup': [\n 'fastrlock>=0.5',\n ],\n 'install': [\n 'numpy>=1.17',\n 'fastrlock>=0.5',\n ],\n 'all': [\n 'scipy>=1.4',\n 'optuna>=2.0',\n ],\n\n 'stylecheck': [\n 'autopep8==1.4.4',\n 'flake8==3.7.9',\n 'pbr==4.0.4',\n 'pycodestyle==2.5.0',\n ],\n 'test': [\n # 4.2 <= pytest < 6.2 is slow collecting tests and times out on CI.\n 'pytest>=6.2',\n ],\n 'appveyor': [\n '-r test',\n ],\n 'jenkins': [\n '-r test',\n 'pytest-timeout',\n 'pytest-cov',\n 'coveralls',\n 'codecov',\n 'coverage<5', # Otherwise, Python must be built with sqlite\n ],\n}\n\n\ndef reduce_requirements(key):\n # Resolve recursive requirements notation (-r)\n reqs = requirements[key]\n resolved_reqs = []\n for req in reqs:\n if req.startswith('-r'):\n depend_key = req[2:].lstrip()\n reduce_requirements(depend_key)\n resolved_reqs += requirements[depend_key]\n else:\n resolved_reqs.append(req)\n requirements[key] = resolved_reqs\n\n\nfor k in requirements.keys():\n reduce_requirements(k)\n\n\nextras_require = {k: v for k, v in requirements.items() if k != 'install'}\n\n\nsetup_requires = requirements['setup']\ninstall_requires = requirements['install']\ntests_require = requirements['test']\n\n# List of files that needs to be in the distribution (sdist/wheel).\n# Notes:\n# - Files only needed in sdist should be added to `MANIFEST.in`.\n# - The following glob (`**`) ignores items starting with `.`.\ncupy_package_data = [\n 'cupy/cuda/cupy_thrust.cu',\n 'cupy/cuda/cupy_cub.cu',\n 'cupy/cuda/cupy_cufftXt.cu', # for cuFFT callback\n 'cupy/cuda/cupy_cufftXt.h', # for cuFFT callback\n 
'cupy/cuda/cupy_cufft.h', # for cuFFT callback\n 'cupy/cuda/cufft.pxd', # for cuFFT callback\n 'cupy/cuda/cufft.pyx', # for cuFFT callback\n 'cupy/random/cupy_distributions.cu',\n 'cupy/random/cupy_distributions.cuh',\n] + [\n x for x in glob.glob('cupy/core/include/cupy/**', recursive=True)\n if os.path.isfile(x)\n]\n\npackage_data = {\n 'cupy': [\n os.path.relpath(x, 'cupy') for x in cupy_package_data\n ],\n}\n\npackage_data['cupy'] += cupy_setup_build.prepare_wheel_libs()\n\npackage_name = cupy_setup_build.get_package_name()\nlong_description = cupy_setup_build.get_long_description()\next_modules = cupy_setup_build.get_ext_modules()\nbuild_ext = cupy_setup_build.custom_build_ext\nsdist = cupy_setup_build.sdist_with_cython\n\nhere = os.path.abspath(os.path.dirname(__file__))\n# Get __version__ variable\nwith open(os.path.join(here, 'cupy', '_version.py')) as f:\n exec(f.read())\n\nCLASSIFIERS = \"\"\"\\\nDevelopment Status :: 5 - Production/Stable\nIntended Audience :: Science/Research\nIntended Audience :: Developers\nLicense :: OSI Approved :: MIT License\nProgramming Language :: Python\nProgramming Language :: Python :: 3\nProgramming Language :: Python :: 3.6\nProgramming Language :: Python :: 3.7\nProgramming Language :: Python :: 3.8\nProgramming Language :: Python :: 3.9\nProgramming Language :: Python :: 3 :: Only\nProgramming Language :: Cython\nTopic :: Software Development\nTopic :: Scientific/Engineering\nOperating System :: POSIX\nOperating System :: Microsoft :: Windows\n\"\"\"\n\n\nsetup(\n name=package_name,\n version=__version__, # NOQA\n description='CuPy: A NumPy-compatible array library accelerated by CUDA',\n long_description=long_description,\n author='Seiya Tokui',\n author_email='[email protected]',\n url='https://cupy.dev/',\n license='MIT License',\n project_urls={\n \"Bug Tracker\": \"https://github.com/cupy/cupy/issues\",\n \"Documentation\": \"https://docs.cupy.dev/\",\n \"Source Code\": \"https://github.com/cupy/cupy\",\n },\n classifiers=[_f for _f in CLASSIFIERS.split('\\n') if _f],\n packages=find_packages(exclude=['install', 'tests']),\n package_data=package_data,\n zip_safe=False,\n python_requires='>=3.6.0',\n setup_requires=setup_requires,\n install_requires=install_requires,\n tests_require=tests_require,\n extras_require=extras_require,\n ext_modules=ext_modules,\n cmdclass={'build_ext': build_ext,\n 'sdist': sdist},\n)\n", "path": "setup.py"}]} | 1,944 | 96 |
gh_patches_debug_56080 | rasdani/github-patches | git_diff | hpcaitech__ColossalAI-3107 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[BUG]: typing miss of kwargs
### 🐛 Describe the bug
When I tried to train prompts with opt model, the following error was occurred.
```
Traceback (most recent call last):
File "/home/xxx/workspace/ColossalAI/applications/ChatGPT/examples/train_prompts.py", line 127, in <module>
main(args)
File "/home/xxx/workspace/ColossalAI/applications/ChatGPT/examples/train_prompts.py", line 42, in main
critic = OPTCritic(lora_rank=args.lora_rank).cuda()
File "/home/xxx/.pyenv/versions/3.9.9/lib/python3.9/site-packages/chatgpt/models/opt/opt_critic.py", line 38, in __init__
super().__init__(model, value_head, lora_rank, lora_train_bias, **kwargs)
NameError: name 'kwargs' is not defined
```
To Reproduce
```
torchrun --standalone --nproc_per_node=2 train_prompts.py prompts.csv --model opt --strategy colossalai_zero2
```
### Environment
_No response_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `applications/ChatGPT/chatgpt/models/opt/opt_critic.py`
Content:
```
1 from typing import Optional
2
3 import torch.nn as nn
4 from transformers.models.opt.configuration_opt import OPTConfig
5 from transformers.models.opt.modeling_opt import OPTModel
6
7 from ..base import Critic
8
9
10 class OPTCritic(Critic):
11 """
12 OPT Critic model.
13
14 Args:
15 pretrained (str): Pretrained model name or path.
16 config (OPTConfig): Model config.
17 checkpoint (bool): Enable gradient checkpointing.
18 lora_rank (int): Rank of the low-rank approximation.
19 lora_train_bias (str): LoRA bias training mode.
20 """
21
22 def __init__(self,
23 pretrained: Optional[str] = None,
24 config: Optional[OPTConfig] = None,
25 checkpoint: bool = False,
26 lora_rank: int = 0,
27 lora_train_bias: str = 'none',
28 **kargs) -> None:
29 if pretrained is not None:
30 model = OPTModel.from_pretrained(pretrained)
31 elif config is not None:
32 model = OPTModel(config)
33 else:
34 model = OPTModel(OPTConfig())
35 if checkpoint:
36 model.gradient_checkpointing_enable()
37 value_head = nn.Linear(model.config.hidden_size, 1)
38 super().__init__(model, value_head, lora_rank, lora_train_bias, **kwargs)
39
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/applications/ChatGPT/chatgpt/models/opt/opt_critic.py b/applications/ChatGPT/chatgpt/models/opt/opt_critic.py
--- a/applications/ChatGPT/chatgpt/models/opt/opt_critic.py
+++ b/applications/ChatGPT/chatgpt/models/opt/opt_critic.py
@@ -25,7 +25,7 @@
checkpoint: bool = False,
lora_rank: int = 0,
lora_train_bias: str = 'none',
- **kargs) -> None:
+ **kwargs) -> None:
if pretrained is not None:
model = OPTModel.from_pretrained(pretrained)
elif config is not None:
| {"golden_diff": "diff --git a/applications/ChatGPT/chatgpt/models/opt/opt_critic.py b/applications/ChatGPT/chatgpt/models/opt/opt_critic.py\n--- a/applications/ChatGPT/chatgpt/models/opt/opt_critic.py\n+++ b/applications/ChatGPT/chatgpt/models/opt/opt_critic.py\n@@ -25,7 +25,7 @@\n checkpoint: bool = False,\n lora_rank: int = 0,\n lora_train_bias: str = 'none',\n- **kargs) -> None:\n+ **kwargs) -> None:\n if pretrained is not None:\n model = OPTModel.from_pretrained(pretrained)\n elif config is not None:\n", "issue": "[BUG]: typing miss of kwargs\n### \ud83d\udc1b Describe the bug\n\nWhen I tried to train prompts with opt model, the following error was occurred.\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/xxx/workspace/ColossalAI/applications/ChatGPT/examples/train_prompts.py\", line 127, in <module>\r\n main(args)\r\n File \"/home/xxx/workspace/ColossalAI/applications/ChatGPT/examples/train_prompts.py\", line 42, in main\r\n critic = OPTCritic(lora_rank=args.lora_rank).cuda()\r\n File \"/home/xxx/.pyenv/versions/3.9.9/lib/python3.9/site-packages/chatgpt/models/opt/opt_critic.py\", line 38, in __init__\r\n super().__init__(model, value_head, lora_rank, lora_train_bias, **kwargs)\r\nNameError: name 'kwargs' is not defined\r\n```\r\n\r\nTo Reproduce\r\n\r\n```\r\ntorchrun --standalone --nproc_per_node=2 train_prompts.py prompts.csv --model opt --strategy colossalai_zero2\r\n```\r\n\r\n\n\n### Environment\n\n_No response_\n", "before_files": [{"content": "from typing import Optional\n\nimport torch.nn as nn\nfrom transformers.models.opt.configuration_opt import OPTConfig\nfrom transformers.models.opt.modeling_opt import OPTModel\n\nfrom ..base import Critic\n\n\nclass OPTCritic(Critic):\n \"\"\"\n OPT Critic model.\n\n Args:\n pretrained (str): Pretrained model name or path.\n config (OPTConfig): Model config.\n checkpoint (bool): Enable gradient checkpointing.\n lora_rank (int): Rank of the low-rank approximation.\n lora_train_bias (str): LoRA bias training mode.\n \"\"\"\n\n def __init__(self,\n pretrained: Optional[str] = None,\n config: Optional[OPTConfig] = None,\n checkpoint: bool = False,\n lora_rank: int = 0,\n lora_train_bias: str = 'none',\n **kargs) -> None:\n if pretrained is not None:\n model = OPTModel.from_pretrained(pretrained)\n elif config is not None:\n model = OPTModel(config)\n else:\n model = OPTModel(OPTConfig())\n if checkpoint:\n model.gradient_checkpointing_enable()\n value_head = nn.Linear(model.config.hidden_size, 1)\n super().__init__(model, value_head, lora_rank, lora_train_bias, **kwargs)\n", "path": "applications/ChatGPT/chatgpt/models/opt/opt_critic.py"}], "after_files": [{"content": "from typing import Optional\n\nimport torch.nn as nn\nfrom transformers.models.opt.configuration_opt import OPTConfig\nfrom transformers.models.opt.modeling_opt import OPTModel\n\nfrom ..base import Critic\n\n\nclass OPTCritic(Critic):\n \"\"\"\n OPT Critic model.\n\n Args:\n pretrained (str): Pretrained model name or path.\n config (OPTConfig): Model config.\n checkpoint (bool): Enable gradient checkpointing.\n lora_rank (int): Rank of the low-rank approximation.\n lora_train_bias (str): LoRA bias training mode.\n \"\"\"\n\n def __init__(self,\n pretrained: Optional[str] = None,\n config: Optional[OPTConfig] = None,\n checkpoint: bool = False,\n lora_rank: int = 0,\n lora_train_bias: str = 'none',\n **kwargs) -> None:\n if pretrained is not None:\n model = OPTModel.from_pretrained(pretrained)\n elif config is not None:\n model = OPTModel(config)\n else:\n 
model = OPTModel(OPTConfig())\n if checkpoint:\n model.gradient_checkpointing_enable()\n value_head = nn.Linear(model.config.hidden_size, 1)\n super().__init__(model, value_head, lora_rank, lora_train_bias, **kwargs)\n", "path": "applications/ChatGPT/chatgpt/models/opt/opt_critic.py"}]} | 864 | 154 |
gh_patches_debug_9664 | rasdani/github-patches | git_diff | conda__conda-build-2271 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Set target in package section [feature request]
## Problem
Currently the only way to change the target platform is using the built in way of handling variants. This involves including an additional file `conda_build_config.yaml`. When you try to set `target` in the package section it is completely ignored and when you try to set `target` in the outputs section it throws an error. Something like:
```
Expecting win-64 got linux-64
```
## Request
Ideally we would like to be able to set target directly in the package section, but our needs could be met as long as that functionality is exposed and not bound to the use of `conda_build_config.yaml`. I took a look at doing this myself but I am unfamiliar with the code base and the reliance on `target_platform` seems to be entrenched in the variants logic and spread across quite a few files.
Please let me know what you think!
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `conda_build/tarcheck.py`
Content:
```
1 from __future__ import absolute_import, division, print_function
2
3 import json
4 from os.path import basename
5 import re
6 import tarfile
7
8 from conda_build.utils import codec
9
10
11 def dist_fn(fn):
12 if fn.endswith('.tar'):
13 return fn[:-4]
14 elif fn.endswith('.tar.bz2'):
15 return fn[:-8]
16 else:
17 raise Exception('did not expect filename: %r' % fn)
18
19
20 class TarCheck(object):
21 def __init__(self, path, config):
22 self.t = tarfile.open(path)
23 self.paths = set(m.path for m in self.t.getmembers())
24 self.dist = dist_fn(basename(path))
25 self.name, self.version, self.build = self.dist.split('::', 1)[-1].rsplit('-', 2)
26 self.config = config
27
28 def __enter__(self):
29 return self
30
31 def __exit__(self, e_type, e_value, traceback):
32 self.t.close()
33
34 def info_files(self):
35 if re.search('pyh[0-9a-f]{%d}_' % self.config.hash_length, self.build):
36 return
37 lista = [p.strip().decode('utf-8') for p in
38 self.t.extractfile('info/files').readlines()]
39 seta = set(lista)
40 if len(lista) != len(seta):
41 raise Exception('info/files: duplicates')
42
43 listb = [m.path for m in self.t.getmembers()
44 if not (m.path.startswith('info/') or m.isdir())]
45 setb = set(listb)
46 if len(listb) != len(setb):
47 raise Exception('info_files: duplicate members')
48
49 if seta == setb:
50 return
51 for p in sorted(seta | setb):
52 if p not in seta:
53 print('%r not in info/files' % p)
54 if p not in setb:
55 print('%r not in tarball' % p)
56 raise Exception('info/files')
57
58 def index_json(self):
59 info = json.loads(self.t.extractfile('info/index.json').read().decode('utf-8'))
60 for varname in 'name', 'version':
61 if info[varname] != getattr(self, varname):
62 raise Exception('%s: %r != %r' % (varname, info[varname],
63 getattr(self, varname)))
64 assert isinstance(info['build_number'], int)
65
66 def prefix_length(self):
67 prefix_length = None
68 if 'info/has_prefix' in self.t.getnames():
69 prefix_files = self.t.extractfile('info/has_prefix').readlines()
70 for line in prefix_files:
71 try:
72 prefix, file_type, _ = line.split()
73 # lines not conforming to the split
74 except ValueError:
75 continue
76 if hasattr(file_type, 'decode'):
77 file_type = file_type.decode(codec)
78 if file_type == 'binary':
79 prefix_length = len(prefix)
80 break
81 return prefix_length
82
83 def correct_subdir(self):
84 info = json.loads(self.t.extractfile('info/index.json').read().decode('utf-8'))
85 assert info['subdir'] in [self.config.host_subdir, 'noarch'], \
86 ("Inconsistent subdir in package - index.json expecting {0},"
87 " got {1}".format(self.config.host_subdir, info['subdir']))
88
89
90 def check_all(path, config):
91 x = TarCheck(path, config)
92 x.info_files()
93 x.index_json()
94 x.correct_subdir()
95 x.t.close()
96
97
98 def check_prefix_lengths(files, config):
99 lengths = {}
100 for f in files:
101 length = TarCheck(f, config).prefix_length()
102 if length and length < config.prefix_length:
103 lengths[f] = length
104 return lengths
105
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/conda_build/tarcheck.py b/conda_build/tarcheck.py
--- a/conda_build/tarcheck.py
+++ b/conda_build/tarcheck.py
@@ -82,7 +82,7 @@
def correct_subdir(self):
info = json.loads(self.t.extractfile('info/index.json').read().decode('utf-8'))
- assert info['subdir'] in [self.config.host_subdir, 'noarch'], \
+ assert info['subdir'] in [self.config.host_subdir, 'noarch', self.config.target_subdir], \
("Inconsistent subdir in package - index.json expecting {0},"
" got {1}".format(self.config.host_subdir, info['subdir']))
| {"golden_diff": "diff --git a/conda_build/tarcheck.py b/conda_build/tarcheck.py\n--- a/conda_build/tarcheck.py\n+++ b/conda_build/tarcheck.py\n@@ -82,7 +82,7 @@\n \n def correct_subdir(self):\n info = json.loads(self.t.extractfile('info/index.json').read().decode('utf-8'))\n- assert info['subdir'] in [self.config.host_subdir, 'noarch'], \\\n+ assert info['subdir'] in [self.config.host_subdir, 'noarch', self.config.target_subdir], \\\n (\"Inconsistent subdir in package - index.json expecting {0},\"\n \" got {1}\".format(self.config.host_subdir, info['subdir']))\n", "issue": "Set target in package section [feature request]\n## Problem\r\n\r\nCurrently the only way to change the target platform is using the built in way of handling variants. This involves including an additional file `conda_build_config.yaml`. When you try to set `target` in the package section it is completely ignored and when you try to set `target` in the outputs section it throws an error. Something like:\r\n```\r\nExpecting win-64 got linux-64\r\n```\r\n\r\n## Request\r\n\r\nIdeally we would like to be able to set target directly in the package section, but our needs could be met as long as that functionality is exposed and not bound to the use of `conda_build_config.yaml`. I took a look at doing this myself but I am unfamiliar with the code base and the reliance on `target_platform` seems to be entrenched in the variants logic and spread across quite a few files.\r\n\r\nPlease let me know what you think!\n", "before_files": [{"content": "from __future__ import absolute_import, division, print_function\n\nimport json\nfrom os.path import basename\nimport re\nimport tarfile\n\nfrom conda_build.utils import codec\n\n\ndef dist_fn(fn):\n if fn.endswith('.tar'):\n return fn[:-4]\n elif fn.endswith('.tar.bz2'):\n return fn[:-8]\n else:\n raise Exception('did not expect filename: %r' % fn)\n\n\nclass TarCheck(object):\n def __init__(self, path, config):\n self.t = tarfile.open(path)\n self.paths = set(m.path for m in self.t.getmembers())\n self.dist = dist_fn(basename(path))\n self.name, self.version, self.build = self.dist.split('::', 1)[-1].rsplit('-', 2)\n self.config = config\n\n def __enter__(self):\n return self\n\n def __exit__(self, e_type, e_value, traceback):\n self.t.close()\n\n def info_files(self):\n if re.search('pyh[0-9a-f]{%d}_' % self.config.hash_length, self.build):\n return\n lista = [p.strip().decode('utf-8') for p in\n self.t.extractfile('info/files').readlines()]\n seta = set(lista)\n if len(lista) != len(seta):\n raise Exception('info/files: duplicates')\n\n listb = [m.path for m in self.t.getmembers()\n if not (m.path.startswith('info/') or m.isdir())]\n setb = set(listb)\n if len(listb) != len(setb):\n raise Exception('info_files: duplicate members')\n\n if seta == setb:\n return\n for p in sorted(seta | setb):\n if p not in seta:\n print('%r not in info/files' % p)\n if p not in setb:\n print('%r not in tarball' % p)\n raise Exception('info/files')\n\n def index_json(self):\n info = json.loads(self.t.extractfile('info/index.json').read().decode('utf-8'))\n for varname in 'name', 'version':\n if info[varname] != getattr(self, varname):\n raise Exception('%s: %r != %r' % (varname, info[varname],\n getattr(self, varname)))\n assert isinstance(info['build_number'], int)\n\n def prefix_length(self):\n prefix_length = None\n if 'info/has_prefix' in self.t.getnames():\n prefix_files = self.t.extractfile('info/has_prefix').readlines()\n for line in prefix_files:\n try:\n prefix, file_type, _ = 
line.split()\n # lines not conforming to the split\n except ValueError:\n continue\n if hasattr(file_type, 'decode'):\n file_type = file_type.decode(codec)\n if file_type == 'binary':\n prefix_length = len(prefix)\n break\n return prefix_length\n\n def correct_subdir(self):\n info = json.loads(self.t.extractfile('info/index.json').read().decode('utf-8'))\n assert info['subdir'] in [self.config.host_subdir, 'noarch'], \\\n (\"Inconsistent subdir in package - index.json expecting {0},\"\n \" got {1}\".format(self.config.host_subdir, info['subdir']))\n\n\ndef check_all(path, config):\n x = TarCheck(path, config)\n x.info_files()\n x.index_json()\n x.correct_subdir()\n x.t.close()\n\n\ndef check_prefix_lengths(files, config):\n lengths = {}\n for f in files:\n length = TarCheck(f, config).prefix_length()\n if length and length < config.prefix_length:\n lengths[f] = length\n return lengths\n", "path": "conda_build/tarcheck.py"}], "after_files": [{"content": "from __future__ import absolute_import, division, print_function\n\nimport json\nfrom os.path import basename\nimport re\nimport tarfile\n\nfrom conda_build.utils import codec\n\n\ndef dist_fn(fn):\n if fn.endswith('.tar'):\n return fn[:-4]\n elif fn.endswith('.tar.bz2'):\n return fn[:-8]\n else:\n raise Exception('did not expect filename: %r' % fn)\n\n\nclass TarCheck(object):\n def __init__(self, path, config):\n self.t = tarfile.open(path)\n self.paths = set(m.path for m in self.t.getmembers())\n self.dist = dist_fn(basename(path))\n self.name, self.version, self.build = self.dist.split('::', 1)[-1].rsplit('-', 2)\n self.config = config\n\n def __enter__(self):\n return self\n\n def __exit__(self, e_type, e_value, traceback):\n self.t.close()\n\n def info_files(self):\n if re.search('pyh[0-9a-f]{%d}_' % self.config.hash_length, self.build):\n return\n lista = [p.strip().decode('utf-8') for p in\n self.t.extractfile('info/files').readlines()]\n seta = set(lista)\n if len(lista) != len(seta):\n raise Exception('info/files: duplicates')\n\n listb = [m.path for m in self.t.getmembers()\n if not (m.path.startswith('info/') or m.isdir())]\n setb = set(listb)\n if len(listb) != len(setb):\n raise Exception('info_files: duplicate members')\n\n if seta == setb:\n return\n for p in sorted(seta | setb):\n if p not in seta:\n print('%r not in info/files' % p)\n if p not in setb:\n print('%r not in tarball' % p)\n raise Exception('info/files')\n\n def index_json(self):\n info = json.loads(self.t.extractfile('info/index.json').read().decode('utf-8'))\n for varname in 'name', 'version':\n if info[varname] != getattr(self, varname):\n raise Exception('%s: %r != %r' % (varname, info[varname],\n getattr(self, varname)))\n assert isinstance(info['build_number'], int)\n\n def prefix_length(self):\n prefix_length = None\n if 'info/has_prefix' in self.t.getnames():\n prefix_files = self.t.extractfile('info/has_prefix').readlines()\n for line in prefix_files:\n try:\n prefix, file_type, _ = line.split()\n # lines not conforming to the split\n except ValueError:\n continue\n if hasattr(file_type, 'decode'):\n file_type = file_type.decode(codec)\n if file_type == 'binary':\n prefix_length = len(prefix)\n break\n return prefix_length\n\n def correct_subdir(self):\n info = json.loads(self.t.extractfile('info/index.json').read().decode('utf-8'))\n assert info['subdir'] in [self.config.host_subdir, 'noarch', self.config.target_subdir], \\\n (\"Inconsistent subdir in package - index.json expecting {0},\"\n \" got {1}\".format(self.config.host_subdir, 
info['subdir']))\n\n\ndef check_all(path, config):\n x = TarCheck(path, config)\n x.info_files()\n x.index_json()\n x.correct_subdir()\n x.t.close()\n\n\ndef check_prefix_lengths(files, config):\n lengths = {}\n for f in files:\n length = TarCheck(f, config).prefix_length()\n if length and length < config.prefix_length:\n lengths[f] = length\n return lengths\n", "path": "conda_build/tarcheck.py"}]} | 1,480 | 167 |
gh_patches_debug_21519 | rasdani/github-patches | git_diff | google__flax-596 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
QoL: better print for FrozenDict
The best way I'm aware of to get an overview of model shape is via `jax.tree_map(jnp.shape, params)`. FrozenDicts have no concept of pretty printing the way dicts do, so large models are unwieldy to parse at a glance.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `flax/core/frozen_dict.py`
Content:
```
1 # Copyright 2020 The Flax Authors.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Frozen Dictionary."""
16
17 from typing import Any, TypeVar, Mapping, Dict, Tuple
18
19 from flax import serialization
20 import jax
21
22
23 K = TypeVar('K')
24 V = TypeVar('V')
25
26
27 @jax.tree_util.register_pytree_node_class
28 class FrozenDict(Mapping[K, V]):
29 """An immutable variant of the Python dict."""
30 __slots__ = ('_dict', '_hash')
31
32 def __init__(self, *args, **kwargs):
33 # make sure the dict is as
34 xs = dict(*args, **kwargs)
35 self._dict = _prepare_freeze(xs)
36
37 self._hash = None
38
39 def __getitem__(self, key):
40 v = self._dict[key]
41 if isinstance(v, dict):
42 return FrozenDict(v)
43 return v
44
45 def __setitem__(self, key, value):
46 raise ValueError('FrozenDict is immutable.')
47
48 def __contains__(self, key):
49 return key in self._dict
50
51 def __iter__(self):
52 return iter(self._dict)
53
54 def __len__(self):
55 return len(self._dict)
56
57 def __repr__(self):
58 return 'FrozenDict(%r)' % self._dict
59
60 def __hash__(self):
61 if self._hash is None:
62 h = 0
63 for key, value in self.items():
64 h ^= hash((key, value))
65 self._hash = h
66 return self._hash
67
68 def copy(self, add_or_replace: Mapping[K, V]) -> 'FrozenDict[K, V]':
69 """Create a new FrozenDict with additional or replaced entries."""
70 return type(self)(self, **unfreeze(add_or_replace))
71
72 def items(self):
73 for key in self._dict:
74 yield (key, self[key])
75
76 def pop(self, key: K) -> Tuple['FrozenDict[K, V]', V]:
77 """Create a new FrozenDict where one entry is removed.
78
79 Example::
80
81 state, params = variables.pop('params')
82
83 Args:
84 key: the key to remove from the dict
85 Returns:
86 A pair with the new FrozenDict and the removed value.
87 """
88 value = self[key]
89 new_dict = dict(self._dict)
90 new_dict.pop(key)
91 new_self = type(self)(new_dict)
92 return new_self, value
93
94 def unfreeze(self) -> Dict[K, V]:
95 return unfreeze(self)
96
97 def tree_flatten(self):
98 return (self._dict,), ()
99
100 @classmethod
101 def tree_unflatten(cls, _, data):
102 return cls(*data)
103
104
105 def _prepare_freeze(xs: Any) -> Any:
106 """Deep copy unfrozen dicts to make the dictionary FrozenDict safe."""
107 if isinstance(xs, FrozenDict):
108 # we can safely ref share the internal state of a FrozenDict
109 # because it is immutable.
110 return xs._dict # pylint: disable=protected-access
111 if not isinstance(xs, dict):
112 # return a leaf as is.
113 return xs
114 # recursively copy dictionary to avoid ref sharing
115 return {key: _prepare_freeze(val) for key, val in xs.items()}
116
117
118 def freeze(xs: Dict[K, V]) -> FrozenDict[K, V]:
119 """Freeze a nested dict.
120
121 Makes a nested `dict` immutable by transforming it into `FrozenDict`.
122 """
123 return FrozenDict(xs)
124
125
126 def unfreeze(x: FrozenDict[K, V]) -> Dict[K, V]:
127 """Unfreeze a FrozenDict.
128
129 Makes a mutable copy of a `FrozenDict` mutable by transforming
130 it into (nested) dict.
131 """
132 if not isinstance(x, (FrozenDict, dict)):
133 return x
134 ys = {}
135 for key, value in x.items():
136 ys[key] = unfreeze(value)
137 return ys
138
139
140 def _frozen_dict_state_dict(xs):
141 return {key: serialization.to_state_dict(value) for key, value in xs.items()}
142
143
144 def _restore_frozen_dict(xs, states):
145 return FrozenDict(
146 {key: serialization.from_state_dict(value, states[key])
147 for key, value in xs.items()})
148
149
150 serialization.register_serialization_state(
151 FrozenDict,
152 _frozen_dict_state_dict,
153 _restore_frozen_dict)
154
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/flax/core/frozen_dict.py b/flax/core/frozen_dict.py
--- a/flax/core/frozen_dict.py
+++ b/flax/core/frozen_dict.py
@@ -24,6 +24,14 @@
V = TypeVar('V')
+def _indent(x, num_spaces):
+ indent_str = ' ' * num_spaces
+ lines = x.split('\n')
+ assert lines[-1] == ''
+ # skip the final line because it's empty and should not be indented.
+ return '\n'.join(indent_str + line for line in lines[:-1]) + '\n'
+
+
@jax.tree_util.register_pytree_node_class
class FrozenDict(Mapping[K, V]):
"""An immutable variant of the Python dict."""
@@ -55,7 +63,21 @@
return len(self._dict)
def __repr__(self):
- return 'FrozenDict(%r)' % self._dict
+ return self.pretty_repr()
+
+ def pretty_repr(self, num_spaces=4):
+ """Returns an indented representation of the nested dictionary."""
+ def pretty_dict(x):
+ if not isinstance(x, dict):
+ return repr(x)
+ rep = ''
+ for key, val in x.items():
+ rep += f'{key}: {pretty_dict(val)},\n'
+ if rep:
+ return '{\n' + _indent(rep, num_spaces) + '}'
+ else:
+ return '{}'
+ return f'FrozenDict({pretty_dict(self._dict)})'
def __hash__(self):
if self._hash is None:
| {"golden_diff": "diff --git a/flax/core/frozen_dict.py b/flax/core/frozen_dict.py\n--- a/flax/core/frozen_dict.py\n+++ b/flax/core/frozen_dict.py\n@@ -24,6 +24,14 @@\n V = TypeVar('V')\n \n \n+def _indent(x, num_spaces):\n+ indent_str = ' ' * num_spaces\n+ lines = x.split('\\n')\n+ assert lines[-1] == ''\n+ # skip the final line because it's empty and should not be indented.\n+ return '\\n'.join(indent_str + line for line in lines[:-1]) + '\\n'\n+\n+\n @jax.tree_util.register_pytree_node_class\n class FrozenDict(Mapping[K, V]):\n \"\"\"An immutable variant of the Python dict.\"\"\"\n@@ -55,7 +63,21 @@\n return len(self._dict)\n \n def __repr__(self):\n- return 'FrozenDict(%r)' % self._dict\n+ return self.pretty_repr()\n+\n+ def pretty_repr(self, num_spaces=4):\n+ \"\"\"Returns an indented representation of the nested dictionary.\"\"\"\n+ def pretty_dict(x):\n+ if not isinstance(x, dict):\n+ return repr(x)\n+ rep = ''\n+ for key, val in x.items():\n+ rep += f'{key}: {pretty_dict(val)},\\n'\n+ if rep:\n+ return '{\\n' + _indent(rep, num_spaces) + '}'\n+ else:\n+ return '{}'\n+ return f'FrozenDict({pretty_dict(self._dict)})'\n \n def __hash__(self):\n if self._hash is None:\n", "issue": "QoL: better print for FrozenDict\nThe best way I'm aware of to get an overview of model shape is via `jax.tree_map(jnp.shape, params)`. FrozenDicts have no concept of pretty printing the way dicts do, so large models are unwieldy to parse at a glance. \n", "before_files": [{"content": "# Copyright 2020 The Flax Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Frozen Dictionary.\"\"\"\n\nfrom typing import Any, TypeVar, Mapping, Dict, Tuple\n\nfrom flax import serialization\nimport jax\n\n\nK = TypeVar('K')\nV = TypeVar('V')\n\n\[email protected]_util.register_pytree_node_class\nclass FrozenDict(Mapping[K, V]):\n \"\"\"An immutable variant of the Python dict.\"\"\"\n __slots__ = ('_dict', '_hash')\n\n def __init__(self, *args, **kwargs):\n # make sure the dict is as\n xs = dict(*args, **kwargs)\n self._dict = _prepare_freeze(xs)\n\n self._hash = None\n\n def __getitem__(self, key):\n v = self._dict[key]\n if isinstance(v, dict):\n return FrozenDict(v)\n return v\n\n def __setitem__(self, key, value):\n raise ValueError('FrozenDict is immutable.')\n\n def __contains__(self, key):\n return key in self._dict\n\n def __iter__(self):\n return iter(self._dict)\n\n def __len__(self):\n return len(self._dict)\n\n def __repr__(self):\n return 'FrozenDict(%r)' % self._dict\n\n def __hash__(self):\n if self._hash is None:\n h = 0\n for key, value in self.items():\n h ^= hash((key, value))\n self._hash = h\n return self._hash\n\n def copy(self, add_or_replace: Mapping[K, V]) -> 'FrozenDict[K, V]':\n \"\"\"Create a new FrozenDict with additional or replaced entries.\"\"\"\n return type(self)(self, **unfreeze(add_or_replace))\n\n def items(self):\n for key in self._dict:\n yield (key, self[key])\n\n def pop(self, key: K) -> Tuple['FrozenDict[K, V]', V]:\n \"\"\"Create a new FrozenDict where one entry is 
removed.\n\n Example::\n\n state, params = variables.pop('params')\n\n Args:\n key: the key to remove from the dict\n Returns:\n A pair with the new FrozenDict and the removed value.\n \"\"\"\n value = self[key]\n new_dict = dict(self._dict)\n new_dict.pop(key)\n new_self = type(self)(new_dict)\n return new_self, value\n\n def unfreeze(self) -> Dict[K, V]:\n return unfreeze(self)\n\n def tree_flatten(self):\n return (self._dict,), ()\n\n @classmethod\n def tree_unflatten(cls, _, data):\n return cls(*data)\n\n\ndef _prepare_freeze(xs: Any) -> Any:\n \"\"\"Deep copy unfrozen dicts to make the dictionary FrozenDict safe.\"\"\"\n if isinstance(xs, FrozenDict):\n # we can safely ref share the internal state of a FrozenDict\n # because it is immutable.\n return xs._dict # pylint: disable=protected-access\n if not isinstance(xs, dict):\n # return a leaf as is.\n return xs\n # recursively copy dictionary to avoid ref sharing\n return {key: _prepare_freeze(val) for key, val in xs.items()}\n\n\ndef freeze(xs: Dict[K, V]) -> FrozenDict[K, V]:\n \"\"\"Freeze a nested dict.\n\n Makes a nested `dict` immutable by transforming it into `FrozenDict`.\n \"\"\"\n return FrozenDict(xs)\n\n\ndef unfreeze(x: FrozenDict[K, V]) -> Dict[K, V]:\n \"\"\"Unfreeze a FrozenDict.\n\n Makes a mutable copy of a `FrozenDict` mutable by transforming\n it into (nested) dict.\n \"\"\"\n if not isinstance(x, (FrozenDict, dict)):\n return x\n ys = {}\n for key, value in x.items():\n ys[key] = unfreeze(value)\n return ys\n\n\ndef _frozen_dict_state_dict(xs):\n return {key: serialization.to_state_dict(value) for key, value in xs.items()}\n\n\ndef _restore_frozen_dict(xs, states):\n return FrozenDict(\n {key: serialization.from_state_dict(value, states[key])\n for key, value in xs.items()})\n\n\nserialization.register_serialization_state(\n FrozenDict,\n _frozen_dict_state_dict,\n _restore_frozen_dict)\n", "path": "flax/core/frozen_dict.py"}], "after_files": [{"content": "# Copyright 2020 The Flax Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Frozen Dictionary.\"\"\"\n\nfrom typing import Any, TypeVar, Mapping, Dict, Tuple\n\nfrom flax import serialization\nimport jax\n\n\nK = TypeVar('K')\nV = TypeVar('V')\n\n\ndef _indent(x, num_spaces):\n indent_str = ' ' * num_spaces\n lines = x.split('\\n')\n assert lines[-1] == ''\n # skip the final line because it's empty and should not be indented.\n return '\\n'.join(indent_str + line for line in lines[:-1]) + '\\n'\n\n\[email protected]_util.register_pytree_node_class\nclass FrozenDict(Mapping[K, V]):\n \"\"\"An immutable variant of the Python dict.\"\"\"\n __slots__ = ('_dict', '_hash')\n\n def __init__(self, *args, **kwargs):\n # make sure the dict is as\n xs = dict(*args, **kwargs)\n self._dict = _prepare_freeze(xs)\n\n self._hash = None\n\n def __getitem__(self, key):\n v = self._dict[key]\n if isinstance(v, dict):\n return FrozenDict(v)\n return v\n\n def __setitem__(self, key, value):\n raise ValueError('FrozenDict is immutable.')\n\n def __contains__(self, 
key):\n return key in self._dict\n\n def __iter__(self):\n return iter(self._dict)\n\n def __len__(self):\n return len(self._dict)\n\n def __repr__(self):\n return self.pretty_repr()\n\n def pretty_repr(self, num_spaces=4):\n \"\"\"Returns an indented representation of the nested dictionary.\"\"\"\n def pretty_dict(x):\n if not isinstance(x, dict):\n return repr(x)\n rep = ''\n for key, val in x.items():\n rep += f'{key}: {pretty_dict(val)},\\n'\n if rep:\n return '{\\n' + _indent(rep, num_spaces) + '}'\n else:\n return '{}'\n return f'FrozenDict({pretty_dict(self._dict)})'\n\n def __hash__(self):\n if self._hash is None:\n h = 0\n for key, value in self.items():\n h ^= hash((key, value))\n self._hash = h\n return self._hash\n\n def copy(self, add_or_replace: Mapping[K, V]) -> 'FrozenDict[K, V]':\n \"\"\"Create a new FrozenDict with additional or replaced entries.\"\"\"\n return type(self)(self, **unfreeze(add_or_replace))\n\n def items(self):\n for key in self._dict:\n yield (key, self[key])\n\n def pop(self, key: K) -> Tuple['FrozenDict[K, V]', V]:\n \"\"\"Create a new FrozenDict where one entry is removed.\n\n Example::\n\n state, params = variables.pop('params')\n\n Args:\n key: the key to remove from the dict\n Returns:\n A pair with the new FrozenDict and the removed value.\n \"\"\"\n value = self[key]\n new_dict = dict(self._dict)\n new_dict.pop(key)\n new_self = type(self)(new_dict)\n return new_self, value\n\n def unfreeze(self) -> Dict[K, V]:\n return unfreeze(self)\n\n def tree_flatten(self):\n return (self._dict,), ()\n\n @classmethod\n def tree_unflatten(cls, _, data):\n return cls(*data)\n\n\ndef _prepare_freeze(xs: Any) -> Any:\n \"\"\"Deep copy unfrozen dicts to make the dictionary FrozenDict safe.\"\"\"\n if isinstance(xs, FrozenDict):\n # we can safely ref share the internal state of a FrozenDict\n # because it is immutable.\n return xs._dict # pylint: disable=protected-access\n if not isinstance(xs, dict):\n # return a leaf as is.\n return xs\n # recursively copy dictionary to avoid ref sharing\n return {key: _prepare_freeze(val) for key, val in xs.items()}\n\n\ndef freeze(xs: Dict[K, V]) -> FrozenDict[K, V]:\n \"\"\"Freeze a nested dict.\n\n Makes a nested `dict` immutable by transforming it into `FrozenDict`.\n \"\"\"\n return FrozenDict(xs)\n\n\ndef unfreeze(x: FrozenDict[K, V]) -> Dict[K, V]:\n \"\"\"Unfreeze a FrozenDict.\n\n Makes a mutable copy of a `FrozenDict` mutable by transforming\n it into (nested) dict.\n \"\"\"\n if not isinstance(x, (FrozenDict, dict)):\n return x\n ys = {}\n for key, value in x.items():\n ys[key] = unfreeze(value)\n return ys\n\n\ndef _frozen_dict_state_dict(xs):\n return {key: serialization.to_state_dict(value) for key, value in xs.items()}\n\n\ndef _restore_frozen_dict(xs, states):\n return FrozenDict(\n {key: serialization.from_state_dict(value, states[key])\n for key, value in xs.items()})\n\n\nserialization.register_serialization_state(\n FrozenDict,\n _frozen_dict_state_dict,\n _restore_frozen_dict)\n", "path": "flax/core/frozen_dict.py"}]} | 1,746 | 361 |
gh_patches_debug_11172 | rasdani/github-patches | git_diff | RedHatInsights__insights-core-2464 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Enhance parser "satellite_installer_configurations"
As a default, the file "/etc/foreman-installer/custom-hiera.yaml" is empty, it means customers haven't done any tuning, it shouldn't be skipped.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `insights/parsers/satellite_installer_configurations.py`
Content:
```
1 """
2 Satellite installer configuration files
3 =======================================
4
5 Parsers included in this module are:
6
7 CustomHiera - file ``/etc/foreman-installer/custom-hiera.yaml``
8 ---------------------------------------------------------------
9 Parsers the file `/etc/foreman-installer/custom-hiera.yaml`
10
11 """
12
13 from insights import parser, YAMLParser
14 from insights.specs import Specs
15
16
17 @parser(Specs.satellite_custom_hiera)
18 class CustomHiera(YAMLParser):
19 """
20 Class to parse ``/etc/foreman-installer/custom-hiera.yaml``
21
22 Examples:
23 >>> 'apache::mod::prefork::serverlimit' in custom_hiera
24 True
25 >>> custom_hiera['apache::mod::prefork::serverlimit']
26 582
27 """
28
29 pass
30
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/insights/parsers/satellite_installer_configurations.py b/insights/parsers/satellite_installer_configurations.py
--- a/insights/parsers/satellite_installer_configurations.py
+++ b/insights/parsers/satellite_installer_configurations.py
@@ -12,6 +12,7 @@
from insights import parser, YAMLParser
from insights.specs import Specs
+from insights.parsers import SkipException
@parser(Specs.satellite_custom_hiera)
@@ -25,5 +26,8 @@
>>> custom_hiera['apache::mod::prefork::serverlimit']
582
"""
-
- pass
+ def parse_content(self, content):
+ try:
+ super(CustomHiera, self).parse_content(content)
+ except SkipException:
+ pass
| {"golden_diff": "diff --git a/insights/parsers/satellite_installer_configurations.py b/insights/parsers/satellite_installer_configurations.py\n--- a/insights/parsers/satellite_installer_configurations.py\n+++ b/insights/parsers/satellite_installer_configurations.py\n@@ -12,6 +12,7 @@\n \n from insights import parser, YAMLParser\n from insights.specs import Specs\n+from insights.parsers import SkipException\n \n \n @parser(Specs.satellite_custom_hiera)\n@@ -25,5 +26,8 @@\n >>> custom_hiera['apache::mod::prefork::serverlimit']\n 582\n \"\"\"\n-\n- pass\n+ def parse_content(self, content):\n+ try:\n+ super(CustomHiera, self).parse_content(content)\n+ except SkipException:\n+ pass\n", "issue": "Enhance parser \"satellite_installer_configurations\"\nAs a default, the file \"/etc/foreman-installer/custom-hiera.yaml\" is empty, it means customers haven't done any tuning, it shouldn't be skipped.\n", "before_files": [{"content": "\"\"\"\nSatellite installer configuration files\n=======================================\n\nParsers included in this module are:\n\nCustomHiera - file ``/etc/foreman-installer/custom-hiera.yaml``\n---------------------------------------------------------------\nParsers the file `/etc/foreman-installer/custom-hiera.yaml`\n\n\"\"\"\n\nfrom insights import parser, YAMLParser\nfrom insights.specs import Specs\n\n\n@parser(Specs.satellite_custom_hiera)\nclass CustomHiera(YAMLParser):\n \"\"\"\n Class to parse ``/etc/foreman-installer/custom-hiera.yaml``\n\n Examples:\n >>> 'apache::mod::prefork::serverlimit' in custom_hiera\n True\n >>> custom_hiera['apache::mod::prefork::serverlimit']\n 582\n \"\"\"\n\n pass\n", "path": "insights/parsers/satellite_installer_configurations.py"}], "after_files": [{"content": "\"\"\"\nSatellite installer configuration files\n=======================================\n\nParsers included in this module are:\n\nCustomHiera - file ``/etc/foreman-installer/custom-hiera.yaml``\n---------------------------------------------------------------\nParsers the file `/etc/foreman-installer/custom-hiera.yaml`\n\n\"\"\"\n\nfrom insights import parser, YAMLParser\nfrom insights.specs import Specs\nfrom insights.parsers import SkipException\n\n\n@parser(Specs.satellite_custom_hiera)\nclass CustomHiera(YAMLParser):\n \"\"\"\n Class to parse ``/etc/foreman-installer/custom-hiera.yaml``\n\n Examples:\n >>> 'apache::mod::prefork::serverlimit' in custom_hiera\n True\n >>> custom_hiera['apache::mod::prefork::serverlimit']\n 582\n \"\"\"\n def parse_content(self, content):\n try:\n super(CustomHiera, self).parse_content(content)\n except SkipException:\n pass\n", "path": "insights/parsers/satellite_installer_configurations.py"}]} | 529 | 190 |
gh_patches_debug_14651 | rasdani/github-patches | git_diff | pfnet__pytorch-pfn-extras-763 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Adding pytest in an MPI environment
There are some functions that run in a distributed environment that have not been fully verified, so some pytests will be run in MPI to verify them.
https://github.com/pfnet/pytorch-pfn-extras/blob/266e8bde2c2a1a6aa3f8648d49e758975c8b436a/tests/pytorch_pfn_extras_tests/training_tests/extensions_tests/test_distributed_snapshot.py#L40-L46
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 import os
2
3 import setuptools
4
5 here = os.path.abspath(os.path.dirname(__file__))
6 # Get __version__ variable
7 exec(open(os.path.join(here, "pytorch_pfn_extras", "_version.py")).read())
8
9
10 setuptools.setup(
11 name="pytorch-pfn-extras",
12 version=__version__, # NOQA
13 description="Supplementary components to accelerate research and "
14 "development in PyTorch.",
15 author="Preferred Networks, Inc.",
16 license="MIT License",
17 install_requires=["numpy", "packaging", "torch", "typing-extensions>=3.10"],
18 extras_require={
19 "test": [
20 "pytest",
21 "onnxruntime",
22 "torchvision",
23 "torchaudio",
24 "pysen",
25 "black==23.3.0",
26 "flake8==4.0.1",
27 "isort==5.10.1",
28 "mypy==1.3.0",
29 "types-PyYAML",
30 "types-setuptools",
31 "matplotlib",
32 "tensorboard",
33 "ipython",
34 "ipywidgets",
35 "pandas",
36 "optuna",
37 "onnx",
38 "pytorch-ignite",
39 ],
40 "onnx": ["onnx"],
41 },
42 python_requires=">=3.6.0",
43 packages=setuptools.find_packages(exclude=["tests", "tests.*"]),
44 package_data={"pytorch_pfn_extras": ["py.typed"]},
45 )
46
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -16,27 +16,6 @@
license="MIT License",
install_requires=["numpy", "packaging", "torch", "typing-extensions>=3.10"],
extras_require={
- "test": [
- "pytest",
- "onnxruntime",
- "torchvision",
- "torchaudio",
- "pysen",
- "black==23.3.0",
- "flake8==4.0.1",
- "isort==5.10.1",
- "mypy==1.3.0",
- "types-PyYAML",
- "types-setuptools",
- "matplotlib",
- "tensorboard",
- "ipython",
- "ipywidgets",
- "pandas",
- "optuna",
- "onnx",
- "pytorch-ignite",
- ],
"onnx": ["onnx"],
},
python_requires=">=3.6.0",
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -16,27 +16,6 @@\n license=\"MIT License\",\n install_requires=[\"numpy\", \"packaging\", \"torch\", \"typing-extensions>=3.10\"],\n extras_require={\n- \"test\": [\n- \"pytest\",\n- \"onnxruntime\",\n- \"torchvision\",\n- \"torchaudio\",\n- \"pysen\",\n- \"black==23.3.0\",\n- \"flake8==4.0.1\",\n- \"isort==5.10.1\",\n- \"mypy==1.3.0\",\n- \"types-PyYAML\",\n- \"types-setuptools\",\n- \"matplotlib\",\n- \"tensorboard\",\n- \"ipython\",\n- \"ipywidgets\",\n- \"pandas\",\n- \"optuna\",\n- \"onnx\",\n- \"pytorch-ignite\",\n- ],\n \"onnx\": [\"onnx\"],\n },\n python_requires=\">=3.6.0\",\n", "issue": "Adding pytest in an MPI environment\nThere are some functions that run in a distributed environment that have not been fully verified, so some pytests will be run in MPI to verify them.\r\n\r\nhttps://github.com/pfnet/pytorch-pfn-extras/blob/266e8bde2c2a1a6aa3f8648d49e758975c8b436a/tests/pytorch_pfn_extras_tests/training_tests/extensions_tests/test_distributed_snapshot.py#L40-L46\r\n\n", "before_files": [{"content": "import os\n\nimport setuptools\n\nhere = os.path.abspath(os.path.dirname(__file__))\n# Get __version__ variable\nexec(open(os.path.join(here, \"pytorch_pfn_extras\", \"_version.py\")).read())\n\n\nsetuptools.setup(\n name=\"pytorch-pfn-extras\",\n version=__version__, # NOQA\n description=\"Supplementary components to accelerate research and \"\n \"development in PyTorch.\",\n author=\"Preferred Networks, Inc.\",\n license=\"MIT License\",\n install_requires=[\"numpy\", \"packaging\", \"torch\", \"typing-extensions>=3.10\"],\n extras_require={\n \"test\": [\n \"pytest\",\n \"onnxruntime\",\n \"torchvision\",\n \"torchaudio\",\n \"pysen\",\n \"black==23.3.0\",\n \"flake8==4.0.1\",\n \"isort==5.10.1\",\n \"mypy==1.3.0\",\n \"types-PyYAML\",\n \"types-setuptools\",\n \"matplotlib\",\n \"tensorboard\",\n \"ipython\",\n \"ipywidgets\",\n \"pandas\",\n \"optuna\",\n \"onnx\",\n \"pytorch-ignite\",\n ],\n \"onnx\": [\"onnx\"],\n },\n python_requires=\">=3.6.0\",\n packages=setuptools.find_packages(exclude=[\"tests\", \"tests.*\"]),\n package_data={\"pytorch_pfn_extras\": [\"py.typed\"]},\n)\n", "path": "setup.py"}], "after_files": [{"content": "import os\n\nimport setuptools\n\nhere = os.path.abspath(os.path.dirname(__file__))\n# Get __version__ variable\nexec(open(os.path.join(here, \"pytorch_pfn_extras\", \"_version.py\")).read())\n\n\nsetuptools.setup(\n name=\"pytorch-pfn-extras\",\n version=__version__, # NOQA\n description=\"Supplementary components to accelerate research and \"\n \"development in PyTorch.\",\n author=\"Preferred Networks, Inc.\",\n license=\"MIT License\",\n install_requires=[\"numpy\", \"packaging\", \"torch\", \"typing-extensions>=3.10\"],\n extras_require={\n \"onnx\": [\"onnx\"],\n },\n python_requires=\">=3.6.0\",\n packages=setuptools.find_packages(exclude=[\"tests\", \"tests.*\"]),\n package_data={\"pytorch_pfn_extras\": [\"py.typed\"]},\n)\n", "path": "setup.py"}]} | 778 | 243 |
gh_patches_debug_21746 | rasdani/github-patches | git_diff | explosion__spaCy-3389 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[feature request] Factory default for extension attributes
## Feature description
As usual with Python, mutable defaults are a big no-no for extension attributes, since they are shared by all instances, which leads to subtle and ~~quick to anger~~ hard to root out bugs (see e.g. #2581).
The documentation mentions that pitfall, but doesn't offer a convenient solution: if I want to keep a static list of interesting spans in my document in a `Doc._.interesting`, it is not clear where the getter and setter that I am supposed to use for this property should store the state. (From what I understand, it should probably be somewhere in `Doc.user_data`, but I have not found a lot of doc on that either)
I propose a `factory` argument to `set_extension` that would be called the first time that the value for the corresponding extension property is retrieved for a given instance (as `collections.defaultdict` does), so one could just write
```python
spacy.tokens.Doc.set_extension('interesting', factory=list)
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `spacy/tokens/underscore.py`
Content:
```
1 # coding: utf8
2 from __future__ import unicode_literals
3
4 import functools
5
6 from ..errors import Errors
7
8
9 class Underscore(object):
10 doc_extensions = {}
11 span_extensions = {}
12 token_extensions = {}
13
14 def __init__(self, extensions, obj, start=None, end=None):
15 object.__setattr__(self, "_extensions", extensions)
16 object.__setattr__(self, "_obj", obj)
17 # Assumption is that for doc values, _start and _end will both be None
18 # Span will set non-None values for _start and _end
19 # Token will have _start be non-None, _end be None
20 # This lets us key everything into the doc.user_data dictionary,
21 # (see _get_key), and lets us use a single Underscore class.
22 object.__setattr__(self, "_doc", obj.doc)
23 object.__setattr__(self, "_start", start)
24 object.__setattr__(self, "_end", end)
25
26 def __getattr__(self, name):
27 if name not in self._extensions:
28 raise AttributeError(Errors.E046.format(name=name))
29 default, method, getter, setter = self._extensions[name]
30 if getter is not None:
31 return getter(self._obj)
32 elif method is not None:
33 return functools.partial(method, self._obj)
34 else:
35 return self._doc.user_data.get(self._get_key(name), default)
36
37 def __setattr__(self, name, value):
38 if name not in self._extensions:
39 raise AttributeError(Errors.E047.format(name=name))
40 default, method, getter, setter = self._extensions[name]
41 if setter is not None:
42 return setter(self._obj, value)
43 else:
44 self._doc.user_data[self._get_key(name)] = value
45
46 def set(self, name, value):
47 return self.__setattr__(name, value)
48
49 def get(self, name):
50 return self.__getattr__(name)
51
52 def has(self, name):
53 return name in self._extensions
54
55 def _get_key(self, name):
56 return ("._.", name, self._start, self._end)
57
58
59 def get_ext_args(**kwargs):
60 """Validate and convert arguments. Reused in Doc, Token and Span."""
61 default = kwargs.get("default")
62 getter = kwargs.get("getter")
63 setter = kwargs.get("setter")
64 method = kwargs.get("method")
65 if getter is None and setter is not None:
66 raise ValueError(Errors.E089)
67 valid_opts = ("default" in kwargs, method is not None, getter is not None)
68 nr_defined = sum(t is True for t in valid_opts)
69 if nr_defined != 1:
70 raise ValueError(Errors.E083.format(nr_defined=nr_defined))
71 if setter is not None and not hasattr(setter, "__call__"):
72 raise ValueError(Errors.E091.format(name="setter", value=repr(setter)))
73 if getter is not None and not hasattr(getter, "__call__"):
74 raise ValueError(Errors.E091.format(name="getter", value=repr(getter)))
75 if method is not None and not hasattr(method, "__call__"):
76 raise ValueError(Errors.E091.format(name="method", value=repr(method)))
77 return (default, method, getter, setter)
78
79
80 def is_writable_attr(ext):
81 """Check if an extension attribute is writable.
82 ext (tuple): The (default, getter, setter, method) tuple available via
83 {Doc,Span,Token}.get_extension.
84 RETURNS (bool): Whether the attribute is writable.
85 """
86 default, method, getter, setter = ext
87 # Extension is writable if it has a setter (getter + setter), if it has a
88 # default value (or, if its default value is none, none of the other values
89 # should be set).
90 if setter is not None or default is not None or all(e is None for e in ext):
91 return True
92 return False
93
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/spacy/tokens/underscore.py b/spacy/tokens/underscore.py
--- a/spacy/tokens/underscore.py
+++ b/spacy/tokens/underscore.py
@@ -2,11 +2,13 @@
from __future__ import unicode_literals
import functools
+import copy
from ..errors import Errors
class Underscore(object):
+ mutable_types = (dict, list, set)
doc_extensions = {}
span_extensions = {}
token_extensions = {}
@@ -32,7 +34,15 @@
elif method is not None:
return functools.partial(method, self._obj)
else:
- return self._doc.user_data.get(self._get_key(name), default)
+ key = self._get_key(name)
+ if key in self._doc.user_data:
+ return self._doc.user_data[key]
+ elif isinstance(default, self.mutable_types):
+ # Handle mutable default arguments (see #2581)
+ new_default = copy.copy(default)
+ self.__setattr__(name, new_default)
+ return new_default
+ return default
def __setattr__(self, name, value):
if name not in self._extensions:
| {"golden_diff": "diff --git a/spacy/tokens/underscore.py b/spacy/tokens/underscore.py\n--- a/spacy/tokens/underscore.py\n+++ b/spacy/tokens/underscore.py\n@@ -2,11 +2,13 @@\n from __future__ import unicode_literals\n \n import functools\n+import copy\n \n from ..errors import Errors\n \n \n class Underscore(object):\n+ mutable_types = (dict, list, set)\n doc_extensions = {}\n span_extensions = {}\n token_extensions = {}\n@@ -32,7 +34,15 @@\n elif method is not None:\n return functools.partial(method, self._obj)\n else:\n- return self._doc.user_data.get(self._get_key(name), default)\n+ key = self._get_key(name)\n+ if key in self._doc.user_data:\n+ return self._doc.user_data[key]\n+ elif isinstance(default, self.mutable_types):\n+ # Handle mutable default arguments (see #2581)\n+ new_default = copy.copy(default)\n+ self.__setattr__(name, new_default)\n+ return new_default\n+ return default\n \n def __setattr__(self, name, value):\n if name not in self._extensions:\n", "issue": "[feature request] Factory default for extension attributes\n## Feature description\r\nAs usual with Python, mutable defaults are a big no-no for extension attributes, since they are shared by all instances, which leads to subtle and ~~quick to anger~~ hard to root out bugs (see e.g. #2581).\r\nThe documentation mentions that pitfall, but doesn't offer a convenient solution: if I want to keep a static list of interesting spans in my document in a `Doc._.interesting`, it is not clear where the getter and setter that I am supposed to use for this property should store the state. (From what I understand, it should probably be somewhere in `Doc.user_data`, but I have not found a lot of doc on that either)\r\n\r\nI propose a `factory` argument to `set_extension` that would be called the first time that the value for the corresponding extension property is retrieved for a given instance (as `collections.defaultdict` does), so one could just write\r\n\r\n```python\r\nspacy.tokens.Doc.set_extension('interesting', factory=list)\r\n```\n", "before_files": [{"content": "# coding: utf8\nfrom __future__ import unicode_literals\n\nimport functools\n\nfrom ..errors import Errors\n\n\nclass Underscore(object):\n doc_extensions = {}\n span_extensions = {}\n token_extensions = {}\n\n def __init__(self, extensions, obj, start=None, end=None):\n object.__setattr__(self, \"_extensions\", extensions)\n object.__setattr__(self, \"_obj\", obj)\n # Assumption is that for doc values, _start and _end will both be None\n # Span will set non-None values for _start and _end\n # Token will have _start be non-None, _end be None\n # This lets us key everything into the doc.user_data dictionary,\n # (see _get_key), and lets us use a single Underscore class.\n object.__setattr__(self, \"_doc\", obj.doc)\n object.__setattr__(self, \"_start\", start)\n object.__setattr__(self, \"_end\", end)\n\n def __getattr__(self, name):\n if name not in self._extensions:\n raise AttributeError(Errors.E046.format(name=name))\n default, method, getter, setter = self._extensions[name]\n if getter is not None:\n return getter(self._obj)\n elif method is not None:\n return functools.partial(method, self._obj)\n else:\n return self._doc.user_data.get(self._get_key(name), default)\n\n def __setattr__(self, name, value):\n if name not in self._extensions:\n raise AttributeError(Errors.E047.format(name=name))\n default, method, getter, setter = self._extensions[name]\n if setter is not None:\n return setter(self._obj, value)\n else:\n self._doc.user_data[self._get_key(name)] 
= value\n\n def set(self, name, value):\n return self.__setattr__(name, value)\n\n def get(self, name):\n return self.__getattr__(name)\n\n def has(self, name):\n return name in self._extensions\n\n def _get_key(self, name):\n return (\"._.\", name, self._start, self._end)\n\n\ndef get_ext_args(**kwargs):\n \"\"\"Validate and convert arguments. Reused in Doc, Token and Span.\"\"\"\n default = kwargs.get(\"default\")\n getter = kwargs.get(\"getter\")\n setter = kwargs.get(\"setter\")\n method = kwargs.get(\"method\")\n if getter is None and setter is not None:\n raise ValueError(Errors.E089)\n valid_opts = (\"default\" in kwargs, method is not None, getter is not None)\n nr_defined = sum(t is True for t in valid_opts)\n if nr_defined != 1:\n raise ValueError(Errors.E083.format(nr_defined=nr_defined))\n if setter is not None and not hasattr(setter, \"__call__\"):\n raise ValueError(Errors.E091.format(name=\"setter\", value=repr(setter)))\n if getter is not None and not hasattr(getter, \"__call__\"):\n raise ValueError(Errors.E091.format(name=\"getter\", value=repr(getter)))\n if method is not None and not hasattr(method, \"__call__\"):\n raise ValueError(Errors.E091.format(name=\"method\", value=repr(method)))\n return (default, method, getter, setter)\n\n\ndef is_writable_attr(ext):\n \"\"\"Check if an extension attribute is writable.\n ext (tuple): The (default, getter, setter, method) tuple available via\n {Doc,Span,Token}.get_extension.\n RETURNS (bool): Whether the attribute is writable.\n \"\"\"\n default, method, getter, setter = ext\n # Extension is writable if it has a setter (getter + setter), if it has a\n # default value (or, if its default value is none, none of the other values\n # should be set).\n if setter is not None or default is not None or all(e is None for e in ext):\n return True\n return False\n", "path": "spacy/tokens/underscore.py"}], "after_files": [{"content": "# coding: utf8\nfrom __future__ import unicode_literals\n\nimport functools\nimport copy\n\nfrom ..errors import Errors\n\n\nclass Underscore(object):\n mutable_types = (dict, list, set)\n doc_extensions = {}\n span_extensions = {}\n token_extensions = {}\n\n def __init__(self, extensions, obj, start=None, end=None):\n object.__setattr__(self, \"_extensions\", extensions)\n object.__setattr__(self, \"_obj\", obj)\n # Assumption is that for doc values, _start and _end will both be None\n # Span will set non-None values for _start and _end\n # Token will have _start be non-None, _end be None\n # This lets us key everything into the doc.user_data dictionary,\n # (see _get_key), and lets us use a single Underscore class.\n object.__setattr__(self, \"_doc\", obj.doc)\n object.__setattr__(self, \"_start\", start)\n object.__setattr__(self, \"_end\", end)\n\n def __getattr__(self, name):\n if name not in self._extensions:\n raise AttributeError(Errors.E046.format(name=name))\n default, method, getter, setter = self._extensions[name]\n if getter is not None:\n return getter(self._obj)\n elif method is not None:\n return functools.partial(method, self._obj)\n else:\n key = self._get_key(name)\n if key in self._doc.user_data:\n return self._doc.user_data[key]\n elif isinstance(default, self.mutable_types):\n # Handle mutable default arguments (see #2581)\n new_default = copy.copy(default)\n self.__setattr__(name, new_default)\n return new_default\n return default\n\n def __setattr__(self, name, value):\n if name not in self._extensions:\n raise AttributeError(Errors.E047.format(name=name))\n default, method, getter, 
setter = self._extensions[name]\n if setter is not None:\n return setter(self._obj, value)\n else:\n self._doc.user_data[self._get_key(name)] = value\n\n def set(self, name, value):\n return self.__setattr__(name, value)\n\n def get(self, name):\n return self.__getattr__(name)\n\n def has(self, name):\n return name in self._extensions\n\n def _get_key(self, name):\n return (\"._.\", name, self._start, self._end)\n\n\ndef get_ext_args(**kwargs):\n \"\"\"Validate and convert arguments. Reused in Doc, Token and Span.\"\"\"\n default = kwargs.get(\"default\")\n getter = kwargs.get(\"getter\")\n setter = kwargs.get(\"setter\")\n method = kwargs.get(\"method\")\n if getter is None and setter is not None:\n raise ValueError(Errors.E089)\n valid_opts = (\"default\" in kwargs, method is not None, getter is not None)\n nr_defined = sum(t is True for t in valid_opts)\n if nr_defined != 1:\n raise ValueError(Errors.E083.format(nr_defined=nr_defined))\n if setter is not None and not hasattr(setter, \"__call__\"):\n raise ValueError(Errors.E091.format(name=\"setter\", value=repr(setter)))\n if getter is not None and not hasattr(getter, \"__call__\"):\n raise ValueError(Errors.E091.format(name=\"getter\", value=repr(getter)))\n if method is not None and not hasattr(method, \"__call__\"):\n raise ValueError(Errors.E091.format(name=\"method\", value=repr(method)))\n return (default, method, getter, setter)\n\n\ndef is_writable_attr(ext):\n \"\"\"Check if an extension attribute is writable.\n ext (tuple): The (default, getter, setter, method) tuple available via\n {Doc,Span,Token}.get_extension.\n RETURNS (bool): Whether the attribute is writable.\n \"\"\"\n default, method, getter, setter = ext\n # Extension is writable if it has a setter (getter + setter), if it has a\n # default value (or, if its default value is none, none of the other values\n # should be set).\n if setter is not None or default is not None or all(e is None for e in ext):\n return True\n return False\n", "path": "spacy/tokens/underscore.py"}]} | 1,527 | 264 |
gh_patches_debug_33005 | rasdani/github-patches | git_diff | weecology__retriever-378 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
v1.6.0 will break if download scripts are added to version.txt in master
We have now done this twice (see #180 and #199).
In v1.6.0 `__init__.py` line 16: `MASTER = True`. This results in the retriever always checking `master` for `version.txt` and discovering scripts that it doesn't know how to handle. In the future, the retriever will handle this gracefully thanks to #204, but it's unclear how we should go about introducing the download only functionality since it will break a number of existing installations.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scripts/MammalSuperTree.py`
Content:
```
1 #retriever
2 from retriever.lib.templates import DownloadOnlyTemplate
3
4 SCRIPT = DownloadOnlyTemplate(name="Mammal Super Tree",
5 shortname='mammsupertree',
6 ref='http://doi.org/10.1111/j.1461-0248.2009.01307.x',
7 citation = "Fritz, S. A., Bininda-Emonds, O. R. P. and Purvis, A. (2009), Geographical variation in predictors of mammalian extinction risk: big is bad, but only in the tropics. Ecology Letters, 12: 538-549. doi:10.1111/j.1461-0248.2009.01307.x",
8 description="Mammal Super Tree from Fritz, S.A., O.R.P Bininda-Emonds, and A. Purvis. 2009. Geographical variation in predictors of mammalian extinction risk: big is bad, but only in the tropics. Ecology Letters 12:538-549",
9 urls ={'mammal_super_tree_fritz2009.tre': 'http://onlinelibrary.wiley.com/store/10.1111/j.1461-0248.2009.01307.x/asset/supinfo/ELE_1307_sm_SA1.tre?v=1&s=366b28651a9b5d1a3148ef9a8620f8aa31a7df44'})
10
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/scripts/MammalSuperTree.py b/scripts/MammalSuperTree.py
--- a/scripts/MammalSuperTree.py
+++ b/scripts/MammalSuperTree.py
@@ -1,9 +1,22 @@
#retriever
-from retriever.lib.templates import DownloadOnlyTemplate
-
-SCRIPT = DownloadOnlyTemplate(name="Mammal Super Tree",
- shortname='mammsupertree',
- ref='http://doi.org/10.1111/j.1461-0248.2009.01307.x',
- citation = "Fritz, S. A., Bininda-Emonds, O. R. P. and Purvis, A. (2009), Geographical variation in predictors of mammalian extinction risk: big is bad, but only in the tropics. Ecology Letters, 12: 538-549. doi:10.1111/j.1461-0248.2009.01307.x",
- description="Mammal Super Tree from Fritz, S.A., O.R.P Bininda-Emonds, and A. Purvis. 2009. Geographical variation in predictors of mammalian extinction risk: big is bad, but only in the tropics. Ecology Letters 12:538-549",
- urls ={'mammal_super_tree_fritz2009.tre': 'http://onlinelibrary.wiley.com/store/10.1111/j.1461-0248.2009.01307.x/asset/supinfo/ELE_1307_sm_SA1.tre?v=1&s=366b28651a9b5d1a3148ef9a8620f8aa31a7df44'})
+from retriever import VERSION
+if (VERSION == 'v1.6') or (VERSION == 'v1.6.0'):
+ #If v1.6 is running use a dummy script to avoid retriever errors
+ #See https://github.com/weecology/retriever/issues/208 for details
+ from retriever.lib.templates import Script
+ class main(Script):
+ def __init(self):
+ Script.__init__(self,
+ name="Mammal Super Tree",
+ shortname='mammsupertree',
+ )
+ SCRIPT = main()
+else:
+ #For all versions other than 1.6 run as normal
+ from retriever.lib.templates import DownloadOnlyTemplate
+ SCRIPT = DownloadOnlyTemplate(name="Mammal Super Tree",
+ shortname='mammsupertree',
+ ref='http://doi.org/10.1111/j.1461-0248.2009.01307.x',
+ citation = "Fritz, S. A., Bininda-Emonds, O. R. P. and Purvis, A. (2009), Geographical variation in predictors of mammalian extinction risk: big is bad, but only in the tropics. Ecology Letters, 12: 538-549. doi:10.1111/j.1461-0248.2009.01307.x",
+ description="Mammal Super Tree from Fritz, S.A., O.R.P Bininda-Emonds, and A. Purvis. 2009. Geographical variation in predictors of mammalian extinction risk: big is bad, but only in the tropics. Ecology Letters 12:538-549",
+ urls ={'mammal_super_tree_fritz2009.tre': 'http://onlinelibrary.wiley.com/store/10.1111/j.1461-0248.2009.01307.x/asset/supinfo/ELE_1307_sm_SA1.tre?v=1&s=366b28651a9b5d1a3148ef9a8620f8aa31a7df44'})
| {"golden_diff": "diff --git a/scripts/MammalSuperTree.py b/scripts/MammalSuperTree.py\n--- a/scripts/MammalSuperTree.py\n+++ b/scripts/MammalSuperTree.py\n@@ -1,9 +1,22 @@\n #retriever\n-from retriever.lib.templates import DownloadOnlyTemplate\n-\n-SCRIPT = DownloadOnlyTemplate(name=\"Mammal Super Tree\",\n- shortname='mammsupertree',\n- ref='http://doi.org/10.1111/j.1461-0248.2009.01307.x',\n- citation = \"Fritz, S. A., Bininda-Emonds, O. R. P. and Purvis, A. (2009), Geographical variation in predictors of mammalian extinction risk: big is bad, but only in the tropics. Ecology Letters, 12: 538-549. doi:10.1111/j.1461-0248.2009.01307.x\",\n- description=\"Mammal Super Tree from Fritz, S.A., O.R.P Bininda-Emonds, and A. Purvis. 2009. Geographical variation in predictors of mammalian extinction risk: big is bad, but only in the tropics. Ecology Letters 12:538-549\",\n- urls ={'mammal_super_tree_fritz2009.tre': 'http://onlinelibrary.wiley.com/store/10.1111/j.1461-0248.2009.01307.x/asset/supinfo/ELE_1307_sm_SA1.tre?v=1&s=366b28651a9b5d1a3148ef9a8620f8aa31a7df44'})\n+from retriever import VERSION\n+if (VERSION == 'v1.6') or (VERSION == 'v1.6.0'):\n+ #If v1.6 is running use a dummy script to avoid retriever errors\n+ #See https://github.com/weecology/retriever/issues/208 for details\n+ from retriever.lib.templates import Script\n+ class main(Script):\n+ def __init(self):\n+ Script.__init__(self,\n+ name=\"Mammal Super Tree\",\n+ shortname='mammsupertree',\n+ )\n+ SCRIPT = main()\n+else:\n+ #For all versions other than 1.6 run as normal\n+ from retriever.lib.templates import DownloadOnlyTemplate\n+ SCRIPT = DownloadOnlyTemplate(name=\"Mammal Super Tree\",\n+ shortname='mammsupertree',\n+ ref='http://doi.org/10.1111/j.1461-0248.2009.01307.x',\n+ citation = \"Fritz, S. A., Bininda-Emonds, O. R. P. and Purvis, A. (2009), Geographical variation in predictors of mammalian extinction risk: big is bad, but only in the tropics. Ecology Letters, 12: 538-549. doi:10.1111/j.1461-0248.2009.01307.x\",\n+ description=\"Mammal Super Tree from Fritz, S.A., O.R.P Bininda-Emonds, and A. Purvis. 2009. Geographical variation in predictors of mammalian extinction risk: big is bad, but only in the tropics. Ecology Letters 12:538-549\",\n+ urls ={'mammal_super_tree_fritz2009.tre': 'http://onlinelibrary.wiley.com/store/10.1111/j.1461-0248.2009.01307.x/asset/supinfo/ELE_1307_sm_SA1.tre?v=1&s=366b28651a9b5d1a3148ef9a8620f8aa31a7df44'})\n", "issue": "v1.6.0 will break if download scripts are added to version.txt in master\nWe have now done this twice (see #180 and #199).\n\nIn v1.6.0 `__init__.py` line 16: `MASTER = True`. This results in the retriever always checking `master` for `version.txt` and discovering scripts that it doesn't know how to handle. In the future, the retriever will handle this gracefully thanks to #204, but it's unclear how we should go about introducing the download only functionality since it will break a number of existing installations.\n\n", "before_files": [{"content": "#retriever\nfrom retriever.lib.templates import DownloadOnlyTemplate\n\nSCRIPT = DownloadOnlyTemplate(name=\"Mammal Super Tree\",\n shortname='mammsupertree',\n ref='http://doi.org/10.1111/j.1461-0248.2009.01307.x',\n citation = \"Fritz, S. A., Bininda-Emonds, O. R. P. and Purvis, A. (2009), Geographical variation in predictors of mammalian extinction risk: big is bad, but only in the tropics. Ecology Letters, 12: 538-549. 
doi:10.1111/j.1461-0248.2009.01307.x\",\n description=\"Mammal Super Tree from Fritz, S.A., O.R.P Bininda-Emonds, and A. Purvis. 2009. Geographical variation in predictors of mammalian extinction risk: big is bad, but only in the tropics. Ecology Letters 12:538-549\",\n urls ={'mammal_super_tree_fritz2009.tre': 'http://onlinelibrary.wiley.com/store/10.1111/j.1461-0248.2009.01307.x/asset/supinfo/ELE_1307_sm_SA1.tre?v=1&s=366b28651a9b5d1a3148ef9a8620f8aa31a7df44'})\n", "path": "scripts/MammalSuperTree.py"}], "after_files": [{"content": "#retriever\nfrom retriever import VERSION\nif (VERSION == 'v1.6') or (VERSION == 'v1.6.0'):\n #If v1.6 is running use a dummy script to avoid retriever errors\n #See https://github.com/weecology/retriever/issues/208 for details\n from retriever.lib.templates import Script\n class main(Script):\n def __init(self):\n Script.__init__(self,\n name=\"Mammal Super Tree\",\n shortname='mammsupertree',\n )\n SCRIPT = main()\nelse:\n #For all versions other than 1.6 run as normal\n from retriever.lib.templates import DownloadOnlyTemplate\n SCRIPT = DownloadOnlyTemplate(name=\"Mammal Super Tree\",\n shortname='mammsupertree',\n ref='http://doi.org/10.1111/j.1461-0248.2009.01307.x',\n citation = \"Fritz, S. A., Bininda-Emonds, O. R. P. and Purvis, A. (2009), Geographical variation in predictors of mammalian extinction risk: big is bad, but only in the tropics. Ecology Letters, 12: 538-549. doi:10.1111/j.1461-0248.2009.01307.x\",\n description=\"Mammal Super Tree from Fritz, S.A., O.R.P Bininda-Emonds, and A. Purvis. 2009. Geographical variation in predictors of mammalian extinction risk: big is bad, but only in the tropics. Ecology Letters 12:538-549\",\n urls ={'mammal_super_tree_fritz2009.tre': 'http://onlinelibrary.wiley.com/store/10.1111/j.1461-0248.2009.01307.x/asset/supinfo/ELE_1307_sm_SA1.tre?v=1&s=366b28651a9b5d1a3148ef9a8620f8aa31a7df44'})\n", "path": "scripts/MammalSuperTree.py"}]} | 775 | 959 |
gh_patches_debug_5038 | rasdani/github-patches | git_diff | scikit-hep__pyhf-1105 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
pyhf contrib download fails gracelessly with invalid URL
# Description
calling `pyhf contrib download` (just to see what it does) fails pretty violently. Should we make it a bit nicer @matthewfeickert?
```
pyhf contrib download
Traceback (most recent call last):
File "/Users/lukasheinrich/Code/pyhfdev/dev/pyhfdevenv/bin/pyhf", line 33, in <module>
sys.exit(load_entry_point('pyhf', 'console_scripts', 'pyhf')())
File "/Users/lukasheinrich/Code/pyhfdev/dev/pyhfdevenv/lib/python3.7/site-packages/click/core.py", line 829, in __call__
return self.main(*args, **kwargs)
File "/Users/lukasheinrich/Code/pyhfdev/dev/pyhfdevenv/lib/python3.7/site-packages/click/core.py", line 782, in main
rv = self.invoke(ctx)
File "/Users/lukasheinrich/Code/pyhfdev/dev/pyhfdevenv/lib/python3.7/site-packages/click/core.py", line 1259, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/Users/lukasheinrich/Code/pyhfdev/dev/pyhfdevenv/lib/python3.7/site-packages/click/core.py", line 1259, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/Users/lukasheinrich/Code/pyhfdev/dev/pyhfdevenv/lib/python3.7/site-packages/click/core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/Users/lukasheinrich/Code/pyhfdev/dev/pyhfdevenv/lib/python3.7/site-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "/Users/lukasheinrich/Code/pyhfdev/dev/pyhfsrc/src/pyhf/cli/contrib.py", line 60, in download
utils.download(archive_url, output_directory, force, compress)
File "/Users/lukasheinrich/Code/pyhfdev/dev/pyhfsrc/src/pyhf/contrib/utils.py", line 47, in download
+ "To download an archive from this host use the --force option."
pyhf.exceptions.InvalidArchiveHost: is not an approved archive host: www.hepdata.net, doi.org
To download an archive from this host use the --force option.
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/pyhf/cli/contrib.py`
Content:
```
1 """CLI for functionality that will get migrated out eventually."""
2 import logging
3 import click
4 from pathlib import Path
5
6 from ..contrib import utils
7
8 logging.basicConfig()
9 log = logging.getLogger(__name__)
10
11
12 @click.group(name="contrib")
13 def cli():
14 """
15 Contrib experimental operations.
16
17 .. note::
18
19 Requires installation of the ``contrib`` extra.
20
21 .. code-block:: shell
22
23 $ python -m pip install pyhf[contrib]
24 """
25
26
27 @cli.command()
28 @click.argument("archive-url", default="-")
29 @click.argument("output-directory", default="-")
30 @click.option("-v", "--verbose", is_flag=True, help="Enables verbose mode")
31 @click.option(
32 "-f", "--force", is_flag=True, help="Force download from non-approved host"
33 )
34 @click.option(
35 "-c",
36 "--compress",
37 is_flag=True,
38 help="Keep the archive in a compressed tar.gz form",
39 )
40 def download(archive_url, output_directory, verbose, force, compress):
41 """
42 Download the patchset archive from the remote URL and extract it in a
43 directory at the path given.
44
45 Example:
46
47 .. code-block:: shell
48
49 $ pyhf contrib download --verbose https://www.hepdata.net/record/resource/1408476?view=true 1Lbb-likelihoods
50
51 \b
52 1Lbb-likelihoods/patchset.json
53 1Lbb-likelihoods/README.md
54 1Lbb-likelihoods/BkgOnly.json
55
56 Raises:
57 :class:`~pyhf.exceptions.InvalidArchiveHost`: if the provided archive host name is not known to be valid
58 """
59 try:
60 utils.download(archive_url, output_directory, force, compress)
61
62 if verbose:
63 file_list = [str(file) for file in list(Path(output_directory).glob("*"))]
64 print("\n".join(file_list))
65 except AttributeError as excep:
66 exception_info = (
67 str(excep)
68 + "\nInstallation of the contrib extra is required to use the contrib CLI API"
69 + "\nPlease install with: python -m pip install pyhf[contrib]\n"
70 )
71 log.error(exception_info)
72
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/pyhf/cli/contrib.py b/src/pyhf/cli/contrib.py
--- a/src/pyhf/cli/contrib.py
+++ b/src/pyhf/cli/contrib.py
@@ -25,8 +25,8 @@
@cli.command()
[email protected]("archive-url", default="-")
[email protected]("output-directory", default="-")
[email protected]("archive-url")
[email protected]("output-directory")
@click.option("-v", "--verbose", is_flag=True, help="Enables verbose mode")
@click.option(
"-f", "--force", is_flag=True, help="Force download from non-approved host"
| {"golden_diff": "diff --git a/src/pyhf/cli/contrib.py b/src/pyhf/cli/contrib.py\n--- a/src/pyhf/cli/contrib.py\n+++ b/src/pyhf/cli/contrib.py\n@@ -25,8 +25,8 @@\n \n \n @cli.command()\[email protected](\"archive-url\", default=\"-\")\[email protected](\"output-directory\", default=\"-\")\[email protected](\"archive-url\")\[email protected](\"output-directory\")\n @click.option(\"-v\", \"--verbose\", is_flag=True, help=\"Enables verbose mode\")\n @click.option(\n \"-f\", \"--force\", is_flag=True, help=\"Force download from non-approved host\"\n", "issue": "pyhf contrib download fails gracelessly with invalid URL\n# Description\r\n\r\ncalling `pyhf contrib download` (just to see what it does) fails pretty violently. hould we make itt a bit nicer @matthewfeickert \r\n?\r\n\r\n```\r\npyhf contrib download\r\nTraceback (most recent call last):\r\n File \"/Users/lukasheinrich/Code/pyhfdev/dev/pyhfdevenv/bin/pyhf\", line 33, in <module>\r\n sys.exit(load_entry_point('pyhf', 'console_scripts', 'pyhf')())\r\n File \"/Users/lukasheinrich/Code/pyhfdev/dev/pyhfdevenv/lib/python3.7/site-packages/click/core.py\", line 829, in __call__\r\n return self.main(*args, **kwargs)\r\n File \"/Users/lukasheinrich/Code/pyhfdev/dev/pyhfdevenv/lib/python3.7/site-packages/click/core.py\", line 782, in main\r\n rv = self.invoke(ctx)\r\n File \"/Users/lukasheinrich/Code/pyhfdev/dev/pyhfdevenv/lib/python3.7/site-packages/click/core.py\", line 1259, in invoke\r\n return _process_result(sub_ctx.command.invoke(sub_ctx))\r\n File \"/Users/lukasheinrich/Code/pyhfdev/dev/pyhfdevenv/lib/python3.7/site-packages/click/core.py\", line 1259, in invoke\r\n return _process_result(sub_ctx.command.invoke(sub_ctx))\r\n File \"/Users/lukasheinrich/Code/pyhfdev/dev/pyhfdevenv/lib/python3.7/site-packages/click/core.py\", line 1066, in invoke\r\n return ctx.invoke(self.callback, **ctx.params)\r\n File \"/Users/lukasheinrich/Code/pyhfdev/dev/pyhfdevenv/lib/python3.7/site-packages/click/core.py\", line 610, in invoke\r\n return callback(*args, **kwargs)\r\n File \"/Users/lukasheinrich/Code/pyhfdev/dev/pyhfsrc/src/pyhf/cli/contrib.py\", line 60, in download\r\n utils.download(archive_url, output_directory, force, compress)\r\n File \"/Users/lukasheinrich/Code/pyhfdev/dev/pyhfsrc/src/pyhf/contrib/utils.py\", line 47, in download\r\n + \"To download an archive from this host use the --force option.\"\r\npyhf.exceptions.InvalidArchiveHost: is not an approved archive host: www.hepdata.net, doi.org\r\nTo download an archive from this host use the --force option.\r\n```\r\n\n", "before_files": [{"content": "\"\"\"CLI for functionality that will get migrated out eventually.\"\"\"\nimport logging\nimport click\nfrom pathlib import Path\n\nfrom ..contrib import utils\n\nlogging.basicConfig()\nlog = logging.getLogger(__name__)\n\n\[email protected](name=\"contrib\")\ndef cli():\n \"\"\"\n Contrib experimental operations.\n\n .. note::\n\n Requires installation of the ``contrib`` extra.\n\n .. 
code-block:: shell\n\n $ python -m pip install pyhf[contrib]\n \"\"\"\n\n\[email protected]()\[email protected](\"archive-url\", default=\"-\")\[email protected](\"output-directory\", default=\"-\")\[email protected](\"-v\", \"--verbose\", is_flag=True, help=\"Enables verbose mode\")\[email protected](\n \"-f\", \"--force\", is_flag=True, help=\"Force download from non-approved host\"\n)\[email protected](\n \"-c\",\n \"--compress\",\n is_flag=True,\n help=\"Keep the archive in a compressed tar.gz form\",\n)\ndef download(archive_url, output_directory, verbose, force, compress):\n \"\"\"\n Download the patchset archive from the remote URL and extract it in a\n directory at the path given.\n\n Example:\n\n .. code-block:: shell\n\n $ pyhf contrib download --verbose https://www.hepdata.net/record/resource/1408476?view=true 1Lbb-likelihoods\n\n \\b\n 1Lbb-likelihoods/patchset.json\n 1Lbb-likelihoods/README.md\n 1Lbb-likelihoods/BkgOnly.json\n\n Raises:\n :class:`~pyhf.exceptions.InvalidArchiveHost`: if the provided archive host name is not known to be valid\n \"\"\"\n try:\n utils.download(archive_url, output_directory, force, compress)\n\n if verbose:\n file_list = [str(file) for file in list(Path(output_directory).glob(\"*\"))]\n print(\"\\n\".join(file_list))\n except AttributeError as excep:\n exception_info = (\n str(excep)\n + \"\\nInstallation of the contrib extra is required to use the contrib CLI API\"\n + \"\\nPlease install with: python -m pip install pyhf[contrib]\\n\"\n )\n log.error(exception_info)\n", "path": "src/pyhf/cli/contrib.py"}], "after_files": [{"content": "\"\"\"CLI for functionality that will get migrated out eventually.\"\"\"\nimport logging\nimport click\nfrom pathlib import Path\n\nfrom ..contrib import utils\n\nlogging.basicConfig()\nlog = logging.getLogger(__name__)\n\n\[email protected](name=\"contrib\")\ndef cli():\n \"\"\"\n Contrib experimental operations.\n\n .. note::\n\n Requires installation of the ``contrib`` extra.\n\n .. code-block:: shell\n\n $ python -m pip install pyhf[contrib]\n \"\"\"\n\n\[email protected]()\[email protected](\"archive-url\")\[email protected](\"output-directory\")\[email protected](\"-v\", \"--verbose\", is_flag=True, help=\"Enables verbose mode\")\[email protected](\n \"-f\", \"--force\", is_flag=True, help=\"Force download from non-approved host\"\n)\[email protected](\n \"-c\",\n \"--compress\",\n is_flag=True,\n help=\"Keep the archive in a compressed tar.gz form\",\n)\ndef download(archive_url, output_directory, verbose, force, compress):\n \"\"\"\n Download the patchset archive from the remote URL and extract it in a\n directory at the path given.\n\n Example:\n\n .. code-block:: shell\n\n $ pyhf contrib download --verbose https://www.hepdata.net/record/resource/1408476?view=true 1Lbb-likelihoods\n\n \\b\n 1Lbb-likelihoods/patchset.json\n 1Lbb-likelihoods/README.md\n 1Lbb-likelihoods/BkgOnly.json\n\n Raises:\n :class:`~pyhf.exceptions.InvalidArchiveHost`: if the provided archive host name is not known to be valid\n \"\"\"\n try:\n utils.download(archive_url, output_directory, force, compress)\n\n if verbose:\n file_list = [str(file) for file in list(Path(output_directory).glob(\"*\"))]\n print(\"\\n\".join(file_list))\n except AttributeError as excep:\n exception_info = (\n str(excep)\n + \"\\nInstallation of the contrib extra is required to use the contrib CLI API\"\n + \"\\nPlease install with: python -m pip install pyhf[contrib]\\n\"\n )\n log.error(exception_info)\n", "path": "src/pyhf/cli/contrib.py"}]} | 1,447 | 139 |
gh_patches_debug_11587 | rasdani/github-patches | git_diff | saulpw__visidata-967 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
no loader for url scheme: postgresql
**Small description**
When attempting to start visidata with
```
vd postgresql:///localdb
```
it fails with
```
Error: no loader for url scheme: postgresql
```
**Expected result**
I would have expected it to work the same way
```
vd postgres:///localdb
```
works, as [both URL schemes are valid](https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNSTRING).
P.S.: This is somewhere in between a bug and a feature request. As it's super small and about something existing, I called it a bug. I will raise a PR to fix this shortly.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `visidata/loaders/postgres.py`
Content:
```
1 from visidata import *
2
3 __all__ = ['openurl_postgres', 'openurl_rds', 'PgTable', 'PgTablesSheet']
4
5 option('postgres_schema', 'public', 'The desired schema for the Postgres database')
6
7 def codeToType(type_code, colname):
8 import psycopg2
9 try:
10 tname = psycopg2._psycopg.string_types[type_code].name
11 if 'INTEGER' in tname:
12 return int
13 if 'STRING' in tname:
14 return str
15 except KeyError:
16 vd.status('unknown postgres type_code %s for %s' % (type_code, colname))
17 return anytype
18
19
20 def openurl_rds(url, filetype=None):
21 import boto3
22 import psycopg2
23
24 rds = boto3.client('rds')
25 url = urlparse(url.given)
26
27 _, region, dbname = url.path.split('/')
28 token = rds.generate_db_auth_token(url.hostname, url.port, url.username, region)
29
30 conn = psycopg2.connect(
31 user=url.username,
32 dbname=dbname,
33 host=url.hostname,
34 port=url.port,
35 password=token)
36
37 return PgTablesSheet(dbname+"_tables", sql=SQL(conn))
38
39
40 def openurl_postgres(url, filetype=None):
41 import psycopg2
42
43 url = urlparse(url.given)
44 dbname = url.path[1:]
45 conn = psycopg2.connect(
46 user=url.username,
47 dbname=dbname,
48 host=url.hostname,
49 port=url.port,
50 password=url.password)
51
52 return PgTablesSheet(dbname+"_tables", sql=SQL(conn))
53
54
55 class SQL:
56 def __init__(self, conn):
57 self.conn = conn
58
59 def cur(self, qstr):
60 import string
61 randomname = ''.join(random.choice(string.ascii_uppercase) for _ in range(6))
62 cur = self.conn.cursor(randomname)
63 cur.execute(qstr)
64 return cur
65
66 @asyncthread
67 def query_async(self, qstr, callback=None):
68 with self.cur(qstr) as cur:
69 callback(cur)
70 cur.close()
71
72
73 def cursorToColumns(cur, sheet):
74 sheet.columns = []
75 for i, coldesc in enumerate(cur.description):
76 sheet.addColumn(ColumnItem(coldesc.name, i, type=codeToType(coldesc.type_code, coldesc.name)))
77
78
79 # rowdef: (table_name, ncols)
80 class PgTablesSheet(Sheet):
81 rowtype = 'tables'
82
83 def reload(self):
84 schema = options.postgres_schema
85 qstr = f'''
86 SELECT relname table_name, column_count.ncols, reltuples::bigint est_nrows
87 FROM pg_class, pg_namespace, (
88 SELECT table_name, COUNT(column_name) AS ncols FROM information_schema.COLUMNS WHERE table_schema = '{schema}' GROUP BY table_name
89 ) AS column_count
90 WHERE pg_class.relnamespace = pg_namespace.oid AND pg_namespace.nspname = '{schema}' AND column_count.table_name = relname;
91 '''
92
93 with self.sql.cur(qstr) as cur:
94 self.nrowsPerTable = {}
95
96 self.rows = []
97 # try to get first row to make cur.description available
98 r = cur.fetchone()
99 if r:
100 self.addRow(r)
101 cursorToColumns(cur, self)
102 self.setKeys(self.columns[0:1]) # table_name is the key
103
104 for r in cur:
105 self.addRow(r)
106
107 def openRow(self, row):
108 return PgTable(self.name+"."+row[0], source=row[0], sql=self.sql)
109
110
111 # rowdef: tuple of values as returned by fetchone()
112 class PgTable(Sheet):
113 @asyncthread
114 def reload(self):
115 if self.options.postgres_schema:
116 source = f"{self.options.postgres_schema}.{self.source}"
117 else:
118 source = self.source
119 with self.sql.cur(f"SELECT * FROM {source}") as cur:
120 self.rows = []
121 r = cur.fetchone()
122 if r:
123 self.addRow(r)
124 cursorToColumns(cur, self)
125 for r in cur:
126 self.addRow(r)
127
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/visidata/loaders/postgres.py b/visidata/loaders/postgres.py
--- a/visidata/loaders/postgres.py
+++ b/visidata/loaders/postgres.py
@@ -1,6 +1,6 @@
from visidata import *
-__all__ = ['openurl_postgres', 'openurl_rds', 'PgTable', 'PgTablesSheet']
+__all__ = ['openurl_postgres', 'openurl_postgresql', 'openurl_rds', 'PgTable', 'PgTablesSheet']
option('postgres_schema', 'public', 'The desired schema for the Postgres database')
@@ -52,6 +52,9 @@
return PgTablesSheet(dbname+"_tables", sql=SQL(conn))
+openurl_postgresql=openurl_postgres
+
+
class SQL:
def __init__(self, conn):
self.conn = conn
| {"golden_diff": "diff --git a/visidata/loaders/postgres.py b/visidata/loaders/postgres.py\n--- a/visidata/loaders/postgres.py\n+++ b/visidata/loaders/postgres.py\n@@ -1,6 +1,6 @@\n from visidata import *\n \n-__all__ = ['openurl_postgres', 'openurl_rds', 'PgTable', 'PgTablesSheet']\n+__all__ = ['openurl_postgres', 'openurl_postgresql', 'openurl_rds', 'PgTable', 'PgTablesSheet']\n \n option('postgres_schema', 'public', 'The desired schema for the Postgres database')\n \n@@ -52,6 +52,9 @@\n return PgTablesSheet(dbname+\"_tables\", sql=SQL(conn))\n \n \n+openurl_postgresql=openurl_postgres\n+\n+\n class SQL:\n def __init__(self, conn):\n self.conn = conn\n", "issue": "no loader for url scheme: postgresql\n**Small description**\r\n\r\nWhen attempting to start visidata with\r\n```\r\nvd postgresql:///localdb\r\n```\r\nit fails with\r\n```\r\nError: no loader for url scheme: postgresql\r\n```\r\n\r\n**Expected result**\r\n\r\nI would have expected it to work the same way \r\n```\r\nvd postgres:///localdb\r\n```\r\nworks, as [both URL schemes are valid](https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNSTRING).\r\n\r\nP.S.: This is somewhere in between a bug and a feature request. As it's super small and about something existing, I called it a bug. I will raise a PR to fix this shortly.\n", "before_files": [{"content": "from visidata import *\n\n__all__ = ['openurl_postgres', 'openurl_rds', 'PgTable', 'PgTablesSheet']\n\noption('postgres_schema', 'public', 'The desired schema for the Postgres database')\n\ndef codeToType(type_code, colname):\n import psycopg2\n try:\n tname = psycopg2._psycopg.string_types[type_code].name\n if 'INTEGER' in tname:\n return int\n if 'STRING' in tname:\n return str\n except KeyError:\n vd.status('unknown postgres type_code %s for %s' % (type_code, colname))\n return anytype\n\n\ndef openurl_rds(url, filetype=None):\n import boto3\n import psycopg2\n\n rds = boto3.client('rds')\n url = urlparse(url.given)\n\n _, region, dbname = url.path.split('/')\n token = rds.generate_db_auth_token(url.hostname, url.port, url.username, region)\n\n conn = psycopg2.connect(\n user=url.username,\n dbname=dbname,\n host=url.hostname,\n port=url.port,\n password=token)\n\n return PgTablesSheet(dbname+\"_tables\", sql=SQL(conn))\n\n\ndef openurl_postgres(url, filetype=None):\n import psycopg2\n\n url = urlparse(url.given)\n dbname = url.path[1:]\n conn = psycopg2.connect(\n user=url.username,\n dbname=dbname,\n host=url.hostname,\n port=url.port,\n password=url.password)\n\n return PgTablesSheet(dbname+\"_tables\", sql=SQL(conn))\n\n\nclass SQL:\n def __init__(self, conn):\n self.conn = conn\n\n def cur(self, qstr):\n import string\n randomname = ''.join(random.choice(string.ascii_uppercase) for _ in range(6))\n cur = self.conn.cursor(randomname)\n cur.execute(qstr)\n return cur\n\n @asyncthread\n def query_async(self, qstr, callback=None):\n with self.cur(qstr) as cur:\n callback(cur)\n cur.close()\n\n\ndef cursorToColumns(cur, sheet):\n sheet.columns = []\n for i, coldesc in enumerate(cur.description):\n sheet.addColumn(ColumnItem(coldesc.name, i, type=codeToType(coldesc.type_code, coldesc.name)))\n\n\n# rowdef: (table_name, ncols)\nclass PgTablesSheet(Sheet):\n rowtype = 'tables'\n\n def reload(self):\n schema = options.postgres_schema\n qstr = f'''\n SELECT relname table_name, column_count.ncols, reltuples::bigint est_nrows\n FROM pg_class, pg_namespace, (\n SELECT table_name, COUNT(column_name) AS ncols FROM information_schema.COLUMNS WHERE table_schema = 
'{schema}' GROUP BY table_name\n ) AS column_count\n WHERE pg_class.relnamespace = pg_namespace.oid AND pg_namespace.nspname = '{schema}' AND column_count.table_name = relname;\n '''\n\n with self.sql.cur(qstr) as cur:\n self.nrowsPerTable = {}\n\n self.rows = []\n # try to get first row to make cur.description available\n r = cur.fetchone()\n if r:\n self.addRow(r)\n cursorToColumns(cur, self)\n self.setKeys(self.columns[0:1]) # table_name is the key\n\n for r in cur:\n self.addRow(r)\n\n def openRow(self, row):\n return PgTable(self.name+\".\"+row[0], source=row[0], sql=self.sql)\n\n\n# rowdef: tuple of values as returned by fetchone()\nclass PgTable(Sheet):\n @asyncthread\n def reload(self):\n if self.options.postgres_schema:\n source = f\"{self.options.postgres_schema}.{self.source}\"\n else:\n source = self.source\n with self.sql.cur(f\"SELECT * FROM {source}\") as cur:\n self.rows = []\n r = cur.fetchone()\n if r:\n self.addRow(r)\n cursorToColumns(cur, self)\n for r in cur:\n self.addRow(r)\n", "path": "visidata/loaders/postgres.py"}], "after_files": [{"content": "from visidata import *\n\n__all__ = ['openurl_postgres', 'openurl_postgresql', 'openurl_rds', 'PgTable', 'PgTablesSheet']\n\noption('postgres_schema', 'public', 'The desired schema for the Postgres database')\n\ndef codeToType(type_code, colname):\n import psycopg2\n try:\n tname = psycopg2._psycopg.string_types[type_code].name\n if 'INTEGER' in tname:\n return int\n if 'STRING' in tname:\n return str\n except KeyError:\n vd.status('unknown postgres type_code %s for %s' % (type_code, colname))\n return anytype\n\n\ndef openurl_rds(url, filetype=None):\n import boto3\n import psycopg2\n\n rds = boto3.client('rds')\n url = urlparse(url.given)\n\n _, region, dbname = url.path.split('/')\n token = rds.generate_db_auth_token(url.hostname, url.port, url.username, region)\n\n conn = psycopg2.connect(\n user=url.username,\n dbname=dbname,\n host=url.hostname,\n port=url.port,\n password=token)\n\n return PgTablesSheet(dbname+\"_tables\", sql=SQL(conn))\n\n\ndef openurl_postgres(url, filetype=None):\n import psycopg2\n\n url = urlparse(url.given)\n dbname = url.path[1:]\n conn = psycopg2.connect(\n user=url.username,\n dbname=dbname,\n host=url.hostname,\n port=url.port,\n password=url.password)\n\n return PgTablesSheet(dbname+\"_tables\", sql=SQL(conn))\n\n\nopenurl_postgresql=openurl_postgres\n\n\nclass SQL:\n def __init__(self, conn):\n self.conn = conn\n\n def cur(self, qstr):\n import string\n randomname = ''.join(random.choice(string.ascii_uppercase) for _ in range(6))\n cur = self.conn.cursor(randomname)\n cur.execute(qstr)\n return cur\n\n @asyncthread\n def query_async(self, qstr, callback=None):\n with self.cur(qstr) as cur:\n callback(cur)\n cur.close()\n\n\ndef cursorToColumns(cur, sheet):\n sheet.columns = []\n for i, coldesc in enumerate(cur.description):\n sheet.addColumn(ColumnItem(coldesc.name, i, type=codeToType(coldesc.type_code, coldesc.name)))\n\n\n# rowdef: (table_name, ncols)\nclass PgTablesSheet(Sheet):\n rowtype = 'tables'\n\n def reload(self):\n schema = options.postgres_schema\n qstr = f'''\n SELECT relname table_name, column_count.ncols, reltuples::bigint est_nrows\n FROM pg_class, pg_namespace, (\n SELECT table_name, COUNT(column_name) AS ncols FROM information_schema.COLUMNS WHERE table_schema = '{schema}' GROUP BY table_name\n ) AS column_count\n WHERE pg_class.relnamespace = pg_namespace.oid AND pg_namespace.nspname = '{schema}' AND column_count.table_name = relname;\n '''\n\n with self.sql.cur(qstr) 
as cur:\n self.nrowsPerTable = {}\n\n self.rows = []\n # try to get first row to make cur.description available\n r = cur.fetchone()\n if r:\n self.addRow(r)\n cursorToColumns(cur, self)\n self.setKeys(self.columns[0:1]) # table_name is the key\n\n for r in cur:\n self.addRow(r)\n\n def openRow(self, row):\n return PgTable(self.name+\".\"+row[0], source=row[0], sql=self.sql)\n\n\n# rowdef: tuple of values as returned by fetchone()\nclass PgTable(Sheet):\n @asyncthread\n def reload(self):\n if self.options.postgres_schema:\n source = f\"{self.options.postgres_schema}.{self.source}\"\n else:\n source = self.source\n with self.sql.cur(f\"SELECT * FROM {source}\") as cur:\n self.rows = []\n r = cur.fetchone()\n if r:\n self.addRow(r)\n cursorToColumns(cur, self)\n for r in cur:\n self.addRow(r)\n", "path": "visidata/loaders/postgres.py"}]} | 1,578 | 196 |
gh_patches_debug_37490 | rasdani/github-patches | git_diff | getsentry__sentry-62640 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Convert digest notifications to use block kit
Convert the `DigestNotificationMessageBuilder` (code [here](https://github.com/getsentry/sentry/blob/master/src/sentry/integrations/slack/message_builder/notifications/digest.py)) to use block kit. This may be harder to test as I have personally never received one.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/sentry/integrations/slack/message_builder/notifications/digest.py`
Content:
```
1 from __future__ import annotations
2
3 from typing import Any, Mapping
4
5 from sentry.digests import Digest
6 from sentry.digests.utils import get_groups
7 from sentry.integrations.slack.message_builder import SlackBody
8 from sentry.integrations.slack.message_builder.issues import SlackIssuesMessageBuilder
9 from sentry.notifications.notifications.digest import DigestNotification
10 from sentry.services.hybrid_cloud.actor import RpcActor
11
12 from .base import SlackNotificationsMessageBuilder
13
14
15 class DigestNotificationMessageBuilder(SlackNotificationsMessageBuilder):
16 def __init__(
17 self,
18 notification: DigestNotification,
19 context: Mapping[str, Any],
20 recipient: RpcActor,
21 ) -> None:
22 super().__init__(notification, context, recipient)
23 self.notification: DigestNotification = notification
24
25 def build(self) -> SlackBody:
26 """
27 It's currently impossible in mypy to have recursive types so we need a
28 hack to get this to return a SlackBody.
29 """
30 digest: Digest = self.context.get("digest", {})
31 return [
32 SlackIssuesMessageBuilder(
33 group=group,
34 event=event,
35 rules=[rule],
36 issue_details=True,
37 notification=self.notification,
38 recipient=self.recipient,
39 ).build()
40 for rule, group, event in get_groups(digest)
41 ]
42
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/sentry/integrations/slack/message_builder/notifications/digest.py b/src/sentry/integrations/slack/message_builder/notifications/digest.py
--- a/src/sentry/integrations/slack/message_builder/notifications/digest.py
+++ b/src/sentry/integrations/slack/message_builder/notifications/digest.py
@@ -2,9 +2,10 @@
from typing import Any, Mapping
+from sentry import features
from sentry.digests import Digest
from sentry.digests.utils import get_groups
-from sentry.integrations.slack.message_builder import SlackBody
+from sentry.integrations.slack.message_builder import SlackAttachment, SlackBlock
from sentry.integrations.slack.message_builder.issues import SlackIssuesMessageBuilder
from sentry.notifications.notifications.digest import DigestNotification
from sentry.services.hybrid_cloud.actor import RpcActor
@@ -22,14 +23,28 @@
super().__init__(notification, context, recipient)
self.notification: DigestNotification = notification
- def build(self) -> SlackBody:
+ def build(self) -> SlackAttachment | SlackBlock:
"""
It's currently impossible in mypy to have recursive types so we need a
hack to get this to return a SlackBody.
"""
digest: Digest = self.context.get("digest", {})
- return [
- SlackIssuesMessageBuilder(
+ digest_groups = get_groups(digest)
+ if not features.has("organizations:slack-block-kit", self.notification.organization):
+ return [
+ SlackIssuesMessageBuilder(
+ group=group,
+ event=event,
+ rules=[rule],
+ issue_details=True,
+ notification=self.notification,
+ recipient=self.recipient,
+ ).build()
+ for rule, group, event in digest_groups
+ ]
+ blocks = []
+ for rule, group, event in digest_groups:
+ alert_as_blocks = SlackIssuesMessageBuilder(
group=group,
event=event,
rules=[rule],
@@ -37,5 +52,8 @@
notification=self.notification,
recipient=self.recipient,
).build()
- for rule, group, event in get_groups(digest)
- ]
+ # we iterate through the list of blocks created for each alert in the digest and add
+ # each block to the list of blocks which is used for the entire digest notification
+ for block in alert_as_blocks.get("blocks"):
+ blocks.append(block)
+ return self._build_blocks(*blocks)
| {"golden_diff": "diff --git a/src/sentry/integrations/slack/message_builder/notifications/digest.py b/src/sentry/integrations/slack/message_builder/notifications/digest.py\n--- a/src/sentry/integrations/slack/message_builder/notifications/digest.py\n+++ b/src/sentry/integrations/slack/message_builder/notifications/digest.py\n@@ -2,9 +2,10 @@\n \n from typing import Any, Mapping\n \n+from sentry import features\n from sentry.digests import Digest\n from sentry.digests.utils import get_groups\n-from sentry.integrations.slack.message_builder import SlackBody\n+from sentry.integrations.slack.message_builder import SlackAttachment, SlackBlock\n from sentry.integrations.slack.message_builder.issues import SlackIssuesMessageBuilder\n from sentry.notifications.notifications.digest import DigestNotification\n from sentry.services.hybrid_cloud.actor import RpcActor\n@@ -22,14 +23,28 @@\n super().__init__(notification, context, recipient)\n self.notification: DigestNotification = notification\n \n- def build(self) -> SlackBody:\n+ def build(self) -> SlackAttachment | SlackBlock:\n \"\"\"\n It's currently impossible in mypy to have recursive types so we need a\n hack to get this to return a SlackBody.\n \"\"\"\n digest: Digest = self.context.get(\"digest\", {})\n- return [\n- SlackIssuesMessageBuilder(\n+ digest_groups = get_groups(digest)\n+ if not features.has(\"organizations:slack-block-kit\", self.notification.organization):\n+ return [\n+ SlackIssuesMessageBuilder(\n+ group=group,\n+ event=event,\n+ rules=[rule],\n+ issue_details=True,\n+ notification=self.notification,\n+ recipient=self.recipient,\n+ ).build()\n+ for rule, group, event in digest_groups\n+ ]\n+ blocks = []\n+ for rule, group, event in digest_groups:\n+ alert_as_blocks = SlackIssuesMessageBuilder(\n group=group,\n event=event,\n rules=[rule],\n@@ -37,5 +52,8 @@\n notification=self.notification,\n recipient=self.recipient,\n ).build()\n- for rule, group, event in get_groups(digest)\n- ]\n+ # we iterate through the list of blocks created for each alert in the digest and add\n+ # each block to the list of blocks which is used for the entire digest notification\n+ for block in alert_as_blocks.get(\"blocks\"):\n+ blocks.append(block)\n+ return self._build_blocks(*blocks)\n", "issue": "Convert digest notifications to use block kit\nConvert the `DigestNotificationMessageBuilder` (code [here](https://github.com/getsentry/sentry/blob/master/src/sentry/integrations/slack/message_builder/notifications/digest.py)) to use block kit. 
This may be harder to test as I have personally never received one.\n", "before_files": [{"content": "from __future__ import annotations\n\nfrom typing import Any, Mapping\n\nfrom sentry.digests import Digest\nfrom sentry.digests.utils import get_groups\nfrom sentry.integrations.slack.message_builder import SlackBody\nfrom sentry.integrations.slack.message_builder.issues import SlackIssuesMessageBuilder\nfrom sentry.notifications.notifications.digest import DigestNotification\nfrom sentry.services.hybrid_cloud.actor import RpcActor\n\nfrom .base import SlackNotificationsMessageBuilder\n\n\nclass DigestNotificationMessageBuilder(SlackNotificationsMessageBuilder):\n def __init__(\n self,\n notification: DigestNotification,\n context: Mapping[str, Any],\n recipient: RpcActor,\n ) -> None:\n super().__init__(notification, context, recipient)\n self.notification: DigestNotification = notification\n\n def build(self) -> SlackBody:\n \"\"\"\n It's currently impossible in mypy to have recursive types so we need a\n hack to get this to return a SlackBody.\n \"\"\"\n digest: Digest = self.context.get(\"digest\", {})\n return [\n SlackIssuesMessageBuilder(\n group=group,\n event=event,\n rules=[rule],\n issue_details=True,\n notification=self.notification,\n recipient=self.recipient,\n ).build()\n for rule, group, event in get_groups(digest)\n ]\n", "path": "src/sentry/integrations/slack/message_builder/notifications/digest.py"}], "after_files": [{"content": "from __future__ import annotations\n\nfrom typing import Any, Mapping\n\nfrom sentry import features\nfrom sentry.digests import Digest\nfrom sentry.digests.utils import get_groups\nfrom sentry.integrations.slack.message_builder import SlackAttachment, SlackBlock\nfrom sentry.integrations.slack.message_builder.issues import SlackIssuesMessageBuilder\nfrom sentry.notifications.notifications.digest import DigestNotification\nfrom sentry.services.hybrid_cloud.actor import RpcActor\n\nfrom .base import SlackNotificationsMessageBuilder\n\n\nclass DigestNotificationMessageBuilder(SlackNotificationsMessageBuilder):\n def __init__(\n self,\n notification: DigestNotification,\n context: Mapping[str, Any],\n recipient: RpcActor,\n ) -> None:\n super().__init__(notification, context, recipient)\n self.notification: DigestNotification = notification\n\n def build(self) -> SlackAttachment | SlackBlock:\n \"\"\"\n It's currently impossible in mypy to have recursive types so we need a\n hack to get this to return a SlackBody.\n \"\"\"\n digest: Digest = self.context.get(\"digest\", {})\n digest_groups = get_groups(digest)\n if not features.has(\"organizations:slack-block-kit\", self.notification.organization):\n return [\n SlackIssuesMessageBuilder(\n group=group,\n event=event,\n rules=[rule],\n issue_details=True,\n notification=self.notification,\n recipient=self.recipient,\n ).build()\n for rule, group, event in digest_groups\n ]\n blocks = []\n for rule, group, event in digest_groups:\n alert_as_blocks = SlackIssuesMessageBuilder(\n group=group,\n event=event,\n rules=[rule],\n issue_details=True,\n notification=self.notification,\n recipient=self.recipient,\n ).build()\n # we iterate through the list of blocks created for each alert in the digest and add\n # each block to the list of blocks which is used for the entire digest notification\n for block in alert_as_blocks.get(\"blocks\"):\n blocks.append(block)\n return self._build_blocks(*blocks)\n", "path": "src/sentry/integrations/slack/message_builder/notifications/digest.py"}]} | 688 | 547 |
gh_patches_debug_14648 | rasdani/github-patches | git_diff | Kinto__kinto-1003 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
GET on /v1/admin returns 404
The expected address for built-in admin plugin is `/v1/admin/`. But if you forget the trailing slash and type `/v1/admin` you get a 404. I think it would be better to raise a 307.
Related to #112 and #858.
```
> http get localhost:8888/v1/admin/
HTTP/1.1 200 OK
(...)
> http get localhost:8888/v1/admin
HTTP/1.1 404 Not Found
(...)
```
GET on /v1/admin returns 404
The expected address for built-in admin plugin is `/v1/admin/`. But if you forget the trailing slash and type `/v1/admin` you get a 404. I think it would be better to raise a 307.
Related to #112 and #858.
```
> http get localhost:8888/v1/admin/
HTTP/1.1 200 OK
(...)
> http get localhost:8888/v1/admin
HTTP/1.1 404 Not Found
(...)
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kinto/plugins/admin/__init__.py`
Content:
```
1 from pyramid.static import static_view
2
3
4 def includeme(config):
5 # Process settings to remove storage wording.
6
7 # Expose capability.
8 config.add_api_capability(
9 "admin",
10 version="1.6.0",
11 description="Serves the admin console.",
12 url="https://github.com/Kinto/kinto-admin/",
13 )
14
15 build_dir = static_view('kinto.plugins.admin:build', use_subpath=True)
16 config.add_route('catchall_static', '/admin/*subpath')
17 config.add_view(build_dir, route_name="catchall_static")
18
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/kinto/plugins/admin/__init__.py b/kinto/plugins/admin/__init__.py
--- a/kinto/plugins/admin/__init__.py
+++ b/kinto/plugins/admin/__init__.py
@@ -1,4 +1,5 @@
from pyramid.static import static_view
+from pyramid.httpexceptions import HTTPTemporaryRedirect
def includeme(config):
@@ -15,3 +16,10 @@
build_dir = static_view('kinto.plugins.admin:build', use_subpath=True)
config.add_route('catchall_static', '/admin/*subpath')
config.add_view(build_dir, route_name="catchall_static")
+
+ # Setup redirect without trailing slash.
+ def admin_redirect_view(request):
+ raise HTTPTemporaryRedirect(request.path + '/')
+
+ config.add_route('admin_redirect', '/admin')
+ config.add_view(admin_redirect_view, route_name="admin_redirect")
| {"golden_diff": "diff --git a/kinto/plugins/admin/__init__.py b/kinto/plugins/admin/__init__.py\n--- a/kinto/plugins/admin/__init__.py\n+++ b/kinto/plugins/admin/__init__.py\n@@ -1,4 +1,5 @@\n from pyramid.static import static_view\n+from pyramid.httpexceptions import HTTPTemporaryRedirect\n \n \n def includeme(config):\n@@ -15,3 +16,10 @@\n build_dir = static_view('kinto.plugins.admin:build', use_subpath=True)\n config.add_route('catchall_static', '/admin/*subpath')\n config.add_view(build_dir, route_name=\"catchall_static\")\n+\n+ # Setup redirect without trailing slash.\n+ def admin_redirect_view(request):\n+ raise HTTPTemporaryRedirect(request.path + '/')\n+\n+ config.add_route('admin_redirect', '/admin')\n+ config.add_view(admin_redirect_view, route_name=\"admin_redirect\")\n", "issue": "GET on /v1/admin returns 404\nThe expected address for built-in admin plugin is `/v1/admin/`. But if you forget the trailing slash and type `/v1/admin` you get a 404. I think it would be better to raise a 307.\r\n\r\nRelated to #112 and #858.\r\n\r\n```\r\n> http get localhost:8888/v1/admin/\r\nHTTP/1.1 200 OK\r\n(...)\r\n\r\n> http get localhost:8888/v1/admin\r\nHTTP/1.1 404 Not Found\r\n(...)\r\n```\r\n\r\n\nGET on /v1/admin returns 404\nThe expected address for built-in admin plugin is `/v1/admin/`. But if you forget the trailing slash and type `/v1/admin` you get a 404. I think it would be better to raise a 307.\r\n\r\nRelated to #112 and #858.\r\n\r\n```\r\n> http get localhost:8888/v1/admin/\r\nHTTP/1.1 200 OK\r\n(...)\r\n\r\n> http get localhost:8888/v1/admin\r\nHTTP/1.1 404 Not Found\r\n(...)\r\n```\r\n\r\n\n", "before_files": [{"content": "from pyramid.static import static_view\n\n\ndef includeme(config):\n # Process settings to remove storage wording.\n\n # Expose capability.\n config.add_api_capability(\n \"admin\",\n version=\"1.6.0\",\n description=\"Serves the admin console.\",\n url=\"https://github.com/Kinto/kinto-admin/\",\n )\n\n build_dir = static_view('kinto.plugins.admin:build', use_subpath=True)\n config.add_route('catchall_static', '/admin/*subpath')\n config.add_view(build_dir, route_name=\"catchall_static\")\n", "path": "kinto/plugins/admin/__init__.py"}], "after_files": [{"content": "from pyramid.static import static_view\nfrom pyramid.httpexceptions import HTTPTemporaryRedirect\n\n\ndef includeme(config):\n # Process settings to remove storage wording.\n\n # Expose capability.\n config.add_api_capability(\n \"admin\",\n version=\"1.6.0\",\n description=\"Serves the admin console.\",\n url=\"https://github.com/Kinto/kinto-admin/\",\n )\n\n build_dir = static_view('kinto.plugins.admin:build', use_subpath=True)\n config.add_route('catchall_static', '/admin/*subpath')\n config.add_view(build_dir, route_name=\"catchall_static\")\n\n # Setup redirect without trailing slash.\n def admin_redirect_view(request):\n raise HTTPTemporaryRedirect(request.path + '/')\n\n config.add_route('admin_redirect', '/admin')\n config.add_view(admin_redirect_view, route_name=\"admin_redirect\")\n", "path": "kinto/plugins/admin/__init__.py"}]} | 678 | 198 |
gh_patches_debug_61588 | rasdani/github-patches | git_diff | scikit-image__scikit-image-1660 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Document negative `sigma` values in `filters.gaussian_filter` are clipped to zero.
Negative sigma values have no effect; they are clipped to zero. This should be documented.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `skimage/filters/_gaussian.py`
Content:
```
1 import collections as coll
2 import numpy as np
3 from scipy import ndimage as ndi
4 import warnings
5
6 from ..util import img_as_float
7 from ..color import guess_spatial_dimensions
8
9 __all__ = ['gaussian_filter']
10
11
12 def gaussian_filter(image, sigma, output=None, mode='nearest', cval=0,
13 multichannel=None):
14 """Multi-dimensional Gaussian filter
15
16 Parameters
17 ----------
18 image : array-like
19 input image (grayscale or color) to filter.
20 sigma : scalar or sequence of scalars
21 standard deviation for Gaussian kernel. The standard
22 deviations of the Gaussian filter are given for each axis as a
23 sequence, or as a single number, in which case it is equal for
24 all axes.
25 output : array, optional
26 The ``output`` parameter passes an array in which to store the
27 filter output.
28 mode : {'reflect', 'constant', 'nearest', 'mirror', 'wrap'}, optional
29 The `mode` parameter determines how the array borders are
30 handled, where `cval` is the value when mode is equal to
31 'constant'. Default is 'nearest'.
32 cval : scalar, optional
33 Value to fill past edges of input if `mode` is 'constant'. Default
34 is 0.0
35 multichannel : bool, optional (default: None)
36 Whether the last axis of the image is to be interpreted as multiple
37 channels. If True, each channel is filtered separately (channels are
38 not mixed together). Only 3 channels are supported. If `None`,
39 the function will attempt to guess this, and raise a warning if
40 ambiguous, when the array has shape (M, N, 3).
41
42 Returns
43 -------
44 filtered_image : ndarray
45 the filtered array
46
47 Notes
48 -----
49 This function is a wrapper around :func:`scipy.ndi.gaussian_filter`.
50
51 Integer arrays are converted to float.
52
53 The multi-dimensional filter is implemented as a sequence of
54 one-dimensional convolution filters. The intermediate arrays are
55 stored in the same data type as the output. Therefore, for output
56 types with a limited precision, the results may be imprecise
57 because intermediate results may be stored with insufficient
58 precision.
59
60 Examples
61 --------
62
63 >>> a = np.zeros((3, 3))
64 >>> a[1, 1] = 1
65 >>> a
66 array([[ 0., 0., 0.],
67 [ 0., 1., 0.],
68 [ 0., 0., 0.]])
69 >>> gaussian_filter(a, sigma=0.4) # mild smoothing
70 array([[ 0.00163116, 0.03712502, 0.00163116],
71 [ 0.03712502, 0.84496158, 0.03712502],
72 [ 0.00163116, 0.03712502, 0.00163116]])
73 >>> gaussian_filter(a, sigma=1) # more smooting
74 array([[ 0.05855018, 0.09653293, 0.05855018],
75 [ 0.09653293, 0.15915589, 0.09653293],
76 [ 0.05855018, 0.09653293, 0.05855018]])
77 >>> # Several modes are possible for handling boundaries
78 >>> gaussian_filter(a, sigma=1, mode='reflect')
79 array([[ 0.08767308, 0.12075024, 0.08767308],
80 [ 0.12075024, 0.16630671, 0.12075024],
81 [ 0.08767308, 0.12075024, 0.08767308]])
82 >>> # For RGB images, each is filtered separately
83 >>> from skimage.data import astronaut
84 >>> image = astronaut()
85 >>> filtered_img = gaussian_filter(image, sigma=1, multichannel=True)
86
87 """
88
89 spatial_dims = guess_spatial_dimensions(image)
90 if spatial_dims is None and multichannel is None:
91 msg = ("Images with dimensions (M, N, 3) are interpreted as 2D+RGB "
92 "by default. Use `multichannel=False` to interpret as "
93 "3D image with last dimension of length 3.")
94 warnings.warn(RuntimeWarning(msg))
95 multichannel = True
96 if multichannel:
97 # do not filter across channels
98 if not isinstance(sigma, coll.Iterable):
99 sigma = [sigma] * (image.ndim - 1)
100 if len(sigma) != image.ndim:
101 sigma = np.concatenate((np.asarray(sigma), [0]))
102 image = img_as_float(image)
103 return ndi.gaussian_filter(image, sigma, mode=mode, cval=cval)
104
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/skimage/filters/_gaussian.py b/skimage/filters/_gaussian.py
--- a/skimage/filters/_gaussian.py
+++ b/skimage/filters/_gaussian.py
@@ -93,6 +93,8 @@
"3D image with last dimension of length 3.")
warnings.warn(RuntimeWarning(msg))
multichannel = True
+ if np.any(np.asarray(sigma) < 0.0):
+ raise ValueError("Sigma values less than zero are not valid")
if multichannel:
# do not filter across channels
if not isinstance(sigma, coll.Iterable):
| {"golden_diff": "diff --git a/skimage/filters/_gaussian.py b/skimage/filters/_gaussian.py\n--- a/skimage/filters/_gaussian.py\n+++ b/skimage/filters/_gaussian.py\n@@ -93,6 +93,8 @@\n \"3D image with last dimension of length 3.\")\n warnings.warn(RuntimeWarning(msg))\n multichannel = True\n+ if np.any(np.asarray(sigma) < 0.0):\n+ raise ValueError(\"Sigma values less than zero are not valid\")\n if multichannel:\n # do not filter across channels\n if not isinstance(sigma, coll.Iterable):\n", "issue": "Document negative `sigma` values in `filters.gaussian_filter` are clipped to zero.\nNegative sigma values have no effect; they are clipped to zero. This should be documented.\n\n", "before_files": [{"content": "import collections as coll\nimport numpy as np\nfrom scipy import ndimage as ndi\nimport warnings\n\nfrom ..util import img_as_float\nfrom ..color import guess_spatial_dimensions\n\n__all__ = ['gaussian_filter']\n\n\ndef gaussian_filter(image, sigma, output=None, mode='nearest', cval=0,\n multichannel=None):\n \"\"\"Multi-dimensional Gaussian filter\n\n Parameters\n ----------\n image : array-like\n input image (grayscale or color) to filter.\n sigma : scalar or sequence of scalars\n standard deviation for Gaussian kernel. The standard\n deviations of the Gaussian filter are given for each axis as a\n sequence, or as a single number, in which case it is equal for\n all axes.\n output : array, optional\n The ``output`` parameter passes an array in which to store the\n filter output.\n mode : {'reflect', 'constant', 'nearest', 'mirror', 'wrap'}, optional\n The `mode` parameter determines how the array borders are\n handled, where `cval` is the value when mode is equal to\n 'constant'. Default is 'nearest'.\n cval : scalar, optional\n Value to fill past edges of input if `mode` is 'constant'. Default\n is 0.0\n multichannel : bool, optional (default: None)\n Whether the last axis of the image is to be interpreted as multiple\n channels. If True, each channel is filtered separately (channels are\n not mixed together). Only 3 channels are supported. If `None`,\n the function will attempt to guess this, and raise a warning if\n ambiguous, when the array has shape (M, N, 3).\n\n Returns\n -------\n filtered_image : ndarray\n the filtered array\n\n Notes\n -----\n This function is a wrapper around :func:`scipy.ndi.gaussian_filter`.\n\n Integer arrays are converted to float.\n\n The multi-dimensional filter is implemented as a sequence of\n one-dimensional convolution filters. The intermediate arrays are\n stored in the same data type as the output. 
Therefore, for output\n types with a limited precision, the results may be imprecise\n because intermediate results may be stored with insufficient\n precision.\n\n Examples\n --------\n\n >>> a = np.zeros((3, 3))\n >>> a[1, 1] = 1\n >>> a\n array([[ 0., 0., 0.],\n [ 0., 1., 0.],\n [ 0., 0., 0.]])\n >>> gaussian_filter(a, sigma=0.4) # mild smoothing\n array([[ 0.00163116, 0.03712502, 0.00163116],\n [ 0.03712502, 0.84496158, 0.03712502],\n [ 0.00163116, 0.03712502, 0.00163116]])\n >>> gaussian_filter(a, sigma=1) # more smooting\n array([[ 0.05855018, 0.09653293, 0.05855018],\n [ 0.09653293, 0.15915589, 0.09653293],\n [ 0.05855018, 0.09653293, 0.05855018]])\n >>> # Several modes are possible for handling boundaries\n >>> gaussian_filter(a, sigma=1, mode='reflect')\n array([[ 0.08767308, 0.12075024, 0.08767308],\n [ 0.12075024, 0.16630671, 0.12075024],\n [ 0.08767308, 0.12075024, 0.08767308]])\n >>> # For RGB images, each is filtered separately\n >>> from skimage.data import astronaut\n >>> image = astronaut()\n >>> filtered_img = gaussian_filter(image, sigma=1, multichannel=True)\n\n \"\"\"\n\n spatial_dims = guess_spatial_dimensions(image)\n if spatial_dims is None and multichannel is None:\n msg = (\"Images with dimensions (M, N, 3) are interpreted as 2D+RGB \"\n \"by default. Use `multichannel=False` to interpret as \"\n \"3D image with last dimension of length 3.\")\n warnings.warn(RuntimeWarning(msg))\n multichannel = True\n if multichannel:\n # do not filter across channels\n if not isinstance(sigma, coll.Iterable):\n sigma = [sigma] * (image.ndim - 1)\n if len(sigma) != image.ndim:\n sigma = np.concatenate((np.asarray(sigma), [0]))\n image = img_as_float(image)\n return ndi.gaussian_filter(image, sigma, mode=mode, cval=cval)\n", "path": "skimage/filters/_gaussian.py"}], "after_files": [{"content": "import collections as coll\nimport numpy as np\nfrom scipy import ndimage as ndi\nimport warnings\n\nfrom ..util import img_as_float\nfrom ..color import guess_spatial_dimensions\n\n__all__ = ['gaussian_filter']\n\n\ndef gaussian_filter(image, sigma, output=None, mode='nearest', cval=0,\n multichannel=None):\n \"\"\"Multi-dimensional Gaussian filter\n\n Parameters\n ----------\n image : array-like\n input image (grayscale or color) to filter.\n sigma : scalar or sequence of scalars\n standard deviation for Gaussian kernel. The standard\n deviations of the Gaussian filter are given for each axis as a\n sequence, or as a single number, in which case it is equal for\n all axes.\n output : array, optional\n The ``output`` parameter passes an array in which to store the\n filter output.\n mode : {'reflect', 'constant', 'nearest', 'mirror', 'wrap'}, optional\n The `mode` parameter determines how the array borders are\n handled, where `cval` is the value when mode is equal to\n 'constant'. Default is 'nearest'.\n cval : scalar, optional\n Value to fill past edges of input if `mode` is 'constant'. Default\n is 0.0\n multichannel : bool, optional (default: None)\n Whether the last axis of the image is to be interpreted as multiple\n channels. If True, each channel is filtered separately (channels are\n not mixed together). Only 3 channels are supported. 
If `None`,\n the function will attempt to guess this, and raise a warning if\n ambiguous, when the array has shape (M, N, 3).\n\n Returns\n -------\n filtered_image : ndarray\n the filtered array\n\n Notes\n -----\n This function is a wrapper around :func:`scipy.ndi.gaussian_filter`.\n\n Integer arrays are converted to float.\n\n The multi-dimensional filter is implemented as a sequence of\n one-dimensional convolution filters. The intermediate arrays are\n stored in the same data type as the output. Therefore, for output\n types with a limited precision, the results may be imprecise\n because intermediate results may be stored with insufficient\n precision.\n\n Examples\n --------\n\n >>> a = np.zeros((3, 3))\n >>> a[1, 1] = 1\n >>> a\n array([[ 0., 0., 0.],\n [ 0., 1., 0.],\n [ 0., 0., 0.]])\n >>> gaussian_filter(a, sigma=0.4) # mild smoothing\n array([[ 0.00163116, 0.03712502, 0.00163116],\n [ 0.03712502, 0.84496158, 0.03712502],\n [ 0.00163116, 0.03712502, 0.00163116]])\n >>> gaussian_filter(a, sigma=1) # more smooting\n array([[ 0.05855018, 0.09653293, 0.05855018],\n [ 0.09653293, 0.15915589, 0.09653293],\n [ 0.05855018, 0.09653293, 0.05855018]])\n >>> # Several modes are possible for handling boundaries\n >>> gaussian_filter(a, sigma=1, mode='reflect')\n array([[ 0.08767308, 0.12075024, 0.08767308],\n [ 0.12075024, 0.16630671, 0.12075024],\n [ 0.08767308, 0.12075024, 0.08767308]])\n >>> # For RGB images, each is filtered separately\n >>> from skimage.data import astronaut\n >>> image = astronaut()\n >>> filtered_img = gaussian_filter(image, sigma=1, multichannel=True)\n\n \"\"\"\n\n spatial_dims = guess_spatial_dimensions(image)\n if spatial_dims is None and multichannel is None:\n msg = (\"Images with dimensions (M, N, 3) are interpreted as 2D+RGB \"\n \"by default. Use `multichannel=False` to interpret as \"\n \"3D image with last dimension of length 3.\")\n warnings.warn(RuntimeWarning(msg))\n multichannel = True\n if np.any(np.asarray(sigma) < 0.0):\n raise ValueError(\"Sigma values less than zero are not valid\")\n if multichannel:\n # do not filter across channels\n if not isinstance(sigma, coll.Iterable):\n sigma = [sigma] * (image.ndim - 1)\n if len(sigma) != image.ndim:\n sigma = np.concatenate((np.asarray(sigma), [0]))\n image = img_as_float(image)\n return ndi.gaussian_filter(image, sigma, mode=mode, cval=cval)\n", "path": "skimage/filters/_gaussian.py"}]} | 1,717 | 140 |
gh_patches_debug_27760 | rasdani/github-patches | git_diff | microsoft__playwright-python-191 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Internal docs are published on the gh pages instead of the official API
We should remove everything that is not relevant to the end user!
Docs: https://microsoft.github.io/playwright-python/sync_api.html
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `playwright/__init__.py`
Content:
```
1 # Copyright (c) Microsoft Corporation.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import playwright.helper as helper
16 from playwright.main import AsyncPlaywrightContextManager, SyncPlaywrightContextManager
17
18 Error = helper.Error
19 TimeoutError = helper.TimeoutError
20
21
22 def async_playwright() -> AsyncPlaywrightContextManager:
23 return AsyncPlaywrightContextManager()
24
25
26 def sync_playwright() -> SyncPlaywrightContextManager:
27 return SyncPlaywrightContextManager()
28
29
30 __all__ = [
31 "async_playwright",
32 "sync_playwright",
33 "Error",
34 "TimeoutError",
35 ]
36
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/playwright/__init__.py b/playwright/__init__.py
--- a/playwright/__init__.py
+++ b/playwright/__init__.py
@@ -11,6 +11,13 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
+"""
+Python package `playwright` is a Python library to automate Chromium,
+Firefox and WebKit with a single API. Playwright is built to enable cross-browser
+web automation that is ever-green, capable, reliable and fast.
+For more information you'll find the documentation for the sync API [here](sync_api.html)
+and for the async API [here](async_api.html).
+"""
import playwright.helper as helper
from playwright.main import AsyncPlaywrightContextManager, SyncPlaywrightContextManager
@@ -33,3 +40,37 @@
"Error",
"TimeoutError",
]
+
+__pdoc__ = {
+ "accessibility": False,
+ "async_base": False,
+ "browser": False,
+ "browser_context": False,
+ "browser_type": False,
+ "cdp_session": False,
+ "chromium_browser_context": False,
+ "connection": False,
+ "console_message": False,
+ "dialog": False,
+ "download": False,
+ "element_handle": False,
+ "event_context_manager": False,
+ "file_chooser": False,
+ "frame": False,
+ "helper": False,
+ "impl_to_api_mapping": False,
+ "input": False,
+ "js_handle": False,
+ "main": False,
+ "network": False,
+ "object_factory": False,
+ "page": False,
+ "path_utils": False,
+ "playwright": False,
+ "selectors": False,
+ "sync_base": False,
+ "transport": False,
+ "wait_helper": False,
+ "async_playwright": False,
+ "sync_playwright": False,
+}
| {"golden_diff": "diff --git a/playwright/__init__.py b/playwright/__init__.py\n--- a/playwright/__init__.py\n+++ b/playwright/__init__.py\n@@ -11,6 +11,13 @@\n # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n # See the License for the specific language governing permissions and\n # limitations under the License.\n+\"\"\"\n+Python package `playwright` is a Python library to automate Chromium,\n+Firefox and WebKit with a single API. Playwright is built to enable cross-browser\n+web automation that is ever-green, capable, reliable and fast.\n+For more information you'll find the documentation for the sync API [here](sync_api.html)\n+and for the async API [here](async_api.html).\n+\"\"\"\n \n import playwright.helper as helper\n from playwright.main import AsyncPlaywrightContextManager, SyncPlaywrightContextManager\n@@ -33,3 +40,37 @@\n \"Error\",\n \"TimeoutError\",\n ]\n+\n+__pdoc__ = {\n+ \"accessibility\": False,\n+ \"async_base\": False,\n+ \"browser\": False,\n+ \"browser_context\": False,\n+ \"browser_type\": False,\n+ \"cdp_session\": False,\n+ \"chromium_browser_context\": False,\n+ \"connection\": False,\n+ \"console_message\": False,\n+ \"dialog\": False,\n+ \"download\": False,\n+ \"element_handle\": False,\n+ \"event_context_manager\": False,\n+ \"file_chooser\": False,\n+ \"frame\": False,\n+ \"helper\": False,\n+ \"impl_to_api_mapping\": False,\n+ \"input\": False,\n+ \"js_handle\": False,\n+ \"main\": False,\n+ \"network\": False,\n+ \"object_factory\": False,\n+ \"page\": False,\n+ \"path_utils\": False,\n+ \"playwright\": False,\n+ \"selectors\": False,\n+ \"sync_base\": False,\n+ \"transport\": False,\n+ \"wait_helper\": False,\n+ \"async_playwright\": False,\n+ \"sync_playwright\": False,\n+}\n", "issue": "Internal docs are published on the gh pages instead of the official API\nWe should remove everything that is not relevant to the end user!\n\nDocs: https://microsoft.github.io/playwright-python/sync_api.html\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport playwright.helper as helper\nfrom playwright.main import AsyncPlaywrightContextManager, SyncPlaywrightContextManager\n\nError = helper.Error\nTimeoutError = helper.TimeoutError\n\n\ndef async_playwright() -> AsyncPlaywrightContextManager:\n return AsyncPlaywrightContextManager()\n\n\ndef sync_playwright() -> SyncPlaywrightContextManager:\n return SyncPlaywrightContextManager()\n\n\n__all__ = [\n \"async_playwright\",\n \"sync_playwright\",\n \"Error\",\n \"TimeoutError\",\n]\n", "path": "playwright/__init__.py"}], "after_files": [{"content": "# Copyright (c) Microsoft Corporation.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the 
License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"\nPython package `playwright` is a Python library to automate Chromium,\nFirefox and WebKit with a single API. Playwright is built to enable cross-browser\nweb automation that is ever-green, capable, reliable and fast.\nFor more information you'll find the documentation for the sync API [here](sync_api.html)\nand for the async API [here](async_api.html).\n\"\"\"\n\nimport playwright.helper as helper\nfrom playwright.main import AsyncPlaywrightContextManager, SyncPlaywrightContextManager\n\nError = helper.Error\nTimeoutError = helper.TimeoutError\n\n\ndef async_playwright() -> AsyncPlaywrightContextManager:\n return AsyncPlaywrightContextManager()\n\n\ndef sync_playwright() -> SyncPlaywrightContextManager:\n return SyncPlaywrightContextManager()\n\n\n__all__ = [\n \"async_playwright\",\n \"sync_playwright\",\n \"Error\",\n \"TimeoutError\",\n]\n\n__pdoc__ = {\n \"accessibility\": False,\n \"async_base\": False,\n \"browser\": False,\n \"browser_context\": False,\n \"browser_type\": False,\n \"cdp_session\": False,\n \"chromium_browser_context\": False,\n \"connection\": False,\n \"console_message\": False,\n \"dialog\": False,\n \"download\": False,\n \"element_handle\": False,\n \"event_context_manager\": False,\n \"file_chooser\": False,\n \"frame\": False,\n \"helper\": False,\n \"impl_to_api_mapping\": False,\n \"input\": False,\n \"js_handle\": False,\n \"main\": False,\n \"network\": False,\n \"object_factory\": False,\n \"page\": False,\n \"path_utils\": False,\n \"playwright\": False,\n \"selectors\": False,\n \"sync_base\": False,\n \"transport\": False,\n \"wait_helper\": False,\n \"async_playwright\": False,\n \"sync_playwright\": False,\n}\n", "path": "playwright/__init__.py"}]} | 602 | 467 |
gh_patches_debug_6610 | rasdani/github-patches | git_diff | alltheplaces__alltheplaces-3338 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Spider scooters_coffee is broken
During the global build at 2021-06-23-14-42-18, spider **scooters_coffee** failed with **324 features** and **1 errors**.
Here's [the log](https://data.alltheplaces.xyz/runs/2021-06-23-14-42-18/logs/scooters_coffee.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-06-23-14-42-18/output/scooters_coffee.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-06-23-14-42-18/output/scooters_coffee.geojson))
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `locations/spiders/scooters_coffee.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 import json
3 import re
4
5 import scrapy
6
7 from locations.items import GeojsonPointItem
8 from locations.hours import OpeningHours
9
10 DAY_MAPPING = {'Mon': 'Mo', 'Tue': 'Tu',
11 'Wed': 'We', 'Thu': 'Th',
12 'Fri': 'Fr', 'Sat': 'Sa',
13 'Sun': 'Su'}
14
15
16 class ScootersCoffeeSpider(scrapy.Spider):
17 name = "scooters_coffee"
18 item_attributes = {'brand': "Scooter's Coffee"}
19 allowed_domains = ['code.metalocator.com']
20 download_delay = 0.5
21
22 def start_requests(self):
23 n = 327
24 for store_id in range(1, n+1):
25 url = f'https://code.metalocator.com/index.php?option=com_locator&view=location&tmpl=component&task=load&framed=1&sample_data=undefined&format=json&Itemid=12991&templ[]=item_address_template&lang=&_opt_out=&_urlparams=&distance=NaN&id={store_id}'
26
27 yield scrapy.Request(url=url, callback=self.parse)
28
29 def parse_hours(self, hours):
30 opening_hours = OpeningHours()
31
32 weekdays = re.findall(r'{(.*?)}', hours)
33 for weekday in weekdays:
34 day, open_close = weekday.split('|')
35 if open_close == 'C':
36 continue
37 else:
38 open_time, close_time = open_close.split('-')
39 opening_hours.add_range(day=DAY_MAPPING[day], open_time=open_time, close_time=close_time, time_format='%I:%M%p')
40
41 return opening_hours.as_opening_hours()
42
43 def parse(self, response):
44 store_data = json.loads(response.text)[0]
45 name = store_data['name']
46 if '*permanently closed' in name.lower():
47 pass
48 else: # Gather the store details
49
50 properties = {
51 'ref': store_data['id'],
52 'name': store_data['name'].strip(' *COMING SOON'),
53 'addr_full': store_data['address'],
54 'city': store_data['city'],
55 'state': store_data['state'],
56 'postcode': store_data['postalcode'],
57 'country': store_data['country'],
58 'lat': store_data['lat'],
59 'lon': store_data['lng'],
60 'phone': store_data['phone'],
61 'website': response.url
62 }
63
64 hours = store_data.get('hours', '')
65 if hours and hours != '{Sun|C}{Mon|C}{Tue|C}{Wed|C}{Thu|C}{Fri|C}{Sat|C}':
66 store_hours = self.parse_hours(hours)
67 properties["opening_hours"] = store_hours
68
69 yield GeojsonPointItem(**properties)
70
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/locations/spiders/scooters_coffee.py b/locations/spiders/scooters_coffee.py
--- a/locations/spiders/scooters_coffee.py
+++ b/locations/spiders/scooters_coffee.py
@@ -35,6 +35,7 @@
if open_close == 'C':
continue
else:
+ open_close = open_close.replace(' ', '')
open_time, close_time = open_close.split('-')
opening_hours.add_range(day=DAY_MAPPING[day], open_time=open_time, close_time=close_time, time_format='%I:%M%p')
| {"golden_diff": "diff --git a/locations/spiders/scooters_coffee.py b/locations/spiders/scooters_coffee.py\n--- a/locations/spiders/scooters_coffee.py\n+++ b/locations/spiders/scooters_coffee.py\n@@ -35,6 +35,7 @@\n if open_close == 'C':\n continue\n else:\n+ open_close = open_close.replace(' ', '')\n open_time, close_time = open_close.split('-')\n opening_hours.add_range(day=DAY_MAPPING[day], open_time=open_time, close_time=close_time, time_format='%I:%M%p')\n", "issue": "Spider scooters_coffee is broken\nDuring the global build at 2021-06-23-14-42-18, spider **scooters_coffee** failed with **324 features** and **1 errors**.\n\nHere's [the log](https://data.alltheplaces.xyz/runs/2021-06-23-14-42-18/logs/scooters_coffee.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-06-23-14-42-18/output/scooters_coffee.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-06-23-14-42-18/output/scooters_coffee.geojson))\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nimport json\nimport re\n\nimport scrapy\n\nfrom locations.items import GeojsonPointItem\nfrom locations.hours import OpeningHours\n\nDAY_MAPPING = {'Mon': 'Mo', 'Tue': 'Tu',\n 'Wed': 'We', 'Thu': 'Th',\n 'Fri': 'Fr', 'Sat': 'Sa',\n 'Sun': 'Su'}\n\n\nclass ScootersCoffeeSpider(scrapy.Spider):\n name = \"scooters_coffee\"\n item_attributes = {'brand': \"Scooter's Coffee\"}\n allowed_domains = ['code.metalocator.com']\n download_delay = 0.5\n\n def start_requests(self):\n n = 327\n for store_id in range(1, n+1):\n url = f'https://code.metalocator.com/index.php?option=com_locator&view=location&tmpl=component&task=load&framed=1&sample_data=undefined&format=json&Itemid=12991&templ[]=item_address_template&lang=&_opt_out=&_urlparams=&distance=NaN&id={store_id}'\n \n yield scrapy.Request(url=url, callback=self.parse)\n\n def parse_hours(self, hours):\n opening_hours = OpeningHours()\n\n weekdays = re.findall(r'{(.*?)}', hours)\n for weekday in weekdays:\n day, open_close = weekday.split('|')\n if open_close == 'C':\n continue\n else:\n open_time, close_time = open_close.split('-')\n opening_hours.add_range(day=DAY_MAPPING[day], open_time=open_time, close_time=close_time, time_format='%I:%M%p')\n\n return opening_hours.as_opening_hours()\n\n def parse(self, response):\n store_data = json.loads(response.text)[0]\n name = store_data['name']\n if '*permanently closed' in name.lower():\n pass\n else: # Gather the store details\n\n properties = {\n 'ref': store_data['id'],\n 'name': store_data['name'].strip(' *COMING SOON'),\n 'addr_full': store_data['address'],\n 'city': store_data['city'],\n 'state': store_data['state'],\n 'postcode': store_data['postalcode'],\n 'country': store_data['country'],\n 'lat': store_data['lat'],\n 'lon': store_data['lng'],\n 'phone': store_data['phone'],\n 'website': response.url\n }\n\n hours = store_data.get('hours', '')\n if hours and hours != '{Sun|C}{Mon|C}{Tue|C}{Wed|C}{Thu|C}{Fri|C}{Sat|C}':\n store_hours = self.parse_hours(hours)\n properties[\"opening_hours\"] = store_hours\n\n yield GeojsonPointItem(**properties)\n", "path": "locations/spiders/scooters_coffee.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\nimport json\nimport re\n\nimport scrapy\n\nfrom locations.items import GeojsonPointItem\nfrom locations.hours import OpeningHours\n\nDAY_MAPPING = {'Mon': 'Mo', 'Tue': 'Tu',\n 'Wed': 'We', 'Thu': 'Th',\n 'Fri': 'Fr', 'Sat': 'Sa',\n 'Sun': 'Su'}\n\n\nclass ScootersCoffeeSpider(scrapy.Spider):\n name = \"scooters_coffee\"\n 
item_attributes = {'brand': \"Scooter's Coffee\"}\n allowed_domains = ['code.metalocator.com']\n download_delay = 0.5\n\n def start_requests(self):\n n = 327\n for store_id in range(1, n+1):\n url = f'https://code.metalocator.com/index.php?option=com_locator&view=location&tmpl=component&task=load&framed=1&sample_data=undefined&format=json&Itemid=12991&templ[]=item_address_template&lang=&_opt_out=&_urlparams=&distance=NaN&id={store_id}'\n \n yield scrapy.Request(url=url, callback=self.parse)\n\n def parse_hours(self, hours):\n opening_hours = OpeningHours()\n\n weekdays = re.findall(r'{(.*?)}', hours)\n for weekday in weekdays:\n day, open_close = weekday.split('|')\n if open_close == 'C':\n continue\n else:\n open_close = open_close.replace(' ', '')\n open_time, close_time = open_close.split('-')\n opening_hours.add_range(day=DAY_MAPPING[day], open_time=open_time, close_time=close_time, time_format='%I:%M%p')\n\n return opening_hours.as_opening_hours()\n\n def parse(self, response):\n store_data = json.loads(response.text)[0]\n name = store_data['name']\n if '*permanently closed' in name.lower():\n pass\n else: # Gather the store details\n\n properties = {\n 'ref': store_data['id'],\n 'name': store_data['name'].strip(' *COMING SOON'),\n 'addr_full': store_data['address'],\n 'city': store_data['city'],\n 'state': store_data['state'],\n 'postcode': store_data['postalcode'],\n 'country': store_data['country'],\n 'lat': store_data['lat'],\n 'lon': store_data['lng'],\n 'phone': store_data['phone'],\n 'website': response.url\n }\n\n hours = store_data.get('hours', '')\n if hours and hours != '{Sun|C}{Mon|C}{Tue|C}{Wed|C}{Thu|C}{Fri|C}{Sat|C}':\n store_hours = self.parse_hours(hours)\n properties[\"opening_hours\"] = store_hours\n\n yield GeojsonPointItem(**properties)\n", "path": "locations/spiders/scooters_coffee.py"}]} | 1,199 | 132 |
gh_patches_debug_5979 | rasdani/github-patches | git_diff | getnikola__nikola-1994 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
nikola no longer has all-nocdn.*
I started using `nikola` today for a gallery. The generated pages make reference to `all-nocdn.css` and `all-nocdn.js`, but they're not (longer?) part of `nikola`'s code:
``` bash
mdione@diablo:~/src/projects/nikola$ git remote show origin
* remote origin
Fetch URL: https://github.com/getnikola/nikola.git
Push URL: https://github.com/getnikola/nikola.git
HEAD branch: master
mdione@diablo:~/src/projects/nikola$ git branch
* master
mdione@diablo:~/src/projects/nikola$ git pull
Already up-to-date.
mdione@diablo:~/src/projects/nikola$ find . -name all-nocdn.*
mdione@diablo:~/src/projects/nikola$
```
I just copied those files from and old attempt to use `nikola` and the site works. This happens with both versions `7.6.0-3` from Debian and from `master`, as you can see above.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `nikola/plugins/task/bundles.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 # Copyright © 2012-2015 Roberto Alsina and others.
4
5 # Permission is hereby granted, free of charge, to any
6 # person obtaining a copy of this software and associated
7 # documentation files (the "Software"), to deal in the
8 # Software without restriction, including without limitation
9 # the rights to use, copy, modify, merge, publish,
10 # distribute, sublicense, and/or sell copies of the
11 # Software, and to permit persons to whom the Software is
12 # furnished to do so, subject to the following conditions:
13 #
14 # The above copyright notice and this permission notice
15 # shall be included in all copies or substantial portions of
16 # the Software.
17 #
18 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY
19 # KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
20 # WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR
21 # PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS
22 # OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
23 # OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
24 # OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
25 # SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
26
27 """Bundle assets using WebAssets."""
28
29 from __future__ import unicode_literals
30
31 import os
32
33 try:
34 import webassets
35 except ImportError:
36 webassets = None # NOQA
37
38 from nikola.plugin_categories import LateTask
39 from nikola import utils
40
41
42 class BuildBundles(LateTask):
43
44 """Bundle assets using WebAssets."""
45
46 name = "create_bundles"
47
48 def set_site(self, site):
49 """Set Nikola site."""
50 self.logger = utils.get_logger('bundles', utils.STDERR_HANDLER)
51 if webassets is None and site.config['USE_BUNDLES']:
52 utils.req_missing(['webassets'], 'USE_BUNDLES', optional=True)
53 self.logger.warn('Setting USE_BUNDLES to False.')
54 site.config['USE_BUNDLES'] = False
55 super(BuildBundles, self).set_site(site)
56
57 def gen_tasks(self):
58 """Bundle assets using WebAssets."""
59 kw = {
60 'filters': self.site.config['FILTERS'],
61 'output_folder': self.site.config['OUTPUT_FOLDER'],
62 'cache_folder': self.site.config['CACHE_FOLDER'],
63 'theme_bundles': get_theme_bundles(self.site.THEMES),
64 'themes': self.site.THEMES,
65 'files_folders': self.site.config['FILES_FOLDERS'],
66 'code_color_scheme': self.site.config['CODE_COLOR_SCHEME'],
67 }
68
69 def build_bundle(output, inputs):
70 out_dir = os.path.join(kw['output_folder'],
71 os.path.dirname(output))
72 inputs = [os.path.relpath(i, out_dir) for i in inputs if os.path.isfile(i)]
73 cache_dir = os.path.join(kw['cache_folder'], 'webassets')
74 utils.makedirs(cache_dir)
75 env = webassets.Environment(out_dir, os.path.dirname(output),
76 cache=cache_dir)
77 if inputs:
78 bundle = webassets.Bundle(*inputs, output=os.path.basename(output))
79 env.register(output, bundle)
80 # This generates the file
81 try:
82 env[output].urls()
83 except Exception as e:
84 self.logger.error("Failed to build bundles.")
85 self.logger.exception(e)
86 self.logger.notice("Try running ``nikola clean`` and building again.")
87 else:
88 with open(os.path.join(out_dir, os.path.basename(output)), 'wb+'):
89 pass # Create empty file
90
91 yield self.group_task()
92 if (webassets is not None and self.site.config['USE_BUNDLES'] is not
93 False):
94 for name, _files in kw['theme_bundles'].items():
95 output_path = os.path.join(kw['output_folder'], name)
96 dname = os.path.dirname(name)
97 files = []
98 for fname in _files:
99 # paths are relative to dirname
100 files.append(os.path.join(dname, fname))
101 file_dep = [os.path.join(kw['output_folder'], fname)
102 for fname in files if
103 utils.get_asset_path(fname, self.site.THEMES, self.site.config['FILES_FOLDERS']) or fname == os.path.join('assets', 'css', 'code.css')]
104 # code.css will be generated by us if it does not exist in
105 # FILES_FOLDERS or theme assets. It is guaranteed that the
106 # generation will happen before this task.
107 task = {
108 'file_dep': list(file_dep),
109 'task_dep': ['copy_assets', 'copy_files'],
110 'basename': str(self.name),
111 'name': str(output_path),
112 'actions': [(build_bundle, (name, file_dep))],
113 'targets': [output_path],
114 'uptodate': [
115 utils.config_changed({
116 1: kw,
117 2: file_dep
118 }, 'nikola.plugins.task.bundles')],
119 'clean': True,
120 }
121 yield utils.apply_filters(task, kw['filters'])
122
123
124 def get_theme_bundles(themes):
125 """Given a theme chain, return the bundle definitions."""
126 bundles = {}
127 for theme_name in themes:
128 bundles_path = os.path.join(
129 utils.get_theme_path(theme_name), 'bundles')
130 if os.path.isfile(bundles_path):
131 with open(bundles_path) as fd:
132 for line in fd:
133 try:
134 name, files = line.split('=')
135 files = [f.strip() for f in files.split(',')]
136 bundles[name.strip().replace('/', os.sep)] = files
137 except ValueError:
138 # for empty lines
139 pass
140 break
141 return bundles
142
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/nikola/plugins/task/bundles.py b/nikola/plugins/task/bundles.py
--- a/nikola/plugins/task/bundles.py
+++ b/nikola/plugins/task/bundles.py
@@ -52,6 +52,7 @@
utils.req_missing(['webassets'], 'USE_BUNDLES', optional=True)
self.logger.warn('Setting USE_BUNDLES to False.')
site.config['USE_BUNDLES'] = False
+ site._GLOBAL_CONTEXT['use_bundles'] = False
super(BuildBundles, self).set_site(site)
def gen_tasks(self):
| {"golden_diff": "diff --git a/nikola/plugins/task/bundles.py b/nikola/plugins/task/bundles.py\n--- a/nikola/plugins/task/bundles.py\n+++ b/nikola/plugins/task/bundles.py\n@@ -52,6 +52,7 @@\n utils.req_missing(['webassets'], 'USE_BUNDLES', optional=True)\n self.logger.warn('Setting USE_BUNDLES to False.')\n site.config['USE_BUNDLES'] = False\n+ site._GLOBAL_CONTEXT['use_bundles'] = False\n super(BuildBundles, self).set_site(site)\n \n def gen_tasks(self):\n", "issue": "nikola no longer has all-nocdn.*\nI started using `nikola` today for a gallery. The generated pages make reference to `all-nocdn.css` and `all-nocdn.js`, but they're not (longer?) part of `nikola`'s code:\n\n``` bash\nmdione@diablo:~/src/projects/nikola$ git remote show origin\n* remote origin\n Fetch URL: https://github.com/getnikola/nikola.git\n Push URL: https://github.com/getnikola/nikola.git\n HEAD branch: master\nmdione@diablo:~/src/projects/nikola$ git branch\n* master\nmdione@diablo:~/src/projects/nikola$ git pull\nAlready up-to-date.\nmdione@diablo:~/src/projects/nikola$ find . -name all-nocdn.*\nmdione@diablo:~/src/projects/nikola$\n```\n\nI just copied those files from and old attempt to use `nikola` and the site works. This happens with both versions `7.6.0-3` from Debian and from `master`, as you can see above.\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Copyright \u00a9 2012-2015 Roberto Alsina and others.\n\n# Permission is hereby granted, free of charge, to any\n# person obtaining a copy of this software and associated\n# documentation files (the \"Software\"), to deal in the\n# Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the\n# Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice\n# shall be included in all copies or substantial portions of\n# the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY\n# KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE\n# WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR\n# PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS\n# OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR\n# OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR\n# OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\n\"\"\"Bundle assets using WebAssets.\"\"\"\n\nfrom __future__ import unicode_literals\n\nimport os\n\ntry:\n import webassets\nexcept ImportError:\n webassets = None # NOQA\n\nfrom nikola.plugin_categories import LateTask\nfrom nikola import utils\n\n\nclass BuildBundles(LateTask):\n\n \"\"\"Bundle assets using WebAssets.\"\"\"\n\n name = \"create_bundles\"\n\n def set_site(self, site):\n \"\"\"Set Nikola site.\"\"\"\n self.logger = utils.get_logger('bundles', utils.STDERR_HANDLER)\n if webassets is None and site.config['USE_BUNDLES']:\n utils.req_missing(['webassets'], 'USE_BUNDLES', optional=True)\n self.logger.warn('Setting USE_BUNDLES to False.')\n site.config['USE_BUNDLES'] = False\n super(BuildBundles, self).set_site(site)\n\n def gen_tasks(self):\n \"\"\"Bundle assets using WebAssets.\"\"\"\n kw = {\n 'filters': self.site.config['FILTERS'],\n 'output_folder': self.site.config['OUTPUT_FOLDER'],\n 'cache_folder': self.site.config['CACHE_FOLDER'],\n 'theme_bundles': get_theme_bundles(self.site.THEMES),\n 'themes': self.site.THEMES,\n 'files_folders': self.site.config['FILES_FOLDERS'],\n 'code_color_scheme': self.site.config['CODE_COLOR_SCHEME'],\n }\n\n def build_bundle(output, inputs):\n out_dir = os.path.join(kw['output_folder'],\n os.path.dirname(output))\n inputs = [os.path.relpath(i, out_dir) for i in inputs if os.path.isfile(i)]\n cache_dir = os.path.join(kw['cache_folder'], 'webassets')\n utils.makedirs(cache_dir)\n env = webassets.Environment(out_dir, os.path.dirname(output),\n cache=cache_dir)\n if inputs:\n bundle = webassets.Bundle(*inputs, output=os.path.basename(output))\n env.register(output, bundle)\n # This generates the file\n try:\n env[output].urls()\n except Exception as e:\n self.logger.error(\"Failed to build bundles.\")\n self.logger.exception(e)\n self.logger.notice(\"Try running ``nikola clean`` and building again.\")\n else:\n with open(os.path.join(out_dir, os.path.basename(output)), 'wb+'):\n pass # Create empty file\n\n yield self.group_task()\n if (webassets is not None and self.site.config['USE_BUNDLES'] is not\n False):\n for name, _files in kw['theme_bundles'].items():\n output_path = os.path.join(kw['output_folder'], name)\n dname = os.path.dirname(name)\n files = []\n for fname in _files:\n # paths are relative to dirname\n files.append(os.path.join(dname, fname))\n file_dep = [os.path.join(kw['output_folder'], fname)\n for fname in files if\n utils.get_asset_path(fname, self.site.THEMES, self.site.config['FILES_FOLDERS']) or fname == os.path.join('assets', 'css', 'code.css')]\n # code.css will be generated by us if it does not exist in\n # FILES_FOLDERS or theme assets. 
It is guaranteed that the\n # generation will happen before this task.\n task = {\n 'file_dep': list(file_dep),\n 'task_dep': ['copy_assets', 'copy_files'],\n 'basename': str(self.name),\n 'name': str(output_path),\n 'actions': [(build_bundle, (name, file_dep))],\n 'targets': [output_path],\n 'uptodate': [\n utils.config_changed({\n 1: kw,\n 2: file_dep\n }, 'nikola.plugins.task.bundles')],\n 'clean': True,\n }\n yield utils.apply_filters(task, kw['filters'])\n\n\ndef get_theme_bundles(themes):\n \"\"\"Given a theme chain, return the bundle definitions.\"\"\"\n bundles = {}\n for theme_name in themes:\n bundles_path = os.path.join(\n utils.get_theme_path(theme_name), 'bundles')\n if os.path.isfile(bundles_path):\n with open(bundles_path) as fd:\n for line in fd:\n try:\n name, files = line.split('=')\n files = [f.strip() for f in files.split(',')]\n bundles[name.strip().replace('/', os.sep)] = files\n except ValueError:\n # for empty lines\n pass\n break\n return bundles\n", "path": "nikola/plugins/task/bundles.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Copyright \u00a9 2012-2015 Roberto Alsina and others.\n\n# Permission is hereby granted, free of charge, to any\n# person obtaining a copy of this software and associated\n# documentation files (the \"Software\"), to deal in the\n# Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the\n# Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice\n# shall be included in all copies or substantial portions of\n# the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY\n# KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE\n# WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR\n# PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS\n# OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR\n# OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR\n# OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\n\"\"\"Bundle assets using WebAssets.\"\"\"\n\nfrom __future__ import unicode_literals\n\nimport os\n\ntry:\n import webassets\nexcept ImportError:\n webassets = None # NOQA\n\nfrom nikola.plugin_categories import LateTask\nfrom nikola import utils\n\n\nclass BuildBundles(LateTask):\n\n \"\"\"Bundle assets using WebAssets.\"\"\"\n\n name = \"create_bundles\"\n\n def set_site(self, site):\n \"\"\"Set Nikola site.\"\"\"\n self.logger = utils.get_logger('bundles', utils.STDERR_HANDLER)\n if webassets is None and site.config['USE_BUNDLES']:\n utils.req_missing(['webassets'], 'USE_BUNDLES', optional=True)\n self.logger.warn('Setting USE_BUNDLES to False.')\n site.config['USE_BUNDLES'] = False\n site._GLOBAL_CONTEXT['use_bundles'] = False\n super(BuildBundles, self).set_site(site)\n\n def gen_tasks(self):\n \"\"\"Bundle assets using WebAssets.\"\"\"\n kw = {\n 'filters': self.site.config['FILTERS'],\n 'output_folder': self.site.config['OUTPUT_FOLDER'],\n 'cache_folder': self.site.config['CACHE_FOLDER'],\n 'theme_bundles': get_theme_bundles(self.site.THEMES),\n 'themes': self.site.THEMES,\n 'files_folders': self.site.config['FILES_FOLDERS'],\n 'code_color_scheme': self.site.config['CODE_COLOR_SCHEME'],\n }\n\n def build_bundle(output, inputs):\n out_dir = os.path.join(kw['output_folder'],\n os.path.dirname(output))\n inputs = [os.path.relpath(i, out_dir) for i in inputs if os.path.isfile(i)]\n cache_dir = os.path.join(kw['cache_folder'], 'webassets')\n utils.makedirs(cache_dir)\n env = webassets.Environment(out_dir, os.path.dirname(output),\n cache=cache_dir)\n if inputs:\n bundle = webassets.Bundle(*inputs, output=os.path.basename(output))\n env.register(output, bundle)\n # This generates the file\n try:\n env[output].urls()\n except Exception as e:\n self.logger.error(\"Failed to build bundles.\")\n self.logger.exception(e)\n self.logger.notice(\"Try running ``nikola clean`` and building again.\")\n else:\n with open(os.path.join(out_dir, os.path.basename(output)), 'wb+'):\n pass # Create empty file\n\n yield self.group_task()\n if (webassets is not None and self.site.config['USE_BUNDLES'] is not\n False):\n for name, _files in kw['theme_bundles'].items():\n output_path = os.path.join(kw['output_folder'], name)\n dname = os.path.dirname(name)\n files = []\n for fname in _files:\n # paths are relative to dirname\n files.append(os.path.join(dname, fname))\n file_dep = [os.path.join(kw['output_folder'], fname)\n for fname in files if\n utils.get_asset_path(fname, self.site.THEMES, self.site.config['FILES_FOLDERS']) or fname == os.path.join('assets', 'css', 'code.css')]\n # code.css will be generated by us if it does not exist in\n # FILES_FOLDERS or theme assets. 
It is guaranteed that the\n # generation will happen before this task.\n task = {\n 'file_dep': list(file_dep),\n 'task_dep': ['copy_assets', 'copy_files'],\n 'basename': str(self.name),\n 'name': str(output_path),\n 'actions': [(build_bundle, (name, file_dep))],\n 'targets': [output_path],\n 'uptodate': [\n utils.config_changed({\n 1: kw,\n 2: file_dep\n }, 'nikola.plugins.task.bundles')],\n 'clean': True,\n }\n yield utils.apply_filters(task, kw['filters'])\n\n\ndef get_theme_bundles(themes):\n \"\"\"Given a theme chain, return the bundle definitions.\"\"\"\n bundles = {}\n for theme_name in themes:\n bundles_path = os.path.join(\n utils.get_theme_path(theme_name), 'bundles')\n if os.path.isfile(bundles_path):\n with open(bundles_path) as fd:\n for line in fd:\n try:\n name, files = line.split('=')\n files = [f.strip() for f in files.split(',')]\n bundles[name.strip().replace('/', os.sep)] = files\n except ValueError:\n # for empty lines\n pass\n break\n return bundles\n", "path": "nikola/plugins/task/bundles.py"}]} | 2,043 | 131 |
gh_patches_debug_40591 | rasdani/github-patches | git_diff | hpcaitech__ColossalAI-4802 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[tensor] fix some unittests
[tensor] fix some unittests
[tensor] fix some unittests
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `colossalai/inference/tensor_parallel/policies/llama.py`
Content:
```
1 from functools import partial
2
3 import torch
4 from transformers.models.llama.modeling_llama import LlamaAttention, LlamaDecoderLayer, LlamaModel, LlamaRMSNorm
5
6 from colossalai.shardformer.layer import VocabParallelEmbedding1D
7 from colossalai.shardformer.policies.base_policy import ModulePolicyDescription, Policy, SubModuleReplacementDescription
8 # import colossalai
9 from colossalai.shardformer.policies.llama import LlamaForCausalLMPolicy
10
11 from ..modeling.llama import LlamaInferenceForwards, get_llama_vllm_rmsnorm_forward
12
13 try:
14 from colossalai.kernel.triton import rmsnorm_forward
15
16 HAS_TRITON_RMSNORM = True
17 except:
18 print("you should install triton from https://github.com/openai/triton")
19 HAS_TRITON_RMSNORM = False
20
21
22 def get_triton_rmsnorm_forward():
23 if HAS_TRITON_RMSNORM:
24
25 def _triton_rmsnorm_forward(self: LlamaRMSNorm, hidden_states: torch.Tensor):
26 return rmsnorm_forward(hidden_states, self.weight.data, self.variance_epsilon)
27
28 return _triton_rmsnorm_forward
29 else:
30 return None
31
32
33 class LlamaModelInferPolicy(LlamaForCausalLMPolicy):
34 def __init__(self) -> None:
35 super().__init__()
36
37 def module_policy(self):
38 policy = super().module_policy()
39
40 if self.shard_config.inference_gptq:
41 from colossalai.inference.quant.gptq.cai_gptq import ColCaiQuantLinear, RowCaiQuantLinear
42
43 decoder_attribute_replacement = {
44 "self_attn.hidden_size": self.model.config.hidden_size // self.shard_config.tensor_parallel_size,
45 "self_attn.num_heads": self.model.config.num_attention_heads // self.shard_config.tensor_parallel_size,
46 }
47 policy[LlamaDecoderLayer] = ModulePolicyDescription(
48 attribute_replacement=decoder_attribute_replacement,
49 sub_module_replacement=[
50 SubModuleReplacementDescription(
51 suffix="self_attn.q_proj",
52 target_module=ColCaiQuantLinear,
53 kwargs={'split_num': 1},
54 ),
55 SubModuleReplacementDescription(
56 suffix="self_attn.k_proj",
57 target_module=ColCaiQuantLinear,
58 kwargs={'split_num': 1},
59 ),
60 SubModuleReplacementDescription(
61 suffix="self_attn.v_proj",
62 target_module=ColCaiQuantLinear,
63 kwargs={'split_num': 1},
64 ),
65 SubModuleReplacementDescription(
66 suffix="self_attn.o_proj",
67 target_module=RowCaiQuantLinear,
68 kwargs={'split_num': 1},
69 ),
70 SubModuleReplacementDescription(
71 suffix="mlp.gate_proj",
72 target_module=ColCaiQuantLinear,
73 kwargs={'split_num': 1},
74 ),
75 SubModuleReplacementDescription(
76 suffix="mlp.up_proj",
77 target_module=ColCaiQuantLinear,
78 kwargs={'split_num': 1},
79 ),
80 SubModuleReplacementDescription(
81 suffix="mlp.down_proj",
82 target_module=RowCaiQuantLinear,
83 kwargs={'split_num': 1},
84 )
85 ],
86 )
87
88 self.shard_config._infer()
89
90 infer_forward = LlamaInferenceForwards.llama_model_forward
91 method_replacement = {"forward": partial(infer_forward)}
92 self.append_or_create_method_replacement(description=method_replacement, policy=policy, target_key=LlamaModel)
93
94 infer_forward = LlamaInferenceForwards.llama_decoder_layer_forward
95 method_replacement = {"forward": partial(infer_forward)}
96 self.append_or_create_method_replacement(
97 description=method_replacement, policy=policy, target_key=LlamaDecoderLayer
98 )
99
100 infer_forward = LlamaInferenceForwards.llama_flash_attn_kvcache_forward
101 method_replacement = {"forward": partial(infer_forward)}
102 self.append_or_create_method_replacement(
103 description=method_replacement, policy=policy, target_key=LlamaAttention
104 )
105
106 infer_forward = None
107 if HAS_TRITON_RMSNORM:
108 infer_forward = get_triton_rmsnorm_forward()
109 else:
110 # NOTE: adding rms_norm from cuda kernels caused precision issue, fix @tiandiao123
111 infer_forward = get_llama_vllm_rmsnorm_forward()
112
113 if infer_forward is not None:
114 method_replacement = {"forward": partial(infer_forward)}
115 self.append_or_create_method_replacement(
116 description=method_replacement, policy=policy, target_key=LlamaRMSNorm
117 )
118
119 return policy
120
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/colossalai/inference/tensor_parallel/policies/llama.py b/colossalai/inference/tensor_parallel/policies/llama.py
--- a/colossalai/inference/tensor_parallel/policies/llama.py
+++ b/colossalai/inference/tensor_parallel/policies/llama.py
@@ -3,8 +3,8 @@
import torch
from transformers.models.llama.modeling_llama import LlamaAttention, LlamaDecoderLayer, LlamaModel, LlamaRMSNorm
-from colossalai.shardformer.layer import VocabParallelEmbedding1D
-from colossalai.shardformer.policies.base_policy import ModulePolicyDescription, Policy, SubModuleReplacementDescription
+from colossalai.shardformer.policies.base_policy import ModulePolicyDescription, SubModuleReplacementDescription
+
# import colossalai
from colossalai.shardformer.policies.llama import LlamaForCausalLMPolicy
@@ -50,38 +50,38 @@
SubModuleReplacementDescription(
suffix="self_attn.q_proj",
target_module=ColCaiQuantLinear,
- kwargs={'split_num': 1},
+ kwargs={"split_num": 1},
),
SubModuleReplacementDescription(
suffix="self_attn.k_proj",
target_module=ColCaiQuantLinear,
- kwargs={'split_num': 1},
+ kwargs={"split_num": 1},
),
SubModuleReplacementDescription(
suffix="self_attn.v_proj",
target_module=ColCaiQuantLinear,
- kwargs={'split_num': 1},
+ kwargs={"split_num": 1},
),
SubModuleReplacementDescription(
suffix="self_attn.o_proj",
target_module=RowCaiQuantLinear,
- kwargs={'split_num': 1},
+ kwargs={"split_num": 1},
),
SubModuleReplacementDescription(
suffix="mlp.gate_proj",
target_module=ColCaiQuantLinear,
- kwargs={'split_num': 1},
+ kwargs={"split_num": 1},
),
SubModuleReplacementDescription(
suffix="mlp.up_proj",
target_module=ColCaiQuantLinear,
- kwargs={'split_num': 1},
+ kwargs={"split_num": 1},
),
SubModuleReplacementDescription(
suffix="mlp.down_proj",
target_module=RowCaiQuantLinear,
- kwargs={'split_num': 1},
- )
+ kwargs={"split_num": 1},
+ ),
],
)
| {"golden_diff": "diff --git a/colossalai/inference/tensor_parallel/policies/llama.py b/colossalai/inference/tensor_parallel/policies/llama.py\n--- a/colossalai/inference/tensor_parallel/policies/llama.py\n+++ b/colossalai/inference/tensor_parallel/policies/llama.py\n@@ -3,8 +3,8 @@\n import torch\n from transformers.models.llama.modeling_llama import LlamaAttention, LlamaDecoderLayer, LlamaModel, LlamaRMSNorm\n \n-from colossalai.shardformer.layer import VocabParallelEmbedding1D\n-from colossalai.shardformer.policies.base_policy import ModulePolicyDescription, Policy, SubModuleReplacementDescription\n+from colossalai.shardformer.policies.base_policy import ModulePolicyDescription, SubModuleReplacementDescription\n+\n # import colossalai\n from colossalai.shardformer.policies.llama import LlamaForCausalLMPolicy\n \n@@ -50,38 +50,38 @@\n SubModuleReplacementDescription(\n suffix=\"self_attn.q_proj\",\n target_module=ColCaiQuantLinear,\n- kwargs={'split_num': 1},\n+ kwargs={\"split_num\": 1},\n ),\n SubModuleReplacementDescription(\n suffix=\"self_attn.k_proj\",\n target_module=ColCaiQuantLinear,\n- kwargs={'split_num': 1},\n+ kwargs={\"split_num\": 1},\n ),\n SubModuleReplacementDescription(\n suffix=\"self_attn.v_proj\",\n target_module=ColCaiQuantLinear,\n- kwargs={'split_num': 1},\n+ kwargs={\"split_num\": 1},\n ),\n SubModuleReplacementDescription(\n suffix=\"self_attn.o_proj\",\n target_module=RowCaiQuantLinear,\n- kwargs={'split_num': 1},\n+ kwargs={\"split_num\": 1},\n ),\n SubModuleReplacementDescription(\n suffix=\"mlp.gate_proj\",\n target_module=ColCaiQuantLinear,\n- kwargs={'split_num': 1},\n+ kwargs={\"split_num\": 1},\n ),\n SubModuleReplacementDescription(\n suffix=\"mlp.up_proj\",\n target_module=ColCaiQuantLinear,\n- kwargs={'split_num': 1},\n+ kwargs={\"split_num\": 1},\n ),\n SubModuleReplacementDescription(\n suffix=\"mlp.down_proj\",\n target_module=RowCaiQuantLinear,\n- kwargs={'split_num': 1},\n- )\n+ kwargs={\"split_num\": 1},\n+ ),\n ],\n )\n", "issue": "[tensor] fix some unittests\n\n[tensor] fix some unittests\n\n[tensor] fix some unittests\n\n", "before_files": [{"content": "from functools import partial\n\nimport torch\nfrom transformers.models.llama.modeling_llama import LlamaAttention, LlamaDecoderLayer, LlamaModel, LlamaRMSNorm\n\nfrom colossalai.shardformer.layer import VocabParallelEmbedding1D\nfrom colossalai.shardformer.policies.base_policy import ModulePolicyDescription, Policy, SubModuleReplacementDescription\n# import colossalai\nfrom colossalai.shardformer.policies.llama import LlamaForCausalLMPolicy\n\nfrom ..modeling.llama import LlamaInferenceForwards, get_llama_vllm_rmsnorm_forward\n\ntry:\n from colossalai.kernel.triton import rmsnorm_forward\n\n HAS_TRITON_RMSNORM = True\nexcept:\n print(\"you should install triton from https://github.com/openai/triton\")\n HAS_TRITON_RMSNORM = False\n\n\ndef get_triton_rmsnorm_forward():\n if HAS_TRITON_RMSNORM:\n\n def _triton_rmsnorm_forward(self: LlamaRMSNorm, hidden_states: torch.Tensor):\n return rmsnorm_forward(hidden_states, self.weight.data, self.variance_epsilon)\n\n return _triton_rmsnorm_forward\n else:\n return None\n\n\nclass LlamaModelInferPolicy(LlamaForCausalLMPolicy):\n def __init__(self) -> None:\n super().__init__()\n\n def module_policy(self):\n policy = super().module_policy()\n\n if self.shard_config.inference_gptq:\n from colossalai.inference.quant.gptq.cai_gptq import ColCaiQuantLinear, RowCaiQuantLinear\n\n decoder_attribute_replacement = {\n \"self_attn.hidden_size\": 
self.model.config.hidden_size // self.shard_config.tensor_parallel_size,\n \"self_attn.num_heads\": self.model.config.num_attention_heads // self.shard_config.tensor_parallel_size,\n }\n policy[LlamaDecoderLayer] = ModulePolicyDescription(\n attribute_replacement=decoder_attribute_replacement,\n sub_module_replacement=[\n SubModuleReplacementDescription(\n suffix=\"self_attn.q_proj\",\n target_module=ColCaiQuantLinear,\n kwargs={'split_num': 1},\n ),\n SubModuleReplacementDescription(\n suffix=\"self_attn.k_proj\",\n target_module=ColCaiQuantLinear,\n kwargs={'split_num': 1},\n ),\n SubModuleReplacementDescription(\n suffix=\"self_attn.v_proj\",\n target_module=ColCaiQuantLinear,\n kwargs={'split_num': 1},\n ),\n SubModuleReplacementDescription(\n suffix=\"self_attn.o_proj\",\n target_module=RowCaiQuantLinear,\n kwargs={'split_num': 1},\n ),\n SubModuleReplacementDescription(\n suffix=\"mlp.gate_proj\",\n target_module=ColCaiQuantLinear,\n kwargs={'split_num': 1},\n ),\n SubModuleReplacementDescription(\n suffix=\"mlp.up_proj\",\n target_module=ColCaiQuantLinear,\n kwargs={'split_num': 1},\n ),\n SubModuleReplacementDescription(\n suffix=\"mlp.down_proj\",\n target_module=RowCaiQuantLinear,\n kwargs={'split_num': 1},\n )\n ],\n )\n\n self.shard_config._infer()\n\n infer_forward = LlamaInferenceForwards.llama_model_forward\n method_replacement = {\"forward\": partial(infer_forward)}\n self.append_or_create_method_replacement(description=method_replacement, policy=policy, target_key=LlamaModel)\n\n infer_forward = LlamaInferenceForwards.llama_decoder_layer_forward\n method_replacement = {\"forward\": partial(infer_forward)}\n self.append_or_create_method_replacement(\n description=method_replacement, policy=policy, target_key=LlamaDecoderLayer\n )\n\n infer_forward = LlamaInferenceForwards.llama_flash_attn_kvcache_forward\n method_replacement = {\"forward\": partial(infer_forward)}\n self.append_or_create_method_replacement(\n description=method_replacement, policy=policy, target_key=LlamaAttention\n )\n\n infer_forward = None\n if HAS_TRITON_RMSNORM:\n infer_forward = get_triton_rmsnorm_forward()\n else:\n # NOTE: adding rms_norm from cuda kernels caused precision issue, fix @tiandiao123\n infer_forward = get_llama_vllm_rmsnorm_forward()\n\n if infer_forward is not None:\n method_replacement = {\"forward\": partial(infer_forward)}\n self.append_or_create_method_replacement(\n description=method_replacement, policy=policy, target_key=LlamaRMSNorm\n )\n\n return policy\n", "path": "colossalai/inference/tensor_parallel/policies/llama.py"}], "after_files": [{"content": "from functools import partial\n\nimport torch\nfrom transformers.models.llama.modeling_llama import LlamaAttention, LlamaDecoderLayer, LlamaModel, LlamaRMSNorm\n\nfrom colossalai.shardformer.policies.base_policy import ModulePolicyDescription, SubModuleReplacementDescription\n\n# import colossalai\nfrom colossalai.shardformer.policies.llama import LlamaForCausalLMPolicy\n\nfrom ..modeling.llama import LlamaInferenceForwards, get_llama_vllm_rmsnorm_forward\n\ntry:\n from colossalai.kernel.triton import rmsnorm_forward\n\n HAS_TRITON_RMSNORM = True\nexcept:\n print(\"you should install triton from https://github.com/openai/triton\")\n HAS_TRITON_RMSNORM = False\n\n\ndef get_triton_rmsnorm_forward():\n if HAS_TRITON_RMSNORM:\n\n def _triton_rmsnorm_forward(self: LlamaRMSNorm, hidden_states: torch.Tensor):\n return rmsnorm_forward(hidden_states, self.weight.data, self.variance_epsilon)\n\n return _triton_rmsnorm_forward\n else:\n 
return None\n\n\nclass LlamaModelInferPolicy(LlamaForCausalLMPolicy):\n def __init__(self) -> None:\n super().__init__()\n\n def module_policy(self):\n policy = super().module_policy()\n\n if self.shard_config.inference_gptq:\n from colossalai.inference.quant.gptq.cai_gptq import ColCaiQuantLinear, RowCaiQuantLinear\n\n decoder_attribute_replacement = {\n \"self_attn.hidden_size\": self.model.config.hidden_size // self.shard_config.tensor_parallel_size,\n \"self_attn.num_heads\": self.model.config.num_attention_heads // self.shard_config.tensor_parallel_size,\n }\n policy[LlamaDecoderLayer] = ModulePolicyDescription(\n attribute_replacement=decoder_attribute_replacement,\n sub_module_replacement=[\n SubModuleReplacementDescription(\n suffix=\"self_attn.q_proj\",\n target_module=ColCaiQuantLinear,\n kwargs={\"split_num\": 1},\n ),\n SubModuleReplacementDescription(\n suffix=\"self_attn.k_proj\",\n target_module=ColCaiQuantLinear,\n kwargs={\"split_num\": 1},\n ),\n SubModuleReplacementDescription(\n suffix=\"self_attn.v_proj\",\n target_module=ColCaiQuantLinear,\n kwargs={\"split_num\": 1},\n ),\n SubModuleReplacementDescription(\n suffix=\"self_attn.o_proj\",\n target_module=RowCaiQuantLinear,\n kwargs={\"split_num\": 1},\n ),\n SubModuleReplacementDescription(\n suffix=\"mlp.gate_proj\",\n target_module=ColCaiQuantLinear,\n kwargs={\"split_num\": 1},\n ),\n SubModuleReplacementDescription(\n suffix=\"mlp.up_proj\",\n target_module=ColCaiQuantLinear,\n kwargs={\"split_num\": 1},\n ),\n SubModuleReplacementDescription(\n suffix=\"mlp.down_proj\",\n target_module=RowCaiQuantLinear,\n kwargs={\"split_num\": 1},\n ),\n ],\n )\n\n self.shard_config._infer()\n\n infer_forward = LlamaInferenceForwards.llama_model_forward\n method_replacement = {\"forward\": partial(infer_forward)}\n self.append_or_create_method_replacement(description=method_replacement, policy=policy, target_key=LlamaModel)\n\n infer_forward = LlamaInferenceForwards.llama_decoder_layer_forward\n method_replacement = {\"forward\": partial(infer_forward)}\n self.append_or_create_method_replacement(\n description=method_replacement, policy=policy, target_key=LlamaDecoderLayer\n )\n\n infer_forward = LlamaInferenceForwards.llama_flash_attn_kvcache_forward\n method_replacement = {\"forward\": partial(infer_forward)}\n self.append_or_create_method_replacement(\n description=method_replacement, policy=policy, target_key=LlamaAttention\n )\n\n infer_forward = None\n if HAS_TRITON_RMSNORM:\n infer_forward = get_triton_rmsnorm_forward()\n else:\n # NOTE: adding rms_norm from cuda kernels caused precision issue, fix @tiandiao123\n infer_forward = get_llama_vllm_rmsnorm_forward()\n\n if infer_forward is not None:\n method_replacement = {\"forward\": partial(infer_forward)}\n self.append_or_create_method_replacement(\n description=method_replacement, policy=policy, target_key=LlamaRMSNorm\n )\n\n return policy\n", "path": "colossalai/inference/tensor_parallel/policies/llama.py"}]} | 1,546 | 548 |
gh_patches_debug_47928 | rasdani/github-patches | git_diff | uccser__cs-unplugged-862 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Only prepend www for production website
It should not be used for the development website.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `csunplugged/config/settings/production.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 """
3 Django settings for production environment.
4
5 - Load secret values from environment variables.
6 - Set static URL to Google Cloud Storage Bucket.
7 """
8
9 from .base import * # noqa: F403
10
11
12 # SECRET CONFIGURATION
13 # ------------------------------------------------------------------------------
14 # See: https://docs.djangoproject.com/en/dev/ref/settings/#secret-key
15 # Raises ImproperlyConfigured exception if DJANGO_SECRET_KEY not in os.environ
16 SECRET_KEY = env("DJANGO_SECRET_KEY") # noqa: F405
17
18 # SECURITY WARNING: App Engine"s security features ensure that it is safe to
19 # have ALLOWED_HOSTS = ["*"] when the app is deployed. If you deploy a Django
20 # app not on App Engine, make sure to set an appropriate host here.
21 # See https://docs.djangoproject.com/en/1.10/ref/settings/
22 ALLOWED_HOSTS = ["*"]
23
24 # URL Configuration
25 # ------------------------------------------------------------------------------
26 PREPEND_WWW = True
27
28 # DATABASE CONFIGURATION
29 # ----------------------------------------------------------------------------
30 # See: https://docs.djangoproject.com/en/dev/ref/settings/#databases
31 DATABASES = {
32 "default": {
33 "ENGINE": "django.db.backends.postgresql",
34 "NAME": "csunplugged",
35 "USER": env("GOOGLE_CLOUD_SQL_DATABASE_USERNAME"), # noqa: F405
36 "PASSWORD": env("GOOGLE_CLOUD_SQL_DATABASE_PASSWORD"), # noqa: F405
37 "HOST": "/cloudsql/" + env("GOOGLE_CLOUD_SQL_CONNECTION_NAME"), # noqa: F405
38 }
39 }
40 DATABASES["default"]["ATOMIC_REQUESTS"] = True
41
42 # Static files
43 STATIC_URL = "https://storage.googleapis.com/" + env("GOOGLE_CLOUD_STORAGE_BUCKET_NAME") + "/static/" # noqa: F405
44
45 # SECURITY CONFIGURATION
46 # ------------------------------------------------------------------------------
47 # See https://docs.djangoproject.com/en/dev/ref/middleware/#module-django.middleware.security
48 # and https://docs.djangoproject.com/en/dev/howto/deployment/checklist/#run-manage-py-check-deploy
49
50 # set this to 60 seconds and then to 518400 when you can prove it works
51 SECURE_HSTS_SECONDS = 60
52 SECURE_PROXY_SSL_HEADER = ("HTTP_X_FORWARDED_PROTO", "https")
53 SECURE_HSTS_INCLUDE_SUBDOMAINS = env.bool("DJANGO_SECURE_HSTS_INCLUDE_SUBDOMAINS", default=True) # noqa: F405
54 SECURE_CONTENT_TYPE_NOSNIFF = env.bool("DJANGO_SECURE_CONTENT_TYPE_NOSNIFF", default=True) # noqa: F405
55 SECURE_BROWSER_XSS_FILTER = True
56 SESSION_COOKIE_SECURE = True
57 SESSION_COOKIE_HTTPONLY = True
58 SECURE_SSL_REDIRECT = env.bool("DJANGO_SECURE_SSL_REDIRECT", default=True) # noqa: F405
59 CSRF_COOKIE_SECURE = True
60 CSRF_COOKIE_HTTPONLY = True
61 X_FRAME_OPTIONS = "DENY"
62
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/csunplugged/config/settings/production.py b/csunplugged/config/settings/production.py
--- a/csunplugged/config/settings/production.py
+++ b/csunplugged/config/settings/production.py
@@ -23,7 +23,10 @@
# URL Configuration
# ------------------------------------------------------------------------------
-PREPEND_WWW = True
+if env("DEPLOYMENT", default=None) == "prod": # noqa: F405
+ PREPEND_WWW = True
+else:
+ PREPEND_WWW = False
# DATABASE CONFIGURATION
# ----------------------------------------------------------------------------
| {"golden_diff": "diff --git a/csunplugged/config/settings/production.py b/csunplugged/config/settings/production.py\n--- a/csunplugged/config/settings/production.py\n+++ b/csunplugged/config/settings/production.py\n@@ -23,7 +23,10 @@\n \n # URL Configuration\n # ------------------------------------------------------------------------------\n-PREPEND_WWW = True\n+if env(\"DEPLOYMENT\", default=None) == \"prod\": # noqa: F405\n+ PREPEND_WWW = True\n+else:\n+ PREPEND_WWW = False\n \n # DATABASE CONFIGURATION\n # ----------------------------------------------------------------------------\n", "issue": "Only prepend www for production website\nIt should not be used for development website.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\nDjango settings for production environment.\n\n- Load secret values from environment variables.\n- Set static URL to Google Cloud Storage Bucket.\n\"\"\"\n\nfrom .base import * # noqa: F403\n\n\n# SECRET CONFIGURATION\n# ------------------------------------------------------------------------------\n# See: https://docs.djangoproject.com/en/dev/ref/settings/#secret-key\n# Raises ImproperlyConfigured exception if DJANGO_SECRET_KEY not in os.environ\nSECRET_KEY = env(\"DJANGO_SECRET_KEY\") # noqa: F405\n\n# SECURITY WARNING: App Engine\"s security features ensure that it is safe to\n# have ALLOWED_HOSTS = [\"*\"] when the app is deployed. If you deploy a Django\n# app not on App Engine, make sure to set an appropriate host here.\n# See https://docs.djangoproject.com/en/1.10/ref/settings/\nALLOWED_HOSTS = [\"*\"]\n\n# URL Configuration\n# ------------------------------------------------------------------------------\nPREPEND_WWW = True\n\n# DATABASE CONFIGURATION\n# ----------------------------------------------------------------------------\n# See: https://docs.djangoproject.com/en/dev/ref/settings/#databases\nDATABASES = {\n \"default\": {\n \"ENGINE\": \"django.db.backends.postgresql\",\n \"NAME\": \"csunplugged\",\n \"USER\": env(\"GOOGLE_CLOUD_SQL_DATABASE_USERNAME\"), # noqa: F405\n \"PASSWORD\": env(\"GOOGLE_CLOUD_SQL_DATABASE_PASSWORD\"), # noqa: F405\n \"HOST\": \"/cloudsql/\" + env(\"GOOGLE_CLOUD_SQL_CONNECTION_NAME\"), # noqa: F405\n }\n}\nDATABASES[\"default\"][\"ATOMIC_REQUESTS\"] = True\n\n# Static files\nSTATIC_URL = \"https://storage.googleapis.com/\" + env(\"GOOGLE_CLOUD_STORAGE_BUCKET_NAME\") + \"/static/\" # noqa: F405\n\n# SECURITY CONFIGURATION\n# ------------------------------------------------------------------------------\n# See https://docs.djangoproject.com/en/dev/ref/middleware/#module-django.middleware.security\n# and https://docs.djangoproject.com/en/dev/howto/deployment/checklist/#run-manage-py-check-deploy\n\n# set this to 60 seconds and then to 518400 when you can prove it works\nSECURE_HSTS_SECONDS = 60\nSECURE_PROXY_SSL_HEADER = (\"HTTP_X_FORWARDED_PROTO\", \"https\")\nSECURE_HSTS_INCLUDE_SUBDOMAINS = env.bool(\"DJANGO_SECURE_HSTS_INCLUDE_SUBDOMAINS\", default=True) # noqa: F405\nSECURE_CONTENT_TYPE_NOSNIFF = env.bool(\"DJANGO_SECURE_CONTENT_TYPE_NOSNIFF\", default=True) # noqa: F405\nSECURE_BROWSER_XSS_FILTER = True\nSESSION_COOKIE_SECURE = True\nSESSION_COOKIE_HTTPONLY = True\nSECURE_SSL_REDIRECT = env.bool(\"DJANGO_SECURE_SSL_REDIRECT\", default=True) # noqa: F405\nCSRF_COOKIE_SECURE = True\nCSRF_COOKIE_HTTPONLY = True\nX_FRAME_OPTIONS = \"DENY\"\n", "path": "csunplugged/config/settings/production.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\nDjango settings for production 
environment.\n\n- Load secret values from environment variables.\n- Set static URL to Google Cloud Storage Bucket.\n\"\"\"\n\nfrom .base import * # noqa: F403\n\n\n# SECRET CONFIGURATION\n# ------------------------------------------------------------------------------\n# See: https://docs.djangoproject.com/en/dev/ref/settings/#secret-key\n# Raises ImproperlyConfigured exception if DJANGO_SECRET_KEY not in os.environ\nSECRET_KEY = env(\"DJANGO_SECRET_KEY\") # noqa: F405\n\n# SECURITY WARNING: App Engine\"s security features ensure that it is safe to\n# have ALLOWED_HOSTS = [\"*\"] when the app is deployed. If you deploy a Django\n# app not on App Engine, make sure to set an appropriate host here.\n# See https://docs.djangoproject.com/en/1.10/ref/settings/\nALLOWED_HOSTS = [\"*\"]\n\n# URL Configuration\n# ------------------------------------------------------------------------------\nif env(\"DEPLOYMENT\", default=None) == \"prod\": # noqa: F405\n PREPEND_WWW = True\nelse:\n PREPEND_WWW = False\n\n# DATABASE CONFIGURATION\n# ----------------------------------------------------------------------------\n# See: https://docs.djangoproject.com/en/dev/ref/settings/#databases\nDATABASES = {\n \"default\": {\n \"ENGINE\": \"django.db.backends.postgresql\",\n \"NAME\": \"csunplugged\",\n \"USER\": env(\"GOOGLE_CLOUD_SQL_DATABASE_USERNAME\"), # noqa: F405\n \"PASSWORD\": env(\"GOOGLE_CLOUD_SQL_DATABASE_PASSWORD\"), # noqa: F405\n \"HOST\": \"/cloudsql/\" + env(\"GOOGLE_CLOUD_SQL_CONNECTION_NAME\"), # noqa: F405\n }\n}\nDATABASES[\"default\"][\"ATOMIC_REQUESTS\"] = True\n\n# Static files\nSTATIC_URL = \"https://storage.googleapis.com/\" + env(\"GOOGLE_CLOUD_STORAGE_BUCKET_NAME\") + \"/static/\" # noqa: F405\n\n# SECURITY CONFIGURATION\n# ------------------------------------------------------------------------------\n# See https://docs.djangoproject.com/en/dev/ref/middleware/#module-django.middleware.security\n# and https://docs.djangoproject.com/en/dev/howto/deployment/checklist/#run-manage-py-check-deploy\n\n# set this to 60 seconds and then to 518400 when you can prove it works\nSECURE_HSTS_SECONDS = 60\nSECURE_PROXY_SSL_HEADER = (\"HTTP_X_FORWARDED_PROTO\", \"https\")\nSECURE_HSTS_INCLUDE_SUBDOMAINS = env.bool(\"DJANGO_SECURE_HSTS_INCLUDE_SUBDOMAINS\", default=True) # noqa: F405\nSECURE_CONTENT_TYPE_NOSNIFF = env.bool(\"DJANGO_SECURE_CONTENT_TYPE_NOSNIFF\", default=True) # noqa: F405\nSECURE_BROWSER_XSS_FILTER = True\nSESSION_COOKIE_SECURE = True\nSESSION_COOKIE_HTTPONLY = True\nSECURE_SSL_REDIRECT = env.bool(\"DJANGO_SECURE_SSL_REDIRECT\", default=True) # noqa: F405\nCSRF_COOKIE_SECURE = True\nCSRF_COOKIE_HTTPONLY = True\nX_FRAME_OPTIONS = \"DENY\"\n", "path": "csunplugged/config/settings/production.py"}]} | 1,026 | 129 |
gh_patches_debug_6831 | rasdani/github-patches | git_diff | wagtail__wagtail-6104 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Typo in wagtail/core/middleware.py warning
The second line of the warning in `wagtail/core/middleware.py` should read:
` 'Please update your code to use Site.find_for_request(request) in place of request.site, '`
instead of:
` 'Please update your code to use Site.get_for_request(request) in place of request.site, '`
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `wagtail/core/middleware.py`
Content:
```
1 import warnings
2
3 from django.utils.deprecation import MiddlewareMixin
4 from wagtail.core.models import Site
5 from wagtail.utils.deprecation import RemovedInWagtail211Warning
6
7
8 warnings.warn(
9 'wagtail.core.middleware.SiteMiddleware and the use of request.site is deprecated. '
10 'Please update your code to use Site.get_for_request(request) in place of request.site, '
11 'and remove wagtail.core.middleware.SiteMiddleware from MIDDLEWARES',
12 RemovedInWagtail211Warning
13 )
14
15
16 class SiteMiddleware(MiddlewareMixin):
17 def process_request(self, request):
18 """
19 Set request.site to contain the Site object responsible for handling this request,
20 according to hostname matching rules
21 """
22 try:
23 request.site = Site.find_for_request(request)
24 except Site.DoesNotExist:
25 request.site = None
26
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/wagtail/core/middleware.py b/wagtail/core/middleware.py
--- a/wagtail/core/middleware.py
+++ b/wagtail/core/middleware.py
@@ -7,7 +7,7 @@
warnings.warn(
'wagtail.core.middleware.SiteMiddleware and the use of request.site is deprecated. '
- 'Please update your code to use Site.get_for_request(request) in place of request.site, '
+ 'Please update your code to use Site.find_for_request(request) in place of request.site, '
'and remove wagtail.core.middleware.SiteMiddleware from MIDDLEWARES',
RemovedInWagtail211Warning
)
| {"golden_diff": "diff --git a/wagtail/core/middleware.py b/wagtail/core/middleware.py\n--- a/wagtail/core/middleware.py\n+++ b/wagtail/core/middleware.py\n@@ -7,7 +7,7 @@\n \n warnings.warn(\n 'wagtail.core.middleware.SiteMiddleware and the use of request.site is deprecated. '\n- 'Please update your code to use Site.get_for_request(request) in place of request.site, '\n+ 'Please update your code to use Site.find_for_request(request) in place of request.site, '\n 'and remove wagtail.core.middleware.SiteMiddleware from MIDDLEWARES',\n RemovedInWagtail211Warning\n )\n", "issue": "Typo in wagtail/core/middleware.py warning\nThe second line of the warning in `wagtail/core/middleware.py` should read:\r\n\r\n` 'Please update your code to use Site.find_for_request(request) in place of request.site, '`\r\n\r\ninstead of:\r\n\r\n` 'Please update your code to use Site.get_for_request(request) in place of request.site, '`\r\n\n", "before_files": [{"content": "import warnings\n\nfrom django.utils.deprecation import MiddlewareMixin\nfrom wagtail.core.models import Site\nfrom wagtail.utils.deprecation import RemovedInWagtail211Warning\n\n\nwarnings.warn(\n 'wagtail.core.middleware.SiteMiddleware and the use of request.site is deprecated. '\n 'Please update your code to use Site.get_for_request(request) in place of request.site, '\n 'and remove wagtail.core.middleware.SiteMiddleware from MIDDLEWARES',\n RemovedInWagtail211Warning\n)\n\n\nclass SiteMiddleware(MiddlewareMixin):\n def process_request(self, request):\n \"\"\"\n Set request.site to contain the Site object responsible for handling this request,\n according to hostname matching rules\n \"\"\"\n try:\n request.site = Site.find_for_request(request)\n except Site.DoesNotExist:\n request.site = None\n", "path": "wagtail/core/middleware.py"}], "after_files": [{"content": "import warnings\n\nfrom django.utils.deprecation import MiddlewareMixin\nfrom wagtail.core.models import Site\nfrom wagtail.utils.deprecation import RemovedInWagtail211Warning\n\n\nwarnings.warn(\n 'wagtail.core.middleware.SiteMiddleware and the use of request.site is deprecated. '\n 'Please update your code to use Site.find_for_request(request) in place of request.site, '\n 'and remove wagtail.core.middleware.SiteMiddleware from MIDDLEWARES',\n RemovedInWagtail211Warning\n)\n\n\nclass SiteMiddleware(MiddlewareMixin):\n def process_request(self, request):\n \"\"\"\n Set request.site to contain the Site object responsible for handling this request,\n according to hostname matching rules\n \"\"\"\n try:\n request.site = Site.find_for_request(request)\n except Site.DoesNotExist:\n request.site = None\n", "path": "wagtail/core/middleware.py"}]} | 562 | 145 |
gh_patches_debug_15848 | rasdani/github-patches | git_diff | mampfes__hacs_waste_collection_schedule-1588 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Bug]: date time issue
### I Have A Problem With:
The integration in general
### What's Your Problem
Used to work. No change in cfg.
At some point, after an upgrade…
Integration fails while starting. See log.
### Source (if relevant)
_No response_
### Logs
```Shell
Denne feilen stammer fra en tilpasset integrasjon.
Logger: waste_collection_schedule.source_shell
Source: custom_components/waste_collection_schedule/waste_collection_schedule/source_shell.py:136
Integration: waste_collection_schedule (documentation)
First occurred: 19:02:11 (1 occurrences)
Last logged: 19:02:11
fetch failed for source Stavanger Kommune: Traceback (most recent call last): File "/config/custom_components/waste_collection_schedule/waste_collection_schedule/source_shell.py", line 134, in fetch entries = self._source.fetch() ^^^^^^^^^^^^^^^^^^^^ File "/config/custom_components/waste_collection_schedule/waste_collection_schedule/source/stavanger_no.py", line 63, in fetch date = datetime.strptime(date[0] + "." + year, "%d.%m.%Y").date() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/_strptime.py", line 568, in _strptime_datetime tt, fraction, gmtoff_fraction = _strptime(data_string, format) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/_strptime.py", line 534, in _strptime julian = datetime_date(year, month, day).toordinal() - \ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ValueError: day is out of range for month
```
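
For context, a minimal reproduction of this class of failure (the dates below are assumptions, not taken from the log): pairing a parsed day/month with the wrong year can name a date that does not exist, which raises exactly this `ValueError`.

```python
from datetime import datetime

# assumed example values: "29.02" only exists in leap years, so combining it
# with a non-leap year raises "ValueError: day is out of range for month"
datetime.strptime("29.02" + "." + "2023", "%d.%m.%Y")
```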
### Relevant Configuration
```YAML
name: stavanger_no
municipality: Stavanger
```
### Checklist Source Error
- [ ] Use the example parameters for your source (often available in the documentation) (don't forget to restart Home Assistant after changing the configuration)
- [X] Checked that the website of your service provider is still working
- [X] Tested my attributes on the service provider website (if possible)
- [X] I have tested with the latest version of the integration (master) (for HACS in the 3 dot menu of the integration click on "Redownload" and choose master as version)
### Checklist Sensor Error
- [ ] Checked in the Home Assistant Calendar tab if the event names match the types names (if types argument is used)
### Required
- [X] I have searched past (closed AND opened) issues to see if this bug has already been reported, and it hasn't been.
- [X] I understand that people give their precious time for free, and thus I've done my very best to make this problem as easy as possible to investigate.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `custom_components/waste_collection_schedule/waste_collection_schedule/source/stavanger_no.py`
Content:
```
1 from datetime import datetime
2
3 import requests
4 from bs4 import BeautifulSoup
5 from waste_collection_schedule import Collection # type: ignore[attr-defined]
6
7 TITLE = "Stavanger Kommune"
8 DESCRIPTION = "Source for Stavanger Kommune, Norway"
9 URL = "https://www.stavanger.kommune.no/"
10 TEST_CASES = {
11 "TestcaseI": {
12 "id": "57bf9d36-722e-400b-ae93-d80f8e354724",
13 "municipality": "Stavanger",
14 "gnumber": "57",
15 "bnumber": "922",
16 "snumber": "0",
17 },
18 }
19
20 ICON_MAP = {
21 "Restavfall": "mdi:trash-can",
22 "Papp/papir": "mdi:recycle",
23 "Bio": "mdi:leaf",
24 "Juletre": "mdi:pine-tree",
25 }
26
27
28 class Source:
29 def __init__(self, id, municipality, gnumber, bnumber, snumber):
30 self._id = id
31 self._municipality = municipality
32 self._gnumber = gnumber
33 self._bnumber = bnumber
34 self._snumber = snumber
35
36 def fetch(self):
37 url = "https://www.stavanger.kommune.no/renovasjon-og-miljo/tommekalender/finn-kalender/show"
38 headers = {"referer": "https://www.stavanger.kommune.no"}
39
40 params = {
41 "id": self._id,
42 "municipality": self._municipality,
43 "gnumber": self._gnumber,
44 "bnumber": self._bnumber,
45 "snumber": self._snumber,
46 }
47
48 r = requests.get(url, params=params, headers=headers)
49 r.raise_for_status()
50
51 soup = BeautifulSoup(r.text, "html.parser")
52
53 tag = soup.find_all("option")
54 year = tag[0].get("value").split("-")
55 year = year[1]
56
57 entries = []
58 for tag in soup.find_all("tr", {"class": "waste-calendar__item"}):
59 if tag.text.strip() == "Dato og dag\nAvfallstype":
60 continue
61
62 date = tag.text.strip().split(" - ")
63 date = datetime.strptime(date[0] + "." + year, "%d.%m.%Y").date()
64
65 for img in tag.find_all("img"):
66 waste_type = img.get("title")
67 entries.append(
68 Collection(date, waste_type, icon=ICON_MAP.get(waste_type))
69 )
70
71 return entries
72
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/stavanger_no.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/stavanger_no.py
--- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/stavanger_no.py
+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/stavanger_no.py
@@ -51,14 +51,12 @@
soup = BeautifulSoup(r.text, "html.parser")
tag = soup.find_all("option")
- year = tag[0].get("value").split("-")
- year = year[1]
-
entries = []
for tag in soup.find_all("tr", {"class": "waste-calendar__item"}):
if tag.text.strip() == "Dato og dag\nAvfallstype":
continue
+ year = tag.parent.attrs["data-month"].split("-")[1]
date = tag.text.strip().split(" - ")
date = datetime.strptime(date[0] + "." + year, "%d.%m.%Y").date()
| {"golden_diff": "diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/stavanger_no.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/stavanger_no.py\n--- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/stavanger_no.py\n+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/stavanger_no.py\n@@ -51,14 +51,12 @@\n soup = BeautifulSoup(r.text, \"html.parser\")\n \n tag = soup.find_all(\"option\")\n- year = tag[0].get(\"value\").split(\"-\")\n- year = year[1]\n-\n entries = []\n for tag in soup.find_all(\"tr\", {\"class\": \"waste-calendar__item\"}):\n if tag.text.strip() == \"Dato og dag\\nAvfallstype\":\n continue\n \n+ year = tag.parent.attrs[\"data-month\"].split(\"-\")[1]\n date = tag.text.strip().split(\" - \")\n date = datetime.strptime(date[0] + \".\" + year, \"%d.%m.%Y\").date()\n", "issue": "[Bug]: date time issue\n### I Have A Problem With:\n\nThe integration in general\n\n### What's Your Problem\n\nUsed to work. No change in cfg.\r\nAt some point, after an upgrade\u2026\r\nIntegration fails while starting. See log.\n\n### Source (if relevant)\n\n_No response_\n\n### Logs\n\n```Shell\nDenne feilen stammer fra en tilpasset integrasjon.\r\n\r\nLogger: waste_collection_schedule.source_shell\r\nSource: custom_components/waste_collection_schedule/waste_collection_schedule/source_shell.py:136\r\nIntegration: waste_collection_schedule (documentation)\r\nFirst occurred: 19:02:11 (1 occurrences)\r\nLast logged: 19:02:11\r\n\r\nfetch failed for source Stavanger Kommune: Traceback (most recent call last): File \"/config/custom_components/waste_collection_schedule/waste_collection_schedule/source_shell.py\", line 134, in fetch entries = self._source.fetch() ^^^^^^^^^^^^^^^^^^^^ File \"/config/custom_components/waste_collection_schedule/waste_collection_schedule/source/stavanger_no.py\", line 63, in fetch date = datetime.strptime(date[0] + \".\" + year, \"%d.%m.%Y\").date() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File \"/usr/local/lib/python3.11/_strptime.py\", line 568, in _strptime_datetime tt, fraction, gmtoff_fraction = _strptime(data_string, format) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File \"/usr/local/lib/python3.11/_strptime.py\", line 534, in _strptime julian = datetime_date(year, month, day).toordinal() - \\ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ValueError: day is out of range for month\n```\n\n\n### Relevant Configuration\n\n```YAML\nname: stavanger_no\r\n\r\nmunicipality: Stavanger\n```\n\n\n### Checklist Source Error\n\n- [ ] Use the example parameters for your source (often available in the documentation) (don't forget to restart Home Assistant after changing the configuration)\n- [X] Checked that the website of your service provider is still working\n- [X] Tested my attributes on the service provider website (if possible)\n- [X] I have tested with the latest version of the integration (master) (for HACS in the 3 dot menu of the integration click on \"Redownload\" and choose master as version)\n\n### Checklist Sensor Error\n\n- [ ] Checked in the Home Assistant Calendar tab if the event names match the types names (if types argument is used)\n\n### Required\n\n- [X] I have searched past (closed AND opened) issues to see if this bug has already been reported, and it hasn't been.\n- [X] I understand that people give their precious time for free, and thus I've done my very best to make this problem as easy as possible to investigate.\n", "before_files": 
[{"content": "from datetime import datetime\n\nimport requests\nfrom bs4 import BeautifulSoup\nfrom waste_collection_schedule import Collection # type: ignore[attr-defined]\n\nTITLE = \"Stavanger Kommune\"\nDESCRIPTION = \"Source for Stavanger Kommune, Norway\"\nURL = \"https://www.stavanger.kommune.no/\"\nTEST_CASES = {\n \"TestcaseI\": {\n \"id\": \"57bf9d36-722e-400b-ae93-d80f8e354724\",\n \"municipality\": \"Stavanger\",\n \"gnumber\": \"57\",\n \"bnumber\": \"922\",\n \"snumber\": \"0\",\n },\n}\n\nICON_MAP = {\n \"Restavfall\": \"mdi:trash-can\",\n \"Papp/papir\": \"mdi:recycle\",\n \"Bio\": \"mdi:leaf\",\n \"Juletre\": \"mdi:pine-tree\",\n}\n\n\nclass Source:\n def __init__(self, id, municipality, gnumber, bnumber, snumber):\n self._id = id\n self._municipality = municipality\n self._gnumber = gnumber\n self._bnumber = bnumber\n self._snumber = snumber\n\n def fetch(self):\n url = \"https://www.stavanger.kommune.no/renovasjon-og-miljo/tommekalender/finn-kalender/show\"\n headers = {\"referer\": \"https://www.stavanger.kommune.no\"}\n\n params = {\n \"id\": self._id,\n \"municipality\": self._municipality,\n \"gnumber\": self._gnumber,\n \"bnumber\": self._bnumber,\n \"snumber\": self._snumber,\n }\n\n r = requests.get(url, params=params, headers=headers)\n r.raise_for_status()\n\n soup = BeautifulSoup(r.text, \"html.parser\")\n\n tag = soup.find_all(\"option\")\n year = tag[0].get(\"value\").split(\"-\")\n year = year[1]\n\n entries = []\n for tag in soup.find_all(\"tr\", {\"class\": \"waste-calendar__item\"}):\n if tag.text.strip() == \"Dato og dag\\nAvfallstype\":\n continue\n\n date = tag.text.strip().split(\" - \")\n date = datetime.strptime(date[0] + \".\" + year, \"%d.%m.%Y\").date()\n\n for img in tag.find_all(\"img\"):\n waste_type = img.get(\"title\")\n entries.append(\n Collection(date, waste_type, icon=ICON_MAP.get(waste_type))\n )\n\n return entries\n", "path": "custom_components/waste_collection_schedule/waste_collection_schedule/source/stavanger_no.py"}], "after_files": [{"content": "from datetime import datetime\n\nimport requests\nfrom bs4 import BeautifulSoup\nfrom waste_collection_schedule import Collection # type: ignore[attr-defined]\n\nTITLE = \"Stavanger Kommune\"\nDESCRIPTION = \"Source for Stavanger Kommune, Norway\"\nURL = \"https://www.stavanger.kommune.no/\"\nTEST_CASES = {\n \"TestcaseI\": {\n \"id\": \"57bf9d36-722e-400b-ae93-d80f8e354724\",\n \"municipality\": \"Stavanger\",\n \"gnumber\": \"57\",\n \"bnumber\": \"922\",\n \"snumber\": \"0\",\n },\n}\n\nICON_MAP = {\n \"Restavfall\": \"mdi:trash-can\",\n \"Papp/papir\": \"mdi:recycle\",\n \"Bio\": \"mdi:leaf\",\n \"Juletre\": \"mdi:pine-tree\",\n}\n\n\nclass Source:\n def __init__(self, id, municipality, gnumber, bnumber, snumber):\n self._id = id\n self._municipality = municipality\n self._gnumber = gnumber\n self._bnumber = bnumber\n self._snumber = snumber\n\n def fetch(self):\n url = \"https://www.stavanger.kommune.no/renovasjon-og-miljo/tommekalender/finn-kalender/show\"\n headers = {\"referer\": \"https://www.stavanger.kommune.no\"}\n\n params = {\n \"id\": self._id,\n \"municipality\": self._municipality,\n \"gnumber\": self._gnumber,\n \"bnumber\": self._bnumber,\n \"snumber\": self._snumber,\n }\n\n r = requests.get(url, params=params, headers=headers)\n r.raise_for_status()\n\n soup = BeautifulSoup(r.text, \"html.parser\")\n\n tag = soup.find_all(\"option\")\n entries = []\n for tag in soup.find_all(\"tr\", {\"class\": \"waste-calendar__item\"}):\n if tag.text.strip() == \"Dato og 
dag\\nAvfallstype\":\n continue\n\n year = tag.parent.attrs[\"data-month\"].split(\"-\")[1]\n date = tag.text.strip().split(\" - \")\n date = datetime.strptime(date[0] + \".\" + year, \"%d.%m.%Y\").date()\n\n for img in tag.find_all(\"img\"):\n waste_type = img.get(\"title\")\n entries.append(\n Collection(date, waste_type, icon=ICON_MAP.get(waste_type))\n )\n\n return entries\n", "path": "custom_components/waste_collection_schedule/waste_collection_schedule/source/stavanger_no.py"}]} | 1,601 | 230 |
gh_patches_debug_30563 | rasdani/github-patches | git_diff | scikit-hep__pyhf-1176 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add axis labels to pyhf.contrib.viz.brazil.plot_results
# Description
We know that the axis labels for [`pyhf.contrib.viz.brazil.plot_results`](https://github.com/scikit-hep/pyhf/blob/28fdfe95a3a4846ba70a9a338b3f72a94eac1322/src/pyhf/contrib/viz/brazil.py#L5) are always going to be the same, so we should just add them on there as
```python
ax.set_xlabel(r"$\mu$")
ax.set_ylabel(r"$\mathrm{CL}_{s}$")
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/pyhf/contrib/viz/brazil.py`
Content:
```
1 """Brazil Band Plots."""
2 import numpy as np
3
4
5 def plot_results(ax, mutests, tests, test_size=0.05):
6 """Plot a series of hypothesis tests for various POI values."""
7 cls_obs = np.array([test[0] for test in tests]).flatten()
8 cls_exp = [np.array([test[1][i] for test in tests]).flatten() for i in range(5)]
9 ax.plot(mutests, cls_obs, c='black')
10 for idx, color in zip(range(5), 5 * ['black']):
11 ax.plot(
12 mutests, cls_exp[idx], c=color, linestyle='dotted' if idx != 2 else 'dashed'
13 )
14 ax.fill_between(mutests, cls_exp[0], cls_exp[-1], facecolor='yellow')
15 ax.fill_between(mutests, cls_exp[1], cls_exp[-2], facecolor='green')
16 ax.plot(mutests, [test_size] * len(mutests), c='red')
17 ax.set_ylim(0, 1)
18
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/pyhf/contrib/viz/brazil.py b/src/pyhf/contrib/viz/brazil.py
--- a/src/pyhf/contrib/viz/brazil.py
+++ b/src/pyhf/contrib/viz/brazil.py
@@ -3,7 +3,37 @@
def plot_results(ax, mutests, tests, test_size=0.05):
- """Plot a series of hypothesis tests for various POI values."""
+ """
+ Plot a series of hypothesis tests for various POI values.
+
+ Example:
+
+ >>> import numpy as np
+ >>> import matplotlib.pyplot as plt
+ >>> import pyhf
+ >>> import pyhf.contrib.viz.brazil
+ >>> pyhf.set_backend("numpy")
+ >>> model = pyhf.simplemodels.hepdata_like(
+ ... signal_data=[12.0, 11.0], bkg_data=[50.0, 52.0], bkg_uncerts=[3.0, 7.0]
+ ... )
+ >>> observations = [51, 48]
+ >>> data = observations + model.config.auxdata
+ >>> poi_vals = np.linspace(0, 5, 41)
+ >>> results = [
+ ... pyhf.infer.hypotest(test_poi, data, model, return_expected_set=True)
+ ... for test_poi in poi_vals
+ ... ]
+ >>> fig, ax = plt.subplots()
+ >>> pyhf.contrib.viz.brazil.plot_results(ax, poi_vals, results)
+
+ Args:
+ ax (`matplotlib.axes.Axes`): The matplotlib axis object to plot on.
+ mutests (:obj:`list` or :obj:`array`): The values of the POI where the
+ hypothesis tests were performed.
+ tests (:obj:`list` or :obj:`array`): The :math:$\\mathrm{CL}_{s}$ values
+ from the hypothesis tests.
+ test_size (:obj:`float`): The size, :math:$\alpha$, of the test.
+ """
cls_obs = np.array([test[0] for test in tests]).flatten()
cls_exp = [np.array([test[1][i] for test in tests]).flatten() for i in range(5)]
ax.plot(mutests, cls_obs, c='black')
@@ -15,3 +45,6 @@
ax.fill_between(mutests, cls_exp[1], cls_exp[-2], facecolor='green')
ax.plot(mutests, [test_size] * len(mutests), c='red')
ax.set_ylim(0, 1)
+
+ ax.set_xlabel(r"$\mu$ (POI)")
+ ax.set_ylabel(r"$\mathrm{CL}_{s}$")
| {"golden_diff": "diff --git a/src/pyhf/contrib/viz/brazil.py b/src/pyhf/contrib/viz/brazil.py\n--- a/src/pyhf/contrib/viz/brazil.py\n+++ b/src/pyhf/contrib/viz/brazil.py\n@@ -3,7 +3,37 @@\n \n \n def plot_results(ax, mutests, tests, test_size=0.05):\n- \"\"\"Plot a series of hypothesis tests for various POI values.\"\"\"\n+ \"\"\"\n+ Plot a series of hypothesis tests for various POI values.\n+\n+ Example:\n+\n+ >>> import numpy as np\n+ >>> import matplotlib.pyplot as plt\n+ >>> import pyhf\n+ >>> import pyhf.contrib.viz.brazil\n+ >>> pyhf.set_backend(\"numpy\")\n+ >>> model = pyhf.simplemodels.hepdata_like(\n+ ... signal_data=[12.0, 11.0], bkg_data=[50.0, 52.0], bkg_uncerts=[3.0, 7.0]\n+ ... )\n+ >>> observations = [51, 48]\n+ >>> data = observations + model.config.auxdata\n+ >>> poi_vals = np.linspace(0, 5, 41)\n+ >>> results = [\n+ ... pyhf.infer.hypotest(test_poi, data, model, return_expected_set=True)\n+ ... for test_poi in poi_vals\n+ ... ]\n+ >>> fig, ax = plt.subplots()\n+ >>> pyhf.contrib.viz.brazil.plot_results(ax, poi_vals, results)\n+\n+ Args:\n+ ax (`matplotlib.axes.Axes`): The matplotlib axis object to plot on.\n+ mutests (:obj:`list` or :obj:`array`): The values of the POI where the\n+ hypothesis tests were performed.\n+ tests (:obj:`list` or :obj:`array`): The :math:$\\\\mathrm{CL}_{s}$ values\n+ from the hypothesis tests.\n+ test_size (:obj:`float`): The size, :math:$\\alpha$, of the test.\n+ \"\"\"\n cls_obs = np.array([test[0] for test in tests]).flatten()\n cls_exp = [np.array([test[1][i] for test in tests]).flatten() for i in range(5)]\n ax.plot(mutests, cls_obs, c='black')\n@@ -15,3 +45,6 @@\n ax.fill_between(mutests, cls_exp[1], cls_exp[-2], facecolor='green')\n ax.plot(mutests, [test_size] * len(mutests), c='red')\n ax.set_ylim(0, 1)\n+\n+ ax.set_xlabel(r\"$\\mu$ (POI)\")\n+ ax.set_ylabel(r\"$\\mathrm{CL}_{s}$\")\n", "issue": "Add axis labels to pyhf.contrib.viz.brazil.plot_results\n# Description\r\n\r\nWe know that the axis labels for [`pyhf.contrib.viz.brazil.plot_results`](https://github.com/scikit-hep/pyhf/blob/28fdfe95a3a4846ba70a9a338b3f72a94eac1322/src/pyhf/contrib/viz/brazil.py#L5) are always going to be the same, so we should just add them on there as\r\n\r\n```python\r\nax.set_xlabel(r\"$\\mu$\")\r\nax.set_ylabel(r\"$\\mathrm{CL}_{s}$\")\r\n```\n", "before_files": [{"content": "\"\"\"Brazil Band Plots.\"\"\"\nimport numpy as np\n\n\ndef plot_results(ax, mutests, tests, test_size=0.05):\n \"\"\"Plot a series of hypothesis tests for various POI values.\"\"\"\n cls_obs = np.array([test[0] for test in tests]).flatten()\n cls_exp = [np.array([test[1][i] for test in tests]).flatten() for i in range(5)]\n ax.plot(mutests, cls_obs, c='black')\n for idx, color in zip(range(5), 5 * ['black']):\n ax.plot(\n mutests, cls_exp[idx], c=color, linestyle='dotted' if idx != 2 else 'dashed'\n )\n ax.fill_between(mutests, cls_exp[0], cls_exp[-1], facecolor='yellow')\n ax.fill_between(mutests, cls_exp[1], cls_exp[-2], facecolor='green')\n ax.plot(mutests, [test_size] * len(mutests), c='red')\n ax.set_ylim(0, 1)\n", "path": "src/pyhf/contrib/viz/brazil.py"}], "after_files": [{"content": "\"\"\"Brazil Band Plots.\"\"\"\nimport numpy as np\n\n\ndef plot_results(ax, mutests, tests, test_size=0.05):\n \"\"\"\n Plot a series of hypothesis tests for various POI values.\n\n Example:\n\n >>> import numpy as np\n >>> import matplotlib.pyplot as plt\n >>> import pyhf\n >>> import pyhf.contrib.viz.brazil\n >>> pyhf.set_backend(\"numpy\")\n >>> model = pyhf.simplemodels.hepdata_like(\n 
... signal_data=[12.0, 11.0], bkg_data=[50.0, 52.0], bkg_uncerts=[3.0, 7.0]\n ... )\n >>> observations = [51, 48]\n >>> data = observations + model.config.auxdata\n >>> poi_vals = np.linspace(0, 5, 41)\n >>> results = [\n ... pyhf.infer.hypotest(test_poi, data, model, return_expected_set=True)\n ... for test_poi in poi_vals\n ... ]\n >>> fig, ax = plt.subplots()\n >>> pyhf.contrib.viz.brazil.plot_results(ax, poi_vals, results)\n\n Args:\n ax (`matplotlib.axes.Axes`): The matplotlib axis object to plot on.\n mutests (:obj:`list` or :obj:`array`): The values of the POI where the\n hypothesis tests were performed.\n tests (:obj:`list` or :obj:`array`): The :math:$\\\\mathrm{CL}_{s}$ values\n from the hypothesis tests.\n test_size (:obj:`float`): The size, :math:$\\alpha$, of the test.\n \"\"\"\n cls_obs = np.array([test[0] for test in tests]).flatten()\n cls_exp = [np.array([test[1][i] for test in tests]).flatten() for i in range(5)]\n ax.plot(mutests, cls_obs, c='black')\n for idx, color in zip(range(5), 5 * ['black']):\n ax.plot(\n mutests, cls_exp[idx], c=color, linestyle='dotted' if idx != 2 else 'dashed'\n )\n ax.fill_between(mutests, cls_exp[0], cls_exp[-1], facecolor='yellow')\n ax.fill_between(mutests, cls_exp[1], cls_exp[-2], facecolor='green')\n ax.plot(mutests, [test_size] * len(mutests), c='red')\n ax.set_ylim(0, 1)\n\n ax.set_xlabel(r\"$\\mu$ (POI)\")\n ax.set_ylabel(r\"$\\mathrm{CL}_{s}$\")\n", "path": "src/pyhf/contrib/viz/brazil.py"}]} | 659 | 618 |
gh_patches_debug_64571 | rasdani/github-patches | git_diff | cocotb__cocotb-1145 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Packaging: Add python_requires to manifest
Define our Python version requirements in our package manifest, as described here: https://packaging.python.org/guides/distributing-packages-using-setuptools/#python-requires
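
For illustration, a minimal sketch of how such a constraint is declared in `setup()` (the version specifier below is an assumption, not taken from this issue):

```python
from setuptools import setup

setup(
    # ... existing arguments ...
    python_requires=">=3.5",  # assumed minimum; set to the versions cocotb actually supports
)
```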
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2 ###############################################################################
3 # Copyright (c) 2013 Potential Ventures Ltd
4 # Copyright (c) 2013 SolarFlare Communications Inc
5 # All rights reserved.
6 #
7 # Redistribution and use in source and binary forms, with or without
8 # modification, are permitted provided that the following conditions are met:
9 # * Redistributions of source code must retain the above copyright
10 # notice, this list of conditions and the following disclaimer.
11 # * Redistributions in binary form must reproduce the above copyright
12 # notice, this list of conditions and the following disclaimer in the
13 # documentation and/or other materials provided with the distribution.
14 # * Neither the name of Potential Ventures Ltd,
15 # SolarFlare Communications Inc nor the
16 # names of its contributors may be used to endorse or promote products
17 # derived from this software without specific prior written permission.
18 #
19 # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
20 # ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
21 # WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
22 # DISCLAIMED. IN NO EVENT SHALL POTENTIAL VENTURES LTD BE LIABLE FOR ANY
23 # DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
24 # (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
25 # LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
26 # ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
27 # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
28 # SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
29 ###############################################################################
30
31 from setuptools import setup
32 from setuptools import find_packages
33 from os import path, walk
34
35 def read_file(fname):
36 return open(path.join(path.dirname(__file__), fname)).read()
37
38 def package_files(directory):
39 paths = []
40 for (fpath, directories, filenames) in walk(directory):
41 for filename in filenames:
42 paths.append(path.join('..', fpath, filename))
43 return paths
44
45 version = read_file('version')[8:].strip()
46
47 setup(
48 name='cocotb',
49 version=version,
50 description='cocotb is a coroutine based cosimulation library for writing VHDL and Verilog testbenches in Python.',
51 url='https://github.com/potentialventures/cocotb',
52 license='BSD',
53 long_description=read_file('README.md'),
54 long_description_content_type='text/markdown',
55 author='Chris Higgs, Stuart Hodgson',
56 author_email='[email protected]',
57 install_requires=[],
58 packages=find_packages(),
59 include_package_data=True,
60 package_data={'cocotb': package_files('cocotb/share')},
61 entry_points={
62 'console_scripts': [
63 'cocotb-config=cocotb.config:main',
64 ]
65 },
66 platforms='any',
67 classifiers=[
68 "Programming Language :: Python :: 2.7",
69 "Programming Language :: Python :: 3",
70 "License :: OSI Approved :: BSD License",
71 "Topic :: Scientific/Engineering :: Electronic Design Automation (EDA)",
72 ],
73 )
74
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
old mode 100644
new mode 100755
--- a/setup.py
+++ b/setup.py
@@ -55,6 +55,7 @@
author='Chris Higgs, Stuart Hodgson',
author_email='[email protected]',
install_requires=[],
+ python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*',
packages=find_packages(),
include_package_data=True,
package_data={'cocotb': package_files('cocotb/share')},
| {"golden_diff": "diff --git a/setup.py b/setup.py\nold mode 100644\nnew mode 100755\n--- a/setup.py\n+++ b/setup.py\n@@ -55,6 +55,7 @@\n author='Chris Higgs, Stuart Hodgson',\n author_email='[email protected]',\n install_requires=[],\n+ python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*',\n packages=find_packages(),\n include_package_data=True,\n package_data={'cocotb': package_files('cocotb/share')},\n", "issue": "Packaging: Add python_requires to manifest\nDefine our Python version requirements in our package manifest, as described here: https://packaging.python.org/guides/distributing-packages-using-setuptools/#python-requires\n", "before_files": [{"content": "#!/usr/bin/env python\n###############################################################################\n# Copyright (c) 2013 Potential Ventures Ltd\n# Copyright (c) 2013 SolarFlare Communications Inc\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions are met:\n# * Redistributions of source code must retain the above copyright\n# notice, this list of conditions and the following disclaimer.\n# * Redistributions in binary form must reproduce the above copyright\n# notice, this list of conditions and the following disclaimer in the\n# documentation and/or other materials provided with the distribution.\n# * Neither the name of Potential Ventures Ltd,\n# SolarFlare Communications Inc nor the\n# names of its contributors may be used to endorse or promote products\n# derived from this software without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\" AND\n# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED\n# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\n# DISCLAIMED. 
IN NO EVENT SHALL POTENTIAL VENTURES LTD BE LIABLE FOR ANY\n# DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES\n# (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;\n# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND\n# ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS\n# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n###############################################################################\n\nfrom setuptools import setup\nfrom setuptools import find_packages\nfrom os import path, walk\n\ndef read_file(fname):\n return open(path.join(path.dirname(__file__), fname)).read()\n\ndef package_files(directory):\n paths = []\n for (fpath, directories, filenames) in walk(directory):\n for filename in filenames:\n paths.append(path.join('..', fpath, filename))\n return paths\n\nversion = read_file('version')[8:].strip()\n\nsetup(\n name='cocotb',\n version=version,\n description='cocotb is a coroutine based cosimulation library for writing VHDL and Verilog testbenches in Python.',\n url='https://github.com/potentialventures/cocotb',\n license='BSD',\n long_description=read_file('README.md'),\n long_description_content_type='text/markdown',\n author='Chris Higgs, Stuart Hodgson',\n author_email='[email protected]',\n install_requires=[],\n packages=find_packages(),\n include_package_data=True,\n package_data={'cocotb': package_files('cocotb/share')},\n entry_points={\n 'console_scripts': [\n 'cocotb-config=cocotb.config:main',\n ]\n },\n platforms='any',\n classifiers=[\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"License :: OSI Approved :: BSD License\",\n \"Topic :: Scientific/Engineering :: Electronic Design Automation (EDA)\",\n ],\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\n###############################################################################\n# Copyright (c) 2013 Potential Ventures Ltd\n# Copyright (c) 2013 SolarFlare Communications Inc\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions are met:\n# * Redistributions of source code must retain the above copyright\n# notice, this list of conditions and the following disclaimer.\n# * Redistributions in binary form must reproduce the above copyright\n# notice, this list of conditions and the following disclaimer in the\n# documentation and/or other materials provided with the distribution.\n# * Neither the name of Potential Ventures Ltd,\n# SolarFlare Communications Inc nor the\n# names of its contributors may be used to endorse or promote products\n# derived from this software without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\" AND\n# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED\n# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\n# DISCLAIMED. 
IN NO EVENT SHALL POTENTIAL VENTURES LTD BE LIABLE FOR ANY\n# DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES\n# (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;\n# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND\n# ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS\n# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n###############################################################################\n\nfrom setuptools import setup\nfrom setuptools import find_packages\nfrom os import path, walk\n\ndef read_file(fname):\n return open(path.join(path.dirname(__file__), fname)).read()\n\ndef package_files(directory):\n paths = []\n for (fpath, directories, filenames) in walk(directory):\n for filename in filenames:\n paths.append(path.join('..', fpath, filename))\n return paths\n\nversion = read_file('version')[8:].strip()\n\nsetup(\n name='cocotb',\n version=version,\n description='cocotb is a coroutine based cosimulation library for writing VHDL and Verilog testbenches in Python.',\n url='https://github.com/potentialventures/cocotb',\n license='BSD',\n long_description=read_file('README.md'),\n long_description_content_type='text/markdown',\n author='Chris Higgs, Stuart Hodgson',\n author_email='[email protected]',\n install_requires=[],\n python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*',\n packages=find_packages(),\n include_package_data=True,\n package_data={'cocotb': package_files('cocotb/share')},\n entry_points={\n 'console_scripts': [\n 'cocotb-config=cocotb.config:main',\n ]\n },\n platforms='any',\n classifiers=[\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"License :: OSI Approved :: BSD License\",\n \"Topic :: Scientific/Engineering :: Electronic Design Automation (EDA)\",\n ],\n)\n", "path": "setup.py"}]} | 1,111 | 145 |
gh_patches_debug_24079 | rasdani/github-patches | git_diff | TheAlgorithms__Python-9005 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
improvement to insertion_sort algorithm
### Feature description
I was about to make a PR to improve the implementation of [insertion_sort algorithm](https://github.com/TheAlgorithms/Python/blob/master/sorts/insertion_sort.py) but since there might be multiple ways of doing so, I thought I should first ask your opinions.
These are the things that need improvements:
1. We unnecessarily create a whole new copy of the list: `enumerate(collection[1:])`.
   We can either use "indexes" to avoid this, which is not very pythonic, or we can use the iterator of the list via `iter()` and throw away the first item using `next()`. In the second case we have to either check for an empty list first or wrap it in a try-except block. I'll go with indexes if you ask. What do you think?
2. I think a function should either mutate the list in-place and return `None`, or it should create a new sorted list without modifying the original list. Mutating the list and returning the mutated list is not what most developers expect to see. What do you think?
3. We can safely remove `if insert_index != temp_index:` condition and unindent its body. Assigning an item to an index of a list is not costly. So it's one less line in general.
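Taken together, the three points above could be addressed with an index-based variant along the lines of the sketch below (an illustration of the proposal only, not an agreed-upon implementation):

```python
def insertion_sort(collection: list) -> None:
    """Sort ``collection`` in place and return None instead of the mutated list."""
    for index in range(1, len(collection)):  # no copy of the list is made
        value = collection[index]
        while index > 0 and value < collection[index - 1]:
            collection[index] = collection[index - 1]
            index -= 1
        collection[index] = value  # unconditional assignment, no temp_index check needed
```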
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sorts/insertion_sort.py`
Content:
```
1 """
2 A pure Python implementation of the insertion sort algorithm
3
4 This algorithm sorts a collection by comparing adjacent elements.
5 When it finds that order is not respected, it moves the element compared
6 backward until the order is correct. It then goes back directly to the
7 element's initial position resuming forward comparison.
8
9 For doctests run following command:
10 python3 -m doctest -v insertion_sort.py
11
12 For manual testing run:
13 python3 insertion_sort.py
14 """
15
16
17 def insertion_sort(collection: list) -> list:
18 """A pure Python implementation of the insertion sort algorithm
19
20 :param collection: some mutable ordered collection with heterogeneous
21 comparable items inside
22 :return: the same collection ordered by ascending
23
24 Examples:
25 >>> insertion_sort([0, 5, 3, 2, 2])
26 [0, 2, 2, 3, 5]
27 >>> insertion_sort([]) == sorted([])
28 True
29 >>> insertion_sort([-2, -5, -45]) == sorted([-2, -5, -45])
30 True
31 >>> insertion_sort(['d', 'a', 'b', 'e', 'c']) == sorted(['d', 'a', 'b', 'e', 'c'])
32 True
33 >>> import random
34 >>> collection = random.sample(range(-50, 50), 100)
35 >>> insertion_sort(collection) == sorted(collection)
36 True
37 >>> import string
38 >>> collection = random.choices(string.ascii_letters + string.digits, k=100)
39 >>> insertion_sort(collection) == sorted(collection)
40 True
41 """
42
43 for insert_index, insert_value in enumerate(collection[1:]):
44 temp_index = insert_index
45 while insert_index >= 0 and insert_value < collection[insert_index]:
46 collection[insert_index + 1] = collection[insert_index]
47 insert_index -= 1
48 if insert_index != temp_index:
49 collection[insert_index + 1] = insert_value
50 return collection
51
52
53 if __name__ == "__main__":
54 from doctest import testmod
55
56 testmod()
57
58 user_input = input("Enter numbers separated by a comma:\n").strip()
59 unsorted = [int(item) for item in user_input.split(",")]
60 print(f"{insertion_sort(unsorted) = }")
61
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/sorts/insertion_sort.py b/sorts/insertion_sort.py
--- a/sorts/insertion_sort.py
+++ b/sorts/insertion_sort.py
@@ -13,8 +13,19 @@
python3 insertion_sort.py
"""
+from collections.abc import MutableSequence
+from typing import Any, Protocol, TypeVar
-def insertion_sort(collection: list) -> list:
+
+class Comparable(Protocol):
+ def __lt__(self, other: Any, /) -> bool:
+ ...
+
+
+T = TypeVar("T", bound=Comparable)
+
+
+def insertion_sort(collection: MutableSequence[T]) -> MutableSequence[T]:
"""A pure Python implementation of the insertion sort algorithm
:param collection: some mutable ordered collection with heterogeneous
@@ -40,13 +51,12 @@
True
"""
- for insert_index, insert_value in enumerate(collection[1:]):
- temp_index = insert_index
- while insert_index >= 0 and insert_value < collection[insert_index]:
- collection[insert_index + 1] = collection[insert_index]
+ for insert_index in range(1, len(collection)):
+ insert_value = collection[insert_index]
+ while insert_index > 0 and insert_value < collection[insert_index - 1]:
+ collection[insert_index] = collection[insert_index - 1]
insert_index -= 1
- if insert_index != temp_index:
- collection[insert_index + 1] = insert_value
+ collection[insert_index] = insert_value
return collection
| {"golden_diff": "diff --git a/sorts/insertion_sort.py b/sorts/insertion_sort.py\n--- a/sorts/insertion_sort.py\n+++ b/sorts/insertion_sort.py\n@@ -13,8 +13,19 @@\n python3 insertion_sort.py\n \"\"\"\n \n+from collections.abc import MutableSequence\n+from typing import Any, Protocol, TypeVar\n \n-def insertion_sort(collection: list) -> list:\n+\n+class Comparable(Protocol):\n+ def __lt__(self, other: Any, /) -> bool:\n+ ...\n+\n+\n+T = TypeVar(\"T\", bound=Comparable)\n+\n+\n+def insertion_sort(collection: MutableSequence[T]) -> MutableSequence[T]:\n \"\"\"A pure Python implementation of the insertion sort algorithm\n \n :param collection: some mutable ordered collection with heterogeneous\n@@ -40,13 +51,12 @@\n True\n \"\"\"\n \n- for insert_index, insert_value in enumerate(collection[1:]):\n- temp_index = insert_index\n- while insert_index >= 0 and insert_value < collection[insert_index]:\n- collection[insert_index + 1] = collection[insert_index]\n+ for insert_index in range(1, len(collection)):\n+ insert_value = collection[insert_index]\n+ while insert_index > 0 and insert_value < collection[insert_index - 1]:\n+ collection[insert_index] = collection[insert_index - 1]\n insert_index -= 1\n- if insert_index != temp_index:\n- collection[insert_index + 1] = insert_value\n+ collection[insert_index] = insert_value\n return collection\n", "issue": "improvement to insertion_sort algorithm\n### Feature description\r\n\r\nI was about to make a PR to improve the implementation of [insertion_sort algorithm](https://github.com/TheAlgorithms/Python/blob/master/sorts/insertion_sort.py) but since there might be multiple ways of doing so, I thought I should first ask your opinions.\r\n\r\nThese are the things that need improvements:\r\n\r\n1. We unnecessarily create a whole new copy of the list: `enumerate(collection[1:])`.\r\n\r\n We can either use \"indexes\" to avoid this which is not very pythonic, or we can use the iterator of the list using `iter()` and throw away the first item using `next()`. In second case we have to either check for empty list first or wrap it in a try-except block. I'll go with indexes if you ask. What do you think?\r\n\r\n2. I think a function should either mutate the list in-place and returns `None`, or it should create new sorted list without modifying the original list. Mutating the list and returning the mutated list is not what most developers expect to see. What do you think?\r\n\r\n3. We can safely remove `if insert_index != temp_index:` condition and unindent its body. Assigning an item to an index of a list is not costly. So it's one less line in general.\nimprovement to insertion_sort algorithm\n### Feature description\r\n\r\nI was about to make a PR to improve the implementation of [insertion_sort algorithm](https://github.com/TheAlgorithms/Python/blob/master/sorts/insertion_sort.py) but since there might be multiple ways of doing so, I thought I should first ask your opinions.\r\n\r\nThese are the things that need improvements:\r\n\r\n1. We unnecessarily create a whole new copy of the list: `enumerate(collection[1:])`.\r\n\r\n We can either use \"indexes\" to avoid this which is not very pythonic, or we can use the iterator of the list using `iter()` and throw away the first item using `next()`. In second case we have to either check for empty list first or wrap it in a try-except block. I'll go with indexes if you ask. What do you think?\r\n\r\n2. 
I think a function should either mutate the list in-place and returns `None`, or it should create new sorted list without modifying the original list. Mutating the list and returning the mutated list is not what most developers expect to see. What do you think?\r\n\r\n3. We can safely remove `if insert_index != temp_index:` condition and unindent its body. Assigning an item to an index of a list is not costly. So it's one less line in general.\n", "before_files": [{"content": "\"\"\"\nA pure Python implementation of the insertion sort algorithm\n\nThis algorithm sorts a collection by comparing adjacent elements.\nWhen it finds that order is not respected, it moves the element compared\nbackward until the order is correct. It then goes back directly to the\nelement's initial position resuming forward comparison.\n\nFor doctests run following command:\npython3 -m doctest -v insertion_sort.py\n\nFor manual testing run:\npython3 insertion_sort.py\n\"\"\"\n\n\ndef insertion_sort(collection: list) -> list:\n \"\"\"A pure Python implementation of the insertion sort algorithm\n\n :param collection: some mutable ordered collection with heterogeneous\n comparable items inside\n :return: the same collection ordered by ascending\n\n Examples:\n >>> insertion_sort([0, 5, 3, 2, 2])\n [0, 2, 2, 3, 5]\n >>> insertion_sort([]) == sorted([])\n True\n >>> insertion_sort([-2, -5, -45]) == sorted([-2, -5, -45])\n True\n >>> insertion_sort(['d', 'a', 'b', 'e', 'c']) == sorted(['d', 'a', 'b', 'e', 'c'])\n True\n >>> import random\n >>> collection = random.sample(range(-50, 50), 100)\n >>> insertion_sort(collection) == sorted(collection)\n True\n >>> import string\n >>> collection = random.choices(string.ascii_letters + string.digits, k=100)\n >>> insertion_sort(collection) == sorted(collection)\n True\n \"\"\"\n\n for insert_index, insert_value in enumerate(collection[1:]):\n temp_index = insert_index\n while insert_index >= 0 and insert_value < collection[insert_index]:\n collection[insert_index + 1] = collection[insert_index]\n insert_index -= 1\n if insert_index != temp_index:\n collection[insert_index + 1] = insert_value\n return collection\n\n\nif __name__ == \"__main__\":\n from doctest import testmod\n\n testmod()\n\n user_input = input(\"Enter numbers separated by a comma:\\n\").strip()\n unsorted = [int(item) for item in user_input.split(\",\")]\n print(f\"{insertion_sort(unsorted) = }\")\n", "path": "sorts/insertion_sort.py"}], "after_files": [{"content": "\"\"\"\nA pure Python implementation of the insertion sort algorithm\n\nThis algorithm sorts a collection by comparing adjacent elements.\nWhen it finds that order is not respected, it moves the element compared\nbackward until the order is correct. 
It then goes back directly to the\nelement's initial position resuming forward comparison.\n\nFor doctests run following command:\npython3 -m doctest -v insertion_sort.py\n\nFor manual testing run:\npython3 insertion_sort.py\n\"\"\"\n\nfrom collections.abc import MutableSequence\nfrom typing import Any, Protocol, TypeVar\n\n\nclass Comparable(Protocol):\n def __lt__(self, other: Any, /) -> bool:\n ...\n\n\nT = TypeVar(\"T\", bound=Comparable)\n\n\ndef insertion_sort(collection: MutableSequence[T]) -> MutableSequence[T]:\n \"\"\"A pure Python implementation of the insertion sort algorithm\n\n :param collection: some mutable ordered collection with heterogeneous\n comparable items inside\n :return: the same collection ordered by ascending\n\n Examples:\n >>> insertion_sort([0, 5, 3, 2, 2])\n [0, 2, 2, 3, 5]\n >>> insertion_sort([]) == sorted([])\n True\n >>> insertion_sort([-2, -5, -45]) == sorted([-2, -5, -45])\n True\n >>> insertion_sort(['d', 'a', 'b', 'e', 'c']) == sorted(['d', 'a', 'b', 'e', 'c'])\n True\n >>> import random\n >>> collection = random.sample(range(-50, 50), 100)\n >>> insertion_sort(collection) == sorted(collection)\n True\n >>> import string\n >>> collection = random.choices(string.ascii_letters + string.digits, k=100)\n >>> insertion_sort(collection) == sorted(collection)\n True\n \"\"\"\n\n for insert_index in range(1, len(collection)):\n insert_value = collection[insert_index]\n while insert_index > 0 and insert_value < collection[insert_index - 1]:\n collection[insert_index] = collection[insert_index - 1]\n insert_index -= 1\n collection[insert_index] = insert_value\n return collection\n\n\nif __name__ == \"__main__\":\n from doctest import testmod\n\n testmod()\n\n user_input = input(\"Enter numbers separated by a comma:\\n\").strip()\n unsorted = [int(item) for item in user_input.split(\",\")]\n print(f\"{insertion_sort(unsorted) = }\")\n", "path": "sorts/insertion_sort.py"}]} | 1,427 | 353 |
gh_patches_debug_893 | rasdani/github-patches | git_diff | Lightning-Universe__lightning-flash-665 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
ImageEmbedder default behavior is not a flattened output
## 🐛 Bug
I discovered this issue while testing PR #655. If you run the [Image Embedding README example code](https://github.com/PyTorchLightning/lightning-flash#example-1-image-embedding), it returns a 3D tensor.
My understanding from the use of embeddings in general, and from how they are used in [Fifty One](https://voxel51.com/docs/fiftyone/tutorials/image_embeddings.html), is that each embedding is expected to be a 1D vector.
The reason it returns a 3D tensor is because it depends on the backbone used. The default there is `resnet101`, which returns a `2048x7x7` shape tensor. Others like inception return a flat 1D tensor, i.e. length-X.
### To Reproduce
Steps to reproduce the behavior:
Run the [README example](https://github.com/PyTorchLightning/lightning-flash#example-1-image-embedding), but remove the `embedding_dim` parameter. See below for example.
Note: as-is, this will error on `print(embeddings.shape)`, regardless of configuration, since that is a list. But the question here is around the logic for the ImageEmbedder.
#### Code sample
```python
from flash.core.data.utils import download_data
from flash.image import ImageEmbedder
# 1. Download the data
download_data("https://pl-flash-data.s3.amazonaws.com/hymenoptera_data.zip", "data/")
# 2. Create an ImageEmbedder with resnet50 trained on imagenet.
embedder = ImageEmbedder(backbone="resnet50")
# 3. Generate an embedding from an image path.
embeddings = embedder.predict("data/hymenoptera_data/predict/153783656_85f9c3ac70.jpg")
# 4. Print embeddings shape
print(embeddings.shape)
```
### Expected behavior
Expect to see a 100352x1 shape tensor as the output, instead of 2048x7x7.
### Environment
- PyTorch Version (e.g., 1.0): 1.9
- OS (e.g., Linux): Linux
- How you installed PyTorch (`conda`, `pip`, source): pip
- Build command you used (if compiling from source): N/A
- Python version: 3.8.6
- CUDA/cuDNN version: N/A
- GPU models and configuration: N/A
- Any other relevant information: N/A
### Additional context
I believe the question is around what the logic should be here:
https://github.com/PyTorchLightning/lightning-flash/blob/075de3a46d74d9fc0e769401063fede1f12d0518/flash/image/embedding/model.py#L85-L92
If `embedding_dim` is None, then the head is `nn.Identity()`. **If we desire a flat 1D embedding, then the question is: should `nn.Identity()` change to `nn.Flatten()`?**
It could be argued that the user should be left to flatten the output on their own afterwards, but per the contributing guidelines, I thought this would align with "[Force User Decisions To Best Practices](https://github.com/PyTorchLightning/lightning-flash/blob/ddd942d3dfe3884a97a855446410166c3c9f16d9/.github/CONTRIBUTING.md#force-user-decisions-to-best-practices)"
Let me know your thoughts. If that makes sense, then I can update the code, run some tests, and update docs in a PR.
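To make the shape question concrete, the following plain-PyTorch sketch (not the actual ImageEmbedder code) shows how swapping `nn.Identity()` for `nn.Flatten()` would turn a `2048x7x7` feature map into a flat vector:

```python
import torch
from torch import nn

features = torch.randn(1, 2048, 7, 7)  # resnet101-style feature map for one image

identity_head = nn.Identity()
flatten_head = nn.Flatten()

print(identity_head(features).shape)  # torch.Size([1, 2048, 7, 7])
print(flatten_head(features).shape)   # torch.Size([1, 100352]), i.e. 2048 * 7 * 7
```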
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `flash_examples/integrations/fiftyone/image_embedding.py`
Content:
```
1 # Copyright The PyTorch Lightning team.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 import fiftyone as fo
15 import fiftyone.brain as fob
16 import numpy as np
17
18 from flash.core.data.utils import download_data
19 from flash.image import ImageEmbedder
20
21 # 1 Download data
22 download_data("https://pl-flash-data.s3.amazonaws.com/hymenoptera_data.zip")
23
24 # 2 Load data into FiftyOne
25 dataset = fo.Dataset.from_dir(
26 "data/hymenoptera_data/test/",
27 fo.types.ImageClassificationDirectoryTree,
28 )
29
30 # 3 Load model
31 embedder = ImageEmbedder(backbone="resnet101", embedding_dim=128)
32
33 # 4 Generate embeddings
34 filepaths = dataset.values("filepath")
35 embeddings = np.stack(embedder.predict(filepaths))
36
37 # 5 Visualize in FiftyOne App
38 results = fob.compute_visualization(dataset, embeddings=embeddings)
39 session = fo.launch_app(dataset)
40 plot = results.visualize(labels="ground_truth.label")
41 plot.show()
42
43 # Optional: block execution until App is closed
44 session.wait()
45
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/flash_examples/integrations/fiftyone/image_embedding.py b/flash_examples/integrations/fiftyone/image_embedding.py
--- a/flash_examples/integrations/fiftyone/image_embedding.py
+++ b/flash_examples/integrations/fiftyone/image_embedding.py
@@ -28,7 +28,7 @@
)
# 3 Load model
-embedder = ImageEmbedder(backbone="resnet101", embedding_dim=128)
+embedder = ImageEmbedder(backbone="resnet101")
# 4 Generate embeddings
filepaths = dataset.values("filepath")
| {"golden_diff": "diff --git a/flash_examples/integrations/fiftyone/image_embedding.py b/flash_examples/integrations/fiftyone/image_embedding.py\n--- a/flash_examples/integrations/fiftyone/image_embedding.py\n+++ b/flash_examples/integrations/fiftyone/image_embedding.py\n@@ -28,7 +28,7 @@\n )\n \n # 3 Load model\n-embedder = ImageEmbedder(backbone=\"resnet101\", embedding_dim=128)\n+embedder = ImageEmbedder(backbone=\"resnet101\")\n \n # 4 Generate embeddings\n filepaths = dataset.values(\"filepath\")\n", "issue": "ImageEmbedder default behavior is not a flattened output\n## \ud83d\udc1b Bug\r\n\r\nI discovered this issue while testing PR #655. If you run the [Image Embedding README example code](https://github.com/PyTorchLightning/lightning-flash#example-1-image-embedding), it returns a 3D tensor. \r\nMy understanding from the use of embeddings in general, and how they are used in [Fifty One](https://voxel51.com/docs/fiftyone/tutorials/image_embeddings.html) is they expect the embeddings to be 1D (for each embedding). \r\n\r\nThe reason it returns a 3D tensor is because it depends on the backbone used. The default there is `resnet101`, which returns a `2048x7x7` shape tensor. Others like inception return a flat 1D tensor, i.e. length-X.\r\n\r\n### To Reproduce\r\n\r\nSteps to reproduce the behavior:\r\n\r\nRun the [README example](https://github.com/PyTorchLightning/lightning-flash#example-1-image-embedding), but remove the `embedding_dim` parameter. See below for example.\r\n\r\nNote: as-is, this will error on `print(embeddings.shape)`, regardless of configuration, since that is a list. But the question here is around the logic for the ImageEmbedder. \r\n\r\n\r\n#### Code sample\r\n```python\r\nfrom flash.core.data.utils import download_data\r\nfrom flash.image import ImageEmbedder\r\n\r\n# 1. Download the data\r\ndownload_data(\"https://pl-flash-data.s3.amazonaws.com/hymenoptera_data.zip\", \"data/\")\r\n\r\n# 2. Create an ImageEmbedder with resnet50 trained on imagenet.\r\nembedder = ImageEmbedder(backbone=\"resnet50\")\r\n\r\n# 3. Generate an embedding from an image path.\r\nembeddings = embedder.predict(\"data/hymenoptera_data/predict/153783656_85f9c3ac70.jpg\")\r\n\r\n# 4. Print embeddings shape\r\nprint(embeddings.shape)\r\n```\r\n\r\n### Expected behavior\r\n\r\nExpect to see a 100352x1 shape tensor as the output, instead of 2048x7x7. \r\n\r\n### Environment\r\n\r\n - PyTorch Version (e.g., 1.0): 1.9\r\n - OS (e.g., Linux): Linux\r\n - How you installed PyTorch (`conda`, `pip`, source): pip\r\n - Build command you used (if compiling from source): N/A\r\n - Python version: 3.8.6\r\n - CUDA/cuDNN version: N/A\r\n - GPU models and configuration: N/A\r\n - Any other relevant information: N/A\r\n\r\n### Additional context\r\n\r\nI believe the question is around what the logic should be here:\r\nhttps://github.com/PyTorchLightning/lightning-flash/blob/075de3a46d74d9fc0e769401063fede1f12d0518/flash/image/embedding/model.py#L85-L92\r\n\r\nIf `embedding_dim` is None, then the head is `nn.Identity()`. 
**If we desire a flat 1D embedding, then the question is: should `nn.Identity()` change to `nn.Flatten()`?**\r\n\r\nIt could be argued that the user should be left to flatten after on their own, but per the contributing guidelines, I thought this would align with \"[Force User Decisions To Best Practices](https://github.com/PyTorchLightning/lightning-flash/blob/ddd942d3dfe3884a97a855446410166c3c9f16d9/.github/CONTRIBUTING.md#force-user-decisions-to-best-practices)\"\r\n\r\nLet me know your thoughts. If that makes sense, then I can update the code, run some tests, and update docs in a PR. \r\n\n", "before_files": [{"content": "# Copyright The PyTorch Lightning team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nimport fiftyone as fo\nimport fiftyone.brain as fob\nimport numpy as np\n\nfrom flash.core.data.utils import download_data\nfrom flash.image import ImageEmbedder\n\n# 1 Download data\ndownload_data(\"https://pl-flash-data.s3.amazonaws.com/hymenoptera_data.zip\")\n\n# 2 Load data into FiftyOne\ndataset = fo.Dataset.from_dir(\n \"data/hymenoptera_data/test/\",\n fo.types.ImageClassificationDirectoryTree,\n)\n\n# 3 Load model\nembedder = ImageEmbedder(backbone=\"resnet101\", embedding_dim=128)\n\n# 4 Generate embeddings\nfilepaths = dataset.values(\"filepath\")\nembeddings = np.stack(embedder.predict(filepaths))\n\n# 5 Visualize in FiftyOne App\nresults = fob.compute_visualization(dataset, embeddings=embeddings)\nsession = fo.launch_app(dataset)\nplot = results.visualize(labels=\"ground_truth.label\")\nplot.show()\n\n# Optional: block execution until App is closed\nsession.wait()\n", "path": "flash_examples/integrations/fiftyone/image_embedding.py"}], "after_files": [{"content": "# Copyright The PyTorch Lightning team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nimport fiftyone as fo\nimport fiftyone.brain as fob\nimport numpy as np\n\nfrom flash.core.data.utils import download_data\nfrom flash.image import ImageEmbedder\n\n# 1 Download data\ndownload_data(\"https://pl-flash-data.s3.amazonaws.com/hymenoptera_data.zip\")\n\n# 2 Load data into FiftyOne\ndataset = fo.Dataset.from_dir(\n \"data/hymenoptera_data/test/\",\n fo.types.ImageClassificationDirectoryTree,\n)\n\n# 3 Load model\nembedder = ImageEmbedder(backbone=\"resnet101\")\n\n# 4 Generate embeddings\nfilepaths = dataset.values(\"filepath\")\nembeddings = np.stack(embedder.predict(filepaths))\n\n# 5 Visualize in FiftyOne App\nresults = fob.compute_visualization(dataset, embeddings=embeddings)\nsession = 
fo.launch_app(dataset)\nplot = results.visualize(labels=\"ground_truth.label\")\nplot.show()\n\n# Optional: block execution until App is closed\nsession.wait()\n", "path": "flash_examples/integrations/fiftyone/image_embedding.py"}]} | 1,537 | 134 |
gh_patches_debug_15994 | rasdani/github-patches | git_diff | falconry__falcon-1588 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
WebDAV methods not supported
Falcon defines its supported HTTP methods in `falcon/constants.py`: the "usual" `HTTP_METHODS` are supported and, in addition to that, `WEBDAV_METHODS`. However, only the WebDAV versioning extension methods from RFC 3253 are supported, not the "ordinary" WebDAV ones (i.e. from RFCs 2518 & 4918) such as `COPY`, `LOCK`, `MKCOL` and `MOVE`.
Supporting only an extension, but not the core upon which that extension builds looks somewhat inconsistent.
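As a rough sketch of what closing that gap could look like in `falcon/constants.py` (the method selection below is drawn from RFC 4918 for illustration, not from Falcon's actual code):

```python
# RFC 3253 versioning methods plus the core WebDAV methods of RFCs 2518/4918.
WEBDAV_METHODS = [
    'CHECKIN',
    'CHECKOUT',
    'COPY',
    'LOCK',
    'MKCOL',
    'MOVE',
    'PROPFIND',
    'PROPPATCH',
    'REPORT',
    'UNCHECKIN',
    'UNLOCK',
    'UPDATE',
    'VERSION-CONTROL',
]
```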
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `falcon/constants.py`
Content:
```
1 import os
2
3 # RFC 7231, 5789 methods
4 HTTP_METHODS = [
5 'CONNECT',
6 'DELETE',
7 'GET',
8 'HEAD',
9 'OPTIONS',
10 'PATCH',
11 'POST',
12 'PUT',
13 'TRACE',
14 ]
15
16 # RFC 3253 methods
17 WEBDAV_METHODS = [
18 'CHECKIN',
19 'CHECKOUT',
20 'REPORT',
21 'UNCHECKIN',
22 'UPDATE',
23 'VERSION-CONTROL',
24 ]
25
26 # if FALCON_CUSTOM_HTTP_METHODS is defined, treat it as a comma-
27 # delimited string of additional supported methods in this env.
28 FALCON_CUSTOM_HTTP_METHODS = [
29 method.strip().upper()
30 for method in os.environ.get('FALCON_CUSTOM_HTTP_METHODS', '').split(',')
31 if method.strip() != ''
32 ]
33
34 COMBINED_METHODS = HTTP_METHODS + WEBDAV_METHODS + FALCON_CUSTOM_HTTP_METHODS
35
36 # NOTE(kgriffs): According to RFC 7159, most JSON parsers assume
37 # UTF-8 and so it is the recommended default charset going forward,
38 # and indeed, other charsets should not be specified to ensure
39 # maximum interoperability.
40 MEDIA_JSON = 'application/json'
41
42 # NOTE(kgriffs): An internet media type for MessagePack has not
43 # yet been registered. 'application/x-msgpack' is commonly used,
44 # but the use of the 'x-' prefix is discouraged by RFC 6838.
45 MEDIA_MSGPACK = 'application/msgpack'
46
47 # NOTE(kgriffs): An internet media type for YAML has not been
48 # registered. RoR uses 'application/x-yaml', but since use of
49 # 'x-' is discouraged by RFC 6838, we don't use it in Falcon.
50 #
51 # The YAML specification requires that parsers deduce the character
52 # encoding by examining the first few bytes of the document itself.
53 # Therefore, it does not make sense to include the charset in the
54 # media type string.
55 MEDIA_YAML = 'application/yaml'
56
57 # NOTE(kgriffs): According to RFC 7303, when the charset is
58 # omitted, preference is given to the encoding specified in the
59 # document itself (either via a BOM, or via the XML declaration). If
60 # the document does not explicitly specify the encoding, UTF-8 is
61 # assumed. We do not specify the charset here, because many parsers
62 # ignore it anyway and just use what is specified in the document,
63 # contrary to the RFCs.
64 MEDIA_XML = 'application/xml'
65
66
67 # NOTE(kgriffs): RFC 4329 recommends application/* over text/.
68 # futhermore, parsers are required to respect the Unicode
69 # encoding signature, if present in the document, and to default
70 # to UTF-8 when not present. Note, however, that implementations
71 # are not required to support anything besides UTF-8, so it is
72 # unclear how much utility an encoding signature (or the charset
73 # parameter for that matter) has in practice.
74 MEDIA_JS = 'application/javascript'
75
76 # NOTE(kgriffs): According to RFC 6838, most text media types should
77 # include the charset parameter.
78 MEDIA_HTML = 'text/html; charset=utf-8'
79 MEDIA_TEXT = 'text/plain; charset=utf-8'
80
81 MEDIA_JPEG = 'image/jpeg'
82 MEDIA_PNG = 'image/png'
83 MEDIA_GIF = 'image/gif'
84
85 DEFAULT_MEDIA_TYPE = MEDIA_JSON
86
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/falcon/constants.py b/falcon/constants.py
--- a/falcon/constants.py
+++ b/falcon/constants.py
@@ -13,12 +13,19 @@
'TRACE',
]
-# RFC 3253 methods
+# RFC 2518 and 4918 methods
WEBDAV_METHODS = [
'CHECKIN',
'CHECKOUT',
+ 'COPY',
+ 'LOCK',
+ 'MKCOL',
+ 'MOVE',
+ 'PROPFIND',
+ 'PROPPATCH',
'REPORT',
'UNCHECKIN',
+ 'UNLOCK'
'UPDATE',
'VERSION-CONTROL',
]
@@ -63,7 +70,6 @@
# contrary to the RFCs.
MEDIA_XML = 'application/xml'
-
# NOTE(kgriffs): RFC 4329 recommends application/* over text/.
# futhermore, parsers are required to respect the Unicode
# encoding signature, if present in the document, and to default
| {"golden_diff": "diff --git a/falcon/constants.py b/falcon/constants.py\n--- a/falcon/constants.py\n+++ b/falcon/constants.py\n@@ -13,12 +13,19 @@\n 'TRACE',\n ]\n \n-# RFC 3253 methods\n+# RFC 2518 and 4918 methods\n WEBDAV_METHODS = [\n 'CHECKIN',\n 'CHECKOUT',\n+ 'COPY',\n+ 'LOCK',\n+ 'MKCOL',\n+ 'MOVE',\n+ 'PROPFIND',\n+ 'PROPPATCH',\n 'REPORT',\n 'UNCHECKIN',\n+ 'UNLOCK'\n 'UPDATE',\n 'VERSION-CONTROL',\n ]\n@@ -63,7 +70,6 @@\n # contrary to the RFCs.\n MEDIA_XML = 'application/xml'\n \n-\n # NOTE(kgriffs): RFC 4329 recommends application/* over text/.\n # futhermore, parsers are required to respect the Unicode\n # encoding signature, if present in the document, and to default\n", "issue": "WebDAV methods not supported\nFalcon defines supported HTTP methods in `falcon/constants.py`: supported are \"usual\" `HTTP_METHODS` and, in addition to that, `WEBDAV_METHODS`. However, only WebDAV versioning extension methods from RFC 3253 are supported, but not the \"ordinary\" WebDAV ones (i.e. from RFCs 2518 & 4918) like `COPY`, `LOCK`, `MKCOL`, `MOVE` etc.\r\n\r\nSupporting only an extension, but not the core upon which that extension builds looks somewhat inconsistent.\n", "before_files": [{"content": "import os\n\n# RFC 7231, 5789 methods\nHTTP_METHODS = [\n 'CONNECT',\n 'DELETE',\n 'GET',\n 'HEAD',\n 'OPTIONS',\n 'PATCH',\n 'POST',\n 'PUT',\n 'TRACE',\n]\n\n# RFC 3253 methods\nWEBDAV_METHODS = [\n 'CHECKIN',\n 'CHECKOUT',\n 'REPORT',\n 'UNCHECKIN',\n 'UPDATE',\n 'VERSION-CONTROL',\n]\n\n# if FALCON_CUSTOM_HTTP_METHODS is defined, treat it as a comma-\n# delimited string of additional supported methods in this env.\nFALCON_CUSTOM_HTTP_METHODS = [\n method.strip().upper()\n for method in os.environ.get('FALCON_CUSTOM_HTTP_METHODS', '').split(',')\n if method.strip() != ''\n]\n\nCOMBINED_METHODS = HTTP_METHODS + WEBDAV_METHODS + FALCON_CUSTOM_HTTP_METHODS\n\n# NOTE(kgriffs): According to RFC 7159, most JSON parsers assume\n# UTF-8 and so it is the recommended default charset going forward,\n# and indeed, other charsets should not be specified to ensure\n# maximum interoperability.\nMEDIA_JSON = 'application/json'\n\n# NOTE(kgriffs): An internet media type for MessagePack has not\n# yet been registered. 'application/x-msgpack' is commonly used,\n# but the use of the 'x-' prefix is discouraged by RFC 6838.\nMEDIA_MSGPACK = 'application/msgpack'\n\n# NOTE(kgriffs): An internet media type for YAML has not been\n# registered. RoR uses 'application/x-yaml', but since use of\n# 'x-' is discouraged by RFC 6838, we don't use it in Falcon.\n#\n# The YAML specification requires that parsers deduce the character\n# encoding by examining the first few bytes of the document itself.\n# Therefore, it does not make sense to include the charset in the\n# media type string.\nMEDIA_YAML = 'application/yaml'\n\n# NOTE(kgriffs): According to RFC 7303, when the charset is\n# omitted, preference is given to the encoding specified in the\n# document itself (either via a BOM, or via the XML declaration). If\n# the document does not explicitly specify the encoding, UTF-8 is\n# assumed. We do not specify the charset here, because many parsers\n# ignore it anyway and just use what is specified in the document,\n# contrary to the RFCs.\nMEDIA_XML = 'application/xml'\n\n\n# NOTE(kgriffs): RFC 4329 recommends application/* over text/.\n# futhermore, parsers are required to respect the Unicode\n# encoding signature, if present in the document, and to default\n# to UTF-8 when not present. 
Note, however, that implementations\n# are not required to support anything besides UTF-8, so it is\n# unclear how much utility an encoding signature (or the charset\n# parameter for that matter) has in practice.\nMEDIA_JS = 'application/javascript'\n\n# NOTE(kgriffs): According to RFC 6838, most text media types should\n# include the charset parameter.\nMEDIA_HTML = 'text/html; charset=utf-8'\nMEDIA_TEXT = 'text/plain; charset=utf-8'\n\nMEDIA_JPEG = 'image/jpeg'\nMEDIA_PNG = 'image/png'\nMEDIA_GIF = 'image/gif'\n\nDEFAULT_MEDIA_TYPE = MEDIA_JSON\n", "path": "falcon/constants.py"}], "after_files": [{"content": "import os\n\n# RFC 7231, 5789 methods\nHTTP_METHODS = [\n 'CONNECT',\n 'DELETE',\n 'GET',\n 'HEAD',\n 'OPTIONS',\n 'PATCH',\n 'POST',\n 'PUT',\n 'TRACE',\n]\n\n# RFC 2518 and 4918 methods\nWEBDAV_METHODS = [\n 'CHECKIN',\n 'CHECKOUT',\n 'COPY',\n 'LOCK',\n 'MKCOL',\n 'MOVE',\n 'PROPFIND',\n 'PROPPATCH',\n 'REPORT',\n 'UNCHECKIN',\n 'UNLOCK'\n 'UPDATE',\n 'VERSION-CONTROL',\n]\n\n# if FALCON_CUSTOM_HTTP_METHODS is defined, treat it as a comma-\n# delimited string of additional supported methods in this env.\nFALCON_CUSTOM_HTTP_METHODS = [\n method.strip().upper()\n for method in os.environ.get('FALCON_CUSTOM_HTTP_METHODS', '').split(',')\n if method.strip() != ''\n]\n\nCOMBINED_METHODS = HTTP_METHODS + WEBDAV_METHODS + FALCON_CUSTOM_HTTP_METHODS\n\n# NOTE(kgriffs): According to RFC 7159, most JSON parsers assume\n# UTF-8 and so it is the recommended default charset going forward,\n# and indeed, other charsets should not be specified to ensure\n# maximum interoperability.\nMEDIA_JSON = 'application/json'\n\n# NOTE(kgriffs): An internet media type for MessagePack has not\n# yet been registered. 'application/x-msgpack' is commonly used,\n# but the use of the 'x-' prefix is discouraged by RFC 6838.\nMEDIA_MSGPACK = 'application/msgpack'\n\n# NOTE(kgriffs): An internet media type for YAML has not been\n# registered. RoR uses 'application/x-yaml', but since use of\n# 'x-' is discouraged by RFC 6838, we don't use it in Falcon.\n#\n# The YAML specification requires that parsers deduce the character\n# encoding by examining the first few bytes of the document itself.\n# Therefore, it does not make sense to include the charset in the\n# media type string.\nMEDIA_YAML = 'application/yaml'\n\n# NOTE(kgriffs): According to RFC 7303, when the charset is\n# omitted, preference is given to the encoding specified in the\n# document itself (either via a BOM, or via the XML declaration). If\n# the document does not explicitly specify the encoding, UTF-8 is\n# assumed. We do not specify the charset here, because many parsers\n# ignore it anyway and just use what is specified in the document,\n# contrary to the RFCs.\nMEDIA_XML = 'application/xml'\n\n# NOTE(kgriffs): RFC 4329 recommends application/* over text/.\n# futhermore, parsers are required to respect the Unicode\n# encoding signature, if present in the document, and to default\n# to UTF-8 when not present. 
Note, however, that implementations\n# are not required to support anything besides UTF-8, so it is\n# unclear how much utility an encoding signature (or the charset\n# parameter for that matter) has in practice.\nMEDIA_JS = 'application/javascript'\n\n# NOTE(kgriffs): According to RFC 6838, most text media types should\n# include the charset parameter.\nMEDIA_HTML = 'text/html; charset=utf-8'\nMEDIA_TEXT = 'text/plain; charset=utf-8'\n\nMEDIA_JPEG = 'image/jpeg'\nMEDIA_PNG = 'image/png'\nMEDIA_GIF = 'image/gif'\n\nDEFAULT_MEDIA_TYPE = MEDIA_JSON\n", "path": "falcon/constants.py"}]} | 1,294 | 226 |
gh_patches_debug_1925 | rasdani/github-patches | git_diff | mathesar-foundation__mathesar-673 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
IndexError when deleting a column
## Description
<!-- A clear and concise description of what the bug is. -->
An IndexError occurs when deleting a column through the API. Most of the time the error occurs when deleting the first or second column of a table. Deleting the last columns in a table does not seem to produce this error.
## Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
- A column should be deleted
## To Reproduce
<!-- How can we recreate this bug? Please try to provide a Minimal, Complete, and Verifiable (http://stackoverflow.com/help/mcve) example if code-related. -->
1. Delete the first or second column of a table via API. Example: api/v0/tables/1/columns/1/
2. Delete the first or second column of another table via API. Example: api/v0/tables/2/columns/0/
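For example, a delete request of the following shape (local host and ids assumed from the examples above) triggers the error:

```python
import requests

# Hypothetical local instance; table and column ids follow the examples above.
response = requests.delete("http://localhost:8000/api/v0/tables/1/columns/1/")
print(response.status_code)  # the delete should succeed, but the server raises an IndexError
```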
## Screenshots


## Environment
- OS: (_eg._ macOS 10.14.6; Fedora 32)
- Browser: (_eg._ Safari; Firefox)
- Browser Version: (_eg._ 13; 73)
- Other info:
## Additional context
<!-- Add any other context about the problem or screenshots here. -->
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `db/columns/operations/select.py`
Content:
```
1 import warnings
2
3 from sqlalchemy import Table, MetaData, and_, select, text, func
4
5 from db.tables.operations.select import reflect_table_from_oid
6 from db.utils import execute_statement
7
8
9 def get_column_index_from_name(table_oid, column_name, engine, connection_to_use=None):
10 with warnings.catch_warnings():
11 warnings.filterwarnings("ignore", message="Did not recognize type")
12 pg_attribute = Table("pg_attribute", MetaData(), autoload_with=engine)
13 sel = select(pg_attribute.c.attnum).where(
14 and_(
15 pg_attribute.c.attrelid == table_oid,
16 pg_attribute.c.attname == column_name
17 )
18 )
19 result = execute_statement(engine, sel, connection_to_use).fetchone()[0]
20
21 # Account for dropped columns that don't appear in the SQLAlchemy tables
22 sel = (
23 select(func.count())
24 .where(and_(
25 pg_attribute.c.attisdropped.is_(True),
26 pg_attribute.c.attnum < result,
27 ))
28 )
29 dropped_count = execute_statement(engine, sel, connection_to_use).fetchone()[0]
30
31 return result - 1 - dropped_count
32
33
34 def get_column_default(table_oid, column_index, engine, connection_to_use=None):
35 table = reflect_table_from_oid(table_oid, engine, connection_to_use)
36 column = table.columns[column_index]
37 if column.server_default is None:
38 return None
39
40 metadata = MetaData()
41 with warnings.catch_warnings():
42 warnings.filterwarnings("ignore", message="Did not recognize type")
43 pg_attribute = Table("pg_attribute", metadata, autoload_with=engine)
44 pg_attrdef = Table("pg_attrdef", metadata, autoload_with=engine)
45
46 query = (
47 select(pg_attrdef.c.adbin)
48 .select_from(
49 pg_attrdef
50 .join(
51 pg_attribute,
52 and_(
53 pg_attribute.c.attnum == pg_attrdef.c.adnum,
54 pg_attribute.c.attrelid == pg_attrdef.c.adrelid
55 )
56 )
57 )
58 .where(and_(
59 pg_attribute.c.attrelid == table_oid,
60 pg_attribute.c.attname == column.name,
61 pg_attribute.c.attnum >= 1,
62 ))
63 )
64
65 result = execute_statement(engine, query, connection_to_use).first()[0]
66
67 # Here, we get the 'adbin' value for the current column, stored in the attrdef
68 # system table. The prefix of this value tells us whether the default is static
69 # ('{CONSTANT') or generated ('{FUNCEXPR'). We do not return generated defaults.
70 if result.startswith("{FUNCEXPR"):
71 return None
72
73 default_textual_sql = column.server_default.arg.text
74 # Defaults are stored as text with SQL casts appended
75 # Ex: "'test default string'::character varying" or "'2020-01-01'::date"
76 # Here, we execute the cast to get the proper python value
77 return execute_statement(engine, select(text(default_textual_sql)), connection_to_use).first()[0]
78
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/db/columns/operations/select.py b/db/columns/operations/select.py
--- a/db/columns/operations/select.py
+++ b/db/columns/operations/select.py
@@ -22,6 +22,7 @@
sel = (
select(func.count())
.where(and_(
+ pg_attribute.c.attrelid == table_oid,
pg_attribute.c.attisdropped.is_(True),
pg_attribute.c.attnum < result,
))
| {"golden_diff": "diff --git a/db/columns/operations/select.py b/db/columns/operations/select.py\n--- a/db/columns/operations/select.py\n+++ b/db/columns/operations/select.py\n@@ -22,6 +22,7 @@\n sel = (\n select(func.count())\n .where(and_(\n+ pg_attribute.c.attrelid == table_oid,\n pg_attribute.c.attisdropped.is_(True),\n pg_attribute.c.attnum < result,\n ))\n", "issue": "IndexError when deleting a column\n## Description\r\n<!-- A clear and concise description of what the bug is. -->\r\nAn indexError occurs when deleting a column through the API. Most of the time the error occurs when deleting the first or second column of a table. Deleting the last columns in a table does not seem to produce this error. \r\n\r\n## Expected behavior\r\n<!-- A clear and concise description of what you expected to happen. -->\r\n\r\n- A column should be deleted\r\n\r\n## To Reproduce\r\n<!-- How can we recreate this bug? Please try to provide a Minimal, Complete, and Verifiable (http://stackoverflow.com/help/mcve) example if code-related. -->\r\n\r\n1. Delete the first or second column of a table via API. Example: api/v0/tables/1/columns/1/\r\n2. Delete the first or second column of another table via API. Example: api/v0/tables/2/columns/0/\r\n\r\n## Screenshots\r\n\r\n\r\n\r\n\r\n## Environment\r\n - OS: (_eg._ macOS 10.14.6; Fedora 32)\r\n - Browser: (_eg._ Safari; Firefox)\r\n - Browser Version: (_eg._ 13; 73)\r\n - Other info:\r\n\r\n## Additional context\r\n<!-- Add any other context about the problem or screenshots here. -->\r\n\n", "before_files": [{"content": "import warnings\n\nfrom sqlalchemy import Table, MetaData, and_, select, text, func\n\nfrom db.tables.operations.select import reflect_table_from_oid\nfrom db.utils import execute_statement\n\n\ndef get_column_index_from_name(table_oid, column_name, engine, connection_to_use=None):\n with warnings.catch_warnings():\n warnings.filterwarnings(\"ignore\", message=\"Did not recognize type\")\n pg_attribute = Table(\"pg_attribute\", MetaData(), autoload_with=engine)\n sel = select(pg_attribute.c.attnum).where(\n and_(\n pg_attribute.c.attrelid == table_oid,\n pg_attribute.c.attname == column_name\n )\n )\n result = execute_statement(engine, sel, connection_to_use).fetchone()[0]\n\n # Account for dropped columns that don't appear in the SQLAlchemy tables\n sel = (\n select(func.count())\n .where(and_(\n pg_attribute.c.attisdropped.is_(True),\n pg_attribute.c.attnum < result,\n ))\n )\n dropped_count = execute_statement(engine, sel, connection_to_use).fetchone()[0]\n\n return result - 1 - dropped_count\n\n\ndef get_column_default(table_oid, column_index, engine, connection_to_use=None):\n table = reflect_table_from_oid(table_oid, engine, connection_to_use)\n column = table.columns[column_index]\n if column.server_default is None:\n return None\n\n metadata = MetaData()\n with warnings.catch_warnings():\n warnings.filterwarnings(\"ignore\", message=\"Did not recognize type\")\n pg_attribute = Table(\"pg_attribute\", metadata, autoload_with=engine)\n pg_attrdef = Table(\"pg_attrdef\", metadata, autoload_with=engine)\n\n query = (\n select(pg_attrdef.c.adbin)\n .select_from(\n pg_attrdef\n .join(\n pg_attribute,\n and_(\n pg_attribute.c.attnum == pg_attrdef.c.adnum,\n pg_attribute.c.attrelid == pg_attrdef.c.adrelid\n )\n )\n )\n .where(and_(\n pg_attribute.c.attrelid == table_oid,\n pg_attribute.c.attname == column.name,\n pg_attribute.c.attnum >= 1,\n ))\n )\n\n result = execute_statement(engine, query, connection_to_use).first()[0]\n\n # Here, we 
get the 'adbin' value for the current column, stored in the attrdef\n # system table. The prefix of this value tells us whether the default is static\n # ('{CONSTANT') or generated ('{FUNCEXPR'). We do not return generated defaults.\n if result.startswith(\"{FUNCEXPR\"):\n return None\n\n default_textual_sql = column.server_default.arg.text\n # Defaults are stored as text with SQL casts appended\n # Ex: \"'test default string'::character varying\" or \"'2020-01-01'::date\"\n # Here, we execute the cast to get the proper python value\n return execute_statement(engine, select(text(default_textual_sql)), connection_to_use).first()[0]\n", "path": "db/columns/operations/select.py"}], "after_files": [{"content": "import warnings\n\nfrom sqlalchemy import Table, MetaData, and_, select, text, func\n\nfrom db.tables.operations.select import reflect_table_from_oid\nfrom db.utils import execute_statement\n\n\ndef get_column_index_from_name(table_oid, column_name, engine, connection_to_use=None):\n with warnings.catch_warnings():\n warnings.filterwarnings(\"ignore\", message=\"Did not recognize type\")\n pg_attribute = Table(\"pg_attribute\", MetaData(), autoload_with=engine)\n sel = select(pg_attribute.c.attnum).where(\n and_(\n pg_attribute.c.attrelid == table_oid,\n pg_attribute.c.attname == column_name\n )\n )\n result = execute_statement(engine, sel, connection_to_use).fetchone()[0]\n\n # Account for dropped columns that don't appear in the SQLAlchemy tables\n sel = (\n select(func.count())\n .where(and_(\n pg_attribute.c.attrelid == table_oid,\n pg_attribute.c.attisdropped.is_(True),\n pg_attribute.c.attnum < result,\n ))\n )\n dropped_count = execute_statement(engine, sel, connection_to_use).fetchone()[0]\n\n return result - 1 - dropped_count\n\n\ndef get_column_default(table_oid, column_index, engine, connection_to_use=None):\n table = reflect_table_from_oid(table_oid, engine, connection_to_use)\n column = table.columns[column_index]\n if column.server_default is None:\n return None\n\n metadata = MetaData()\n with warnings.catch_warnings():\n warnings.filterwarnings(\"ignore\", message=\"Did not recognize type\")\n pg_attribute = Table(\"pg_attribute\", metadata, autoload_with=engine)\n pg_attrdef = Table(\"pg_attrdef\", metadata, autoload_with=engine)\n\n query = (\n select(pg_attrdef.c.adbin)\n .select_from(\n pg_attrdef\n .join(\n pg_attribute,\n and_(\n pg_attribute.c.attnum == pg_attrdef.c.adnum,\n pg_attribute.c.attrelid == pg_attrdef.c.adrelid\n )\n )\n )\n .where(and_(\n pg_attribute.c.attrelid == table_oid,\n pg_attribute.c.attname == column.name,\n pg_attribute.c.attnum >= 1,\n ))\n )\n\n result = execute_statement(engine, query, connection_to_use).first()[0]\n\n # Here, we get the 'adbin' value for the current column, stored in the attrdef\n # system table. The prefix of this value tells us whether the default is static\n # ('{CONSTANT') or generated ('{FUNCEXPR'). We do not return generated defaults.\n if result.startswith(\"{FUNCEXPR\"):\n return None\n\n default_textual_sql = column.server_default.arg.text\n # Defaults are stored as text with SQL casts appended\n # Ex: \"'test default string'::character varying\" or \"'2020-01-01'::date\"\n # Here, we execute the cast to get the proper python value\n return execute_statement(engine, select(text(default_textual_sql)), connection_to_use).first()[0]\n", "path": "db/columns/operations/select.py"}]} | 1,470 | 103 |
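A minimal sketch of the corrected dropped-column count from the diff above, assuming `pg_attribute`, `table_oid` and `result` are bound exactly as in `get_column_index_from_name`:

```
# Without the attrelid predicate the count also included dropped columns from
# every other table in the database, so the computed index drifted and later
# column lookups raised IndexError; restricting the count to this table's OID
# is the one-line fix shown in the golden diff.
sel = (
    select(func.count())
    .where(and_(
        pg_attribute.c.attrelid == table_oid,    # added by the fix
        pg_attribute.c.attisdropped.is_(True),
        pg_attribute.c.attnum < result,
    ))
)
dropped_count = execute_statement(engine, sel, connection_to_use).fetchone()[0]
```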
gh_patches_debug_59246 | rasdani/github-patches | git_diff | projectmesa__mesa-451 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Datacollector fix
Simplify the `DataCollector` API by allowing the user-provided model- and agent-level reporters to be the names of attributes, in addition to methods. e.g. instead of needing to write
```
agent_reporters={"Wealth": lambda a: a.wealth}
```
you can write
```
agent_reporters={"Wealth":"wealth"}
```
This PR implements this feature, and updates the tests and documentation accordingly.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mesa/__init__.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 """
3 Mesa Agent-Based Modeling Framework
4
5 Core Objects: Model, and Agent.
6
7 """
8 import datetime
9
10 from .model import Model
11 from .agent import Agent
12
13
14 __all__ = ["Model", "Agent"]
15
16 __title__ = 'mesa'
17 __version__ = '0.8.2'
18 __license__ = 'Apache 2.0'
19 __copyright__ = 'Copyright %s Project Mesa Team' % datetime.date.today().year
20
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mesa/__init__.py b/mesa/__init__.py
--- a/mesa/__init__.py
+++ b/mesa/__init__.py
@@ -14,6 +14,6 @@
__all__ = ["Model", "Agent"]
__title__ = 'mesa'
-__version__ = '0.8.2'
+__version__ = '0.8.3'
__license__ = 'Apache 2.0'
__copyright__ = 'Copyright %s Project Mesa Team' % datetime.date.today().year
| {"golden_diff": "diff --git a/mesa/__init__.py b/mesa/__init__.py\n--- a/mesa/__init__.py\n+++ b/mesa/__init__.py\n@@ -14,6 +14,6 @@\n __all__ = [\"Model\", \"Agent\"]\n \n __title__ = 'mesa'\n-__version__ = '0.8.2'\n+__version__ = '0.8.3'\n __license__ = 'Apache 2.0'\n __copyright__ = 'Copyright %s Project Mesa Team' % datetime.date.today().year\n", "issue": "Datacollector fix\nSimplify the `DataCollector` API by allowing the user-provided model- and agent-level reporters to be the names of attributes, in addition to methods. e.g. instead of needing to write\r\n\r\n```\r\nagent_reporters={\"Wealth\": lambda a: a.wealth}\r\n```\r\n\r\nyou can write\r\n```\r\nagent_reporters={\"Wealth\":\"wealth\"}\r\n```\r\n\r\nThis PR implements this feature, and updates the tests and documentation accordingly.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\nMesa Agent-Based Modeling Framework\n\nCore Objects: Model, and Agent.\n\n\"\"\"\nimport datetime\n\nfrom .model import Model\nfrom .agent import Agent\n\n\n__all__ = [\"Model\", \"Agent\"]\n\n__title__ = 'mesa'\n__version__ = '0.8.2'\n__license__ = 'Apache 2.0'\n__copyright__ = 'Copyright %s Project Mesa Team' % datetime.date.today().year\n", "path": "mesa/__init__.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\nMesa Agent-Based Modeling Framework\n\nCore Objects: Model, and Agent.\n\n\"\"\"\nimport datetime\n\nfrom .model import Model\nfrom .agent import Agent\n\n\n__all__ = [\"Model\", \"Agent\"]\n\n__title__ = 'mesa'\n__version__ = '0.8.3'\n__license__ = 'Apache 2.0'\n__copyright__ = 'Copyright %s Project Mesa Team' % datetime.date.today().year\n", "path": "mesa/__init__.py"}]} | 490 | 121 |
gh_patches_debug_31990 | rasdani/github-patches | git_diff | cloudtools__troposphere-1811 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add AWS::Elasticsearch::Domain DomainEndpointOptions
Cloudformation now allows you to specify additional options for the domain endpoint, such as whether to require HTTPS for all traffic, with an Elasticseach Domain.
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-elasticsearch-domain-domainendpointoptions.html
Adding support for this would be super.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `troposphere/elasticsearch.py`
Content:
```
1 # Copyright (c) 2012-2015, Mark Peek <[email protected]>
2 # All rights reserved.
3 #
4 # See LICENSE file for full license.
5
6 from . import AWSProperty, AWSObject, Tags
7 from .compat import policytypes
8 from .validators import boolean, integer, integer_range, positive_integer
9
10 VALID_VOLUME_TYPES = ('standard', 'gp2', 'io1')
11
12
13 def validate_volume_type(volume_type):
14 """Validate VolumeType for ElasticsearchDomain"""
15 if volume_type not in VALID_VOLUME_TYPES:
16 raise ValueError("Elasticsearch Domain VolumeType must be one of: %s" %
17 ", ".join(VALID_VOLUME_TYPES))
18 return volume_type
19
20
21 class CognitoOptions(AWSProperty):
22 props = {
23 'Enabled': (boolean, False),
24 'IdentityPoolId': (basestring, False),
25 'RoleArn': (basestring, False),
26 'UserPoolId': (basestring, False),
27 }
28
29
30 class EBSOptions(AWSProperty):
31 props = {
32 'EBSEnabled': (boolean, False),
33 'Iops': (positive_integer, False),
34 'VolumeSize': (integer, False),
35 'VolumeType': (validate_volume_type, False)
36 }
37
38 def validate(self):
39 volume_type = self.properties.get('VolumeType')
40 iops = self.properties.get('Iops')
41 if volume_type == 'io1' and not iops:
42 raise ValueError("Must specify Iops if VolumeType is 'io1'.")
43
44
45 class ZoneAwarenessConfig(AWSProperty):
46 props = {
47 'AvailabilityZoneCount': (integer, False),
48 }
49
50
51 class ElasticsearchClusterConfig(AWSProperty):
52 props = {
53 'DedicatedMasterCount': (integer, False),
54 'DedicatedMasterEnabled': (boolean, False),
55 'DedicatedMasterType': (basestring, False),
56 'InstanceCount': (integer, False),
57 'InstanceType': (basestring, False),
58 'ZoneAwarenessConfig': (ZoneAwarenessConfig, False),
59 'ZoneAwarenessEnabled': (boolean, False)
60 }
61
62
63 class EncryptionAtRestOptions(AWSProperty):
64 props = {
65 'Enabled': (boolean, False),
66 'KmsKeyId': (basestring, False),
67 }
68
69
70 class NodeToNodeEncryptionOptions(AWSProperty):
71 props = {
72 'Enabled': (boolean, False),
73 }
74
75
76 class SnapshotOptions(AWSProperty):
77 props = {
78 'AutomatedSnapshotStartHour': (integer_range(0, 23), False)
79 }
80
81
82 class VPCOptions(AWSProperty):
83 props = {
84 'SecurityGroupIds': ([basestring], False),
85 'SubnetIds': ([basestring], False)
86 }
87
88
89 class MasterUserOptions(AWSProperty):
90 props = {
91 'MasterUserARN': (basestring, False),
92 'MasterUserName': (basestring, False),
93 'MasterUserPassword': (basestring, False),
94 }
95
96
97 class AdvancedSecurityOptionsInput(AWSProperty):
98 props = {
99 'Enabled': (boolean, False),
100 'InternalUserDatabaseEnabled': (boolean, False),
101 'MasterUserOptions': (MasterUserOptions, False),
102 }
103
104
105 class Domain(AWSObject):
106 resource_type = "AWS::Elasticsearch::Domain"
107
108 props = {
109 'AccessPolicies': (policytypes, False),
110 'AdvancedOptions': (dict, False),
111 'AdvancedSecurityOptions': (AdvancedSecurityOptionsInput, False),
112 'CognitoOptions': (CognitoOptions, False),
113 'DomainName': (basestring, False),
114 'EBSOptions': (EBSOptions, False),
115 'ElasticsearchClusterConfig': (ElasticsearchClusterConfig, False),
116 'ElasticsearchVersion': (basestring, False),
117 'EncryptionAtRestOptions': (EncryptionAtRestOptions, False),
118 'LogPublishingOptions': (dict, False),
119 'NodeToNodeEncryptionOptions': (NodeToNodeEncryptionOptions, False),
120 'SnapshotOptions': (SnapshotOptions, False),
121 'Tags': ((Tags, list), False),
122 'VPCOptions': (VPCOptions, False),
123 }
124
125
126 # Backward compatibility
127 ElasticsearchDomain = Domain
128
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/troposphere/elasticsearch.py b/troposphere/elasticsearch.py
--- a/troposphere/elasticsearch.py
+++ b/troposphere/elasticsearch.py
@@ -8,6 +8,10 @@
from .validators import boolean, integer, integer_range, positive_integer
VALID_VOLUME_TYPES = ('standard', 'gp2', 'io1')
+VALID_TLS_SECURITY_POLICIES = (
+ 'Policy-Min-TLS-1-0-2019-07',
+ 'Policy-Min-TLS-1-2-2019-07'
+ )
def validate_volume_type(volume_type):
@@ -18,6 +22,14 @@
return volume_type
+def validate_tls_security_policy(tls_security_policy):
+ """Validate TLS Security Policy for ElasticsearchDomain"""
+ if tls_security_policy not in VALID_TLS_SECURITY_POLICIES:
+ raise ValueError("Minimum TLS Security Policy must be one of: %s" %
+ ", ".join(VALID_TLS_SECURITY_POLICIES))
+ return tls_security_policy
+
+
class CognitoOptions(AWSProperty):
props = {
'Enabled': (boolean, False),
@@ -27,6 +39,13 @@
}
+class DomainEndpointOptions(AWSProperty):
+ props = {
+ 'EnforceHTTPS': (boolean, False),
+ 'TLSSecurityPolicy': (validate_tls_security_policy, False),
+ }
+
+
class EBSOptions(AWSProperty):
props = {
'EBSEnabled': (boolean, False),
@@ -111,6 +130,7 @@
'AdvancedSecurityOptions': (AdvancedSecurityOptionsInput, False),
'CognitoOptions': (CognitoOptions, False),
'DomainName': (basestring, False),
+ 'DomainEndpointOptions': (DomainEndpointOptions, False),
'EBSOptions': (EBSOptions, False),
'ElasticsearchClusterConfig': (ElasticsearchClusterConfig, False),
'ElasticsearchVersion': (basestring, False),
| {"golden_diff": "diff --git a/troposphere/elasticsearch.py b/troposphere/elasticsearch.py\n--- a/troposphere/elasticsearch.py\n+++ b/troposphere/elasticsearch.py\n@@ -8,6 +8,10 @@\n from .validators import boolean, integer, integer_range, positive_integer\n \n VALID_VOLUME_TYPES = ('standard', 'gp2', 'io1')\n+VALID_TLS_SECURITY_POLICIES = (\n+ 'Policy-Min-TLS-1-0-2019-07',\n+ 'Policy-Min-TLS-1-2-2019-07'\n+ )\n \n \n def validate_volume_type(volume_type):\n@@ -18,6 +22,14 @@\n return volume_type\n \n \n+def validate_tls_security_policy(tls_security_policy):\n+ \"\"\"Validate TLS Security Policy for ElasticsearchDomain\"\"\"\n+ if tls_security_policy not in VALID_TLS_SECURITY_POLICIES:\n+ raise ValueError(\"Minimum TLS Security Policy must be one of: %s\" %\n+ \", \".join(VALID_TLS_SECURITY_POLICIES))\n+ return tls_security_policy\n+\n+\n class CognitoOptions(AWSProperty):\n props = {\n 'Enabled': (boolean, False),\n@@ -27,6 +39,13 @@\n }\n \n \n+class DomainEndpointOptions(AWSProperty):\n+ props = {\n+ 'EnforceHTTPS': (boolean, False),\n+ 'TLSSecurityPolicy': (validate_tls_security_policy, False),\n+ }\n+\n+\n class EBSOptions(AWSProperty):\n props = {\n 'EBSEnabled': (boolean, False),\n@@ -111,6 +130,7 @@\n 'AdvancedSecurityOptions': (AdvancedSecurityOptionsInput, False),\n 'CognitoOptions': (CognitoOptions, False),\n 'DomainName': (basestring, False),\n+ 'DomainEndpointOptions': (DomainEndpointOptions, False),\n 'EBSOptions': (EBSOptions, False),\n 'ElasticsearchClusterConfig': (ElasticsearchClusterConfig, False),\n 'ElasticsearchVersion': (basestring, False),\n", "issue": "Add AWS::Elasticsearch::Domain DomainEndpointOptions \nCloudformation now allows you to specify additional options for the domain endpoint, such as whether to require HTTPS for all traffic, with an Elasticseach Domain. \r\n\r\nhttps://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-elasticsearch-domain-domainendpointoptions.html\r\n\r\nAdding support for this would be super. \n", "before_files": [{"content": "# Copyright (c) 2012-2015, Mark Peek <[email protected]>\n# All rights reserved.\n#\n# See LICENSE file for full license.\n\nfrom . 
import AWSProperty, AWSObject, Tags\nfrom .compat import policytypes\nfrom .validators import boolean, integer, integer_range, positive_integer\n\nVALID_VOLUME_TYPES = ('standard', 'gp2', 'io1')\n\n\ndef validate_volume_type(volume_type):\n \"\"\"Validate VolumeType for ElasticsearchDomain\"\"\"\n if volume_type not in VALID_VOLUME_TYPES:\n raise ValueError(\"Elasticsearch Domain VolumeType must be one of: %s\" %\n \", \".join(VALID_VOLUME_TYPES))\n return volume_type\n\n\nclass CognitoOptions(AWSProperty):\n props = {\n 'Enabled': (boolean, False),\n 'IdentityPoolId': (basestring, False),\n 'RoleArn': (basestring, False),\n 'UserPoolId': (basestring, False),\n }\n\n\nclass EBSOptions(AWSProperty):\n props = {\n 'EBSEnabled': (boolean, False),\n 'Iops': (positive_integer, False),\n 'VolumeSize': (integer, False),\n 'VolumeType': (validate_volume_type, False)\n }\n\n def validate(self):\n volume_type = self.properties.get('VolumeType')\n iops = self.properties.get('Iops')\n if volume_type == 'io1' and not iops:\n raise ValueError(\"Must specify Iops if VolumeType is 'io1'.\")\n\n\nclass ZoneAwarenessConfig(AWSProperty):\n props = {\n 'AvailabilityZoneCount': (integer, False),\n }\n\n\nclass ElasticsearchClusterConfig(AWSProperty):\n props = {\n 'DedicatedMasterCount': (integer, False),\n 'DedicatedMasterEnabled': (boolean, False),\n 'DedicatedMasterType': (basestring, False),\n 'InstanceCount': (integer, False),\n 'InstanceType': (basestring, False),\n 'ZoneAwarenessConfig': (ZoneAwarenessConfig, False),\n 'ZoneAwarenessEnabled': (boolean, False)\n }\n\n\nclass EncryptionAtRestOptions(AWSProperty):\n props = {\n 'Enabled': (boolean, False),\n 'KmsKeyId': (basestring, False),\n }\n\n\nclass NodeToNodeEncryptionOptions(AWSProperty):\n props = {\n 'Enabled': (boolean, False),\n }\n\n\nclass SnapshotOptions(AWSProperty):\n props = {\n 'AutomatedSnapshotStartHour': (integer_range(0, 23), False)\n }\n\n\nclass VPCOptions(AWSProperty):\n props = {\n 'SecurityGroupIds': ([basestring], False),\n 'SubnetIds': ([basestring], False)\n }\n\n\nclass MasterUserOptions(AWSProperty):\n props = {\n 'MasterUserARN': (basestring, False),\n 'MasterUserName': (basestring, False),\n 'MasterUserPassword': (basestring, False),\n }\n\n\nclass AdvancedSecurityOptionsInput(AWSProperty):\n props = {\n 'Enabled': (boolean, False),\n 'InternalUserDatabaseEnabled': (boolean, False),\n 'MasterUserOptions': (MasterUserOptions, False),\n }\n\n\nclass Domain(AWSObject):\n resource_type = \"AWS::Elasticsearch::Domain\"\n\n props = {\n 'AccessPolicies': (policytypes, False),\n 'AdvancedOptions': (dict, False),\n 'AdvancedSecurityOptions': (AdvancedSecurityOptionsInput, False),\n 'CognitoOptions': (CognitoOptions, False),\n 'DomainName': (basestring, False),\n 'EBSOptions': (EBSOptions, False),\n 'ElasticsearchClusterConfig': (ElasticsearchClusterConfig, False),\n 'ElasticsearchVersion': (basestring, False),\n 'EncryptionAtRestOptions': (EncryptionAtRestOptions, False),\n 'LogPublishingOptions': (dict, False),\n 'NodeToNodeEncryptionOptions': (NodeToNodeEncryptionOptions, False),\n 'SnapshotOptions': (SnapshotOptions, False),\n 'Tags': ((Tags, list), False),\n 'VPCOptions': (VPCOptions, False),\n }\n\n\n# Backward compatibility\nElasticsearchDomain = Domain\n", "path": "troposphere/elasticsearch.py"}], "after_files": [{"content": "# Copyright (c) 2012-2015, Mark Peek <[email protected]>\n# All rights reserved.\n#\n# See LICENSE file for full license.\n\nfrom . 
import AWSProperty, AWSObject, Tags\nfrom .compat import policytypes\nfrom .validators import boolean, integer, integer_range, positive_integer\n\nVALID_VOLUME_TYPES = ('standard', 'gp2', 'io1')\nVALID_TLS_SECURITY_POLICIES = (\n 'Policy-Min-TLS-1-0-2019-07',\n 'Policy-Min-TLS-1-2-2019-07'\n )\n\n\ndef validate_volume_type(volume_type):\n \"\"\"Validate VolumeType for ElasticsearchDomain\"\"\"\n if volume_type not in VALID_VOLUME_TYPES:\n raise ValueError(\"Elasticsearch Domain VolumeType must be one of: %s\" %\n \", \".join(VALID_VOLUME_TYPES))\n return volume_type\n\n\ndef validate_tls_security_policy(tls_security_policy):\n \"\"\"Validate TLS Security Policy for ElasticsearchDomain\"\"\"\n if tls_security_policy not in VALID_TLS_SECURITY_POLICIES:\n raise ValueError(\"Minimum TLS Security Policy must be one of: %s\" %\n \", \".join(VALID_TLS_SECURITY_POLICIES))\n return tls_security_policy\n\n\nclass CognitoOptions(AWSProperty):\n props = {\n 'Enabled': (boolean, False),\n 'IdentityPoolId': (basestring, False),\n 'RoleArn': (basestring, False),\n 'UserPoolId': (basestring, False),\n }\n\n\nclass DomainEndpointOptions(AWSProperty):\n props = {\n 'EnforceHTTPS': (boolean, False),\n 'TLSSecurityPolicy': (validate_tls_security_policy, False),\n }\n\n\nclass EBSOptions(AWSProperty):\n props = {\n 'EBSEnabled': (boolean, False),\n 'Iops': (positive_integer, False),\n 'VolumeSize': (integer, False),\n 'VolumeType': (validate_volume_type, False)\n }\n\n def validate(self):\n volume_type = self.properties.get('VolumeType')\n iops = self.properties.get('Iops')\n if volume_type == 'io1' and not iops:\n raise ValueError(\"Must specify Iops if VolumeType is 'io1'.\")\n\n\nclass ZoneAwarenessConfig(AWSProperty):\n props = {\n 'AvailabilityZoneCount': (integer, False),\n }\n\n\nclass ElasticsearchClusterConfig(AWSProperty):\n props = {\n 'DedicatedMasterCount': (integer, False),\n 'DedicatedMasterEnabled': (boolean, False),\n 'DedicatedMasterType': (basestring, False),\n 'InstanceCount': (integer, False),\n 'InstanceType': (basestring, False),\n 'ZoneAwarenessConfig': (ZoneAwarenessConfig, False),\n 'ZoneAwarenessEnabled': (boolean, False)\n }\n\n\nclass EncryptionAtRestOptions(AWSProperty):\n props = {\n 'Enabled': (boolean, False),\n 'KmsKeyId': (basestring, False),\n }\n\n\nclass NodeToNodeEncryptionOptions(AWSProperty):\n props = {\n 'Enabled': (boolean, False),\n }\n\n\nclass SnapshotOptions(AWSProperty):\n props = {\n 'AutomatedSnapshotStartHour': (integer_range(0, 23), False)\n }\n\n\nclass VPCOptions(AWSProperty):\n props = {\n 'SecurityGroupIds': ([basestring], False),\n 'SubnetIds': ([basestring], False)\n }\n\n\nclass MasterUserOptions(AWSProperty):\n props = {\n 'MasterUserARN': (basestring, False),\n 'MasterUserName': (basestring, False),\n 'MasterUserPassword': (basestring, False),\n }\n\n\nclass AdvancedSecurityOptionsInput(AWSProperty):\n props = {\n 'Enabled': (boolean, False),\n 'InternalUserDatabaseEnabled': (boolean, False),\n 'MasterUserOptions': (MasterUserOptions, False),\n }\n\n\nclass Domain(AWSObject):\n resource_type = \"AWS::Elasticsearch::Domain\"\n\n props = {\n 'AccessPolicies': (policytypes, False),\n 'AdvancedOptions': (dict, False),\n 'AdvancedSecurityOptions': (AdvancedSecurityOptionsInput, False),\n 'CognitoOptions': (CognitoOptions, False),\n 'DomainName': (basestring, False),\n 'DomainEndpointOptions': (DomainEndpointOptions, False),\n 'EBSOptions': (EBSOptions, False),\n 'ElasticsearchClusterConfig': (ElasticsearchClusterConfig, False),\n 'ElasticsearchVersion': 
(basestring, False),\n 'EncryptionAtRestOptions': (EncryptionAtRestOptions, False),\n 'LogPublishingOptions': (dict, False),\n 'NodeToNodeEncryptionOptions': (NodeToNodeEncryptionOptions, False),\n 'SnapshotOptions': (SnapshotOptions, False),\n 'Tags': ((Tags, list), False),\n 'VPCOptions': (VPCOptions, False),\n }\n\n\n# Backward compatibility\nElasticsearchDomain = Domain\n", "path": "troposphere/elasticsearch.py"}]} | 1,536 | 451 |
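A usage sketch of the property class added by the patch above; the resource title and domain name are illustrative, while the TLS policy string is one of the two values the patch validates:

```
from troposphere.elasticsearch import Domain, DomainEndpointOptions

# Require HTTPS on the domain endpoint and pin the minimum TLS policy.
domain = Domain(
    "SearchDomain",                  # hypothetical logical resource name
    DomainName="example-domain",     # hypothetical domain name
    DomainEndpointOptions=DomainEndpointOptions(
        EnforceHTTPS=True,
        TLSSecurityPolicy="Policy-Min-TLS-1-2-2019-07",
    ),
)
```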
gh_patches_debug_12129 | rasdani/github-patches | git_diff | mathesar-foundation__mathesar-1670 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Filters not applied when calculating count of items within group
## Reproduce
1. Go to the Library Management schema.
1. Load the Table Page for the Publications table.
1. Group by "Publication Year".
1. Observe the first group, for year 1900, to contain 10 records and to display a "Count" of 10. Good.
1. Add a filter condition requiring Title to contain the string "To".
1. Observe the first group, for year 1900, to contain 2 records.
1. Expect "Count" to display 2.
1. Observe "Count" displays 10.

--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `db/transforms/operations/apply.py`
Content:
```
1 from db.transforms.base import enforce_relation_type_expectations, Transform
2 from db.transforms import base
3
4
5 def apply_transformations(relation, transformations):
6 enforce_relation_type_expectations(relation)
7 for transform in transformations:
8 relation = _apply_transform(relation, transform)
9 return relation
10
11
12 def _apply_transform(relation, transform):
13 assert isinstance(transform, Transform)
14 relation = transform.apply_to_relation(relation)
15 enforce_relation_type_expectations(relation)
16 return relation
17
18
19 # NOTE deprecated; this will be replaced with apply_transformations
20 def apply_transformations_deprecated(
21 table,
22 limit=None,
23 offset=None,
24 order_by=None,
25 filter=None,
26 columns_to_select=None,
27 group_by=None,
28 duplicate_only=None,
29 search=[],
30 ):
31 # TODO rename the actual method parameter
32 relation = table
33
34 enforce_relation_type_expectations(relation)
35
36 transforms = []
37
38 if duplicate_only:
39 transforms.append(base.DuplicateOnly(duplicate_only))
40 if group_by:
41 transforms.append(base.Group(group_by))
42 if order_by:
43 transforms.append(base.Order(order_by))
44 if filter:
45 transforms.append(base.Filter(filter))
46 if search:
47 transforms.append(base.Search([search, limit]))
48 if columns_to_select:
49 transforms.append(base.SelectSubsetOfColumns(columns_to_select))
50 if offset:
51 transforms.append(base.Offset(offset))
52 if limit:
53 transforms.append(base.Limit(limit))
54
55 relation = apply_transformations(relation, transforms)
56 return relation
57
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/db/transforms/operations/apply.py b/db/transforms/operations/apply.py
--- a/db/transforms/operations/apply.py
+++ b/db/transforms/operations/apply.py
@@ -35,14 +35,14 @@
transforms = []
+ if filter:
+ transforms.append(base.Filter(filter))
if duplicate_only:
transforms.append(base.DuplicateOnly(duplicate_only))
if group_by:
transforms.append(base.Group(group_by))
if order_by:
transforms.append(base.Order(order_by))
- if filter:
- transforms.append(base.Filter(filter))
if search:
transforms.append(base.Search([search, limit]))
if columns_to_select:
| {"golden_diff": "diff --git a/db/transforms/operations/apply.py b/db/transforms/operations/apply.py\n--- a/db/transforms/operations/apply.py\n+++ b/db/transforms/operations/apply.py\n@@ -35,14 +35,14 @@\n \n transforms = []\n \n+ if filter:\n+ transforms.append(base.Filter(filter))\n if duplicate_only:\n transforms.append(base.DuplicateOnly(duplicate_only))\n if group_by:\n transforms.append(base.Group(group_by))\n if order_by:\n transforms.append(base.Order(order_by))\n- if filter:\n- transforms.append(base.Filter(filter))\n if search:\n transforms.append(base.Search([search, limit]))\n if columns_to_select:\n", "issue": "Filters not applied when calculating count of items within group\n## Reproduce\n\n1. Go to the Library Management schema.\n1. Load the Table Page for the Publications table.\n1. Group by \"Publication Year\".\n1. Observe the first group, for year 1900, to contain 10 records and to display a \"Count\" of 10. Good.\n1. Add a filter condition requiring Title to contain the string \"To\".\n1. Observe the first group, for year 1900, to contain 2 records.\n1. Expect \"Count\" to display 2.\n1. Observe \"Count\" displays 10.\n\n\n\n", "before_files": [{"content": "from db.transforms.base import enforce_relation_type_expectations, Transform\nfrom db.transforms import base\n\n\ndef apply_transformations(relation, transformations):\n enforce_relation_type_expectations(relation)\n for transform in transformations:\n relation = _apply_transform(relation, transform)\n return relation\n\n\ndef _apply_transform(relation, transform):\n assert isinstance(transform, Transform)\n relation = transform.apply_to_relation(relation)\n enforce_relation_type_expectations(relation)\n return relation\n\n\n# NOTE deprecated; this will be replaced with apply_transformations\ndef apply_transformations_deprecated(\n table,\n limit=None,\n offset=None,\n order_by=None,\n filter=None,\n columns_to_select=None,\n group_by=None,\n duplicate_only=None,\n search=[],\n):\n # TODO rename the actual method parameter\n relation = table\n\n enforce_relation_type_expectations(relation)\n\n transforms = []\n\n if duplicate_only:\n transforms.append(base.DuplicateOnly(duplicate_only))\n if group_by:\n transforms.append(base.Group(group_by))\n if order_by:\n transforms.append(base.Order(order_by))\n if filter:\n transforms.append(base.Filter(filter))\n if search:\n transforms.append(base.Search([search, limit]))\n if columns_to_select:\n transforms.append(base.SelectSubsetOfColumns(columns_to_select))\n if offset:\n transforms.append(base.Offset(offset))\n if limit:\n transforms.append(base.Limit(limit))\n\n relation = apply_transformations(relation, transforms)\n return relation\n", "path": "db/transforms/operations/apply.py"}], "after_files": [{"content": "from db.transforms.base import enforce_relation_type_expectations, Transform\nfrom db.transforms import base\n\n\ndef apply_transformations(relation, transformations):\n enforce_relation_type_expectations(relation)\n for transform in transformations:\n relation = _apply_transform(relation, transform)\n return relation\n\n\ndef _apply_transform(relation, transform):\n assert isinstance(transform, Transform)\n relation = transform.apply_to_relation(relation)\n enforce_relation_type_expectations(relation)\n return relation\n\n\n# NOTE deprecated; this will be replaced with apply_transformations\ndef apply_transformations_deprecated(\n table,\n limit=None,\n offset=None,\n order_by=None,\n filter=None,\n columns_to_select=None,\n group_by=None,\n 
duplicate_only=None,\n search=[],\n):\n # TODO rename the actual method parameter\n relation = table\n\n enforce_relation_type_expectations(relation)\n\n transforms = []\n\n if filter:\n transforms.append(base.Filter(filter))\n if duplicate_only:\n transforms.append(base.DuplicateOnly(duplicate_only))\n if group_by:\n transforms.append(base.Group(group_by))\n if order_by:\n transforms.append(base.Order(order_by))\n if search:\n transforms.append(base.Search([search, limit]))\n if columns_to_select:\n transforms.append(base.SelectSubsetOfColumns(columns_to_select))\n if offset:\n transforms.append(base.Offset(offset))\n if limit:\n transforms.append(base.Limit(limit))\n\n relation = apply_transformations(relation, transforms)\n return relation\n", "path": "db/transforms/operations/apply.py"}]} | 884 | 152 |
gh_patches_debug_1479 | rasdani/github-patches | git_diff | fidals__shopelectro-870 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add absolute urls to the canonical links. stb2
The construction of canonical links on the site needs to be fixed.
The address in a canonical link must always be absolute:
<link rel="canonical" href="https://www.сайт.ру/адрес_страницы" >
and not like this:
<link rel="canonical" href="/адрес_страницы" > - this is incorrect
Search engines ignore the tag when it contains a relative address...
When I crawl the site I get many duplicate pages (pagination) even though the canonical tag is present in the markup; crawling the way a search robot does, the pages are not marked as canonical.
The same change probably needs to be made on STB.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `shopelectro/context_processors.py`
Content:
```
1 from django.conf import settings
2
3
4 def shop(request):
5 """
6 Inject shop dict into request.
7
8 Shop dict contains information about shop:
9 emails, phones, API-integrations.
10 """
11 return {
12 'shop': settings.SHOP,
13 'DEBUG': settings.DEBUG,
14 'BASE_URL': settings.BASE_URL,
15 'SENTRY_FRONT_DSN': settings.SENTRY_FRONT_DSN,
16 }
17
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/shopelectro/context_processors.py b/shopelectro/context_processors.py
--- a/shopelectro/context_processors.py
+++ b/shopelectro/context_processors.py
@@ -11,6 +11,6 @@
return {
'shop': settings.SHOP,
'DEBUG': settings.DEBUG,
- 'BASE_URL': settings.BASE_URL,
+ 'base_url': settings.BASE_URL,
'SENTRY_FRONT_DSN': settings.SENTRY_FRONT_DSN,
}
| {"golden_diff": "diff --git a/shopelectro/context_processors.py b/shopelectro/context_processors.py\n--- a/shopelectro/context_processors.py\n+++ b/shopelectro/context_processors.py\n@@ -11,6 +11,6 @@\n return {\n 'shop': settings.SHOP,\n 'DEBUG': settings.DEBUG,\n- 'BASE_URL': settings.BASE_URL,\n+ 'base_url': settings.BASE_URL,\n 'SENTRY_FRONT_DSN': settings.SENTRY_FRONT_DSN,\n }\n", "issue": "Add absolute urls to the canonical links. stb2\n\u041d\u0435\u043e\u0431\u0445\u043e\u0434\u0438\u043c\u043e \u043f\u043e\u043f\u0440\u0430\u0432\u0438\u0442\u044c \u043f\u043e\u0441\u0442\u0440\u043e\u0435\u043d\u0438\u0435 \u043a\u0430\u043d\u043e\u043d\u0438\u0447\u0435\u0441\u043a\u0438\u0445 \u0441\u0441\u044b\u043b\u043e\u043a \u043d\u0430 \u0441\u0430\u0439\u0442\u0435\r\n\u0410\u0434\u0440\u0435\u0441 \u0432 \u043a\u0430\u043d\u043e\u043d\u0438\u0447\u0435\u0441\u043a\u043e\u0439 \u0441\u0441\u044b\u043b\u043a\u0435 \u0434\u043e\u043b\u0436\u0435\u043d \u0431\u044b\u0442\u044c \u043e\u0431\u044f\u0437\u0430\u0442\u0435\u043b\u044c\u043d\u043e \u0430\u0431\u0441\u043e\u043b\u044e\u0442\u043d\u044b\u0439\r\n<link rel=\"canonical\" href=\"https://www.\u0441\u0430\u0439\u0442.\u0440\u0443/\u0430\u0434\u0440\u0435\u0441_\u0441\u0442\u0440\u0430\u043d\u0438\u0446\u044b\" >\r\n\u0430 \u043d\u0435 \u0442\u0430\u043a\r\n<link rel=\"canonical\" href=\"/\u0430\u0434\u0440\u0435\u0441_\u0441\u0442\u0440\u0430\u043d\u0438\u0446\u044b\" > - \u044d\u0442\u043e \u043d\u0435\u0432\u0435\u0440\u043d\u043e\r\n\u041f\u043e\u0438\u0441\u043a\u043e\u0432\u0438\u043a\u0438 \u0438\u0433\u043d\u043e\u0440\u0438\u0440\u0443\u044e\u0442 \u044d\u0442\u043e\u0442 \u0442\u0435\u0433, \u0435\u0441\u043b\u0438 \u0443\u043a\u0430\u0437\u0430\u043d \u043e\u0442\u043d\u043e\u0441\u0438\u0442\u0435\u043b\u044c\u043d\u044b\u0439 \u0430\u0434\u0440\u0435\u0441 \u0432 \u0442\u0435\u0433\u0435...\r\n\u0423 \u043c\u0435\u043d\u044f \u043f\u0440\u0438 \u0441\u043a\u0430\u043d\u0435 \u043f\u043e\u044f\u0432\u043b\u044f\u0435\u0442\u0441\u044f \u043c\u043d\u043e\u0433\u043e \u0441\u0442\u0440\u0430\u043d\u0438\u0446 \u0434\u0443\u0431\u043b\u0435\u0439 (\u043f\u0430\u0433\u0438\u043d\u0430\u0446\u0438\u044f), \u0432 \u043a\u043e\u0434\u0435 \u0443\u043a\u0430\u0437\u0430\u043d \u043a\u0430\u043d\u043e\u043d\u0438\u043a\u0430\u043b. \u0430 \u043f\u0440\u0438 \u0441\u043a\u0430\u043d\u0438\u0440\u043e\u0432\u0430\u043d\u0438\u0438 \u043c\u0435\u0442\u043e\u0434\u043e\u043c \u0430\u043d\u0430\u043b\u043e\u0433\u0438\u0447\u043d\u044b\u043c \u043f\u043e\u0438\u0441\u043a\u043e\u0432\u043e\u043c\u0443 \u0440\u043e\u0431\u043e\u0442\u0443 \u0441\u0440\u0430\u043d\u0438\u0446\u044b \u043a\u0430\u043a \u043a\u0430\u043d\u043e\u043d\u0438\u0447\u0435\u0441\u043a\u0438\u0435 \u043d\u0435 \u043f\u043e\u043c\u0435\u0447\u0430\u044e\u0442\u0441\u044f\r\n\r\n\u0412\u0435\u0440\u043e\u044f\u0442\u043d\u043e, \u043d\u0430 STB \u043d\u0443\u0436\u043d\u043e \u0441\u0434\u0435\u043b\u0430\u0442\u044c \u0442\u0430\u043a \u0436\u0435. 
\n", "before_files": [{"content": "from django.conf import settings\n\n\ndef shop(request):\n \"\"\"\n Inject shop dict into request.\n\n Shop dict contains information about shop:\n emails, phones, API-integrations.\n \"\"\"\n return {\n 'shop': settings.SHOP,\n 'DEBUG': settings.DEBUG,\n 'BASE_URL': settings.BASE_URL,\n 'SENTRY_FRONT_DSN': settings.SENTRY_FRONT_DSN,\n }\n", "path": "shopelectro/context_processors.py"}], "after_files": [{"content": "from django.conf import settings\n\n\ndef shop(request):\n \"\"\"\n Inject shop dict into request.\n\n Shop dict contains information about shop:\n emails, phones, API-integrations.\n \"\"\"\n return {\n 'shop': settings.SHOP,\n 'DEBUG': settings.DEBUG,\n 'base_url': settings.BASE_URL,\n 'SENTRY_FRONT_DSN': settings.SENTRY_FRONT_DSN,\n }\n", "path": "shopelectro/context_processors.py"}]} | 565 | 107 |
gh_patches_debug_16334 | rasdani/github-patches | git_diff | falconry__falcon-741 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Refactor setup.py external module selection for Cython
Reduce complexity and duplication of the external module selection for Cython in setup.py.
At the time of this issue, this cruft was located in:
https://github.com/kgriffs/falcon/blob/routing/setup.py#L35
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 import glob
2 import imp
3 import io
4 import os
5 from os import path
6 from setuptools import setup, find_packages, Extension
7 import sys
8
9 MYDIR = path.abspath(os.path.dirname(__file__))
10
11 VERSION = imp.load_source('version', path.join('.', 'falcon', 'version.py'))
12 VERSION = VERSION.__version__
13
14 # NOTE(kgriffs): python-mimeparse is newer than mimeparse, supports Py3
15 # TODO(kgriffs): Fork and optimize/modernize python-mimeparse
16 REQUIRES = ['six>=1.4.0', 'python-mimeparse']
17
18 JYTHON = 'java' in sys.platform
19
20 try:
21 sys.pypy_version_info
22 PYPY = True
23 except AttributeError:
24 PYPY = False
25
26 if PYPY or JYTHON:
27 CYTHON = False
28 else:
29 try:
30 from Cython.Distutils import build_ext
31 CYTHON = True
32 except ImportError:
33 # TODO(kgriffs): pip now ignores all output, so the user
34 # may not see this message. See also:
35 #
36 # https://github.com/pypa/pip/issues/2732
37 #
38 print('\nNOTE: Cython not installed. '
39 'Falcon will still work fine, but may run '
40 'a bit slower.\n')
41 CYTHON = False
42
43 if CYTHON:
44 def list_modules(dirname):
45 filenames = glob.glob(path.join(dirname, '*.py'))
46
47 module_names = []
48 for name in filenames:
49 module, ext = path.splitext(path.basename(name))
50 if module != '__init__':
51 module_names.append(module)
52
53 return module_names
54
55 ext_modules = [
56 Extension('falcon.' + ext, [path.join('falcon', ext + '.py')])
57 for ext in list_modules(path.join(MYDIR, 'falcon'))]
58
59 ext_modules += [
60 Extension('falcon.util.' + ext,
61 [path.join('falcon', 'util', ext + '.py')])
62
63 for ext in list_modules(path.join(MYDIR, 'falcon', 'util'))]
64
65 ext_modules += [
66 Extension('falcon.routing.' + ext,
67 [path.join('falcon', 'routing', ext + '.py')])
68
69 for ext in list_modules(path.join(MYDIR, 'falcon', 'routing'))]
70
71 cmdclass = {'build_ext': build_ext}
72
73 else:
74 cmdclass = {}
75 ext_modules = []
76
77 setup(
78 name='falcon',
79 version=VERSION,
80 description='An unladen web framework for building APIs and app backends.',
81 long_description=io.open('README.rst', 'r', encoding='utf-8').read(),
82 classifiers=[
83 'Development Status :: 5 - Production/Stable',
84 'Environment :: Web Environment',
85 'Natural Language :: English',
86 'Intended Audience :: Developers',
87 'Intended Audience :: System Administrators',
88 'License :: OSI Approved :: Apache Software License',
89 'Operating System :: MacOS :: MacOS X',
90 'Operating System :: Microsoft :: Windows',
91 'Operating System :: POSIX',
92 'Topic :: Internet :: WWW/HTTP :: WSGI',
93 'Topic :: Software Development :: Libraries :: Application Frameworks',
94 'Programming Language :: Python',
95 'Programming Language :: Python :: Implementation :: CPython',
96 'Programming Language :: Python :: Implementation :: PyPy',
97 'Programming Language :: Python :: Implementation :: Jython',
98 'Programming Language :: Python :: 2.6',
99 'Programming Language :: Python :: 2.7',
100 'Programming Language :: Python :: 3.3',
101 'Programming Language :: Python :: 3.4',
102 'Programming Language :: Python :: 3.5',
103 ],
104 keywords='wsgi web api framework rest http cloud',
105 author='Kurt Griffiths',
106 author_email='[email protected]',
107 url='http://falconframework.org',
108 license='Apache 2.0',
109 packages=find_packages(exclude=['tests']),
110 include_package_data=True,
111 zip_safe=False,
112 install_requires=REQUIRES,
113 setup_requires=[],
114 cmdclass=cmdclass,
115 ext_modules=ext_modules,
116 test_suite='nose.collector',
117 entry_points={
118 'console_scripts': [
119 'falcon-bench = falcon.cmd.bench:main'
120 ]
121 }
122 )
123
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -52,21 +52,15 @@
return module_names
+ package_names = ['falcon', 'falcon.util', 'falcon.routing']
ext_modules = [
- Extension('falcon.' + ext, [path.join('falcon', ext + '.py')])
- for ext in list_modules(path.join(MYDIR, 'falcon'))]
-
- ext_modules += [
- Extension('falcon.util.' + ext,
- [path.join('falcon', 'util', ext + '.py')])
-
- for ext in list_modules(path.join(MYDIR, 'falcon', 'util'))]
-
- ext_modules += [
- Extension('falcon.routing.' + ext,
- [path.join('falcon', 'routing', ext + '.py')])
-
- for ext in list_modules(path.join(MYDIR, 'falcon', 'routing'))]
+ Extension(
+ package + '.' + module,
+ [path.join(*(package.split('.') + [module + '.py']))]
+ )
+ for package in package_names
+ for module in list_modules(path.join(MYDIR, *package.split('.')))
+ ]
cmdclass = {'build_ext': build_ext}
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -52,21 +52,15 @@\n \n return module_names\n \n+ package_names = ['falcon', 'falcon.util', 'falcon.routing']\n ext_modules = [\n- Extension('falcon.' + ext, [path.join('falcon', ext + '.py')])\n- for ext in list_modules(path.join(MYDIR, 'falcon'))]\n-\n- ext_modules += [\n- Extension('falcon.util.' + ext,\n- [path.join('falcon', 'util', ext + '.py')])\n-\n- for ext in list_modules(path.join(MYDIR, 'falcon', 'util'))]\n-\n- ext_modules += [\n- Extension('falcon.routing.' + ext,\n- [path.join('falcon', 'routing', ext + '.py')])\n-\n- for ext in list_modules(path.join(MYDIR, 'falcon', 'routing'))]\n+ Extension(\n+ package + '.' + module,\n+ [path.join(*(package.split('.') + [module + '.py']))]\n+ )\n+ for package in package_names\n+ for module in list_modules(path.join(MYDIR, *package.split('.')))\n+ ]\n \n cmdclass = {'build_ext': build_ext}\n", "issue": "Refactor setup.py external module selection for Cython\nReduce complexity and duplication of the external module selection for Cython in setup.py.\n\nAt the time of this issue, this cruft was located in: \nhttps://github.com/kgriffs/falcon/blob/routing/setup.py#L35\n\n", "before_files": [{"content": "import glob\nimport imp\nimport io\nimport os\nfrom os import path\nfrom setuptools import setup, find_packages, Extension\nimport sys\n\nMYDIR = path.abspath(os.path.dirname(__file__))\n\nVERSION = imp.load_source('version', path.join('.', 'falcon', 'version.py'))\nVERSION = VERSION.__version__\n\n# NOTE(kgriffs): python-mimeparse is newer than mimeparse, supports Py3\n# TODO(kgriffs): Fork and optimize/modernize python-mimeparse\nREQUIRES = ['six>=1.4.0', 'python-mimeparse']\n\nJYTHON = 'java' in sys.platform\n\ntry:\n sys.pypy_version_info\n PYPY = True\nexcept AttributeError:\n PYPY = False\n\nif PYPY or JYTHON:\n CYTHON = False\nelse:\n try:\n from Cython.Distutils import build_ext\n CYTHON = True\n except ImportError:\n # TODO(kgriffs): pip now ignores all output, so the user\n # may not see this message. See also:\n #\n # https://github.com/pypa/pip/issues/2732\n #\n print('\\nNOTE: Cython not installed. '\n 'Falcon will still work fine, but may run '\n 'a bit slower.\\n')\n CYTHON = False\n\nif CYTHON:\n def list_modules(dirname):\n filenames = glob.glob(path.join(dirname, '*.py'))\n\n module_names = []\n for name in filenames:\n module, ext = path.splitext(path.basename(name))\n if module != '__init__':\n module_names.append(module)\n\n return module_names\n\n ext_modules = [\n Extension('falcon.' + ext, [path.join('falcon', ext + '.py')])\n for ext in list_modules(path.join(MYDIR, 'falcon'))]\n\n ext_modules += [\n Extension('falcon.util.' + ext,\n [path.join('falcon', 'util', ext + '.py')])\n\n for ext in list_modules(path.join(MYDIR, 'falcon', 'util'))]\n\n ext_modules += [\n Extension('falcon.routing.' 
+ ext,\n [path.join('falcon', 'routing', ext + '.py')])\n\n for ext in list_modules(path.join(MYDIR, 'falcon', 'routing'))]\n\n cmdclass = {'build_ext': build_ext}\n\nelse:\n cmdclass = {}\n ext_modules = []\n\nsetup(\n name='falcon',\n version=VERSION,\n description='An unladen web framework for building APIs and app backends.',\n long_description=io.open('README.rst', 'r', encoding='utf-8').read(),\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Environment :: Web Environment',\n 'Natural Language :: English',\n 'Intended Audience :: Developers',\n 'Intended Audience :: System Administrators',\n 'License :: OSI Approved :: Apache Software License',\n 'Operating System :: MacOS :: MacOS X',\n 'Operating System :: Microsoft :: Windows',\n 'Operating System :: POSIX',\n 'Topic :: Internet :: WWW/HTTP :: WSGI',\n 'Topic :: Software Development :: Libraries :: Application Frameworks',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: Implementation :: CPython',\n 'Programming Language :: Python :: Implementation :: PyPy',\n 'Programming Language :: Python :: Implementation :: Jython',\n 'Programming Language :: Python :: 2.6',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n ],\n keywords='wsgi web api framework rest http cloud',\n author='Kurt Griffiths',\n author_email='[email protected]',\n url='http://falconframework.org',\n license='Apache 2.0',\n packages=find_packages(exclude=['tests']),\n include_package_data=True,\n zip_safe=False,\n install_requires=REQUIRES,\n setup_requires=[],\n cmdclass=cmdclass,\n ext_modules=ext_modules,\n test_suite='nose.collector',\n entry_points={\n 'console_scripts': [\n 'falcon-bench = falcon.cmd.bench:main'\n ]\n }\n)\n", "path": "setup.py"}], "after_files": [{"content": "import glob\nimport imp\nimport io\nimport os\nfrom os import path\nfrom setuptools import setup, find_packages, Extension\nimport sys\n\nMYDIR = path.abspath(os.path.dirname(__file__))\n\nVERSION = imp.load_source('version', path.join('.', 'falcon', 'version.py'))\nVERSION = VERSION.__version__\n\n# NOTE(kgriffs): python-mimeparse is newer than mimeparse, supports Py3\n# TODO(kgriffs): Fork and optimize/modernize python-mimeparse\nREQUIRES = ['six>=1.4.0', 'python-mimeparse']\n\nJYTHON = 'java' in sys.platform\n\ntry:\n sys.pypy_version_info\n PYPY = True\nexcept AttributeError:\n PYPY = False\n\nif PYPY or JYTHON:\n CYTHON = False\nelse:\n try:\n from Cython.Distutils import build_ext\n CYTHON = True\n except ImportError:\n # TODO(kgriffs): pip now ignores all output, so the user\n # may not see this message. See also:\n #\n # https://github.com/pypa/pip/issues/2732\n #\n print('\\nNOTE: Cython not installed. '\n 'Falcon will still work fine, but may run '\n 'a bit slower.\\n')\n CYTHON = False\n\nif CYTHON:\n def list_modules(dirname):\n filenames = glob.glob(path.join(dirname, '*.py'))\n\n module_names = []\n for name in filenames:\n module, ext = path.splitext(path.basename(name))\n if module != '__init__':\n module_names.append(module)\n\n return module_names\n\n package_names = ['falcon', 'falcon.util', 'falcon.routing']\n ext_modules = [\n Extension(\n package + '.' 
+ module,\n [path.join(*(package.split('.') + [module + '.py']))]\n )\n for package in package_names\n for module in list_modules(path.join(MYDIR, *package.split('.')))\n ]\n\n cmdclass = {'build_ext': build_ext}\n\nelse:\n cmdclass = {}\n ext_modules = []\n\nsetup(\n name='falcon',\n version=VERSION,\n description='An unladen web framework for building APIs and app backends.',\n long_description=io.open('README.rst', 'r', encoding='utf-8').read(),\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Environment :: Web Environment',\n 'Natural Language :: English',\n 'Intended Audience :: Developers',\n 'Intended Audience :: System Administrators',\n 'License :: OSI Approved :: Apache Software License',\n 'Operating System :: MacOS :: MacOS X',\n 'Operating System :: Microsoft :: Windows',\n 'Operating System :: POSIX',\n 'Topic :: Internet :: WWW/HTTP :: WSGI',\n 'Topic :: Software Development :: Libraries :: Application Frameworks',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: Implementation :: CPython',\n 'Programming Language :: Python :: Implementation :: PyPy',\n 'Programming Language :: Python :: Implementation :: Jython',\n 'Programming Language :: Python :: 2.6',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n ],\n keywords='wsgi web api framework rest http cloud',\n author='Kurt Griffiths',\n author_email='[email protected]',\n url='http://falconframework.org',\n license='Apache 2.0',\n packages=find_packages(exclude=['tests']),\n include_package_data=True,\n zip_safe=False,\n install_requires=REQUIRES,\n setup_requires=[],\n cmdclass=cmdclass,\n ext_modules=ext_modules,\n test_suite='nose.collector',\n entry_points={\n 'console_scripts': [\n 'falcon-bench = falcon.cmd.bench:main'\n ]\n }\n)\n", "path": "setup.py"}]} | 1,516 | 291 |
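A small sketch of what the consolidated comprehension in the patch produces; the module name is illustrative, standing in for whatever `list_modules` would return:

```
from os import path

package_names = ['falcon', 'falcon.util', 'falcon.routing']

# Each (package, module) pair maps a dotted extension name to its source file,
# e.g. ('falcon.util.example_module', 'falcon/util/example_module.py') on POSIX.
ext_specs = [
    (package + '.' + module,
     path.join(*(package.split('.') + [module + '.py'])))
    for package in package_names
    for module in ['example_module']   # list_modules(...) in the real setup.py
]
```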
gh_patches_debug_89 | rasdani/github-patches | git_diff | encode__httpx-286 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
stop nox on first error
During nox is running, if it gets an error, continues running. if we add this: `nox.options.stop_on_first_error = True`, we don't overlook check errors or others before PR.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `noxfile.py`
Content:
```
1 import nox
2
3 source_files = ("httpx", "tests", "setup.py", "noxfile.py")
4
5
6 @nox.session(reuse_venv=True)
7 def lint(session):
8 session.install("autoflake", "black", "flake8", "isort", "seed-isort-config")
9
10 session.run("autoflake", "--in-place", "--recursive", *source_files)
11 session.run("seed-isort-config", "--application-directories=httpx")
12 session.run("isort", "--project=httpx", "--recursive", "--apply", *source_files)
13 session.run("black", "--target-version=py36", *source_files)
14
15 check(session)
16
17
18 @nox.session(reuse_venv=True)
19 def check(session):
20 session.install(
21 "black", "flake8", "flake8-bugbear", "flake8-comprehensions", "mypy"
22 )
23
24 session.run("black", "--check", "--diff", "--target-version=py36", *source_files)
25 session.run("flake8", *source_files)
26 session.run("mypy", "httpx")
27
28
29 @nox.session(reuse_venv=True)
30 def docs(session):
31 session.install("mkdocs", "mkdocs-material")
32
33 session.run("mkdocs", "build")
34
35
36 @nox.session(python=["3.6", "3.7", "3.8"])
37 def test(session):
38 session.install("-r", "test-requirements.txt")
39
40 session.run("python", "-m", "pytest")
41
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/noxfile.py b/noxfile.py
--- a/noxfile.py
+++ b/noxfile.py
@@ -1,5 +1,7 @@
import nox
+nox.options.stop_on_first_error = True
+
source_files = ("httpx", "tests", "setup.py", "noxfile.py")
| {"golden_diff": "diff --git a/noxfile.py b/noxfile.py\n--- a/noxfile.py\n+++ b/noxfile.py\n@@ -1,5 +1,7 @@\n import nox\n \n+nox.options.stop_on_first_error = True\n+\n source_files = (\"httpx\", \"tests\", \"setup.py\", \"noxfile.py\")\n", "issue": "stop nox on first error\nDuring nox is running, if it gets an error, continues running. if we add this: `nox.options.stop_on_first_error = True`, we don't overlook check errors or others before PR.\n", "before_files": [{"content": "import nox\n\nsource_files = (\"httpx\", \"tests\", \"setup.py\", \"noxfile.py\")\n\n\[email protected](reuse_venv=True)\ndef lint(session):\n session.install(\"autoflake\", \"black\", \"flake8\", \"isort\", \"seed-isort-config\")\n\n session.run(\"autoflake\", \"--in-place\", \"--recursive\", *source_files)\n session.run(\"seed-isort-config\", \"--application-directories=httpx\")\n session.run(\"isort\", \"--project=httpx\", \"--recursive\", \"--apply\", *source_files)\n session.run(\"black\", \"--target-version=py36\", *source_files)\n\n check(session)\n\n\[email protected](reuse_venv=True)\ndef check(session):\n session.install(\n \"black\", \"flake8\", \"flake8-bugbear\", \"flake8-comprehensions\", \"mypy\"\n )\n\n session.run(\"black\", \"--check\", \"--diff\", \"--target-version=py36\", *source_files)\n session.run(\"flake8\", *source_files)\n session.run(\"mypy\", \"httpx\")\n\n\[email protected](reuse_venv=True)\ndef docs(session):\n session.install(\"mkdocs\", \"mkdocs-material\")\n\n session.run(\"mkdocs\", \"build\")\n\n\[email protected](python=[\"3.6\", \"3.7\", \"3.8\"])\ndef test(session):\n session.install(\"-r\", \"test-requirements.txt\")\n\n session.run(\"python\", \"-m\", \"pytest\")\n", "path": "noxfile.py"}], "after_files": [{"content": "import nox\n\nnox.options.stop_on_first_error = True\n\nsource_files = (\"httpx\", \"tests\", \"setup.py\", \"noxfile.py\")\n\n\[email protected](reuse_venv=True)\ndef lint(session):\n session.install(\"autoflake\", \"black\", \"flake8\", \"isort\", \"seed-isort-config\")\n\n session.run(\"autoflake\", \"--in-place\", \"--recursive\", *source_files)\n session.run(\"seed-isort-config\", \"--application-directories=httpx\")\n session.run(\"isort\", \"--project=httpx\", \"--recursive\", \"--apply\", *source_files)\n session.run(\"black\", \"--target-version=py36\", *source_files)\n\n check(session)\n\n\[email protected](reuse_venv=True)\ndef check(session):\n session.install(\n \"black\", \"flake8\", \"flake8-bugbear\", \"flake8-comprehensions\", \"mypy\"\n )\n\n session.run(\"black\", \"--check\", \"--diff\", \"--target-version=py36\", *source_files)\n session.run(\"flake8\", *source_files)\n session.run(\"mypy\", \"httpx\")\n\n\[email protected](reuse_venv=True)\ndef docs(session):\n session.install(\"mkdocs\", \"mkdocs-material\")\n\n session.run(\"mkdocs\", \"build\")\n\n\[email protected](python=[\"3.6\", \"3.7\", \"3.8\"])\ndef test(session):\n session.install(\"-r\", \"test-requirements.txt\")\n\n session.run(\"python\", \"-m\", \"pytest\")\n", "path": "noxfile.py"}]} | 709 | 72 |
gh_patches_debug_24283 | rasdani/github-patches | git_diff | ESMCI__cime-260 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Remove ESMF from driver code
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `utils/python/CIME/SystemTests/pfs.py`
Content:
```
1 """
2 CIME performance test This class inherits from SystemTestsCommon
3
4 20 days performance test, no restart files written
5 """
6
7 from CIME.XML.standard_module_setup import *
8 from system_tests_common import SystemTestsCommon
9
10 logger = logging.getLogger(__name__)
11
12 class PFS(SystemTestsCommon):
13
14 def __init__(self, case):
15 """
16 initialize an object interface to the PFS system test
17 """
18 SystemTestsCommon.__init__(self, case)
19
20 def run(self):
21 self._case_set_value("STOP_OPTION", "ndays")
22 self._case.set_value("STOP_N", 20)
23 self._case.set_value("REST_OPTION","none")
24 self._case.set_value("CONTINUE_RUN", False)
25 self._case.flush()
26
27 logger.info("doing an 20 day initial test, no restarts written")
28 return SystemTestsCommon._run(self)
29
30 def report(self):
31 SystemTestsCommon.report(self)
32
```
Path: `utils/python/CIME/preview_namelists.py`
Content:
```
1 """
2 API for preview namelist
3 """
4
5 from CIME.XML.standard_module_setup import *
6 from CIME.utils import expect, run_cmd
7 from CIME.XML.env_mach_specific import EnvMachSpecific
8
9 import glob, shutil
10 logger = logging.getLogger(__name__)
11
12 def preview_namelists(case, dryrun=False, casedir=None):
13 # refresh case xml files from object
14 case.flush()
15
16 # Get data from XML
17 exeroot = case.get_value("EXEROOT")
18 libroot = case.get_value("LIBROOT")
19 incroot = case.get_value("INCROOT")
20 rundir = case.get_value("RUNDIR")
21 caseroot = case.get_value("CASEROOT")
22 casebuild = case.get_value("CASEBUILD")
23 testcase = case.get_value("TESTCASE")
24
25 logger.debug("LID is: '%s'" % os.getenv("LID", ""))
26 logger.debug("caseroot is: '%s'" % caseroot)
27
28 dryrun = True if (testcase == "SBN") else dryrun
29
30 models = ["atm", "lnd", "ice", "ocn", "glc", "wav", "rof", "cpl"]
31 docdir = os.path.join(caseroot, "CaseDocs")
32
33 if (dryrun):
34 # Only create rundir
35 try:
36 os.makedirs(rundir)
37 except OSError:
38 logger.warning("Not able to create $RUNDIR, trying a subdirectory of $CASEROOT")
39 rundir = os.path.join(caseroot, rundir)
40 try:
41 os.makedirs(rundir)
42 logger.info("Success! Setting RUNDIR=%s" % rundir)
43 case.set_value("RUNDIR", rundir)
44 except OSError:
45 expect(False, "Could not create rundir")
46
47 else:
48
49 # Load modules
50 env_module = case._get_env("mach_specific")
51 env_module.load_env_for_case(compiler=case.get_value("COMPILER"),
52 debug=case.get_value("DEBUG"),
53 mpilib=case.get_value("MPILIB"))
54
55 # Make necessary directories
56 dirs_to_make = [os.path.join(exeroot, model, "obj") for model in models]
57 dirs_to_make.extend([exeroot, libroot, incroot, rundir, docdir])
58
59 for dir_to_make in dirs_to_make:
60 if (not os.path.isdir(dir_to_make)):
61 try:
62 logger.debug("Making dir '%s'" % dir_to_make)
63 os.makedirs(dir_to_make)
64 except OSError as e:
65 expect(False, "Could not make directory '%s', error: %s" % (dir_to_make, e))
66
67 # Create namelists
68 for model in models:
69 model_str = "drv" if model == "cpl" else model
70 config_file = case.get_value("CONFIG_%s_FILE" % model_str.upper())
71 config_dir = os.path.dirname(config_file)
72 cmd = os.path.join(config_dir, "buildnml")
73 logger.info("Running %s"%cmd)
74 if (logger.level == logging.DEBUG):
75 run_cmd("PREVIEW_NML=1 %s %s" % (cmd, caseroot))
76 else:
77 run_cmd("%s %s" % (cmd, caseroot))
78 # refresh case xml object from file
79 case.read_xml(caseroot)
80 # Save namelists to docdir
81 if (not os.path.isdir(docdir)):
82 os.makedirs(docdir)
83 try:
84 with open(os.path.join(docdir, "README"), "w") as fd:
85 fd.write(" CESM Resolved Namelist Files\n For documentation only DO NOT MODIFY\n")
86 except (OSError, IOError) as e:
87 expect(False, "Failed to write %s/README: %s" % (docdir, e))
88
89
90 for cpglob in ["*_in_[0-9]*", "*modelio*", "*_in",
91 "*streams*txt*", "*stxt", "*maps.rc", "*cism.config*"]:
92 for file_to_copy in glob.glob(os.path.join(rundir, cpglob)):
93 logger.debug("Copy file from '%s' to '%s'" % (file_to_copy, docdir))
94 shutil.copy2(file_to_copy, docdir)
95
96 # Copy over chemistry mechanism docs if they exist
97 if (os.path.isdir(os.path.join(casebuild, "camconf"))):
98 for file_to_copy in glob.glob(os.path.join(casebuild, "camconf", "*chem_mech*")):
99 shutil.copy2(file_to_copy, docdir)
100
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/utils/python/CIME/SystemTests/pfs.py b/utils/python/CIME/SystemTests/pfs.py
--- a/utils/python/CIME/SystemTests/pfs.py
+++ b/utils/python/CIME/SystemTests/pfs.py
@@ -18,7 +18,7 @@
SystemTestsCommon.__init__(self, case)
def run(self):
- self._case_set_value("STOP_OPTION", "ndays")
+ self._case.set_value("STOP_OPTION", "ndays")
self._case.set_value("STOP_N", 20)
self._case.set_value("REST_OPTION","none")
self._case.set_value("CONTINUE_RUN", False)
diff --git a/utils/python/CIME/preview_namelists.py b/utils/python/CIME/preview_namelists.py
--- a/utils/python/CIME/preview_namelists.py
+++ b/utils/python/CIME/preview_namelists.py
@@ -72,9 +72,12 @@
cmd = os.path.join(config_dir, "buildnml")
logger.info("Running %s"%cmd)
if (logger.level == logging.DEBUG):
- run_cmd("PREVIEW_NML=1 %s %s" % (cmd, caseroot))
+ rc, out, err = run_cmd("PREVIEW_NML=1 %s %s" % (cmd, caseroot), ok_to_fail=True)
+ expect(rc==0,"Command %s failed rc=%d\nout=%s\nerr=%s"%(cmd,rc,out,err))
else:
- run_cmd("%s %s" % (cmd, caseroot))
+ rc, out, err = run_cmd("%s %s" % (cmd, caseroot), ok_to_fail=True)
+ expect(rc==0,"Command %s failed rc=%d\nout=%s\nerr=%s"%(cmd,rc,out,err))
+
# refresh case xml object from file
case.read_xml(caseroot)
# Save namelists to docdir
| {"golden_diff": "diff --git a/utils/python/CIME/SystemTests/pfs.py b/utils/python/CIME/SystemTests/pfs.py\n--- a/utils/python/CIME/SystemTests/pfs.py\n+++ b/utils/python/CIME/SystemTests/pfs.py\n@@ -18,7 +18,7 @@\n SystemTestsCommon.__init__(self, case)\n \n def run(self):\n- self._case_set_value(\"STOP_OPTION\", \"ndays\")\n+ self._case.set_value(\"STOP_OPTION\", \"ndays\")\n self._case.set_value(\"STOP_N\", 20)\n self._case.set_value(\"REST_OPTION\",\"none\")\n self._case.set_value(\"CONTINUE_RUN\", False)\ndiff --git a/utils/python/CIME/preview_namelists.py b/utils/python/CIME/preview_namelists.py\n--- a/utils/python/CIME/preview_namelists.py\n+++ b/utils/python/CIME/preview_namelists.py\n@@ -72,9 +72,12 @@\n cmd = os.path.join(config_dir, \"buildnml\")\n logger.info(\"Running %s\"%cmd)\n if (logger.level == logging.DEBUG):\n- run_cmd(\"PREVIEW_NML=1 %s %s\" % (cmd, caseroot))\n+ rc, out, err = run_cmd(\"PREVIEW_NML=1 %s %s\" % (cmd, caseroot), ok_to_fail=True)\n+ expect(rc==0,\"Command %s failed rc=%d\\nout=%s\\nerr=%s\"%(cmd,rc,out,err))\n else:\n- run_cmd(\"%s %s\" % (cmd, caseroot))\n+ rc, out, err = run_cmd(\"%s %s\" % (cmd, caseroot), ok_to_fail=True)\n+ expect(rc==0,\"Command %s failed rc=%d\\nout=%s\\nerr=%s\"%(cmd,rc,out,err))\n+\n # refresh case xml object from file\n case.read_xml(caseroot)\n # Save namelists to docdir\n", "issue": "Remove ESMF from driver code\n\n", "before_files": [{"content": "\"\"\"\nCIME performance test This class inherits from SystemTestsCommon\n\n20 days performance test, no restart files written\n\"\"\"\n\nfrom CIME.XML.standard_module_setup import *\nfrom system_tests_common import SystemTestsCommon\n\nlogger = logging.getLogger(__name__)\n\nclass PFS(SystemTestsCommon):\n\n def __init__(self, case):\n \"\"\"\n initialize an object interface to the PFS system test\n \"\"\"\n SystemTestsCommon.__init__(self, case)\n\n def run(self):\n self._case_set_value(\"STOP_OPTION\", \"ndays\")\n self._case.set_value(\"STOP_N\", 20)\n self._case.set_value(\"REST_OPTION\",\"none\")\n self._case.set_value(\"CONTINUE_RUN\", False)\n self._case.flush()\n\n logger.info(\"doing an 20 day initial test, no restarts written\")\n return SystemTestsCommon._run(self)\n\n def report(self):\n SystemTestsCommon.report(self)\n", "path": "utils/python/CIME/SystemTests/pfs.py"}, {"content": "\"\"\"\nAPI for preview namelist\n\"\"\"\n\nfrom CIME.XML.standard_module_setup import *\nfrom CIME.utils import expect, run_cmd\nfrom CIME.XML.env_mach_specific import EnvMachSpecific\n\nimport glob, shutil\nlogger = logging.getLogger(__name__)\n\ndef preview_namelists(case, dryrun=False, casedir=None):\n # refresh case xml files from object\n case.flush()\n\n # Get data from XML\n exeroot = case.get_value(\"EXEROOT\")\n libroot = case.get_value(\"LIBROOT\")\n incroot = case.get_value(\"INCROOT\")\n rundir = case.get_value(\"RUNDIR\")\n caseroot = case.get_value(\"CASEROOT\")\n casebuild = case.get_value(\"CASEBUILD\")\n testcase = case.get_value(\"TESTCASE\")\n\n logger.debug(\"LID is: '%s'\" % os.getenv(\"LID\", \"\"))\n logger.debug(\"caseroot is: '%s'\" % caseroot)\n\n dryrun = True if (testcase == \"SBN\") else dryrun\n\n models = [\"atm\", \"lnd\", \"ice\", \"ocn\", \"glc\", \"wav\", \"rof\", \"cpl\"]\n docdir = os.path.join(caseroot, \"CaseDocs\")\n\n if (dryrun):\n # Only create rundir\n try:\n os.makedirs(rundir)\n except OSError:\n logger.warning(\"Not able to create $RUNDIR, trying a subdirectory of $CASEROOT\")\n rundir = os.path.join(caseroot, rundir)\n try:\n 
os.makedirs(rundir)\n logger.info(\"Success! Setting RUNDIR=%s\" % rundir)\n case.set_value(\"RUNDIR\", rundir)\n except OSError:\n expect(False, \"Could not create rundir\")\n\n else:\n\n # Load modules\n env_module = case._get_env(\"mach_specific\")\n env_module.load_env_for_case(compiler=case.get_value(\"COMPILER\"),\n debug=case.get_value(\"DEBUG\"),\n mpilib=case.get_value(\"MPILIB\"))\n\n # Make necessary directories\n dirs_to_make = [os.path.join(exeroot, model, \"obj\") for model in models]\n dirs_to_make.extend([exeroot, libroot, incroot, rundir, docdir])\n\n for dir_to_make in dirs_to_make:\n if (not os.path.isdir(dir_to_make)):\n try:\n logger.debug(\"Making dir '%s'\" % dir_to_make)\n os.makedirs(dir_to_make)\n except OSError as e:\n expect(False, \"Could not make directory '%s', error: %s\" % (dir_to_make, e))\n\n # Create namelists\n for model in models:\n model_str = \"drv\" if model == \"cpl\" else model\n config_file = case.get_value(\"CONFIG_%s_FILE\" % model_str.upper())\n config_dir = os.path.dirname(config_file)\n cmd = os.path.join(config_dir, \"buildnml\")\n logger.info(\"Running %s\"%cmd)\n if (logger.level == logging.DEBUG):\n run_cmd(\"PREVIEW_NML=1 %s %s\" % (cmd, caseroot))\n else:\n run_cmd(\"%s %s\" % (cmd, caseroot))\n # refresh case xml object from file\n case.read_xml(caseroot)\n # Save namelists to docdir\n if (not os.path.isdir(docdir)):\n os.makedirs(docdir)\n try:\n with open(os.path.join(docdir, \"README\"), \"w\") as fd:\n fd.write(\" CESM Resolved Namelist Files\\n For documentation only DO NOT MODIFY\\n\")\n except (OSError, IOError) as e:\n expect(False, \"Failed to write %s/README: %s\" % (docdir, e))\n\n\n for cpglob in [\"*_in_[0-9]*\", \"*modelio*\", \"*_in\",\n \"*streams*txt*\", \"*stxt\", \"*maps.rc\", \"*cism.config*\"]:\n for file_to_copy in glob.glob(os.path.join(rundir, cpglob)):\n logger.debug(\"Copy file from '%s' to '%s'\" % (file_to_copy, docdir))\n shutil.copy2(file_to_copy, docdir)\n\n # Copy over chemistry mechanism docs if they exist\n if (os.path.isdir(os.path.join(casebuild, \"camconf\"))):\n for file_to_copy in glob.glob(os.path.join(casebuild, \"camconf\", \"*chem_mech*\")):\n shutil.copy2(file_to_copy, docdir)\n", "path": "utils/python/CIME/preview_namelists.py"}], "after_files": [{"content": "\"\"\"\nCIME performance test This class inherits from SystemTestsCommon\n\n20 days performance test, no restart files written\n\"\"\"\n\nfrom CIME.XML.standard_module_setup import *\nfrom system_tests_common import SystemTestsCommon\n\nlogger = logging.getLogger(__name__)\n\nclass PFS(SystemTestsCommon):\n\n def __init__(self, case):\n \"\"\"\n initialize an object interface to the PFS system test\n \"\"\"\n SystemTestsCommon.__init__(self, case)\n\n def run(self):\n self._case.set_value(\"STOP_OPTION\", \"ndays\")\n self._case.set_value(\"STOP_N\", 20)\n self._case.set_value(\"REST_OPTION\",\"none\")\n self._case.set_value(\"CONTINUE_RUN\", False)\n self._case.flush()\n\n logger.info(\"doing an 20 day initial test, no restarts written\")\n return SystemTestsCommon._run(self)\n\n def report(self):\n SystemTestsCommon.report(self)\n", "path": "utils/python/CIME/SystemTests/pfs.py"}, {"content": "\"\"\"\nAPI for preview namelist\n\"\"\"\n\nfrom CIME.XML.standard_module_setup import *\nfrom CIME.utils import expect, run_cmd\nfrom CIME.XML.env_mach_specific import EnvMachSpecific\n\nimport glob, shutil\nlogger = logging.getLogger(__name__)\n\ndef preview_namelists(case, dryrun=False, casedir=None):\n # refresh case xml files from object\n 
case.flush()\n\n # Get data from XML\n exeroot = case.get_value(\"EXEROOT\")\n libroot = case.get_value(\"LIBROOT\")\n incroot = case.get_value(\"INCROOT\")\n rundir = case.get_value(\"RUNDIR\")\n caseroot = case.get_value(\"CASEROOT\")\n casebuild = case.get_value(\"CASEBUILD\")\n testcase = case.get_value(\"TESTCASE\")\n\n logger.debug(\"LID is: '%s'\" % os.getenv(\"LID\", \"\"))\n logger.debug(\"caseroot is: '%s'\" % caseroot)\n\n dryrun = True if (testcase == \"SBN\") else dryrun\n\n models = [\"atm\", \"lnd\", \"ice\", \"ocn\", \"glc\", \"wav\", \"rof\", \"cpl\"]\n docdir = os.path.join(caseroot, \"CaseDocs\")\n\n if (dryrun):\n # Only create rundir\n try:\n os.makedirs(rundir)\n except OSError:\n logger.warning(\"Not able to create $RUNDIR, trying a subdirectory of $CASEROOT\")\n rundir = os.path.join(caseroot, rundir)\n try:\n os.makedirs(rundir)\n logger.info(\"Success! Setting RUNDIR=%s\" % rundir)\n case.set_value(\"RUNDIR\", rundir)\n except OSError:\n expect(False, \"Could not create rundir\")\n\n else:\n\n # Load modules\n env_module = case._get_env(\"mach_specific\")\n env_module.load_env_for_case(compiler=case.get_value(\"COMPILER\"),\n debug=case.get_value(\"DEBUG\"),\n mpilib=case.get_value(\"MPILIB\"))\n\n # Make necessary directories\n dirs_to_make = [os.path.join(exeroot, model, \"obj\") for model in models]\n dirs_to_make.extend([exeroot, libroot, incroot, rundir, docdir])\n\n for dir_to_make in dirs_to_make:\n if (not os.path.isdir(dir_to_make)):\n try:\n logger.debug(\"Making dir '%s'\" % dir_to_make)\n os.makedirs(dir_to_make)\n except OSError as e:\n expect(False, \"Could not make directory '%s', error: %s\" % (dir_to_make, e))\n\n # Create namelists\n for model in models:\n model_str = \"drv\" if model == \"cpl\" else model\n config_file = case.get_value(\"CONFIG_%s_FILE\" % model_str.upper())\n config_dir = os.path.dirname(config_file)\n cmd = os.path.join(config_dir, \"buildnml\")\n logger.info(\"Running %s\"%cmd)\n if (logger.level == logging.DEBUG):\n rc, out, err = run_cmd(\"PREVIEW_NML=1 %s %s\" % (cmd, caseroot), ok_to_fail=True)\n expect(rc==0,\"Command %s failed rc=%d\\nout=%s\\nerr=%s\"%(cmd,rc,out,err))\n else:\n rc, out, err = run_cmd(\"%s %s\" % (cmd, caseroot), ok_to_fail=True)\n expect(rc==0,\"Command %s failed rc=%d\\nout=%s\\nerr=%s\"%(cmd,rc,out,err))\n\n # refresh case xml object from file\n case.read_xml(caseroot)\n # Save namelists to docdir\n if (not os.path.isdir(docdir)):\n os.makedirs(docdir)\n try:\n with open(os.path.join(docdir, \"README\"), \"w\") as fd:\n fd.write(\" CESM Resolved Namelist Files\\n For documentation only DO NOT MODIFY\\n\")\n except (OSError, IOError) as e:\n expect(False, \"Failed to write %s/README: %s\" % (docdir, e))\n\n\n for cpglob in [\"*_in_[0-9]*\", \"*modelio*\", \"*_in\",\n \"*streams*txt*\", \"*stxt\", \"*maps.rc\", \"*cism.config*\"]:\n for file_to_copy in glob.glob(os.path.join(rundir, cpglob)):\n logger.debug(\"Copy file from '%s' to '%s'\" % (file_to_copy, docdir))\n shutil.copy2(file_to_copy, docdir)\n\n # Copy over chemistry mechanism docs if they exist\n if (os.path.isdir(os.path.join(casebuild, \"camconf\"))):\n for file_to_copy in glob.glob(os.path.join(casebuild, \"camconf\", \"*chem_mech*\")):\n shutil.copy2(file_to_copy, docdir)\n", "path": "utils/python/CIME/preview_namelists.py"}]} | 1,744 | 437 |
gh_patches_debug_14453 | rasdani/github-patches | git_diff | interlegis__sapl-3226 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Erro encontrado quando a audiência não possui matéria ligada.
<!--- Forneça um resumo geral da _issue_ no título acima -->
Erro encontrado quando a audiência não possui matéria ligada. Derivado do ticket [273270](https://suporte.interlegis.leg.br/scp/tickets.php?id=37122)
## Contexto
<!--- Como esse problema o afeta? O que você está tentando realizar? -->
<!--- Fornecer o contexto nos ajuda a encontrar uma solução que seja mais útil no mundo real -->
## Imagens do Ocorrido
<!--- Representação visual em vídeo ou imagem do ocorrido -->
<!--- Se está descrevendo um bug poste imagens ou vídeos na reprodução do bug citado, caso se aplique -->
## Seu Ambiente
<!--- Inclua detalhes relevantes sobre o ambiente em que você presenciou/experienciou o bug. -->
* Versão usada (_Release_):
* Nome e versão do navegador:
* Nome e versão do Sistema Operacional (desktop ou mobile):
* Link para o seu projeto (Caso de fork deste projeto):
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sapl/audiencia/views.py`
Content:
```
1 import sapl
2
3 from django.http import HttpResponse
4 from django.core.urlresolvers import reverse
5 from django.views.decorators.clickjacking import xframe_options_exempt
6 from django.views.generic import UpdateView
7 from sapl.crud.base import RP_DETAIL, RP_LIST, Crud, MasterDetailCrud
8
9 from .forms import AudienciaForm, AnexoAudienciaPublicaForm
10 from .models import AudienciaPublica, AnexoAudienciaPublica
11
12
13 def index(request):
14 return HttpResponse("Audiência Pública")
15
16
17 class AudienciaCrud(Crud):
18 model = AudienciaPublica
19 public = [RP_LIST, RP_DETAIL, ]
20
21 class BaseMixin(Crud.BaseMixin):
22 list_field_names = ['numero', 'nome', 'tipo', 'materia',
23 'data']
24 ordering = '-data', 'nome', 'numero', 'tipo'
25
26 class ListView(Crud.ListView):
27 paginate_by = 10
28
29 def get_context_data(self, **kwargs):
30 context = super().get_context_data(**kwargs)
31
32 audiencia_materia = {}
33 for o in context['object_list']:
34 # indexado pelo numero da audiencia
35 audiencia_materia[str(o.numero)] = o.materia
36
37 for row in context['rows']:
38 coluna_materia = row[3] # se mudar a ordem de listagem mudar aqui
39 if coluna_materia[0]:
40 materia = audiencia_materia[row[0][0]]
41 url_materia = reverse('sapl.materia:materialegislativa_detail',
42 kwargs={'pk': materia.id})
43 row[3] = (coluna_materia[0], url_materia)
44 return context
45
46 class CreateView(Crud.CreateView):
47 form_class = AudienciaForm
48
49 def form_valid(self, form):
50 return super(Crud.CreateView, self).form_valid(form)
51
52 class UpdateView(Crud.UpdateView):
53 form_class = AudienciaForm
54
55 def get_initial(self):
56 initial = super(UpdateView, self).get_initial()
57 if self.object.materia:
58 initial['tipo_materia'] = self.object.materia.tipo.id
59 initial['numero_materia'] = self.object.materia.numero
60 initial['ano_materia'] = self.object.materia.ano
61 return initial
62
63 class DeleteView(Crud.DeleteView):
64 pass
65
66 class DetailView(Crud.DetailView):
67
68 layout_key = 'AudienciaPublicaDetail'
69
70 @xframe_options_exempt
71 def get(self, request, *args, **kwargs):
72 return super().get(request, *args, **kwargs)
73
74
75 class AudienciaPublicaMixin:
76
77 def has_permission(self):
78 app_config = sapl.base.models.AppConfig.objects.last()
79 if app_config and app_config.documentos_administrativos == 'O':
80 return True
81
82 return super().has_permission()
83
84
85 class AnexoAudienciaPublicaCrud(MasterDetailCrud):
86 model = AnexoAudienciaPublica
87 parent_field = 'audiencia'
88 help_topic = 'numeracao_docsacess'
89 public = [RP_LIST, RP_DETAIL, ]
90
91 class BaseMixin(MasterDetailCrud.BaseMixin):
92 list_field_names = ['assunto']
93
94 class CreateView(MasterDetailCrud.CreateView):
95 form_class = AnexoAudienciaPublicaForm
96 layout_key = None
97
98 class UpdateView(MasterDetailCrud.UpdateView):
99 form_class = AnexoAudienciaPublicaForm
100
101 class ListView(AudienciaPublicaMixin, MasterDetailCrud.ListView):
102
103 def get_queryset(self):
104 qs = super(MasterDetailCrud.ListView, self).get_queryset()
105 kwargs = {self.crud.parent_field: self.kwargs['pk']}
106 return qs.filter(**kwargs).order_by('-data', '-id')
107
108 class DetailView(AudienciaPublicaMixin, MasterDetailCrud.DetailView):
109 pass
110
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/sapl/audiencia/views.py b/sapl/audiencia/views.py
--- a/sapl/audiencia/views.py
+++ b/sapl/audiencia/views.py
@@ -38,8 +38,11 @@
coluna_materia = row[3] # se mudar a ordem de listagem mudar aqui
if coluna_materia[0]:
materia = audiencia_materia[row[0][0]]
- url_materia = reverse('sapl.materia:materialegislativa_detail',
- kwargs={'pk': materia.id})
+ if materia:
+ url_materia = reverse('sapl.materia:materialegislativa_detail',
+ kwargs={'pk': materia.id})
+ else:
+ url_materia = None
row[3] = (coluna_materia[0], url_materia)
return context
| {"golden_diff": "diff --git a/sapl/audiencia/views.py b/sapl/audiencia/views.py\n--- a/sapl/audiencia/views.py\n+++ b/sapl/audiencia/views.py\n@@ -38,8 +38,11 @@\n coluna_materia = row[3] # se mudar a ordem de listagem mudar aqui\n if coluna_materia[0]:\n materia = audiencia_materia[row[0][0]]\n- url_materia = reverse('sapl.materia:materialegislativa_detail',\n- kwargs={'pk': materia.id})\n+ if materia:\n+ url_materia = reverse('sapl.materia:materialegislativa_detail',\n+ kwargs={'pk': materia.id})\n+ else:\n+ url_materia = None\n row[3] = (coluna_materia[0], url_materia)\n return context\n", "issue": "Erro encontrado quando a audi\u00eancia n\u00e3o possui mat\u00e9ria ligada.\n<!--- Forne\u00e7a um resumo geral da _issue_ no t\u00edtulo acima -->\r\nErro encontrado quando a audi\u00eancia n\u00e3o possui mat\u00e9ria ligada. Derivado do ticket [273270](https://suporte.interlegis.leg.br/scp/tickets.php?id=37122)\r\n\r\n\r\n\r\n## Contexto\r\n<!--- Como esse problema o afeta? O que voc\u00ea est\u00e1 tentando realizar? -->\r\n<!--- Fornecer o contexto nos ajuda a encontrar uma solu\u00e7\u00e3o que seja mais \u00fatil no mundo real -->\r\n\r\n## Imagens do Ocorrido\r\n<!--- Representa\u00e7\u00e3o visual em v\u00eddeo ou imagem do ocorrido -->\r\n<!--- Se est\u00e1 descrevendo um bug poste imagens ou v\u00eddeos na reprodu\u00e7\u00e3o do bug citado, caso se aplique -->\r\n\r\n## Seu Ambiente\r\n<!--- Inclua detalhes relevantes sobre o ambiente em que voc\u00ea presenciou/experienciou o bug. -->\r\n* Vers\u00e3o usada (_Release_):\r\n* Nome e vers\u00e3o do navegador:\r\n* Nome e vers\u00e3o do Sistema Operacional (desktop ou mobile):\r\n* Link para o seu projeto (Caso de fork deste projeto):\r\n\n", "before_files": [{"content": "import sapl\n\nfrom django.http import HttpResponse\nfrom django.core.urlresolvers import reverse\nfrom django.views.decorators.clickjacking import xframe_options_exempt\nfrom django.views.generic import UpdateView\nfrom sapl.crud.base import RP_DETAIL, RP_LIST, Crud, MasterDetailCrud\n\nfrom .forms import AudienciaForm, AnexoAudienciaPublicaForm\nfrom .models import AudienciaPublica, AnexoAudienciaPublica\n\n\ndef index(request):\n return HttpResponse(\"Audi\u00eancia P\u00fablica\")\n\n\nclass AudienciaCrud(Crud):\n model = AudienciaPublica\n public = [RP_LIST, RP_DETAIL, ]\n\n class BaseMixin(Crud.BaseMixin):\n list_field_names = ['numero', 'nome', 'tipo', 'materia',\n 'data'] \n ordering = '-data', 'nome', 'numero', 'tipo'\n\n class ListView(Crud.ListView):\n paginate_by = 10\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n\n audiencia_materia = {}\n for o in context['object_list']:\n # indexado pelo numero da audiencia\n audiencia_materia[str(o.numero)] = o.materia\n\n for row in context['rows']:\n coluna_materia = row[3] # se mudar a ordem de listagem mudar aqui\n if coluna_materia[0]:\n materia = audiencia_materia[row[0][0]]\n url_materia = reverse('sapl.materia:materialegislativa_detail',\n kwargs={'pk': materia.id})\n row[3] = (coluna_materia[0], url_materia)\n return context\n\n class CreateView(Crud.CreateView):\n form_class = AudienciaForm\n\n def form_valid(self, form):\n return super(Crud.CreateView, self).form_valid(form)\n\n class UpdateView(Crud.UpdateView):\n form_class = AudienciaForm\n\n def get_initial(self):\n initial = super(UpdateView, self).get_initial()\n if self.object.materia:\n initial['tipo_materia'] = self.object.materia.tipo.id\n initial['numero_materia'] = self.object.materia.numero\n 
initial['ano_materia'] = self.object.materia.ano\n return initial\n \n class DeleteView(Crud.DeleteView):\n pass\n\n class DetailView(Crud.DetailView):\n\n layout_key = 'AudienciaPublicaDetail'\n\n @xframe_options_exempt\n def get(self, request, *args, **kwargs):\n return super().get(request, *args, **kwargs)\n\n\nclass AudienciaPublicaMixin:\n\n def has_permission(self):\n app_config = sapl.base.models.AppConfig.objects.last()\n if app_config and app_config.documentos_administrativos == 'O':\n return True\n\n return super().has_permission()\n\n\nclass AnexoAudienciaPublicaCrud(MasterDetailCrud):\n model = AnexoAudienciaPublica\n parent_field = 'audiencia'\n help_topic = 'numeracao_docsacess'\n public = [RP_LIST, RP_DETAIL, ]\n\n class BaseMixin(MasterDetailCrud.BaseMixin):\n list_field_names = ['assunto']\n\n class CreateView(MasterDetailCrud.CreateView):\n form_class = AnexoAudienciaPublicaForm\n layout_key = None\n\n class UpdateView(MasterDetailCrud.UpdateView):\n form_class = AnexoAudienciaPublicaForm\n\n class ListView(AudienciaPublicaMixin, MasterDetailCrud.ListView):\n\n def get_queryset(self):\n qs = super(MasterDetailCrud.ListView, self).get_queryset()\n kwargs = {self.crud.parent_field: self.kwargs['pk']}\n return qs.filter(**kwargs).order_by('-data', '-id')\n\n class DetailView(AudienciaPublicaMixin, MasterDetailCrud.DetailView):\n pass\n", "path": "sapl/audiencia/views.py"}], "after_files": [{"content": "import sapl\n\nfrom django.http import HttpResponse\nfrom django.core.urlresolvers import reverse\nfrom django.views.decorators.clickjacking import xframe_options_exempt\nfrom django.views.generic import UpdateView\nfrom sapl.crud.base import RP_DETAIL, RP_LIST, Crud, MasterDetailCrud\n\nfrom .forms import AudienciaForm, AnexoAudienciaPublicaForm\nfrom .models import AudienciaPublica, AnexoAudienciaPublica\n\n\ndef index(request):\n return HttpResponse(\"Audi\u00eancia P\u00fablica\")\n\n\nclass AudienciaCrud(Crud):\n model = AudienciaPublica\n public = [RP_LIST, RP_DETAIL, ]\n\n class BaseMixin(Crud.BaseMixin):\n list_field_names = ['numero', 'nome', 'tipo', 'materia',\n 'data'] \n ordering = '-data', 'nome', 'numero', 'tipo'\n\n class ListView(Crud.ListView):\n paginate_by = 10\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n\n audiencia_materia = {}\n for o in context['object_list']:\n # indexado pelo numero da audiencia\n audiencia_materia[str(o.numero)] = o.materia\n\n for row in context['rows']:\n coluna_materia = row[3] # se mudar a ordem de listagem mudar aqui\n if coluna_materia[0]:\n materia = audiencia_materia[row[0][0]]\n if materia:\n url_materia = reverse('sapl.materia:materialegislativa_detail',\n kwargs={'pk': materia.id})\n else:\n url_materia = None\n row[3] = (coluna_materia[0], url_materia)\n return context\n\n class CreateView(Crud.CreateView):\n form_class = AudienciaForm\n\n def form_valid(self, form):\n return super(Crud.CreateView, self).form_valid(form)\n\n class UpdateView(Crud.UpdateView):\n form_class = AudienciaForm\n\n def get_initial(self):\n initial = super(UpdateView, self).get_initial()\n if self.object.materia:\n initial['tipo_materia'] = self.object.materia.tipo.id\n initial['numero_materia'] = self.object.materia.numero\n initial['ano_materia'] = self.object.materia.ano\n return initial\n \n class DeleteView(Crud.DeleteView):\n pass\n\n class DetailView(Crud.DetailView):\n\n layout_key = 'AudienciaPublicaDetail'\n\n @xframe_options_exempt\n def get(self, request, *args, **kwargs):\n return 
super().get(request, *args, **kwargs)\n\n\nclass AudienciaPublicaMixin:\n\n def has_permission(self):\n app_config = sapl.base.models.AppConfig.objects.last()\n if app_config and app_config.documentos_administrativos == 'O':\n return True\n\n return super().has_permission()\n\n\nclass AnexoAudienciaPublicaCrud(MasterDetailCrud):\n model = AnexoAudienciaPublica\n parent_field = 'audiencia'\n help_topic = 'numeracao_docsacess'\n public = [RP_LIST, RP_DETAIL, ]\n\n class BaseMixin(MasterDetailCrud.BaseMixin):\n list_field_names = ['assunto']\n\n class CreateView(MasterDetailCrud.CreateView):\n form_class = AnexoAudienciaPublicaForm\n layout_key = None\n\n class UpdateView(MasterDetailCrud.UpdateView):\n form_class = AnexoAudienciaPublicaForm\n\n class ListView(AudienciaPublicaMixin, MasterDetailCrud.ListView):\n\n def get_queryset(self):\n qs = super(MasterDetailCrud.ListView, self).get_queryset()\n kwargs = {self.crud.parent_field: self.kwargs['pk']}\n return qs.filter(**kwargs).order_by('-data', '-id')\n\n class DetailView(AudienciaPublicaMixin, MasterDetailCrud.DetailView):\n pass\n", "path": "sapl/audiencia/views.py"}]} | 1,588 | 196 |
gh_patches_debug_62155 | rasdani/github-patches | git_diff | Parsl__parsl-597 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Make `GlobusScheme` inherit from `RepresentationMixin`
Otherwise, the config printed in the log is not copy-and-pasteable:
```
storage_access=[<parsl.data_provider.scheme.GlobusScheme object at 0x7f48d021fbe0>],
working_dir=None
```
Make `GlobusScheme` inherit from `RepresentationMixin`
Otherwise, the config printed in the log is not copy-and-pasteable:
```
storage_access=[<parsl.data_provider.scheme.GlobusScheme object at 0x7f48d021fbe0>],
working_dir=None
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `parsl/data_provider/scheme.py`
Content:
```
1
2 class GlobusScheme(object):
3 """Specification for accessing data on a remote executor via Globus.
4
5 Parameters
6 ----------
7 endpoint_uuid : str
8 Universally unique identifier of the Globus endpoint at which the data can be accessed.
9 This can be found in the `Manage Endpoints <https://www.globus.org/app/endpoints>`_ page.
10 endpoint_path : str, optional
11 FIXME
12 local_path : str, optional
13 FIXME
14 """
15 def __init__(self, endpoint_uuid, endpoint_path=None, local_path=None):
16 self.endpoint_uuid = endpoint_uuid
17 self.endpoint_path = endpoint_path
18 self.local_path = local_path
19
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/parsl/data_provider/scheme.py b/parsl/data_provider/scheme.py
--- a/parsl/data_provider/scheme.py
+++ b/parsl/data_provider/scheme.py
@@ -1,5 +1,7 @@
+from parsl.utils import RepresentationMixin
-class GlobusScheme(object):
+
+class GlobusScheme(RepresentationMixin):
"""Specification for accessing data on a remote executor via Globus.
Parameters
| {"golden_diff": "diff --git a/parsl/data_provider/scheme.py b/parsl/data_provider/scheme.py\n--- a/parsl/data_provider/scheme.py\n+++ b/parsl/data_provider/scheme.py\n@@ -1,5 +1,7 @@\n+from parsl.utils import RepresentationMixin\n \n-class GlobusScheme(object):\n+\n+class GlobusScheme(RepresentationMixin):\n \"\"\"Specification for accessing data on a remote executor via Globus.\n \n Parameters\n", "issue": "Make `GlobusScheme` inherit from `RepresentationMixin`\nOtherwise, the config printed in the log is not copy-and-pasteable:\r\n\r\n```\r\n storage_access=[<parsl.data_provider.scheme.GlobusScheme object at 0x7f48d021fbe0>], \r\n working_dir=None\r\n```\r\n\r\n\nMake `GlobusScheme` inherit from `RepresentationMixin`\nOtherwise, the config printed in the log is not copy-and-pasteable:\r\n\r\n```\r\n storage_access=[<parsl.data_provider.scheme.GlobusScheme object at 0x7f48d021fbe0>], \r\n working_dir=None\r\n```\r\n\r\n\n", "before_files": [{"content": "\nclass GlobusScheme(object):\n \"\"\"Specification for accessing data on a remote executor via Globus.\n\n Parameters\n ----------\n endpoint_uuid : str\n Universally unique identifier of the Globus endpoint at which the data can be accessed.\n This can be found in the `Manage Endpoints <https://www.globus.org/app/endpoints>`_ page.\n endpoint_path : str, optional\n FIXME\n local_path : str, optional\n FIXME\n \"\"\"\n def __init__(self, endpoint_uuid, endpoint_path=None, local_path=None):\n self.endpoint_uuid = endpoint_uuid\n self.endpoint_path = endpoint_path\n self.local_path = local_path\n", "path": "parsl/data_provider/scheme.py"}], "after_files": [{"content": "from parsl.utils import RepresentationMixin\n\n\nclass GlobusScheme(RepresentationMixin):\n \"\"\"Specification for accessing data on a remote executor via Globus.\n\n Parameters\n ----------\n endpoint_uuid : str\n Universally unique identifier of the Globus endpoint at which the data can be accessed.\n This can be found in the `Manage Endpoints <https://www.globus.org/app/endpoints>`_ page.\n endpoint_path : str, optional\n FIXME\n local_path : str, optional\n FIXME\n \"\"\"\n def __init__(self, endpoint_uuid, endpoint_path=None, local_path=None):\n self.endpoint_uuid = endpoint_uuid\n self.endpoint_path = endpoint_path\n self.local_path = local_path\n", "path": "parsl/data_provider/scheme.py"}]} | 575 | 97 |
gh_patches_debug_5521 | rasdani/github-patches | git_diff | aws__aws-cli-2510 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Version 1.11.17 and above breaks EC2 describe volumes with JSON input
Related Issue: #1976
JSON File:
```json
{
"VolumeIds": [
"vol-<id>"
]
}
```
Command:
```
aws ec2 describe-volumes --region <region> --cli-input-json file://<file-name>.json
```
Error Message:
```
An error occurred (InvalidParameterCombination) when calling the DescribeVolumes operation: The parameter volumeSet cannot be used with the parameter maxResults
```
OS Release:
```
NAME="Amazon Linux AMI"
VERSION="2016.09"
ID="amzn"
ID_LIKE="rhel fedora"
VERSION_ID="2016.09"
PRETTY_NAME="Amazon Linux AMI 2016.09"
ANSI_COLOR="0;33"
CPE_NAME="cpe:/o:amazon:linux:2016.09:ga"
HOME_URL="http://aws.amazon.com/amazon-linux-ami/"
```
Works:
```
python27-botocore-1.4.46-1.58.amzn1.noarch
aws-cli-1.10.56-1.41.amzn1.noarch
```
Breaks:
```
python27-botocore-1.4.74-1.60.amzn1.noarch
aws-cli-1.11.17-1.43.amzn1.noarch
```
```
python27-botocore-1.4.86-1.62.amzn1.noarch
aws-cli-1.11.29-1.45.amzn1.noarch
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `awscli/customizations/cliinputjson.py`
Content:
```
1 # Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License"). You
4 # may not use this file except in compliance with the License. A copy of
5 # the License is located at
6 #
7 # http://aws.amazon.com/apache2.0/
8 #
9 # or in the "license" file accompanying this file. This file is
10 # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
11 # ANY KIND, either express or implied. See the License for the specific
12 # language governing permissions and limitations under the License.
13 import json
14
15 from awscli.paramfile import get_paramfile
16 from awscli.argprocess import ParamError
17 from awscli.customizations.arguments import OverrideRequiredArgsArgument
18
19
20 def register_cli_input_json(cli):
21 cli.register('building-argument-table', add_cli_input_json)
22
23
24 def add_cli_input_json(session, argument_table, **kwargs):
25 # This argument cannot support operations with streaming output which
26 # is designated by the argument name `outfile`.
27 if 'outfile' not in argument_table:
28 cli_input_json_argument = CliInputJSONArgument(session)
29 cli_input_json_argument.add_to_arg_table(argument_table)
30
31
32 class CliInputJSONArgument(OverrideRequiredArgsArgument):
33 """This argument inputs a JSON string as the entire input for a command.
34
35 Ideally, the value to this argument should be a filled out JSON file
36 generated by ``--generate-cli-skeleton``. The items in the JSON string
37 will not clobber other arguments entered into the command line.
38 """
39 ARG_DATA = {
40 'name': 'cli-input-json',
41 'help_text': 'Performs service operation based on the JSON string '
42 'provided. The JSON string follows the format provided '
43 'by ``--generate-cli-skeleton``. If other arguments are '
44 'provided on the command line, the CLI values will override '
45 'the JSON-provided values.'
46 }
47
48 def __init__(self, session):
49 super(CliInputJSONArgument, self).__init__(session)
50
51 def _register_argument_action(self):
52 self._session.register(
53 'calling-command', self.add_to_call_parameters)
54 super(CliInputJSONArgument, self)._register_argument_action()
55
56 def add_to_call_parameters(self, call_parameters, parsed_args,
57 parsed_globals, **kwargs):
58
59 # Check if ``--cli-input-json`` was specified in the command line.
60 input_json = getattr(parsed_args, 'cli_input_json', None)
61 if input_json is not None:
62 # Retrieve the JSON from the file if needed.
63 retrieved_json = get_paramfile(input_json)
64 # Nothing was retrieved from the file. So assume the argument
65 # is already a JSON string.
66 if retrieved_json is None:
67 retrieved_json = input_json
68 try:
69 # Try to load the JSON string into a python dictionary
70 input_data = json.loads(retrieved_json)
71 except ValueError as e:
72 raise ParamError(
73 self.name, "Invalid JSON: %s\nJSON received: %s"
74 % (e, retrieved_json))
75 # Add the members from the input JSON to the call parameters.
76 self._update_call_parameters(call_parameters, input_data)
77
78 def _update_call_parameters(self, call_parameters, input_data):
79 for input_key in input_data.keys():
80 # Only add the values to ``call_parameters`` if not already
81 # present.
82 if input_key not in call_parameters:
83 call_parameters[input_key] = input_data[input_key]
84
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/awscli/customizations/cliinputjson.py b/awscli/customizations/cliinputjson.py
--- a/awscli/customizations/cliinputjson.py
+++ b/awscli/customizations/cliinputjson.py
@@ -50,7 +50,7 @@
def _register_argument_action(self):
self._session.register(
- 'calling-command', self.add_to_call_parameters)
+ 'calling-command.*', self.add_to_call_parameters)
super(CliInputJSONArgument, self)._register_argument_action()
def add_to_call_parameters(self, call_parameters, parsed_args,
| {"golden_diff": "diff --git a/awscli/customizations/cliinputjson.py b/awscli/customizations/cliinputjson.py\n--- a/awscli/customizations/cliinputjson.py\n+++ b/awscli/customizations/cliinputjson.py\n@@ -50,7 +50,7 @@\n \n def _register_argument_action(self):\n self._session.register(\n- 'calling-command', self.add_to_call_parameters)\n+ 'calling-command.*', self.add_to_call_parameters)\n super(CliInputJSONArgument, self)._register_argument_action()\n \n def add_to_call_parameters(self, call_parameters, parsed_args,\n", "issue": "Version 1.11.17 and above breaks EC2 describe volumes with JSON input\nRelated Issue: #1976 \r\n\r\nJSON File:\r\n```json\r\n{\r\n \"VolumeIds\": [\r\n \"vol-<id>\"\r\n ]\r\n}\r\n```\r\n\r\nCommand:\r\n```\r\naws ec2 describe-volumes --region <region> --cli-input-json file://<file-name>.json\r\n```\r\n\r\nError Message:\r\n```\r\nAn error occurred (InvalidParameterCombination) when calling the DescribeVolumes operation: The parameter volumeSet cannot be used with the parameter maxResults\r\n```\r\n\r\nOS Release:\r\n```\r\nNAME=\"Amazon Linux AMI\"\r\nVERSION=\"2016.09\"\r\nID=\"amzn\"\r\nID_LIKE=\"rhel fedora\"\r\nVERSION_ID=\"2016.09\"\r\nPRETTY_NAME=\"Amazon Linux AMI 2016.09\"\r\nANSI_COLOR=\"0;33\"\r\nCPE_NAME=\"cpe:/o:amazon:linux:2016.09:ga\"\r\nHOME_URL=\"http://aws.amazon.com/amazon-linux-ami/\"\r\n```\r\n\r\nWorks:\r\n```\r\npython27-botocore-1.4.46-1.58.amzn1.noarch\r\naws-cli-1.10.56-1.41.amzn1.noarch\r\n```\r\n\r\nBreaks:\r\n```\r\npython27-botocore-1.4.74-1.60.amzn1.noarch\r\naws-cli-1.11.17-1.43.amzn1.noarch\r\n```\r\n```\r\npython27-botocore-1.4.86-1.62.amzn1.noarch\r\naws-cli-1.11.29-1.45.amzn1.noarch\r\n```\r\n\n", "before_files": [{"content": "# Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"). You\n# may not use this file except in compliance with the License. A copy of\n# the License is located at\n#\n# http://aws.amazon.com/apache2.0/\n#\n# or in the \"license\" file accompanying this file. This file is\n# distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF\n# ANY KIND, either express or implied. See the License for the specific\n# language governing permissions and limitations under the License.\nimport json\n\nfrom awscli.paramfile import get_paramfile\nfrom awscli.argprocess import ParamError\nfrom awscli.customizations.arguments import OverrideRequiredArgsArgument\n\n\ndef register_cli_input_json(cli):\n cli.register('building-argument-table', add_cli_input_json)\n\n\ndef add_cli_input_json(session, argument_table, **kwargs):\n # This argument cannot support operations with streaming output which\n # is designated by the argument name `outfile`.\n if 'outfile' not in argument_table:\n cli_input_json_argument = CliInputJSONArgument(session)\n cli_input_json_argument.add_to_arg_table(argument_table)\n\n\nclass CliInputJSONArgument(OverrideRequiredArgsArgument):\n \"\"\"This argument inputs a JSON string as the entire input for a command.\n\n Ideally, the value to this argument should be a filled out JSON file\n generated by ``--generate-cli-skeleton``. The items in the JSON string\n will not clobber other arguments entered into the command line.\n \"\"\"\n ARG_DATA = {\n 'name': 'cli-input-json',\n 'help_text': 'Performs service operation based on the JSON string '\n 'provided. The JSON string follows the format provided '\n 'by ``--generate-cli-skeleton``. 
If other arguments are '\n 'provided on the command line, the CLI values will override '\n 'the JSON-provided values.'\n }\n\n def __init__(self, session):\n super(CliInputJSONArgument, self).__init__(session)\n\n def _register_argument_action(self):\n self._session.register(\n 'calling-command', self.add_to_call_parameters)\n super(CliInputJSONArgument, self)._register_argument_action()\n\n def add_to_call_parameters(self, call_parameters, parsed_args,\n parsed_globals, **kwargs):\n\n # Check if ``--cli-input-json`` was specified in the command line.\n input_json = getattr(parsed_args, 'cli_input_json', None)\n if input_json is not None:\n # Retrieve the JSON from the file if needed.\n retrieved_json = get_paramfile(input_json)\n # Nothing was retrieved from the file. So assume the argument\n # is already a JSON string.\n if retrieved_json is None:\n retrieved_json = input_json\n try:\n # Try to load the JSON string into a python dictionary\n input_data = json.loads(retrieved_json)\n except ValueError as e:\n raise ParamError(\n self.name, \"Invalid JSON: %s\\nJSON received: %s\"\n % (e, retrieved_json))\n # Add the members from the input JSON to the call parameters.\n self._update_call_parameters(call_parameters, input_data)\n\n def _update_call_parameters(self, call_parameters, input_data):\n for input_key in input_data.keys():\n # Only add the values to ``call_parameters`` if not already\n # present.\n if input_key not in call_parameters:\n call_parameters[input_key] = input_data[input_key]\n", "path": "awscli/customizations/cliinputjson.py"}], "after_files": [{"content": "# Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"). You\n# may not use this file except in compliance with the License. A copy of\n# the License is located at\n#\n# http://aws.amazon.com/apache2.0/\n#\n# or in the \"license\" file accompanying this file. This file is\n# distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF\n# ANY KIND, either express or implied. See the License for the specific\n# language governing permissions and limitations under the License.\nimport json\n\nfrom awscli.paramfile import get_paramfile\nfrom awscli.argprocess import ParamError\nfrom awscli.customizations.arguments import OverrideRequiredArgsArgument\n\n\ndef register_cli_input_json(cli):\n cli.register('building-argument-table', add_cli_input_json)\n\n\ndef add_cli_input_json(session, argument_table, **kwargs):\n # This argument cannot support operations with streaming output which\n # is designated by the argument name `outfile`.\n if 'outfile' not in argument_table:\n cli_input_json_argument = CliInputJSONArgument(session)\n cli_input_json_argument.add_to_arg_table(argument_table)\n\n\nclass CliInputJSONArgument(OverrideRequiredArgsArgument):\n \"\"\"This argument inputs a JSON string as the entire input for a command.\n\n Ideally, the value to this argument should be a filled out JSON file\n generated by ``--generate-cli-skeleton``. The items in the JSON string\n will not clobber other arguments entered into the command line.\n \"\"\"\n ARG_DATA = {\n 'name': 'cli-input-json',\n 'help_text': 'Performs service operation based on the JSON string '\n 'provided. The JSON string follows the format provided '\n 'by ``--generate-cli-skeleton``. 
If other arguments are '\n 'provided on the command line, the CLI values will override '\n 'the JSON-provided values.'\n }\n\n def __init__(self, session):\n super(CliInputJSONArgument, self).__init__(session)\n\n def _register_argument_action(self):\n self._session.register(\n 'calling-command.*', self.add_to_call_parameters)\n super(CliInputJSONArgument, self)._register_argument_action()\n\n def add_to_call_parameters(self, call_parameters, parsed_args,\n parsed_globals, **kwargs):\n\n # Check if ``--cli-input-json`` was specified in the command line.\n input_json = getattr(parsed_args, 'cli_input_json', None)\n if input_json is not None:\n # Retrieve the JSON from the file if needed.\n retrieved_json = get_paramfile(input_json)\n # Nothing was retrieved from the file. So assume the argument\n # is already a JSON string.\n if retrieved_json is None:\n retrieved_json = input_json\n try:\n # Try to load the JSON string into a python dictionary\n input_data = json.loads(retrieved_json)\n except ValueError as e:\n raise ParamError(\n self.name, \"Invalid JSON: %s\\nJSON received: %s\"\n % (e, retrieved_json))\n # Add the members from the input JSON to the call parameters.\n self._update_call_parameters(call_parameters, input_data)\n\n def _update_call_parameters(self, call_parameters, input_data):\n for input_key in input_data.keys():\n # Only add the values to ``call_parameters`` if not already\n # present.\n if input_key not in call_parameters:\n call_parameters[input_key] = input_data[input_key]\n", "path": "awscli/customizations/cliinputjson.py"}]} | 1,569 | 125 |
gh_patches_debug_62141 | rasdani/github-patches | git_diff | searx__searx-1277 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
findx crashes
... with message on web page: findx (unexpected crash: string indices must be integers)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `searx/engines/findx.py`
Content:
```
1 """
2 FindX (General, Images, Videos)
3
4 @website https://www.findx.com
5 @provide-api no
6 @using-api no
7 @results HTML
8 @stable no
9 @parse url, title, content, embedded, img_src, thumbnail_src
10 """
11
12 from dateutil import parser
13 from json import loads
14 import re
15
16 from lxml import html
17
18 from searx import logger
19 from searx.engines.xpath import extract_text
20 from searx.engines.youtube_noapi import base_youtube_url, embedded_url
21 from searx.url_utils import urlencode
22
23
24 paging = True
25 results_xpath = '//script[@id="initial-state"]'
26 search_url = 'https://www.findx.com/{category}?{q}'
27 type_map = {
28 'none': 'web',
29 'general': 'web',
30 'images': 'images',
31 'videos': 'videos',
32 }
33
34
35 def request(query, params):
36 params['url'] = search_url.format(
37 category=type_map[params['category']],
38 q=urlencode({
39 'q': query,
40 'page': params['pageno']
41 })
42 )
43 return params
44
45
46 def response(resp):
47 dom = html.fromstring(resp.text)
48 results_raw_json = dom.xpath(results_xpath)
49 results_json = loads(extract_text(results_raw_json))
50
51 if len(results_json['web']['results']) > 0:
52 return _general_results(results_json['web']['results'])
53
54 if len(results_json['images']['results']) > 0:
55 return _images_results(results_json['images']['results'])
56
57 if len(results_json['video']['results']) > 0:
58 return _videos_results(results_json['video']['results'])
59
60 return []
61
62
63 def _general_results(general_results):
64 results = []
65 for result in general_results:
66 results.append({
67 'url': result['url'],
68 'title': result['title'],
69 'content': result['sum'],
70 })
71 return results
72
73
74 def _images_results(image_results):
75 results = []
76 for result in image_results:
77 results.append({
78 'url': result['sourceURL'],
79 'title': result['title'],
80 'content': result['source'],
81 'thumbnail_src': _extract_url(result['assets']['thumb']['url']),
82 'img_src': _extract_url(result['assets']['file']['url']),
83 'template': 'images.html',
84 })
85 return results
86
87
88 def _videos_results(video_results):
89 results = []
90 for result in video_results:
91 if not result['kind'].startswith('youtube'):
92 logger.warn('Unknown video kind in findx: {}'.format(result['kind']))
93 continue
94
95 description = result['snippet']['description']
96 if len(description) > 300:
97 description = description[:300] + '...'
98
99 results.append({
100 'url': base_youtube_url + result['id'],
101 'title': result['snippet']['title'],
102 'content': description,
103 'thumbnail': _extract_url(result['snippet']['thumbnails']['default']['url']),
104 'publishedDate': parser.parse(result['snippet']['publishedAt']),
105 'embedded': embedded_url.format(videoid=result['id']),
106 'template': 'videos.html',
107 })
108 return results
109
110
111 def _extract_url(url):
112 matching = re.search('(/https?://[^)]+)', url)
113 if matching:
114 return matching.group(0)[1:]
115 return ''
116
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/searx/engines/findx.py b/searx/engines/findx.py
--- a/searx/engines/findx.py
+++ b/searx/engines/findx.py
@@ -49,7 +49,7 @@
results_json = loads(extract_text(results_raw_json))
if len(results_json['web']['results']) > 0:
- return _general_results(results_json['web']['results'])
+ return _general_results(results_json['web']['results']['webSearch']['results'])
if len(results_json['images']['results']) > 0:
return _images_results(results_json['images']['results'])
| {"golden_diff": "diff --git a/searx/engines/findx.py b/searx/engines/findx.py\n--- a/searx/engines/findx.py\n+++ b/searx/engines/findx.py\n@@ -49,7 +49,7 @@\n results_json = loads(extract_text(results_raw_json))\n \n if len(results_json['web']['results']) > 0:\n- return _general_results(results_json['web']['results'])\n+ return _general_results(results_json['web']['results']['webSearch']['results'])\n \n if len(results_json['images']['results']) > 0:\n return _images_results(results_json['images']['results'])\n", "issue": "findx crashes\n... with message on web page: findx (unexpected crash: string indices must be integers) \n", "before_files": [{"content": "\"\"\"\nFindX (General, Images, Videos)\n\n@website https://www.findx.com\n@provide-api no\n@using-api no\n@results HTML\n@stable no\n@parse url, title, content, embedded, img_src, thumbnail_src\n\"\"\"\n\nfrom dateutil import parser\nfrom json import loads\nimport re\n\nfrom lxml import html\n\nfrom searx import logger\nfrom searx.engines.xpath import extract_text\nfrom searx.engines.youtube_noapi import base_youtube_url, embedded_url\nfrom searx.url_utils import urlencode\n\n\npaging = True\nresults_xpath = '//script[@id=\"initial-state\"]'\nsearch_url = 'https://www.findx.com/{category}?{q}'\ntype_map = {\n 'none': 'web',\n 'general': 'web',\n 'images': 'images',\n 'videos': 'videos',\n}\n\n\ndef request(query, params):\n params['url'] = search_url.format(\n category=type_map[params['category']],\n q=urlencode({\n 'q': query,\n 'page': params['pageno']\n })\n )\n return params\n\n\ndef response(resp):\n dom = html.fromstring(resp.text)\n results_raw_json = dom.xpath(results_xpath)\n results_json = loads(extract_text(results_raw_json))\n\n if len(results_json['web']['results']) > 0:\n return _general_results(results_json['web']['results'])\n\n if len(results_json['images']['results']) > 0:\n return _images_results(results_json['images']['results'])\n\n if len(results_json['video']['results']) > 0:\n return _videos_results(results_json['video']['results'])\n\n return []\n\n\ndef _general_results(general_results):\n results = []\n for result in general_results:\n results.append({\n 'url': result['url'],\n 'title': result['title'],\n 'content': result['sum'],\n })\n return results\n\n\ndef _images_results(image_results):\n results = []\n for result in image_results:\n results.append({\n 'url': result['sourceURL'],\n 'title': result['title'],\n 'content': result['source'],\n 'thumbnail_src': _extract_url(result['assets']['thumb']['url']),\n 'img_src': _extract_url(result['assets']['file']['url']),\n 'template': 'images.html',\n })\n return results\n\n\ndef _videos_results(video_results):\n results = []\n for result in video_results:\n if not result['kind'].startswith('youtube'):\n logger.warn('Unknown video kind in findx: {}'.format(result['kind']))\n continue\n\n description = result['snippet']['description']\n if len(description) > 300:\n description = description[:300] + '...'\n\n results.append({\n 'url': base_youtube_url + result['id'],\n 'title': result['snippet']['title'],\n 'content': description,\n 'thumbnail': _extract_url(result['snippet']['thumbnails']['default']['url']),\n 'publishedDate': parser.parse(result['snippet']['publishedAt']),\n 'embedded': embedded_url.format(videoid=result['id']),\n 'template': 'videos.html',\n })\n return results\n\n\ndef _extract_url(url):\n matching = re.search('(/https?://[^)]+)', url)\n if matching:\n return matching.group(0)[1:]\n return ''\n", "path": "searx/engines/findx.py"}], 
"after_files": [{"content": "\"\"\"\nFindX (General, Images, Videos)\n\n@website https://www.findx.com\n@provide-api no\n@using-api no\n@results HTML\n@stable no\n@parse url, title, content, embedded, img_src, thumbnail_src\n\"\"\"\n\nfrom dateutil import parser\nfrom json import loads\nimport re\n\nfrom lxml import html\n\nfrom searx import logger\nfrom searx.engines.xpath import extract_text\nfrom searx.engines.youtube_noapi import base_youtube_url, embedded_url\nfrom searx.url_utils import urlencode\n\n\npaging = True\nresults_xpath = '//script[@id=\"initial-state\"]'\nsearch_url = 'https://www.findx.com/{category}?{q}'\ntype_map = {\n 'none': 'web',\n 'general': 'web',\n 'images': 'images',\n 'videos': 'videos',\n}\n\n\ndef request(query, params):\n params['url'] = search_url.format(\n category=type_map[params['category']],\n q=urlencode({\n 'q': query,\n 'page': params['pageno']\n })\n )\n return params\n\n\ndef response(resp):\n dom = html.fromstring(resp.text)\n results_raw_json = dom.xpath(results_xpath)\n results_json = loads(extract_text(results_raw_json))\n\n if len(results_json['web']['results']) > 0:\n return _general_results(results_json['web']['results']['webSearch']['results'])\n\n if len(results_json['images']['results']) > 0:\n return _images_results(results_json['images']['results'])\n\n if len(results_json['video']['results']) > 0:\n return _videos_results(results_json['video']['results'])\n\n return []\n\n\ndef _general_results(general_results):\n results = []\n for result in general_results:\n results.append({\n 'url': result['url'],\n 'title': result['title'],\n 'content': result['sum'],\n })\n return results\n\n\ndef _images_results(image_results):\n results = []\n for result in image_results:\n results.append({\n 'url': result['sourceURL'],\n 'title': result['title'],\n 'content': result['source'],\n 'thumbnail_src': _extract_url(result['assets']['thumb']['url']),\n 'img_src': _extract_url(result['assets']['file']['url']),\n 'template': 'images.html',\n })\n return results\n\n\ndef _videos_results(video_results):\n results = []\n for result in video_results:\n if not result['kind'].startswith('youtube'):\n logger.warn('Unknown video kind in findx: {}'.format(result['kind']))\n continue\n\n description = result['snippet']['description']\n if len(description) > 300:\n description = description[:300] + '...'\n\n results.append({\n 'url': base_youtube_url + result['id'],\n 'title': result['snippet']['title'],\n 'content': description,\n 'thumbnail': _extract_url(result['snippet']['thumbnails']['default']['url']),\n 'publishedDate': parser.parse(result['snippet']['publishedAt']),\n 'embedded': embedded_url.format(videoid=result['id']),\n 'template': 'videos.html',\n })\n return results\n\n\ndef _extract_url(url):\n matching = re.search('(/https?://[^)]+)', url)\n if matching:\n return matching.group(0)[1:]\n return ''\n", "path": "searx/engines/findx.py"}]} | 1,266 | 145 |
gh_patches_debug_21871 | rasdani/github-patches | git_diff | kubeflow__pipelines-4611 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
SDK - Drop support for Python 3.5
Python 3.5 has reached EoL: https://www.python.org/downloads/release/python-3510/#:~:text=Python%203.5%20will%20reach%20its,release%20of%20the%203.5%20series.
We're going to stop supporting Python 3.5 soon.
Please feel free to comment or vote on this issue.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sdk/python/setup.py`
Content:
```
1 # Copyright 2018 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import os
16 import re
17 from setuptools import setup
18
19 NAME = 'kfp'
20 #VERSION = .... Change the version in kfp/__init__.py
21
22 # NOTICE, after any updates to the following, ./requirements.in should be updated
23 # accordingly.
24 REQUIRES = [
25 'absl-py>=0.9,<=0.11',
26 'PyYAML>=5.3,<6',
27 'google-cloud-storage>=1.13.0,<2',
28 'kubernetes>=8.0.0,<12.0.0',
29 'google-auth>=1.6.1,<2',
30 'requests-toolbelt>=0.8.0,<1',
31 'cloudpickle>=1.3.0,<2',
32 # Update the upper version whenever a new major version of the
33 # kfp-server-api package is released.
34 # Update the lower version when kfp sdk depends on new apis/fields in
35 # kfp-server-api.
36 # Note, please also update ./requirements.in
37 'kfp-server-api>=1.1.2,<2.0.0',
38 'jsonschema>=3.0.1,<4',
39 'tabulate>=0.8.6,<1',
40 'click>=7.1.1,<8',
41 'Deprecated>=1.2.7,<2',
42 'strip-hints>=0.1.8,<1',
43 'docstring-parser>=0.7.3,<1',
44 'kfp-pipeline-spec>=0.1.5,<0.2.0',
45 'fire>=0.3.1,<1',
46 'protobuf>=3.13.0,<4'
47 ]
48
49 TESTS_REQUIRE = [
50 'mock',
51 ]
52
53
54 def find_version(*file_path_parts):
55 here = os.path.abspath(os.path.dirname(__file__))
56 with open(os.path.join(here, *file_path_parts), 'r') as fp:
57 version_file_text = fp.read()
58
59 version_match = re.search(
60 r"^__version__ = ['\"]([^'\"]*)['\"]",
61 version_file_text,
62 re.M,
63 )
64 if version_match:
65 return version_match.group(1)
66
67 raise RuntimeError('Unable to find version string.')
68
69
70 setup(
71 name=NAME,
72 version=find_version('kfp', '__init__.py'),
73 description='KubeFlow Pipelines SDK',
74 author='google',
75 install_requires=REQUIRES,
76 tests_require=TESTS_REQUIRE,
77 packages=[
78 'kfp',
79 'kfp.cli',
80 'kfp.cli.diagnose_me',
81 'kfp.compiler',
82 'kfp.components',
83 'kfp.components.structures',
84 'kfp.containers',
85 'kfp.dsl',
86 'kfp.dsl.extensions',
87 'kfp.notebook',
88 'kfp.v2',
89 'kfp.v2.compiler',
90 'kfp.v2.components',
91 'kfp.v2.dsl',
92 ],
93 classifiers=[
94 'Intended Audience :: Developers',
95 'Intended Audience :: Education',
96 'Intended Audience :: Science/Research',
97 'License :: OSI Approved :: Apache Software License',
98 'Programming Language :: Python :: 3',
99 'Programming Language :: Python :: 3.5',
100 'Programming Language :: Python :: 3.6',
101 'Programming Language :: Python :: 3.7',
102 'Topic :: Scientific/Engineering',
103 'Topic :: Scientific/Engineering :: Artificial Intelligence',
104 'Topic :: Software Development',
105 'Topic :: Software Development :: Libraries',
106 'Topic :: Software Development :: Libraries :: Python Modules',
107 ],
108 python_requires='>=3.5.3',
109 include_package_data=True,
110 entry_points={
111 'console_scripts': [
112 'dsl-compile = kfp.compiler.main:main',
113 'dsl-compile-v2 = kfp.v2.compiler.main:main',
114 'kfp=kfp.__main__:main'
115 ]
116 }
117 )
118
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/sdk/python/setup.py b/sdk/python/setup.py
--- a/sdk/python/setup.py
+++ b/sdk/python/setup.py
@@ -96,16 +96,16 @@
'Intended Audience :: Science/Research',
'License :: OSI Approved :: Apache Software License',
'Programming Language :: Python :: 3',
- 'Programming Language :: Python :: 3.5',
'Programming Language :: Python :: 3.6',
'Programming Language :: Python :: 3.7',
+ 'Programming Language :: Python :: 3.8',
'Topic :: Scientific/Engineering',
'Topic :: Scientific/Engineering :: Artificial Intelligence',
'Topic :: Software Development',
'Topic :: Software Development :: Libraries',
'Topic :: Software Development :: Libraries :: Python Modules',
],
- python_requires='>=3.5.3',
+ python_requires='>=3.6.1',
include_package_data=True,
entry_points={
'console_scripts': [
| {"golden_diff": "diff --git a/sdk/python/setup.py b/sdk/python/setup.py\n--- a/sdk/python/setup.py\n+++ b/sdk/python/setup.py\n@@ -96,16 +96,16 @@\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python :: 3',\n- 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n+ 'Programming Language :: Python :: 3.8',\n 'Topic :: Scientific/Engineering',\n 'Topic :: Scientific/Engineering :: Artificial Intelligence',\n 'Topic :: Software Development',\n 'Topic :: Software Development :: Libraries',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n ],\n- python_requires='>=3.5.3',\n+ python_requires='>=3.6.1',\n include_package_data=True,\n entry_points={\n 'console_scripts': [\n", "issue": "SDK - Drop support for Python 3.5\nPython 3.5 has reached EoL: https://www.python.org/downloads/release/python-3510/#:~:text=Python%203.5%20will%20reach%20its,release%20of%20the%203.5%20series.\r\n\r\nWe're going to stop supporting Python 3.5 soon.\r\n\r\nPlease feel free to comment or vote on this issue.\n", "before_files": [{"content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\nimport re\nfrom setuptools import setup\n\nNAME = 'kfp'\n#VERSION = .... 
Change the version in kfp/__init__.py\n\n# NOTICE, after any updates to the following, ./requirements.in should be updated\n# accordingly.\nREQUIRES = [\n 'absl-py>=0.9,<=0.11',\n 'PyYAML>=5.3,<6',\n 'google-cloud-storage>=1.13.0,<2',\n 'kubernetes>=8.0.0,<12.0.0',\n 'google-auth>=1.6.1,<2',\n 'requests-toolbelt>=0.8.0,<1',\n 'cloudpickle>=1.3.0,<2',\n # Update the upper version whenever a new major version of the\n # kfp-server-api package is released.\n # Update the lower version when kfp sdk depends on new apis/fields in\n # kfp-server-api.\n # Note, please also update ./requirements.in\n 'kfp-server-api>=1.1.2,<2.0.0',\n 'jsonschema>=3.0.1,<4',\n 'tabulate>=0.8.6,<1',\n 'click>=7.1.1,<8',\n 'Deprecated>=1.2.7,<2',\n 'strip-hints>=0.1.8,<1',\n 'docstring-parser>=0.7.3,<1',\n 'kfp-pipeline-spec>=0.1.5,<0.2.0',\n 'fire>=0.3.1,<1',\n 'protobuf>=3.13.0,<4'\n]\n\nTESTS_REQUIRE = [\n 'mock',\n]\n\n\ndef find_version(*file_path_parts):\n here = os.path.abspath(os.path.dirname(__file__))\n with open(os.path.join(here, *file_path_parts), 'r') as fp:\n version_file_text = fp.read()\n\n version_match = re.search(\n r\"^__version__ = ['\\\"]([^'\\\"]*)['\\\"]\",\n version_file_text,\n re.M,\n )\n if version_match:\n return version_match.group(1)\n\n raise RuntimeError('Unable to find version string.')\n\n\nsetup(\n name=NAME,\n version=find_version('kfp', '__init__.py'),\n description='KubeFlow Pipelines SDK',\n author='google',\n install_requires=REQUIRES,\n tests_require=TESTS_REQUIRE,\n packages=[\n 'kfp',\n 'kfp.cli',\n 'kfp.cli.diagnose_me',\n 'kfp.compiler',\n 'kfp.components',\n 'kfp.components.structures',\n 'kfp.containers',\n 'kfp.dsl',\n 'kfp.dsl.extensions',\n 'kfp.notebook',\n 'kfp.v2',\n 'kfp.v2.compiler',\n 'kfp.v2.components',\n 'kfp.v2.dsl',\n ],\n classifiers=[\n 'Intended Audience :: Developers',\n 'Intended Audience :: Education',\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Topic :: Scientific/Engineering',\n 'Topic :: Scientific/Engineering :: Artificial Intelligence',\n 'Topic :: Software Development',\n 'Topic :: Software Development :: Libraries',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n ],\n python_requires='>=3.5.3',\n include_package_data=True,\n entry_points={\n 'console_scripts': [\n 'dsl-compile = kfp.compiler.main:main',\n 'dsl-compile-v2 = kfp.v2.compiler.main:main',\n 'kfp=kfp.__main__:main'\n ]\n }\n)\n", "path": "sdk/python/setup.py"}], "after_files": [{"content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\nimport re\nfrom setuptools import setup\n\nNAME = 'kfp'\n#VERSION = .... 
Change the version in kfp/__init__.py\n\n# NOTICE, after any updates to the following, ./requirements.in should be updated\n# accordingly.\nREQUIRES = [\n 'absl-py>=0.9,<=0.11',\n 'PyYAML>=5.3,<6',\n 'google-cloud-storage>=1.13.0,<2',\n 'kubernetes>=8.0.0,<12.0.0',\n 'google-auth>=1.6.1,<2',\n 'requests-toolbelt>=0.8.0,<1',\n 'cloudpickle>=1.3.0,<2',\n # Update the upper version whenever a new major version of the\n # kfp-server-api package is released.\n # Update the lower version when kfp sdk depends on new apis/fields in\n # kfp-server-api.\n # Note, please also update ./requirements.in\n 'kfp-server-api>=1.1.2,<2.0.0',\n 'jsonschema>=3.0.1,<4',\n 'tabulate>=0.8.6,<1',\n 'click>=7.1.1,<8',\n 'Deprecated>=1.2.7,<2',\n 'strip-hints>=0.1.8,<1',\n 'docstring-parser>=0.7.3,<1',\n 'kfp-pipeline-spec>=0.1.5,<0.2.0',\n 'fire>=0.3.1,<1',\n 'protobuf>=3.13.0,<4'\n]\n\nTESTS_REQUIRE = [\n 'mock',\n]\n\n\ndef find_version(*file_path_parts):\n here = os.path.abspath(os.path.dirname(__file__))\n with open(os.path.join(here, *file_path_parts), 'r') as fp:\n version_file_text = fp.read()\n\n version_match = re.search(\n r\"^__version__ = ['\\\"]([^'\\\"]*)['\\\"]\",\n version_file_text,\n re.M,\n )\n if version_match:\n return version_match.group(1)\n\n raise RuntimeError('Unable to find version string.')\n\n\nsetup(\n name=NAME,\n version=find_version('kfp', '__init__.py'),\n description='KubeFlow Pipelines SDK',\n author='google',\n install_requires=REQUIRES,\n tests_require=TESTS_REQUIRE,\n packages=[\n 'kfp',\n 'kfp.cli',\n 'kfp.cli.diagnose_me',\n 'kfp.compiler',\n 'kfp.components',\n 'kfp.components.structures',\n 'kfp.containers',\n 'kfp.dsl',\n 'kfp.dsl.extensions',\n 'kfp.notebook',\n 'kfp.v2',\n 'kfp.v2.compiler',\n 'kfp.v2.components',\n 'kfp.v2.dsl',\n ],\n classifiers=[\n 'Intended Audience :: Developers',\n 'Intended Audience :: Education',\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n 'Topic :: Scientific/Engineering',\n 'Topic :: Scientific/Engineering :: Artificial Intelligence',\n 'Topic :: Software Development',\n 'Topic :: Software Development :: Libraries',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n ],\n python_requires='>=3.6.1',\n include_package_data=True,\n entry_points={\n 'console_scripts': [\n 'dsl-compile = kfp.compiler.main:main',\n 'dsl-compile-v2 = kfp.v2.compiler.main:main',\n 'kfp=kfp.__main__:main'\n ]\n }\n)\n", "path": "sdk/python/setup.py"}]} | 1,587 | 212 |
gh_patches_debug_2002 | rasdani/github-patches | git_diff | uclapi__uclapi-4023 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Feature Request] Add /authorize Oauth route
**Is your feature request related to a problem? Please describe.**
I have been attempting to use 'auth0-react' to implement Oauth with UCL API, however, this requires a fair bit of tinkering as
the defaults of this and many other auth libraries are to redirect to a "/authorize?client_id..." endpoint which the UCL API does not support.
While this can be avoided through customisation, would it be possible to add a "/authorize" route, as I believe this could make it easier to use some of the "plug and play" Americanized auth libraries available?
**Describe the solution you'd like**
Edit uclapi/backend/uclapi/oauth/urls.py as below
```
urlpatterns = [
url(r'authorise/$', views.authorise),
url(r'authorize/$', views.authorise), <===== Including views.authorise for the 'authorize/$' route.
url(r'shibcallback', views.shibcallback),
url(r'token$', views.token),
url(r'tokens/scopes$', views.scope_map),
url(r'tokens/test$', views.token_test),
url(r'user/allow$', views.userallow),
url(r'user/deny$', views.userdeny),
url(r'user/data$', views.userdata),
url(r'user/studentnumber$', views.get_student_number),
url(r'deauthorise$', views.deauthorise_app),
url(r'user/settings$', views.get_settings)
]
```


--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `backend/uclapi/oauth/urls.py`
Content:
```
1 from django.conf.urls import url
2
3 from . import views
4
5 urlpatterns = [
6 url(r'authorise/$', views.authorise),
7 url(r'adcallback', views.adcallback),
8 url(r'token$', views.token),
9 url(r'tokens/scopes$', views.scope_map),
10 url(r'tokens/test$', views.token_test),
11 url(r'user/allow$', views.userallow),
12 url(r'user/deny$', views.userdeny),
13 url(r'user/data$', views.userdata),
14 url(r'user/studentnumber$', views.get_student_number),
15 url(r'deauthorise$', views.deauthorise_app),
16 url(r'user/settings$', views.get_settings)
17 ]
18
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/backend/uclapi/oauth/urls.py b/backend/uclapi/oauth/urls.py
--- a/backend/uclapi/oauth/urls.py
+++ b/backend/uclapi/oauth/urls.py
@@ -4,6 +4,7 @@
urlpatterns = [
url(r'authorise/$', views.authorise),
+ url(r'authorize/$', views.authorise),
url(r'adcallback', views.adcallback),
url(r'token$', views.token),
url(r'tokens/scopes$', views.scope_map),
| {"golden_diff": "diff --git a/backend/uclapi/oauth/urls.py b/backend/uclapi/oauth/urls.py\n--- a/backend/uclapi/oauth/urls.py\n+++ b/backend/uclapi/oauth/urls.py\n@@ -4,6 +4,7 @@\n \n urlpatterns = [\n url(r'authorise/$', views.authorise),\n+ url(r'authorize/$', views.authorise),\n url(r'adcallback', views.adcallback),\n url(r'token$', views.token),\n url(r'tokens/scopes$', views.scope_map),\n", "issue": "[Feature Request] Add /authorize Oauth route\n**Is your feature request related to a problem? Please describe.**\r\nI have been attempting to use 'auth0-react' to implement Oauth with UCL API, however, this requires a fair bit of tinkering as\r\nthe defaults of this and many other auth libraries are to redirect to a \"/authorize?client_id...\" endpoint which the UCL API does not support. \r\n\r\nWhile this can be avoided through customisation, would it be possible to add a \"/authorize\" route, as I believe this could make it easier to use some of the \"plug and play\" Americanized auth libraries available?\r\n\r\n**Describe the solution you'd like**\r\n\r\n Edit uclapi/backend/uclapi/oauth/urls.py as below \r\n```\r\nurlpatterns = [\r\n url(r'authorise/$', views.authorise),\r\n url(r'authorize/$', views.authorise), <===== Including views.authorise for the 'authorize/$' route.\r\n url(r'shibcallback', views.shibcallback),\r\n url(r'token$', views.token),\r\n url(r'tokens/scopes$', views.scope_map),\r\n url(r'tokens/test$', views.token_test),\r\n url(r'user/allow$', views.userallow),\r\n url(r'user/deny$', views.userdeny),\r\n url(r'user/data$', views.userdata),\r\n url(r'user/studentnumber$', views.get_student_number),\r\n url(r'deauthorise$', views.deauthorise_app),\r\n url(r'user/settings$', views.get_settings)\r\n]\r\n```\r\n\r\n\r\n\r\n\r\n\r\n\r\n\n", "before_files": [{"content": "from django.conf.urls import url\n\nfrom . import views\n\nurlpatterns = [\n url(r'authorise/$', views.authorise),\n url(r'adcallback', views.adcallback),\n url(r'token$', views.token),\n url(r'tokens/scopes$', views.scope_map),\n url(r'tokens/test$', views.token_test),\n url(r'user/allow$', views.userallow),\n url(r'user/deny$', views.userdeny),\n url(r'user/data$', views.userdata),\n url(r'user/studentnumber$', views.get_student_number),\n url(r'deauthorise$', views.deauthorise_app),\n url(r'user/settings$', views.get_settings)\n]\n", "path": "backend/uclapi/oauth/urls.py"}], "after_files": [{"content": "from django.conf.urls import url\n\nfrom . import views\n\nurlpatterns = [\n url(r'authorise/$', views.authorise),\n url(r'authorize/$', views.authorise),\n url(r'adcallback', views.adcallback),\n url(r'token$', views.token),\n url(r'tokens/scopes$', views.scope_map),\n url(r'tokens/test$', views.token_test),\n url(r'user/allow$', views.userallow),\n url(r'user/deny$', views.userdeny),\n url(r'user/data$', views.userdata),\n url(r'user/studentnumber$', views.get_student_number),\n url(r'deauthorise$', views.deauthorise_app),\n url(r'user/settings$', views.get_settings)\n]\n", "path": "backend/uclapi/oauth/urls.py"}]} | 876 | 112 |
gh_patches_debug_31634 | rasdani/github-patches | git_diff | Pylons__pyramid-1467 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Creating Integration Tests needs an example plug-in
We have test code here, without the corresponding tested code: http://docs.pylonsproject.org/projects/pyramid/en/latest/narr/testing.html#creating-integration-tests. It would be good to see what it would look like.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docs/narr/MyProject/setup.py`
Content:
```
1 import os
2
3 from setuptools import setup, find_packages
4
5 here = os.path.abspath(os.path.dirname(__file__))
6 with open(os.path.join(here, 'README.txt')) as f:
7 README = f.read()
8 with open(os.path.join(here, 'CHANGES.txt')) as f:
9 CHANGES = f.read()
10
11 requires = [
12 'pyramid',
13 'pyramid_chameleon',
14 'pyramid_debugtoolbar',
15 'waitress',
16 ]
17
18 setup(name='MyProject',
19 version='0.0',
20 description='MyProject',
21 long_description=README + '\n\n' + CHANGES,
22 classifiers=[
23 "Programming Language :: Python",
24 "Framework :: Pyramid",
25 "Topic :: Internet :: WWW/HTTP",
26 "Topic :: Internet :: WWW/HTTP :: WSGI :: Application",
27 ],
28 author='',
29 author_email='',
30 url='',
31 keywords='web pyramid pylons',
32 packages=find_packages(),
33 include_package_data=True,
34 zip_safe=False,
35 install_requires=requires,
36 tests_require=requires,
37 test_suite="myproject",
38 entry_points="""\
39 [paste.app_factory]
40 main = myproject:main
41 """,
42 )
43
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/docs/narr/MyProject/setup.py b/docs/narr/MyProject/setup.py
--- a/docs/narr/MyProject/setup.py
+++ b/docs/narr/MyProject/setup.py
@@ -1,30 +1,42 @@
-import os
+"""Setup for the MyProject package.
+"""
+import os
from setuptools import setup, find_packages
-here = os.path.abspath(os.path.dirname(__file__))
-with open(os.path.join(here, 'README.txt')) as f:
- README = f.read()
-with open(os.path.join(here, 'CHANGES.txt')) as f:
- CHANGES = f.read()
-requires = [
+HERE = os.path.abspath(os.path.dirname(__file__))
+
+
+with open(os.path.join(HERE, 'README.txt')) as fp:
+ README = fp.read()
+
+
+with open(os.path.join(HERE, 'CHANGES.txt')) as fp:
+ CHANGES = fp.read()
+
+
+REQUIRES = [
'pyramid',
'pyramid_chameleon',
'pyramid_debugtoolbar',
'waitress',
]
+TESTS_REQUIRE = [
+ 'webtest'
+ ]
+
setup(name='MyProject',
version='0.0',
description='MyProject',
long_description=README + '\n\n' + CHANGES,
classifiers=[
- "Programming Language :: Python",
- "Framework :: Pyramid",
- "Topic :: Internet :: WWW/HTTP",
- "Topic :: Internet :: WWW/HTTP :: WSGI :: Application",
- ],
+ 'Programming Language :: Python',
+ 'Framework :: Pyramid',
+ 'Topic :: Internet :: WWW/HTTP',
+ 'Topic :: Internet :: WWW/HTTP :: WSGI :: Application',
+ ],
author='',
author_email='',
url='',
@@ -32,11 +44,10 @@
packages=find_packages(),
include_package_data=True,
zip_safe=False,
- install_requires=requires,
- tests_require=requires,
- test_suite="myproject",
+ install_requires=REQUIRES,
+ tests_require=TESTS_REQUIRE,
+ test_suite='myproject',
entry_points="""\
[paste.app_factory]
main = myproject:main
- """,
- )
+ """)
| {"golden_diff": "diff --git a/docs/narr/MyProject/setup.py b/docs/narr/MyProject/setup.py\n--- a/docs/narr/MyProject/setup.py\n+++ b/docs/narr/MyProject/setup.py\n@@ -1,30 +1,42 @@\n-import os\n+\"\"\"Setup for the MyProject package.\n \n+\"\"\"\n+import os\n from setuptools import setup, find_packages\n \n-here = os.path.abspath(os.path.dirname(__file__))\n-with open(os.path.join(here, 'README.txt')) as f:\n- README = f.read()\n-with open(os.path.join(here, 'CHANGES.txt')) as f:\n- CHANGES = f.read()\n \n-requires = [\n+HERE = os.path.abspath(os.path.dirname(__file__))\n+\n+\n+with open(os.path.join(HERE, 'README.txt')) as fp:\n+ README = fp.read()\n+\n+\n+with open(os.path.join(HERE, 'CHANGES.txt')) as fp:\n+ CHANGES = fp.read()\n+\n+\n+REQUIRES = [\n 'pyramid',\n 'pyramid_chameleon',\n 'pyramid_debugtoolbar',\n 'waitress',\n ]\n \n+TESTS_REQUIRE = [\n+ 'webtest'\n+ ]\n+\n setup(name='MyProject',\n version='0.0',\n description='MyProject',\n long_description=README + '\\n\\n' + CHANGES,\n classifiers=[\n- \"Programming Language :: Python\",\n- \"Framework :: Pyramid\",\n- \"Topic :: Internet :: WWW/HTTP\",\n- \"Topic :: Internet :: WWW/HTTP :: WSGI :: Application\",\n- ],\n+ 'Programming Language :: Python',\n+ 'Framework :: Pyramid',\n+ 'Topic :: Internet :: WWW/HTTP',\n+ 'Topic :: Internet :: WWW/HTTP :: WSGI :: Application',\n+ ],\n author='',\n author_email='',\n url='',\n@@ -32,11 +44,10 @@\n packages=find_packages(),\n include_package_data=True,\n zip_safe=False,\n- install_requires=requires,\n- tests_require=requires,\n- test_suite=\"myproject\",\n+ install_requires=REQUIRES,\n+ tests_require=TESTS_REQUIRE,\n+ test_suite='myproject',\n entry_points=\"\"\"\\\n [paste.app_factory]\n main = myproject:main\n- \"\"\",\n- )\n+ \"\"\")\n", "issue": "Creating Integration Tests needs an example plug-in\nWe have test code here, without the corresponding tested code: http://docs.pylonsproject.org/projects/pyramid/en/latest/narr/testing.html#creating-integration-tests. 
It would be good to see what it would look like.\n\n", "before_files": [{"content": "import os\n\nfrom setuptools import setup, find_packages\n\nhere = os.path.abspath(os.path.dirname(__file__))\nwith open(os.path.join(here, 'README.txt')) as f:\n README = f.read()\nwith open(os.path.join(here, 'CHANGES.txt')) as f:\n CHANGES = f.read()\n\nrequires = [\n 'pyramid',\n 'pyramid_chameleon',\n 'pyramid_debugtoolbar',\n 'waitress',\n ]\n\nsetup(name='MyProject',\n version='0.0',\n description='MyProject',\n long_description=README + '\\n\\n' + CHANGES,\n classifiers=[\n \"Programming Language :: Python\",\n \"Framework :: Pyramid\",\n \"Topic :: Internet :: WWW/HTTP\",\n \"Topic :: Internet :: WWW/HTTP :: WSGI :: Application\",\n ],\n author='',\n author_email='',\n url='',\n keywords='web pyramid pylons',\n packages=find_packages(),\n include_package_data=True,\n zip_safe=False,\n install_requires=requires,\n tests_require=requires,\n test_suite=\"myproject\",\n entry_points=\"\"\"\\\n [paste.app_factory]\n main = myproject:main\n \"\"\",\n )\n", "path": "docs/narr/MyProject/setup.py"}], "after_files": [{"content": "\"\"\"Setup for the MyProject package.\n\n\"\"\"\nimport os\nfrom setuptools import setup, find_packages\n\n\nHERE = os.path.abspath(os.path.dirname(__file__))\n\n\nwith open(os.path.join(HERE, 'README.txt')) as fp:\n README = fp.read()\n\n\nwith open(os.path.join(HERE, 'CHANGES.txt')) as fp:\n CHANGES = fp.read()\n\n\nREQUIRES = [\n 'pyramid',\n 'pyramid_chameleon',\n 'pyramid_debugtoolbar',\n 'waitress',\n ]\n\nTESTS_REQUIRE = [\n 'webtest'\n ]\n\nsetup(name='MyProject',\n version='0.0',\n description='MyProject',\n long_description=README + '\\n\\n' + CHANGES,\n classifiers=[\n 'Programming Language :: Python',\n 'Framework :: Pyramid',\n 'Topic :: Internet :: WWW/HTTP',\n 'Topic :: Internet :: WWW/HTTP :: WSGI :: Application',\n ],\n author='',\n author_email='',\n url='',\n keywords='web pyramid pylons',\n packages=find_packages(),\n include_package_data=True,\n zip_safe=False,\n install_requires=REQUIRES,\n tests_require=TESTS_REQUIRE,\n test_suite='myproject',\n entry_points=\"\"\"\\\n [paste.app_factory]\n main = myproject:main\n \"\"\")\n", "path": "docs/narr/MyProject/setup.py"}]} | 648 | 505 |
gh_patches_debug_6035 | rasdani/github-patches | git_diff | pyro-ppl__pyro-3325 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bug in `CVAE` example
There is a bug in `CVAE` example. The `target` input to the `MaskedBCELoss` is not binary (it has values of -1, 0, 1). This was discovered by the PyTorch 2.1 update which started to validate the inputs of `F.binary_cross_entropy_loss`.
> FAILED tests/test_examples.py::test_cpu[cvae/main.py --num-quadrant-inputs=1 --num-epochs=1] - subprocess.CalledProcessError: Command '['/opt/hostedtoolcache/Python/3.8.18/x64/bin/python', '/home/runner/work/pyro/pyro/examples/cvae/main.py', '--num-quadrant-inputs=1', '--num-epochs=1']' returned non-zero exit status 1.
= 1 failed, 148 passed, 97 skipped, 26558 deselected, 2 warnings in 1948.89s (0:32:28) =
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `examples/cvae/baseline.py`
Content:
```
1 # Copyright Contributors to the Pyro project.
2 # SPDX-License-Identifier: Apache-2.0
3
4 import copy
5 from pathlib import Path
6
7 import numpy as np
8 import torch
9 import torch.nn as nn
10 import torch.nn.functional as F
11 from tqdm import tqdm
12
13
14 class BaselineNet(nn.Module):
15 def __init__(self, hidden_1, hidden_2):
16 super().__init__()
17 self.fc1 = nn.Linear(784, hidden_1)
18 self.fc2 = nn.Linear(hidden_1, hidden_2)
19 self.fc3 = nn.Linear(hidden_2, 784)
20 self.relu = nn.ReLU()
21
22 def forward(self, x):
23 x = x.view(-1, 784)
24 hidden = self.relu(self.fc1(x))
25 hidden = self.relu(self.fc2(hidden))
26 y = torch.sigmoid(self.fc3(hidden))
27 return y
28
29
30 class MaskedBCELoss(nn.Module):
31 def __init__(self, masked_with=-1):
32 super().__init__()
33 self.masked_with = masked_with
34
35 def forward(self, input, target):
36 target = target.view(input.shape)
37 loss = F.binary_cross_entropy(input, target, reduction="none")
38 loss[target == self.masked_with] = 0
39 return loss.sum()
40
41
42 def train(
43 device,
44 dataloaders,
45 dataset_sizes,
46 learning_rate,
47 num_epochs,
48 early_stop_patience,
49 model_path,
50 ):
51 # Train baseline
52 baseline_net = BaselineNet(500, 500)
53 baseline_net.to(device)
54 optimizer = torch.optim.Adam(baseline_net.parameters(), lr=learning_rate)
55 criterion = MaskedBCELoss()
56 best_loss = np.inf
57 early_stop_count = 0
58
59 for epoch in range(num_epochs):
60 for phase in ["train", "val"]:
61 if phase == "train":
62 baseline_net.train()
63 else:
64 baseline_net.eval()
65
66 running_loss = 0.0
67 num_preds = 0
68
69 bar = tqdm(
70 dataloaders[phase], desc="NN Epoch {} {}".format(epoch, phase).ljust(20)
71 )
72 for i, batch in enumerate(bar):
73 inputs = batch["input"].to(device)
74 outputs = batch["output"].to(device)
75
76 optimizer.zero_grad()
77
78 with torch.set_grad_enabled(phase == "train"):
79 preds = baseline_net(inputs)
80 loss = criterion(preds, outputs) / inputs.size(0)
81 if phase == "train":
82 loss.backward()
83 optimizer.step()
84
85 running_loss += loss.item()
86 num_preds += 1
87 if i % 10 == 0:
88 bar.set_postfix(
89 loss="{:.2f}".format(running_loss / num_preds),
90 early_stop_count=early_stop_count,
91 )
92
93 epoch_loss = running_loss / dataset_sizes[phase]
94 # deep copy the model
95 if phase == "val":
96 if epoch_loss < best_loss:
97 best_loss = epoch_loss
98 best_model_wts = copy.deepcopy(baseline_net.state_dict())
99 early_stop_count = 0
100 else:
101 early_stop_count += 1
102
103 if early_stop_count >= early_stop_patience:
104 break
105
106 baseline_net.load_state_dict(best_model_wts)
107 baseline_net.eval()
108
109 # Save model weights
110 Path(model_path).parent.mkdir(parents=True, exist_ok=True)
111 torch.save(baseline_net.state_dict(), model_path)
112
113 return baseline_net
114
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/examples/cvae/baseline.py b/examples/cvae/baseline.py
--- a/examples/cvae/baseline.py
+++ b/examples/cvae/baseline.py
@@ -34,8 +34,12 @@
def forward(self, input, target):
target = target.view(input.shape)
- loss = F.binary_cross_entropy(input, target, reduction="none")
- loss[target == self.masked_with] = 0
+ # only calculate loss on target pixels (value = -1)
+ loss = F.binary_cross_entropy(
+ input[target != self.masked_with],
+ target[target != self.masked_with],
+ reduction="none",
+ )
return loss.sum()
| {"golden_diff": "diff --git a/examples/cvae/baseline.py b/examples/cvae/baseline.py\n--- a/examples/cvae/baseline.py\n+++ b/examples/cvae/baseline.py\n@@ -34,8 +34,12 @@\n \n def forward(self, input, target):\n target = target.view(input.shape)\n- loss = F.binary_cross_entropy(input, target, reduction=\"none\")\n- loss[target == self.masked_with] = 0\n+ # only calculate loss on target pixels (value = -1)\n+ loss = F.binary_cross_entropy(\n+ input[target != self.masked_with],\n+ target[target != self.masked_with],\n+ reduction=\"none\",\n+ )\n return loss.sum()\n", "issue": "Bug in `CVAE` example\nThere is a bug in `CVAE` example. The `target` input to the `MaskedBCELoss` is not binary (it has values of -1, 0, 1). This was discovered by the PyTorch 2.1 update which started to validate the inputs of `F.binary_cross_entropy_loss`.\r\n\r\n> FAILED tests/test_examples.py::test_cpu[cvae/main.py --num-quadrant-inputs=1 --num-epochs=1] - subprocess.CalledProcessError: Command '['/opt/hostedtoolcache/Python/3.8.18/x64/bin/python', '/home/runner/work/pyro/pyro/examples/cvae/main.py', '--num-quadrant-inputs=1', '--num-epochs=1']' returned non-zero exit status 1.\r\n= 1 failed, 148 passed, 97 skipped, 26558 deselected, 2 warnings in 1948.89s (0:32:28) =\n", "before_files": [{"content": "# Copyright Contributors to the Pyro project.\n# SPDX-License-Identifier: Apache-2.0\n\nimport copy\nfrom pathlib import Path\n\nimport numpy as np\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom tqdm import tqdm\n\n\nclass BaselineNet(nn.Module):\n def __init__(self, hidden_1, hidden_2):\n super().__init__()\n self.fc1 = nn.Linear(784, hidden_1)\n self.fc2 = nn.Linear(hidden_1, hidden_2)\n self.fc3 = nn.Linear(hidden_2, 784)\n self.relu = nn.ReLU()\n\n def forward(self, x):\n x = x.view(-1, 784)\n hidden = self.relu(self.fc1(x))\n hidden = self.relu(self.fc2(hidden))\n y = torch.sigmoid(self.fc3(hidden))\n return y\n\n\nclass MaskedBCELoss(nn.Module):\n def __init__(self, masked_with=-1):\n super().__init__()\n self.masked_with = masked_with\n\n def forward(self, input, target):\n target = target.view(input.shape)\n loss = F.binary_cross_entropy(input, target, reduction=\"none\")\n loss[target == self.masked_with] = 0\n return loss.sum()\n\n\ndef train(\n device,\n dataloaders,\n dataset_sizes,\n learning_rate,\n num_epochs,\n early_stop_patience,\n model_path,\n):\n # Train baseline\n baseline_net = BaselineNet(500, 500)\n baseline_net.to(device)\n optimizer = torch.optim.Adam(baseline_net.parameters(), lr=learning_rate)\n criterion = MaskedBCELoss()\n best_loss = np.inf\n early_stop_count = 0\n\n for epoch in range(num_epochs):\n for phase in [\"train\", \"val\"]:\n if phase == \"train\":\n baseline_net.train()\n else:\n baseline_net.eval()\n\n running_loss = 0.0\n num_preds = 0\n\n bar = tqdm(\n dataloaders[phase], desc=\"NN Epoch {} {}\".format(epoch, phase).ljust(20)\n )\n for i, batch in enumerate(bar):\n inputs = batch[\"input\"].to(device)\n outputs = batch[\"output\"].to(device)\n\n optimizer.zero_grad()\n\n with torch.set_grad_enabled(phase == \"train\"):\n preds = baseline_net(inputs)\n loss = criterion(preds, outputs) / inputs.size(0)\n if phase == \"train\":\n loss.backward()\n optimizer.step()\n\n running_loss += loss.item()\n num_preds += 1\n if i % 10 == 0:\n bar.set_postfix(\n loss=\"{:.2f}\".format(running_loss / num_preds),\n early_stop_count=early_stop_count,\n )\n\n epoch_loss = running_loss / dataset_sizes[phase]\n # deep copy the model\n if phase == \"val\":\n if 
epoch_loss < best_loss:\n best_loss = epoch_loss\n best_model_wts = copy.deepcopy(baseline_net.state_dict())\n early_stop_count = 0\n else:\n early_stop_count += 1\n\n if early_stop_count >= early_stop_patience:\n break\n\n baseline_net.load_state_dict(best_model_wts)\n baseline_net.eval()\n\n # Save model weights\n Path(model_path).parent.mkdir(parents=True, exist_ok=True)\n torch.save(baseline_net.state_dict(), model_path)\n\n return baseline_net\n", "path": "examples/cvae/baseline.py"}], "after_files": [{"content": "# Copyright Contributors to the Pyro project.\n# SPDX-License-Identifier: Apache-2.0\n\nimport copy\nfrom pathlib import Path\n\nimport numpy as np\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom tqdm import tqdm\n\n\nclass BaselineNet(nn.Module):\n def __init__(self, hidden_1, hidden_2):\n super().__init__()\n self.fc1 = nn.Linear(784, hidden_1)\n self.fc2 = nn.Linear(hidden_1, hidden_2)\n self.fc3 = nn.Linear(hidden_2, 784)\n self.relu = nn.ReLU()\n\n def forward(self, x):\n x = x.view(-1, 784)\n hidden = self.relu(self.fc1(x))\n hidden = self.relu(self.fc2(hidden))\n y = torch.sigmoid(self.fc3(hidden))\n return y\n\n\nclass MaskedBCELoss(nn.Module):\n def __init__(self, masked_with=-1):\n super().__init__()\n self.masked_with = masked_with\n\n def forward(self, input, target):\n target = target.view(input.shape)\n # only calculate loss on target pixels (value = -1)\n loss = F.binary_cross_entropy(\n input[target != self.masked_with],\n target[target != self.masked_with],\n reduction=\"none\",\n )\n return loss.sum()\n\n\ndef train(\n device,\n dataloaders,\n dataset_sizes,\n learning_rate,\n num_epochs,\n early_stop_patience,\n model_path,\n):\n # Train baseline\n baseline_net = BaselineNet(500, 500)\n baseline_net.to(device)\n optimizer = torch.optim.Adam(baseline_net.parameters(), lr=learning_rate)\n criterion = MaskedBCELoss()\n best_loss = np.inf\n early_stop_count = 0\n\n for epoch in range(num_epochs):\n for phase in [\"train\", \"val\"]:\n if phase == \"train\":\n baseline_net.train()\n else:\n baseline_net.eval()\n\n running_loss = 0.0\n num_preds = 0\n\n bar = tqdm(\n dataloaders[phase], desc=\"NN Epoch {} {}\".format(epoch, phase).ljust(20)\n )\n for i, batch in enumerate(bar):\n inputs = batch[\"input\"].to(device)\n outputs = batch[\"output\"].to(device)\n\n optimizer.zero_grad()\n\n with torch.set_grad_enabled(phase == \"train\"):\n preds = baseline_net(inputs)\n loss = criterion(preds, outputs) / inputs.size(0)\n if phase == \"train\":\n loss.backward()\n optimizer.step()\n\n running_loss += loss.item()\n num_preds += 1\n if i % 10 == 0:\n bar.set_postfix(\n loss=\"{:.2f}\".format(running_loss / num_preds),\n early_stop_count=early_stop_count,\n )\n\n epoch_loss = running_loss / dataset_sizes[phase]\n # deep copy the model\n if phase == \"val\":\n if epoch_loss < best_loss:\n best_loss = epoch_loss\n best_model_wts = copy.deepcopy(baseline_net.state_dict())\n early_stop_count = 0\n else:\n early_stop_count += 1\n\n if early_stop_count >= early_stop_patience:\n break\n\n baseline_net.load_state_dict(best_model_wts)\n baseline_net.eval()\n\n # Save model weights\n Path(model_path).parent.mkdir(parents=True, exist_ok=True)\n torch.save(baseline_net.state_dict(), model_path)\n\n return baseline_net\n", "path": "examples/cvae/baseline.py"}]} | 1,485 | 155 |
gh_patches_debug_23373 | rasdani/github-patches | git_diff | dotkom__onlineweb4-2485 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Updated membership "fails" for users with membership > 1 year old
Currently the new expiration date for memberships is determined like this:
``` python
new_expiration_date = datetime.date(membership.expiration_date.year + 1, 9, 16)
```
Source: https://github.com/dotKom/onlineweb4/blob/develop/apps/approval/views.py#L86
If a user has membership that expired in 2014, they will get 2015 as the new date, which is still an expired date.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `apps/approval/views.py`
Content:
```
1 # -*- encoding: utf-8 -*-
2
3 import datetime
4
5 from django.contrib import messages
6 from django.contrib.auth.decorators import login_required
7 from django.http import Http404
8 from django.shortcuts import get_object_or_404, redirect
9 from django.utils import timezone
10 from django.utils.translation import gettext as _
11
12 from apps.approval.forms import FieldOfStudyApplicationForm
13 from apps.approval.models import MembershipApproval
14 from apps.authentication.models import Membership, get_length_of_membership
15
16
17 @login_required
18 def create_fos_application(request):
19 if request.method == "POST":
20 if not request.user.ntnu_username:
21 messages.error(
22 request, _("Du må knytte et NTNU-brukernavn til kontoen din.")
23 )
24 return redirect("profiles_active", active_tab="membership")
25
26 form = FieldOfStudyApplicationForm(request.POST, request.FILES)
27 if form.is_valid():
28 cleaned = form.cleaned_data
29
30 field_of_study = int(cleaned["field_of_study"])
31
32 if field_of_study == 0:
33 messages.warning(
34 request,
35 _("Denne studieretningen (Gjest) er ikke et gyldig alternativ."),
36 )
37 return redirect("profiles_active", active_tab="membership")
38
39 started_day = 1
40 started_month = 0
41 started_year = int(cleaned["started_year"])
42
43 if cleaned["started_semester"] == "h":
44 started_month = 7
45 if cleaned["started_semester"] == "v":
46 started_month = 1
47
48 started_date = datetime.date(started_year, started_month, started_day)
49
50 # Does the user already have a field of study and started date?
51 if request.user.started_date and request.user.field_of_study:
52 # If there is no change from the current settings, ignore the
53 # request
54 if (
55 request.user.started_date == started_date
56 and request.user.field_of_study == field_of_study
57 ):
58 messages.error(
59 request,
60 _(
61 "Du er allerede registrert med denne studieretningen og denne startdatoen."
62 ),
63 )
64 return redirect("profiles_active", active_tab="membership")
65
66 documentation = cleaned["documentation"]
67
68 application = MembershipApproval(
69 applicant=request.user,
70 field_of_study=field_of_study,
71 started_date=started_date,
72 documentation=documentation,
73 )
74
75 length_of_fos = get_length_of_membership(field_of_study)
76 if length_of_fos > 0:
77 application.new_expiry_date = get_expiry_date(
78 started_year, length_of_fos
79 )
80 application.save()
81
82 messages.success(request, _("Søknad om bytte av studieretning er sendt."))
83
84 return redirect("profiles_active", active_tab="membership")
85 raise Http404
86
87
88 def get_expiry_date(started_year, length_of_fos):
89 today = timezone.now().date()
90 # Expiry dates should be 15th September, so that we have time to get new
91 # lists from NTNU
92 new_expiry_date = datetime.date(started_year, 9, 16) + datetime.timedelta(
93 days=365 * length_of_fos
94 )
95 # Expiry dates in the past sets the expiry date to next september
96 if new_expiry_date < today:
97 if today < datetime.date(today.year, 9, 15):
98 new_expiry_date = datetime.date(today.year, 9, 15)
99 else:
100 new_expiry_date = datetime.date(today.year, 9, 16) + datetime.timedelta(
101 days=365
102 )
103 return new_expiry_date
104
105
106 @login_required
107 def create_membership_application(request):
108 if request.method == "POST":
109 if not request.user.has_expiring_membership:
110 messages.error(request, _("Din bruker har ikke et utløpende medlemskap."))
111 return redirect("profiles_active", active_tab="membership")
112
113 if not request.user.ntnu_username:
114 messages.error(
115 request, _("Du må knytte et NTNU-brukernavn til kontoen din.")
116 )
117 return redirect("profiles_active", active_tab="membership")
118
119 # Extend length of membership by 1 year
120 membership = Membership.objects.get(username=request.user.ntnu_username)
121 new_expiration_date = datetime.date(membership.expiration_date.year + 1, 9, 16)
122
123 application = MembershipApproval(
124 applicant=request.user,
125 field_of_study=request.user.field_of_study,
126 new_expiry_date=new_expiration_date,
127 )
128 application.save()
129
130 messages.success(request, _("Søknad om ett års forlenget medlemskap er sendt."))
131
132 return redirect("profiles_active", active_tab="membership")
133 raise Http404
134
135
136 @login_required
137 def cancel_application(request, application_id):
138 app = get_object_or_404(MembershipApproval, pk=application_id)
139
140 if app.applicant != request.user:
141 messages.error(request, _("Bare søkeren selv kan slette en søknad."))
142 return redirect("profiles_active", active_tab="membership")
143
144 if app.processed:
145 messages.error(request, _("Denne søknaden er behandlet og kan ikke slettes."))
146 return redirect("profiles_active", active_tab="membership")
147
148 app.delete()
149
150 return redirect("profiles_active", active_tab="membership")
151
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/apps/approval/views.py b/apps/approval/views.py
--- a/apps/approval/views.py
+++ b/apps/approval/views.py
@@ -11,7 +11,7 @@
from apps.approval.forms import FieldOfStudyApplicationForm
from apps.approval.models import MembershipApproval
-from apps.authentication.models import Membership, get_length_of_membership
+from apps.authentication.models import get_length_of_membership
@login_required
@@ -116,9 +116,14 @@
)
return redirect("profiles_active", active_tab="membership")
- # Extend length of membership by 1 year
- membership = Membership.objects.get(username=request.user.ntnu_username)
- new_expiration_date = datetime.date(membership.expiration_date.year + 1, 9, 16)
+ # Grant membership until 16th of September this year if the request was sent previous to 1st of July,
+ # or until 16th of September next year if the request was sent after 1st of July
+ if timezone.now().date().month < 7:
+ new_expiration_date = datetime.date(timezone.now().year, 9, 16)
+ else:
+ new_expiration_date = datetime.date(
+ timezone.now().year, 9, 16
+ ) + datetime.timedelta(days=365)
application = MembershipApproval(
applicant=request.user,
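A brief note on the patch above: it replaces the old `expiration_date.year + 1` calculation, which for a long-expired membership could still produce a date in the past, with a cutoff based on today's date. Below is a minimal, self-contained sketch of that rule; the function name and plain-date parameter are illustrative and not part of the onlineweb4 codebase.

```python
import datetime


def new_membership_expiry(today: datetime.date) -> datetime.date:
    """Requests before 1 July expire 16 September of the same year;
    later requests expire roughly one year further out."""
    if today.month < 7:
        return datetime.date(today.year, 9, 16)
    # The patch adds 365 days to 16 September of the current year, which lands
    # on 15 September instead when a 29 February falls inside that interval.
    return datetime.date(today.year, 9, 16) + datetime.timedelta(days=365)


assert new_membership_expiry(datetime.date(2016, 3, 1)) == datetime.date(2016, 9, 16)
assert new_membership_expiry(datetime.date(2016, 10, 1)) == datetime.date(2017, 9, 16)
```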
| {"golden_diff": "diff --git a/apps/approval/views.py b/apps/approval/views.py\n--- a/apps/approval/views.py\n+++ b/apps/approval/views.py\n@@ -11,7 +11,7 @@\n \n from apps.approval.forms import FieldOfStudyApplicationForm\n from apps.approval.models import MembershipApproval\n-from apps.authentication.models import Membership, get_length_of_membership\n+from apps.authentication.models import get_length_of_membership\n \n \n @login_required\n@@ -116,9 +116,14 @@\n )\n return redirect(\"profiles_active\", active_tab=\"membership\")\n \n- # Extend length of membership by 1 year\n- membership = Membership.objects.get(username=request.user.ntnu_username)\n- new_expiration_date = datetime.date(membership.expiration_date.year + 1, 9, 16)\n+ # Grant membership until 16th of September this year if the request was sent previous to 1st of July,\n+ # or until 16th of September next year if the request was sent after 1st of July\n+ if timezone.now().date().month < 7:\n+ new_expiration_date = datetime.date(timezone.now().year, 9, 16)\n+ else:\n+ new_expiration_date = datetime.date(\n+ timezone.now().year, 9, 16\n+ ) + datetime.timedelta(days=365)\n \n application = MembershipApproval(\n applicant=request.user,\n", "issue": "Updated membership \"fails\" for users with membership > 1 year old\nCurrrently the new expiration date for memberships is determined like this:\n\n``` python\nnew_expiration_date = datetime.date(membership.expiration_date.year + 1, 9, 16)\n```\n\nSource: https://github.com/dotKom/onlineweb4/blob/develop/apps/approval/views.py#L86\n\nIf a user has membership that expired in 2014, they will get 2015 as the new date, which is still an expired date.\n\n", "before_files": [{"content": "# -*- encoding: utf-8 -*-\n\nimport datetime\n\nfrom django.contrib import messages\nfrom django.contrib.auth.decorators import login_required\nfrom django.http import Http404\nfrom django.shortcuts import get_object_or_404, redirect\nfrom django.utils import timezone\nfrom django.utils.translation import gettext as _\n\nfrom apps.approval.forms import FieldOfStudyApplicationForm\nfrom apps.approval.models import MembershipApproval\nfrom apps.authentication.models import Membership, get_length_of_membership\n\n\n@login_required\ndef create_fos_application(request):\n if request.method == \"POST\":\n if not request.user.ntnu_username:\n messages.error(\n request, _(\"Du m\u00e5 knytte et NTNU-brukernavn til kontoen din.\")\n )\n return redirect(\"profiles_active\", active_tab=\"membership\")\n\n form = FieldOfStudyApplicationForm(request.POST, request.FILES)\n if form.is_valid():\n cleaned = form.cleaned_data\n\n field_of_study = int(cleaned[\"field_of_study\"])\n\n if field_of_study == 0:\n messages.warning(\n request,\n _(\"Denne studieretningen (Gjest) er ikke et gyldig alternativ.\"),\n )\n return redirect(\"profiles_active\", active_tab=\"membership\")\n\n started_day = 1\n started_month = 0\n started_year = int(cleaned[\"started_year\"])\n\n if cleaned[\"started_semester\"] == \"h\":\n started_month = 7\n if cleaned[\"started_semester\"] == \"v\":\n started_month = 1\n\n started_date = datetime.date(started_year, started_month, started_day)\n\n # Does the user already have a field of study and started date?\n if request.user.started_date and request.user.field_of_study:\n # If there is no change from the current settings, ignore the\n # request\n if (\n request.user.started_date == started_date\n and request.user.field_of_study == field_of_study\n ):\n messages.error(\n request,\n _(\n \"Du er 
allerede registrert med denne studieretningen og denne startdatoen.\"\n ),\n )\n return redirect(\"profiles_active\", active_tab=\"membership\")\n\n documentation = cleaned[\"documentation\"]\n\n application = MembershipApproval(\n applicant=request.user,\n field_of_study=field_of_study,\n started_date=started_date,\n documentation=documentation,\n )\n\n length_of_fos = get_length_of_membership(field_of_study)\n if length_of_fos > 0:\n application.new_expiry_date = get_expiry_date(\n started_year, length_of_fos\n )\n application.save()\n\n messages.success(request, _(\"S\u00f8knad om bytte av studieretning er sendt.\"))\n\n return redirect(\"profiles_active\", active_tab=\"membership\")\n raise Http404\n\n\ndef get_expiry_date(started_year, length_of_fos):\n today = timezone.now().date()\n # Expiry dates should be 15th September, so that we have time to get new\n # lists from NTNU\n new_expiry_date = datetime.date(started_year, 9, 16) + datetime.timedelta(\n days=365 * length_of_fos\n )\n # Expiry dates in the past sets the expiry date to next september\n if new_expiry_date < today:\n if today < datetime.date(today.year, 9, 15):\n new_expiry_date = datetime.date(today.year, 9, 15)\n else:\n new_expiry_date = datetime.date(today.year, 9, 16) + datetime.timedelta(\n days=365\n )\n return new_expiry_date\n\n\n@login_required\ndef create_membership_application(request):\n if request.method == \"POST\":\n if not request.user.has_expiring_membership:\n messages.error(request, _(\"Din bruker har ikke et utl\u00f8pende medlemskap.\"))\n return redirect(\"profiles_active\", active_tab=\"membership\")\n\n if not request.user.ntnu_username:\n messages.error(\n request, _(\"Du m\u00e5 knytte et NTNU-brukernavn til kontoen din.\")\n )\n return redirect(\"profiles_active\", active_tab=\"membership\")\n\n # Extend length of membership by 1 year\n membership = Membership.objects.get(username=request.user.ntnu_username)\n new_expiration_date = datetime.date(membership.expiration_date.year + 1, 9, 16)\n\n application = MembershipApproval(\n applicant=request.user,\n field_of_study=request.user.field_of_study,\n new_expiry_date=new_expiration_date,\n )\n application.save()\n\n messages.success(request, _(\"S\u00f8knad om ett \u00e5rs forlenget medlemskap er sendt.\"))\n\n return redirect(\"profiles_active\", active_tab=\"membership\")\n raise Http404\n\n\n@login_required\ndef cancel_application(request, application_id):\n app = get_object_or_404(MembershipApproval, pk=application_id)\n\n if app.applicant != request.user:\n messages.error(request, _(\"Bare s\u00f8keren selv kan slette en s\u00f8knad.\"))\n return redirect(\"profiles_active\", active_tab=\"membership\")\n\n if app.processed:\n messages.error(request, _(\"Denne s\u00f8knaden er behandlet og kan ikke slettes.\"))\n return redirect(\"profiles_active\", active_tab=\"membership\")\n\n app.delete()\n\n return redirect(\"profiles_active\", active_tab=\"membership\")\n", "path": "apps/approval/views.py"}], "after_files": [{"content": "# -*- encoding: utf-8 -*-\n\nimport datetime\n\nfrom django.contrib import messages\nfrom django.contrib.auth.decorators import login_required\nfrom django.http import Http404\nfrom django.shortcuts import get_object_or_404, redirect\nfrom django.utils import timezone\nfrom django.utils.translation import gettext as _\n\nfrom apps.approval.forms import FieldOfStudyApplicationForm\nfrom apps.approval.models import MembershipApproval\nfrom apps.authentication.models import get_length_of_membership\n\n\n@login_required\ndef 
create_fos_application(request):\n if request.method == \"POST\":\n if not request.user.ntnu_username:\n messages.error(\n request, _(\"Du m\u00e5 knytte et NTNU-brukernavn til kontoen din.\")\n )\n return redirect(\"profiles_active\", active_tab=\"membership\")\n\n form = FieldOfStudyApplicationForm(request.POST, request.FILES)\n if form.is_valid():\n cleaned = form.cleaned_data\n\n field_of_study = int(cleaned[\"field_of_study\"])\n\n if field_of_study == 0:\n messages.warning(\n request,\n _(\"Denne studieretningen (Gjest) er ikke et gyldig alternativ.\"),\n )\n return redirect(\"profiles_active\", active_tab=\"membership\")\n\n started_day = 1\n started_month = 0\n started_year = int(cleaned[\"started_year\"])\n\n if cleaned[\"started_semester\"] == \"h\":\n started_month = 7\n if cleaned[\"started_semester\"] == \"v\":\n started_month = 1\n\n started_date = datetime.date(started_year, started_month, started_day)\n\n # Does the user already have a field of study and started date?\n if request.user.started_date and request.user.field_of_study:\n # If there is no change from the current settings, ignore the\n # request\n if (\n request.user.started_date == started_date\n and request.user.field_of_study == field_of_study\n ):\n messages.error(\n request,\n _(\n \"Du er allerede registrert med denne studieretningen og denne startdatoen.\"\n ),\n )\n return redirect(\"profiles_active\", active_tab=\"membership\")\n\n documentation = cleaned[\"documentation\"]\n\n application = MembershipApproval(\n applicant=request.user,\n field_of_study=field_of_study,\n started_date=started_date,\n documentation=documentation,\n )\n\n length_of_fos = get_length_of_membership(field_of_study)\n if length_of_fos > 0:\n application.new_expiry_date = get_expiry_date(\n started_year, length_of_fos\n )\n application.save()\n\n messages.success(request, _(\"S\u00f8knad om bytte av studieretning er sendt.\"))\n\n return redirect(\"profiles_active\", active_tab=\"membership\")\n raise Http404\n\n\ndef get_expiry_date(started_year, length_of_fos):\n today = timezone.now().date()\n # Expiry dates should be 15th September, so that we have time to get new\n # lists from NTNU\n new_expiry_date = datetime.date(started_year, 9, 16) + datetime.timedelta(\n days=365 * length_of_fos\n )\n # Expiry dates in the past sets the expiry date to next september\n if new_expiry_date < today:\n if today < datetime.date(today.year, 9, 15):\n new_expiry_date = datetime.date(today.year, 9, 15)\n else:\n new_expiry_date = datetime.date(today.year, 9, 16) + datetime.timedelta(\n days=365\n )\n return new_expiry_date\n\n\n@login_required\ndef create_membership_application(request):\n if request.method == \"POST\":\n if not request.user.has_expiring_membership:\n messages.error(request, _(\"Din bruker har ikke et utl\u00f8pende medlemskap.\"))\n return redirect(\"profiles_active\", active_tab=\"membership\")\n\n if not request.user.ntnu_username:\n messages.error(\n request, _(\"Du m\u00e5 knytte et NTNU-brukernavn til kontoen din.\")\n )\n return redirect(\"profiles_active\", active_tab=\"membership\")\n\n # Grant membership until 16th of September this year if the request was sent previous to 1st of July,\n # or until 16th of September next year if the request was sent after 1st of July\n if timezone.now().date().month < 7:\n new_expiration_date = datetime.date(timezone.now().year, 9, 16)\n else:\n new_expiration_date = datetime.date(\n timezone.now().year, 9, 16\n ) + datetime.timedelta(days=365)\n\n application = MembershipApproval(\n 
applicant=request.user,\n field_of_study=request.user.field_of_study,\n new_expiry_date=new_expiration_date,\n )\n application.save()\n\n messages.success(request, _(\"S\u00f8knad om ett \u00e5rs forlenget medlemskap er sendt.\"))\n\n return redirect(\"profiles_active\", active_tab=\"membership\")\n raise Http404\n\n\n@login_required\ndef cancel_application(request, application_id):\n app = get_object_or_404(MembershipApproval, pk=application_id)\n\n if app.applicant != request.user:\n messages.error(request, _(\"Bare s\u00f8keren selv kan slette en s\u00f8knad.\"))\n return redirect(\"profiles_active\", active_tab=\"membership\")\n\n if app.processed:\n messages.error(request, _(\"Denne s\u00f8knaden er behandlet og kan ikke slettes.\"))\n return redirect(\"profiles_active\", active_tab=\"membership\")\n\n app.delete()\n\n return redirect(\"profiles_active\", active_tab=\"membership\")\n", "path": "apps/approval/views.py"}]} | 1,898 | 322 |
gh_patches_debug_36871 | rasdani/github-patches | git_diff | facebookresearch__ParlAI-2851 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Building Dict is too slow
**Bug description**
```
from parlai.scripts.train_model import TrainModel
TrainModel.main(
# similar to before
task='amazon_qa',
model='projects.wizard_of_wikipedia.generator.agents:EndToEndAgent',
model_file='/tmp/end2end_generator/model',
# initialize with a pretrained model
init_model='zoo:wizard_of_wikipedia/end2end_generator/model',
# arguments we get from the pretrained model.
# Unfortunately, these must be looked up separately for each model.
# eps
dict_lower=True,
dict_tokenizer='bpe',
n_layers=5,
n_heads=2,
dropout=0.20,
ffn_size=512,
embedding_size=256,
log_every_n_secs=10,
validation_patience=12,
validation_metric='ppl',
validation_metric_mode='min',
validation_every_n_epochs=0.5,
n_positions=128,
truncate=128,
max_knowledge=32,
knowledge_alpha=0.95,
knowledge_truncate=32,
learningrate=5e-4,
warmup_updates=5000,
clip=0.1,
lr_scheduler='invsqrt',
embedding_type='fasttext',
beam_size=1,
skip_generation=False,
batchsize=64,
)
```
I am trying to train the amazon_qa task on the Wizard of Wikipedia model, just to experiment; I am not sure if it will work. When I run this script it reports creating the task, moves on to the next stage of building the dictionary, and then that step becomes extremely slow.
<img width="1021" alt="Screenshot 2020-07-18 at 10 42 32 PM" src="https://user-images.githubusercontent.com/45225143/87858114-2f34b380-c949-11ea-9928-3bfc77fa91c8.png">
It has been around 2 hours and it hasn't crossed 0% yet.
Can anyone please point out the error? Thanks.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `parlai/tasks/amazon_qa/agents.py`
Content:
```
1 #!/usr/bin/env python3
2
3 # Copyright (c) Facebook, Inc. and its affiliates.
4 # This source code is licensed under the MIT license found in the
5 # LICENSE file in the root directory of this source tree.
6
7 from parlai.core.teachers import FixedDialogTeacher
8 from .build import build, RESOURCES
9 import os
10 import json
11
12
13 class DefaultTeacher(FixedDialogTeacher):
14 def __init__(self, opt, shared=None):
15 # store datatype
16 super().__init__(opt, shared)
17
18 dt = opt['datatype'].split(':')[0]
19 if dt != 'train':
20 raise RuntimeError('Not valid datatype (only train).')
21
22 if shared:
23 self.data = shared['data']
24 else:
25 build(opt)
26 self._setup_data()
27 self.reset()
28
29 def num_episodes(self):
30 return len(self.data)
31
32 def num_examples(self):
33 return sum([len(x) for x in self.data])
34
35 def _setup_data(self):
36 self.existing_keys = [
37 'question',
38 'answer',
39 'asin',
40 'questionType',
41 'questionTime',
42 'askerID',
43 'answerType',
44 'answerTime',
45 'unixTime',
46 'answererID',
47 'helpful',
48 'answerScore',
49 ]
50
51 self.data = []
52
53 def create_entry_single(episode):
54 entry = []
55 for key in self.existing_keys:
56 if key in episode:
57 entry.append(str(episode[key]))
58 else:
59 entry.append('N/A')
60 return entry
61
62 def create_entry_multiple(episode):
63 entries = []
64
65 for question in episode['questions']:
66 new_episode = dict()
67 new_episode['asin'] = episode['asin']
68 new_episode['askerID'] = question['askerID']
69 new_episode['questionTime'] = question['questionTime']
70 new_episode['quesitonType'] = question['questionType']
71 new_episode['question'] = question['questionText']
72
73 for answer in question['answers']:
74 answer.update(new_episode)
75 answer['answer'] = answer['answerText']
76 entries.append([create_entry_single(answer)])
77
78 return entries
79
80 fpath = os.path.join(self.opt['datapath'], 'AmazonQA')
81 for i, f in enumerate(RESOURCES):
82 json_file = f.file_name[:-3]
83 file_path = os.path.join(fpath, json_file)
84
85 with open(file_path, 'r') as infile:
86 data = infile.read()
87 new_data = data.replace('}\n{', '},\n{')
88 json_data = json.loads(f'[{new_data}]')
89
90 for ep in json_data:
91 # First 20 datasets have a different format than those later
92 if i < 21:
93 self.data.append([create_entry_single(ep)])
94 else:
95 self.data += create_entry_multiple(ep)
96
97 def get(self, episode_idx, entry_idx=0):
98 ep = self.data[episode_idx]
99 entry = ep[entry_idx]
100 action = dict()
101 action['id'] = episode_idx
102 for i, key in enumerate(self.existing_keys):
103 if i < 2:
104 continue
105 action[key] = entry[i]
106 action['episode_done'] = True
107 action['text'] = entry[0]
108 action['labels'] = [entry[1]]
109
110 return action
111
112 def share(self):
113 shared = super().share()
114 shared['data'] = self.data
115 return shared
116
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/parlai/tasks/amazon_qa/agents.py b/parlai/tasks/amazon_qa/agents.py
--- a/parlai/tasks/amazon_qa/agents.py
+++ b/parlai/tasks/amazon_qa/agents.py
@@ -10,6 +10,22 @@
import json
+EXISTING_KEYS = [
+ 'question',
+ 'answer',
+ 'asin',
+ 'questionType',
+ 'questionTime',
+ 'askerID',
+ 'answerType',
+ 'answerTime',
+ 'unixTime',
+ 'answererID',
+ 'helpful',
+ 'answerScore',
+]
+
+
class DefaultTeacher(FixedDialogTeacher):
def __init__(self, opt, shared=None):
# store datatype
@@ -21,38 +37,27 @@
if shared:
self.data = shared['data']
+ self.num_ex = shared['num_ex']
+ self.num_ep = shared['num_ep']
else:
build(opt)
self._setup_data()
+ self.num_ex = sum([len(x) for x in self.data])
+ self.num_ep = len(self.data)
self.reset()
def num_episodes(self):
- return len(self.data)
+ return self.num_ep
def num_examples(self):
- return sum([len(x) for x in self.data])
+ return self.num_ex
def _setup_data(self):
- self.existing_keys = [
- 'question',
- 'answer',
- 'asin',
- 'questionType',
- 'questionTime',
- 'askerID',
- 'answerType',
- 'answerTime',
- 'unixTime',
- 'answererID',
- 'helpful',
- 'answerScore',
- ]
-
self.data = []
def create_entry_single(episode):
entry = []
- for key in self.existing_keys:
+ for key in EXISTING_KEYS:
if key in episode:
entry.append(str(episode[key]))
else:
@@ -99,7 +104,7 @@
entry = ep[entry_idx]
action = dict()
action['id'] = episode_idx
- for i, key in enumerate(self.existing_keys):
+ for i, key in enumerate(EXISTING_KEYS):
if i < 2:
continue
action[key] = entry[i]
@@ -112,4 +117,6 @@
def share(self):
shared = super().share()
shared['data'] = self.data
+ shared['num_ex'] = self.num_ex
+ shared['num_ep'] = self.num_ep
return shared
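The patch above stops re-summing episode lengths on every `num_examples()` call by computing the counts once and handing them to cloned teachers through `share()`. Below is a framework-free sketch of that pattern; the class is illustrative and not a ParlAI teacher, though the method names mirror the ones in the patch.

```python
class CachedCountsTeacher:
    """Compute expensive aggregate counts once and reuse them afterwards."""

    def __init__(self, data, shared=None):
        if shared:
            # A cloned worker reuses the counts from the original instance.
            self.data = shared["data"]
            self.num_ex = shared["num_ex"]
            self.num_ep = shared["num_ep"]
        else:
            self.data = data
            self.num_ex = sum(len(episode) for episode in data)  # done exactly once
            self.num_ep = len(data)

    def num_examples(self):
        return self.num_ex  # O(1) lookup instead of re-summing on every call

    def num_episodes(self):
        return self.num_ep

    def share(self):
        return {"data": self.data, "num_ex": self.num_ex, "num_ep": self.num_ep}


teacher = CachedCountsTeacher([["q1", "q2"], ["q3"]])
clone = CachedCountsTeacher(None, shared=teacher.share())
assert clone.num_examples() == 3 and clone.num_episodes() == 2
```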
| {"golden_diff": "diff --git a/parlai/tasks/amazon_qa/agents.py b/parlai/tasks/amazon_qa/agents.py\n--- a/parlai/tasks/amazon_qa/agents.py\n+++ b/parlai/tasks/amazon_qa/agents.py\n@@ -10,6 +10,22 @@\n import json\n \n \n+EXISTING_KEYS = [\n+ 'question',\n+ 'answer',\n+ 'asin',\n+ 'questionType',\n+ 'questionTime',\n+ 'askerID',\n+ 'answerType',\n+ 'answerTime',\n+ 'unixTime',\n+ 'answererID',\n+ 'helpful',\n+ 'answerScore',\n+]\n+\n+\n class DefaultTeacher(FixedDialogTeacher):\n def __init__(self, opt, shared=None):\n # store datatype\n@@ -21,38 +37,27 @@\n \n if shared:\n self.data = shared['data']\n+ self.num_ex = shared['num_ex']\n+ self.num_ep = shared['num_ep']\n else:\n build(opt)\n self._setup_data()\n+ self.num_ex = sum([len(x) for x in self.data])\n+ self.num_ep = len(self.data)\n self.reset()\n \n def num_episodes(self):\n- return len(self.data)\n+ return self.num_ep\n \n def num_examples(self):\n- return sum([len(x) for x in self.data])\n+ return self.num_ex\n \n def _setup_data(self):\n- self.existing_keys = [\n- 'question',\n- 'answer',\n- 'asin',\n- 'questionType',\n- 'questionTime',\n- 'askerID',\n- 'answerType',\n- 'answerTime',\n- 'unixTime',\n- 'answererID',\n- 'helpful',\n- 'answerScore',\n- ]\n-\n self.data = []\n \n def create_entry_single(episode):\n entry = []\n- for key in self.existing_keys:\n+ for key in EXISTING_KEYS:\n if key in episode:\n entry.append(str(episode[key]))\n else:\n@@ -99,7 +104,7 @@\n entry = ep[entry_idx]\n action = dict()\n action['id'] = episode_idx\n- for i, key in enumerate(self.existing_keys):\n+ for i, key in enumerate(EXISTING_KEYS):\n if i < 2:\n continue\n action[key] = entry[i]\n@@ -112,4 +117,6 @@\n def share(self):\n shared = super().share()\n shared['data'] = self.data\n+ shared['num_ex'] = self.num_ex\n+ shared['num_ep'] = self.num_ep\n return shared\n", "issue": "Building Dict is too slow\n**Bug description**\r\n```\r\nfrom parlai.scripts.train_model import TrainModel\r\n\r\nTrainModel.main(\r\n # similar to before\r\n task='amazon_qa',\r\n model='projects.wizard_of_wikipedia.generator.agents:EndToEndAgent',\r\n model_file='/tmp/end2end_generator/model',\r\n\r\n # initialize with a pretrained model\r\n init_model='zoo:wizard_of_wikipedia/end2end_generator/model',\r\n\r\n # arguments we get from the pretrained model.\r\n # Unfortunately, these must be looked up separately for each model.\r\n # eps\r\n dict_lower=True,\r\n dict_tokenizer='bpe',\r\n n_layers=5,\r\n n_heads=2,\r\n dropout=0.20,\r\n ffn_size=512,\r\n embedding_size=256,\r\n log_every_n_secs=10,\r\n validation_patience=12,\r\n validation_metric='ppl',\r\n validation_metric_mode='min',\r\n validation_every_n_epochs=0.5,\r\n n_positions=128,\r\n truncate=128,\r\n max_knowledge=32,\r\n knowledge_alpha=0.95,\r\n knowledge_truncate=32,\r\n learningrate=5e-4,\r\n warmup_updates=5000,\r\n clip=0.1,\r\n lr_scheduler='invsqrt',\r\n embedding_type='fasttext',\r\n beam_size=1,\r\n skip_generation=False,\r\n batchsize=64,\r\n)\r\n\r\n```\r\nI am trying to train amazon_qa task on wizard of Wikipedia model, just to experiment it out, I am not sure if it will work but when I run this script it says creating a task and goes to next stage building dictionary it just becomes to slow of a process.\r\n\r\n<img width=\"1021\" alt=\"Screenshot 2020-07-18 at 10 42 32 PM\" src=\"https://user-images.githubusercontent.com/45225143/87858114-2f34b380-c949-11ea-9928-3bfc77fa91c8.png\">\r\n\r\nLike has been around 2 hrs and it hasn't crossed 0% yet.\r\nCan anyone please point me out the error, 
thanks.\r\n\n", "before_files": [{"content": "#!/usr/bin/env python3\n\n# Copyright (c) Facebook, Inc. and its affiliates.\n# This source code is licensed under the MIT license found in the\n# LICENSE file in the root directory of this source tree.\n\nfrom parlai.core.teachers import FixedDialogTeacher\nfrom .build import build, RESOURCES\nimport os\nimport json\n\n\nclass DefaultTeacher(FixedDialogTeacher):\n def __init__(self, opt, shared=None):\n # store datatype\n super().__init__(opt, shared)\n\n dt = opt['datatype'].split(':')[0]\n if dt != 'train':\n raise RuntimeError('Not valid datatype (only train).')\n\n if shared:\n self.data = shared['data']\n else:\n build(opt)\n self._setup_data()\n self.reset()\n\n def num_episodes(self):\n return len(self.data)\n\n def num_examples(self):\n return sum([len(x) for x in self.data])\n\n def _setup_data(self):\n self.existing_keys = [\n 'question',\n 'answer',\n 'asin',\n 'questionType',\n 'questionTime',\n 'askerID',\n 'answerType',\n 'answerTime',\n 'unixTime',\n 'answererID',\n 'helpful',\n 'answerScore',\n ]\n\n self.data = []\n\n def create_entry_single(episode):\n entry = []\n for key in self.existing_keys:\n if key in episode:\n entry.append(str(episode[key]))\n else:\n entry.append('N/A')\n return entry\n\n def create_entry_multiple(episode):\n entries = []\n\n for question in episode['questions']:\n new_episode = dict()\n new_episode['asin'] = episode['asin']\n new_episode['askerID'] = question['askerID']\n new_episode['questionTime'] = question['questionTime']\n new_episode['quesitonType'] = question['questionType']\n new_episode['question'] = question['questionText']\n\n for answer in question['answers']:\n answer.update(new_episode)\n answer['answer'] = answer['answerText']\n entries.append([create_entry_single(answer)])\n\n return entries\n\n fpath = os.path.join(self.opt['datapath'], 'AmazonQA')\n for i, f in enumerate(RESOURCES):\n json_file = f.file_name[:-3]\n file_path = os.path.join(fpath, json_file)\n\n with open(file_path, 'r') as infile:\n data = infile.read()\n new_data = data.replace('}\\n{', '},\\n{')\n json_data = json.loads(f'[{new_data}]')\n\n for ep in json_data:\n # First 20 datasets have a different format than those later\n if i < 21:\n self.data.append([create_entry_single(ep)])\n else:\n self.data += create_entry_multiple(ep)\n\n def get(self, episode_idx, entry_idx=0):\n ep = self.data[episode_idx]\n entry = ep[entry_idx]\n action = dict()\n action['id'] = episode_idx\n for i, key in enumerate(self.existing_keys):\n if i < 2:\n continue\n action[key] = entry[i]\n action['episode_done'] = True\n action['text'] = entry[0]\n action['labels'] = [entry[1]]\n\n return action\n\n def share(self):\n shared = super().share()\n shared['data'] = self.data\n return shared\n", "path": "parlai/tasks/amazon_qa/agents.py"}], "after_files": [{"content": "#!/usr/bin/env python3\n\n# Copyright (c) Facebook, Inc. 
and its affiliates.\n# This source code is licensed under the MIT license found in the\n# LICENSE file in the root directory of this source tree.\n\nfrom parlai.core.teachers import FixedDialogTeacher\nfrom .build import build, RESOURCES\nimport os\nimport json\n\n\nEXISTING_KEYS = [\n 'question',\n 'answer',\n 'asin',\n 'questionType',\n 'questionTime',\n 'askerID',\n 'answerType',\n 'answerTime',\n 'unixTime',\n 'answererID',\n 'helpful',\n 'answerScore',\n]\n\n\nclass DefaultTeacher(FixedDialogTeacher):\n def __init__(self, opt, shared=None):\n # store datatype\n super().__init__(opt, shared)\n\n dt = opt['datatype'].split(':')[0]\n if dt != 'train':\n raise RuntimeError('Not valid datatype (only train).')\n\n if shared:\n self.data = shared['data']\n self.num_ex = shared['num_ex']\n self.num_ep = shared['num_ep']\n else:\n build(opt)\n self._setup_data()\n self.num_ex = sum([len(x) for x in self.data])\n self.num_ep = len(self.data)\n self.reset()\n\n def num_episodes(self):\n return self.num_ep\n\n def num_examples(self):\n return self.num_ex\n\n def _setup_data(self):\n self.data = []\n\n def create_entry_single(episode):\n entry = []\n for key in EXISTING_KEYS:\n if key in episode:\n entry.append(str(episode[key]))\n else:\n entry.append('N/A')\n return entry\n\n def create_entry_multiple(episode):\n entries = []\n\n for question in episode['questions']:\n new_episode = dict()\n new_episode['asin'] = episode['asin']\n new_episode['askerID'] = question['askerID']\n new_episode['questionTime'] = question['questionTime']\n new_episode['quesitonType'] = question['questionType']\n new_episode['question'] = question['questionText']\n\n for answer in question['answers']:\n answer.update(new_episode)\n answer['answer'] = answer['answerText']\n entries.append([create_entry_single(answer)])\n\n return entries\n\n fpath = os.path.join(self.opt['datapath'], 'AmazonQA')\n for i, f in enumerate(RESOURCES):\n json_file = f.file_name[:-3]\n file_path = os.path.join(fpath, json_file)\n\n with open(file_path, 'r') as infile:\n data = infile.read()\n new_data = data.replace('}\\n{', '},\\n{')\n json_data = json.loads(f'[{new_data}]')\n\n for ep in json_data:\n # First 20 datasets have a different format than those later\n if i < 21:\n self.data.append([create_entry_single(ep)])\n else:\n self.data += create_entry_multiple(ep)\n\n def get(self, episode_idx, entry_idx=0):\n ep = self.data[episode_idx]\n entry = ep[entry_idx]\n action = dict()\n action['id'] = episode_idx\n for i, key in enumerate(EXISTING_KEYS):\n if i < 2:\n continue\n action[key] = entry[i]\n action['episode_done'] = True\n action['text'] = entry[0]\n action['labels'] = [entry[1]]\n\n return action\n\n def share(self):\n shared = super().share()\n shared['data'] = self.data\n shared['num_ex'] = self.num_ex\n shared['num_ep'] = self.num_ep\n return shared\n", "path": "parlai/tasks/amazon_qa/agents.py"}]} | 1,762 | 611 |
gh_patches_debug_30682 | rasdani/github-patches | git_diff | freedomofpress__securedrop-6032 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
QA: Automate basic server testing
The default QA test plan includes a basic testing section that mostly checks server configuration. These tests are duplicated in the production testinfra tests, so with some work to get `testinfra` to use production settings where available (via `install_files/ansible-base/group_vars/all/site-specific`), it should be possible to reduce tester workload by removing Basic testing in favour of `testinfra`.
--- END ISSUE ---
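A minimal sketch of the approach the issue asks for: load the staging defaults, then override selected keys from the production `site-specific` file when it exists. The helper below is an illustrative condensation of what the eventual fix does inside `conftest.py`, not SecureDrop's actual code; the key mapping shown is only a sample.

```python
import os

import yaml


def load_test_vars(defaults_path, site_specific_path):
    """Start from default test vars, then apply production overrides if present."""
    with open(defaults_path) as f:
        hostvars = yaml.safe_load(f)

    if os.path.isfile(site_specific_path):
        with open(site_specific_path) as f:
            prodvars = yaml.safe_load(f) or {}
        # Map only the keys the tests actually consume.
        for vars_key, prod_key in [("app_ip", "app_ip"), ("mon_ip", "monitor_ip")]:
            if prod_key in prodvars:
                hostvars[vars_key] = prodvars[prod_key]

    return hostvars
```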
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `molecule/testinfra/conftest.py`
Content:
```
1 """
2 Configuration for TestInfra test suite for SecureDrop.
3 Handles importing host-specific test vars, so test functions
4 can be reused across multiple hosts, with varied targets.
5
6 Vars should be placed in `testinfra/vars/<hostname>.yml`.
7 """
8
9 import io
10 import os
11 import yaml
12 from typing import Any, Dict
13
14 import testutils
15
16
17 # The config tests target staging by default. It's possible to override
18 # for e.g. prod, but the associated vars files are not yet ported.
19 target_host = os.environ.get('SECUREDROP_TESTINFRA_TARGET_HOST', 'staging')
20
21
22 def securedrop_import_testinfra_vars(hostname, with_header=False):
23 """
24 Import vars from a YAML file to populate tests with host-specific
25 values used in checks. For instance, the SecureDrop docroot will
26 be under /vagrant in development, but /var/www/securedrop in staging.
27
28 Vars must be stored in `testinfra/vars/<hostname>.yml`.
29 """
30 filepath = os.path.join(os.path.dirname(__file__), "vars", hostname+".yml")
31 with io.open(filepath, 'r') as f:
32 hostvars = yaml.safe_load(f)
33
34 hostvars['securedrop_venv_site_packages'] = hostvars["securedrop_venv_site_packages"].format("3.8") # noqa: E501
35 hostvars['python_version'] = "3.8"
36 hostvars['apparmor_enforce_actual'] = hostvars['apparmor_enforce']['focal']
37
38 if with_header:
39 hostvars = dict(securedrop_test_vars=hostvars)
40
41 return hostvars
42
43
44 class TestVars(dict):
45 managed_attrs = {} # type: Dict[str, Any]
46
47 def __init__(self, initial: Dict[str, Any]) -> None:
48 self.securedrop_target_distribution = os.environ.get("SECUREDROP_TARGET_DISTRIBUTION")
49 self.managed_attrs.update(initial)
50
51 def __getattr__(self, name: str) -> Any:
52 """
53 If the requested attribute names a dict in managed_attrs and that
54 contains a key with the name of the target distribution,
55 e.g. "focal", return that. Otherwise return the entire item
56 under the requested name.
57 """
58 try:
59 attr = self.managed_attrs[name]
60 if isinstance(attr, dict) and self.securedrop_target_distribution in attr:
61 return attr[self.securedrop_target_distribution]
62 return attr
63 except KeyError:
64 raise AttributeError(name)
65
66 def __str__(self) -> str:
67 return str(self.managed_attrs)
68
69
70 testutils.securedrop_test_vars = TestVars(securedrop_import_testinfra_vars(target_host))
71
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/molecule/testinfra/conftest.py b/molecule/testinfra/conftest.py
--- a/molecule/testinfra/conftest.py
+++ b/molecule/testinfra/conftest.py
@@ -14,8 +14,7 @@
import testutils
-# The config tests target staging by default. It's possible to override
-# for e.g. prod, but the associated vars files are not yet ported.
+# The config tests target staging by default.
target_host = os.environ.get('SECUREDROP_TESTINFRA_TARGET_HOST', 'staging')
@@ -35,6 +34,34 @@
hostvars['python_version'] = "3.8"
hostvars['apparmor_enforce_actual'] = hostvars['apparmor_enforce']['focal']
+ # If the tests are run against a production environment, check local config
+ # and override as necessary.
+ prod_filepath = os.path.join(os.path.dirname(__file__),
+ "../../install_files/ansible-base/group_vars/all/site-specific")
+ if os.path.isfile(prod_filepath):
+ with io.open(prod_filepath, 'r') as f:
+ prodvars = yaml.safe_load(f)
+
+ def _prod_override(vars_key, prod_key):
+ if prod_key in prodvars:
+ hostvars[vars_key] = prodvars[prod_key]
+
+ _prod_override('app_ip', 'app_ip')
+ _prod_override('mon_ip', 'monitor_ip')
+ _prod_override('sasl_domain', 'sasl_domain')
+ _prod_override('sasl_username', 'sasl_username')
+ _prod_override('sasl_password', 'sasl_password')
+ _prod_override('daily_reboot_time', 'daily_reboot_time')
+
+ # Check repo targeting, and update vars
+ repo_filepath = os.path.join(os.path.dirname(__file__),
+ "../../install_files/ansible-base/roles/install-fpf-repo/defaults/main.yml") # noqa: E501
+ if os.path.isfile(repo_filepath):
+ with io.open(repo_filepath, 'r') as f:
+ repovars = yaml.safe_load(f)
+ if 'apt_repo_url' in repovars:
+ hostvars['fpf_apt_repo_url'] = repovars['apt_repo_url']
+
if with_header:
hostvars = dict(securedrop_test_vars=hostvars)
| {"golden_diff": "diff --git a/molecule/testinfra/conftest.py b/molecule/testinfra/conftest.py\n--- a/molecule/testinfra/conftest.py\n+++ b/molecule/testinfra/conftest.py\n@@ -14,8 +14,7 @@\n import testutils\n \n \n-# The config tests target staging by default. It's possible to override\n-# for e.g. prod, but the associated vars files are not yet ported.\n+# The config tests target staging by default.\n target_host = os.environ.get('SECUREDROP_TESTINFRA_TARGET_HOST', 'staging')\n \n \n@@ -35,6 +34,34 @@\n hostvars['python_version'] = \"3.8\"\n hostvars['apparmor_enforce_actual'] = hostvars['apparmor_enforce']['focal']\n \n+ # If the tests are run against a production environment, check local config\n+ # and override as necessary.\n+ prod_filepath = os.path.join(os.path.dirname(__file__),\n+ \"../../install_files/ansible-base/group_vars/all/site-specific\")\n+ if os.path.isfile(prod_filepath):\n+ with io.open(prod_filepath, 'r') as f:\n+ prodvars = yaml.safe_load(f)\n+\n+ def _prod_override(vars_key, prod_key):\n+ if prod_key in prodvars:\n+ hostvars[vars_key] = prodvars[prod_key]\n+\n+ _prod_override('app_ip', 'app_ip')\n+ _prod_override('mon_ip', 'monitor_ip')\n+ _prod_override('sasl_domain', 'sasl_domain')\n+ _prod_override('sasl_username', 'sasl_username')\n+ _prod_override('sasl_password', 'sasl_password')\n+ _prod_override('daily_reboot_time', 'daily_reboot_time')\n+\n+ # Check repo targeting, and update vars\n+ repo_filepath = os.path.join(os.path.dirname(__file__),\n+ \"../../install_files/ansible-base/roles/install-fpf-repo/defaults/main.yml\") # noqa: E501\n+ if os.path.isfile(repo_filepath):\n+ with io.open(repo_filepath, 'r') as f:\n+ repovars = yaml.safe_load(f)\n+ if 'apt_repo_url' in repovars:\n+ hostvars['fpf_apt_repo_url'] = repovars['apt_repo_url']\n+\n if with_header:\n hostvars = dict(securedrop_test_vars=hostvars)\n", "issue": "QA: Automate basic server testing\nThe default QA test plan includes a basic testing section that mostly checks server configuration. These tests are duplicated in the production testinfra tests, so with some work to get `testinfra` to use production settings where available (via `install_files/ansible-base/group_vars/all/site-specific`), it should be possible to reduce tester workload by removing Basic testing in favour of `testinfra`.\n", "before_files": [{"content": "\"\"\"\nConfiguration for TestInfra test suite for SecureDrop.\nHandles importing host-specific test vars, so test functions\ncan be reused across multiple hosts, with varied targets.\n\nVars should be placed in `testinfra/vars/<hostname>.yml`.\n\"\"\"\n\nimport io\nimport os\nimport yaml\nfrom typing import Any, Dict\n\nimport testutils\n\n\n# The config tests target staging by default. It's possible to override\n# for e.g. prod, but the associated vars files are not yet ported.\ntarget_host = os.environ.get('SECUREDROP_TESTINFRA_TARGET_HOST', 'staging')\n\n\ndef securedrop_import_testinfra_vars(hostname, with_header=False):\n \"\"\"\n Import vars from a YAML file to populate tests with host-specific\n values used in checks. 
For instance, the SecureDrop docroot will\n be under /vagrant in development, but /var/www/securedrop in staging.\n\n Vars must be stored in `testinfra/vars/<hostname>.yml`.\n \"\"\"\n filepath = os.path.join(os.path.dirname(__file__), \"vars\", hostname+\".yml\")\n with io.open(filepath, 'r') as f:\n hostvars = yaml.safe_load(f)\n\n hostvars['securedrop_venv_site_packages'] = hostvars[\"securedrop_venv_site_packages\"].format(\"3.8\") # noqa: E501\n hostvars['python_version'] = \"3.8\"\n hostvars['apparmor_enforce_actual'] = hostvars['apparmor_enforce']['focal']\n\n if with_header:\n hostvars = dict(securedrop_test_vars=hostvars)\n\n return hostvars\n\n\nclass TestVars(dict):\n managed_attrs = {} # type: Dict[str, Any]\n\n def __init__(self, initial: Dict[str, Any]) -> None:\n self.securedrop_target_distribution = os.environ.get(\"SECUREDROP_TARGET_DISTRIBUTION\")\n self.managed_attrs.update(initial)\n\n def __getattr__(self, name: str) -> Any:\n \"\"\"\n If the requested attribute names a dict in managed_attrs and that\n contains a key with the name of the target distribution,\n e.g. \"focal\", return that. Otherwise return the entire item\n under the requested name.\n \"\"\"\n try:\n attr = self.managed_attrs[name]\n if isinstance(attr, dict) and self.securedrop_target_distribution in attr:\n return attr[self.securedrop_target_distribution]\n return attr\n except KeyError:\n raise AttributeError(name)\n\n def __str__(self) -> str:\n return str(self.managed_attrs)\n\n\ntestutils.securedrop_test_vars = TestVars(securedrop_import_testinfra_vars(target_host))\n", "path": "molecule/testinfra/conftest.py"}], "after_files": [{"content": "\"\"\"\nConfiguration for TestInfra test suite for SecureDrop.\nHandles importing host-specific test vars, so test functions\ncan be reused across multiple hosts, with varied targets.\n\nVars should be placed in `testinfra/vars/<hostname>.yml`.\n\"\"\"\n\nimport io\nimport os\nimport yaml\nfrom typing import Any, Dict\n\nimport testutils\n\n\n# The config tests target staging by default.\ntarget_host = os.environ.get('SECUREDROP_TESTINFRA_TARGET_HOST', 'staging')\n\n\ndef securedrop_import_testinfra_vars(hostname, with_header=False):\n \"\"\"\n Import vars from a YAML file to populate tests with host-specific\n values used in checks. 
For instance, the SecureDrop docroot will\n be under /vagrant in development, but /var/www/securedrop in staging.\n\n Vars must be stored in `testinfra/vars/<hostname>.yml`.\n \"\"\"\n filepath = os.path.join(os.path.dirname(__file__), \"vars\", hostname+\".yml\")\n with io.open(filepath, 'r') as f:\n hostvars = yaml.safe_load(f)\n\n hostvars['securedrop_venv_site_packages'] = hostvars[\"securedrop_venv_site_packages\"].format(\"3.8\") # noqa: E501\n hostvars['python_version'] = \"3.8\"\n hostvars['apparmor_enforce_actual'] = hostvars['apparmor_enforce']['focal']\n\n # If the tests are run against a production environment, check local config\n # and override as necessary.\n prod_filepath = os.path.join(os.path.dirname(__file__),\n \"../../install_files/ansible-base/group_vars/all/site-specific\")\n if os.path.isfile(prod_filepath):\n with io.open(prod_filepath, 'r') as f:\n prodvars = yaml.safe_load(f)\n\n def _prod_override(vars_key, prod_key):\n if prod_key in prodvars:\n hostvars[vars_key] = prodvars[prod_key]\n\n _prod_override('app_ip', 'app_ip')\n _prod_override('mon_ip', 'monitor_ip')\n _prod_override('sasl_domain', 'sasl_domain')\n _prod_override('sasl_username', 'sasl_username')\n _prod_override('sasl_password', 'sasl_password')\n _prod_override('daily_reboot_time', 'daily_reboot_time')\n\n # Check repo targeting, and update vars\n repo_filepath = os.path.join(os.path.dirname(__file__),\n \"../../install_files/ansible-base/roles/install-fpf-repo/defaults/main.yml\") # noqa: E501\n if os.path.isfile(repo_filepath):\n with io.open(repo_filepath, 'r') as f:\n repovars = yaml.safe_load(f)\n if 'apt_repo_url' in repovars:\n hostvars['fpf_apt_repo_url'] = repovars['apt_repo_url']\n\n if with_header:\n hostvars = dict(securedrop_test_vars=hostvars)\n\n return hostvars\n\n\nclass TestVars(dict):\n managed_attrs = {} # type: Dict[str, Any]\n\n def __init__(self, initial: Dict[str, Any]) -> None:\n self.securedrop_target_distribution = os.environ.get(\"SECUREDROP_TARGET_DISTRIBUTION\")\n self.managed_attrs.update(initial)\n\n def __getattr__(self, name: str) -> Any:\n \"\"\"\n If the requested attribute names a dict in managed_attrs and that\n contains a key with the name of the target distribution,\n e.g. \"focal\", return that. Otherwise return the entire item\n under the requested name.\n \"\"\"\n try:\n attr = self.managed_attrs[name]\n if isinstance(attr, dict) and self.securedrop_target_distribution in attr:\n return attr[self.securedrop_target_distribution]\n return attr\n except KeyError:\n raise AttributeError(name)\n\n def __str__(self) -> str:\n return str(self.managed_attrs)\n\n\ntestutils.securedrop_test_vars = TestVars(securedrop_import_testinfra_vars(target_host))\n", "path": "molecule/testinfra/conftest.py"}]} | 1,064 | 532 |
gh_patches_debug_2091 | rasdani/github-patches | git_diff | ddionrails__ddionrails-801 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add dynamic range slider for publication year facet
see <https://opensource.appbase.io/reactive-manual/vue/range-components/dynamicrangeslider.html>
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ddionrails/publications/documents.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 """ Search documents for indexing models from ddionrails.publications app into Elasticsearch
4
5
6 Authors:
7 * 2019 Heinz-Alexander Fütterer (DIW Berlin)
8
9 License:
10 | **AGPL-3.0 GNU AFFERO GENERAL PUBLIC LICENSE (AGPL) 3.0**.
11 | See LICENSE at the GitHub
12 `repository <https://github.com/ddionrails/ddionrails/blob/master/LICENSE.md>`_
13 | or at
14 `<https://www.gnu.org/licenses/agpl-3.0.txt>`_.
15 """
16
17 from django.conf import settings
18 from django.db.models import QuerySet
19 from django_elasticsearch_dsl import Document, fields
20 from django_elasticsearch_dsl.registries import registry
21
22 from .models import Publication
23
24
25 @registry.register_document
26 class PublicationDocument(Document):
27 """ Search document for publications.Publication """
28
29 # doc_type was removed in Elasticsearch 7
30 type = fields.KeywordField()
31
32 @staticmethod
33 def prepare_type(publication: Publication) -> str:
34 return "publication"
35
36 # facets
37 sub_type = fields.KeywordField()
38 study = fields.KeywordField()
39 year = fields.KeywordField()
40
41 # prepare_FIELD will be executed while indexing FIELD
42 @staticmethod
43 def prepare_study(publication: Publication) -> str:
44 """ Return the related study """
45 return publication.study.title()
46
47 class Index: # pylint: disable=missing-docstring,too-few-public-methods
48 # Name of the Elasticsearch index
49 name = f"{settings.ELASTICSEARCH_DSL_INDEX_PREFIX}publications"
50
51 class Django: # pylint: disable=missing-docstring,too-few-public-methods
52 model = Publication # The model associated with this Document
53
54 # The fields of the model you want to be indexed in Elasticsearch
55 fields = ("abstract", "author", "cite", "doi", "name", "title", "url")
56
57 def get_queryset(self) -> QuerySet:
58 """
59 Return the queryset that should be indexed by this doc type,
60 with select related study.
61 """
62 return super().get_queryset().select_related("study")
63
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ddionrails/publications/documents.py b/ddionrails/publications/documents.py
--- a/ddionrails/publications/documents.py
+++ b/ddionrails/publications/documents.py
@@ -36,7 +36,7 @@
# facets
sub_type = fields.KeywordField()
study = fields.KeywordField()
- year = fields.KeywordField()
+ year = fields.IntegerField()
# prepare_FIELD will be executed while indexing FIELD
@staticmethod
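The patch above changes the `year` facet from a keyword to an integer mapping; a dynamic range slider needs a numeric field so the search backend can compute min/max bounds and apply range filters. Below is a small sketch of the kind of query this enables, assuming elasticsearch-dsl's query API and an illustrative index name.

```python
from elasticsearch_dsl import Search

# With `year` mapped as an integer, range filters and min/max aggregations work.
s = Search(index="publications").filter("range", year={"gte": 1990, "lte": 2020})
s.aggs.metric("min_year", "min", field="year")
s.aggs.metric("max_year", "max", field="year")
# response = s.execute()  # min/max feed the slider bounds; the filter narrows results
```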
| {"golden_diff": "diff --git a/ddionrails/publications/documents.py b/ddionrails/publications/documents.py\n--- a/ddionrails/publications/documents.py\n+++ b/ddionrails/publications/documents.py\n@@ -36,7 +36,7 @@\n # facets\n sub_type = fields.KeywordField()\n study = fields.KeywordField()\n- year = fields.KeywordField()\n+ year = fields.IntegerField()\n \n # prepare_FIELD will be executed while indexing FIELD\n @staticmethod\n", "issue": "Add dynamic range slider for publication year facet\nsee <https://opensource.appbase.io/reactive-manual/vue/range-components/dynamicrangeslider.html>\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\" Search documents for indexing models from ddionrails.publications app into Elasticsearch\n\n\nAuthors:\n * 2019 Heinz-Alexander F\u00fctterer (DIW Berlin)\n\nLicense:\n | **AGPL-3.0 GNU AFFERO GENERAL PUBLIC LICENSE (AGPL) 3.0**.\n | See LICENSE at the GitHub\n `repository <https://github.com/ddionrails/ddionrails/blob/master/LICENSE.md>`_\n | or at\n `<https://www.gnu.org/licenses/agpl-3.0.txt>`_.\n\"\"\"\n\nfrom django.conf import settings\nfrom django.db.models import QuerySet\nfrom django_elasticsearch_dsl import Document, fields\nfrom django_elasticsearch_dsl.registries import registry\n\nfrom .models import Publication\n\n\[email protected]_document\nclass PublicationDocument(Document):\n \"\"\" Search document for publications.Publication \"\"\"\n\n # doc_type was removed in Elasticsearch 7\n type = fields.KeywordField()\n\n @staticmethod\n def prepare_type(publication: Publication) -> str:\n return \"publication\"\n\n # facets\n sub_type = fields.KeywordField()\n study = fields.KeywordField()\n year = fields.KeywordField()\n\n # prepare_FIELD will be executed while indexing FIELD\n @staticmethod\n def prepare_study(publication: Publication) -> str:\n \"\"\" Return the related study \"\"\"\n return publication.study.title()\n\n class Index: # pylint: disable=missing-docstring,too-few-public-methods\n # Name of the Elasticsearch index\n name = f\"{settings.ELASTICSEARCH_DSL_INDEX_PREFIX}publications\"\n\n class Django: # pylint: disable=missing-docstring,too-few-public-methods\n model = Publication # The model associated with this Document\n\n # The fields of the model you want to be indexed in Elasticsearch\n fields = (\"abstract\", \"author\", \"cite\", \"doi\", \"name\", \"title\", \"url\")\n\n def get_queryset(self) -> QuerySet:\n \"\"\"\n Return the queryset that should be indexed by this doc type,\n with select related study.\n \"\"\"\n return super().get_queryset().select_related(\"study\")\n", "path": "ddionrails/publications/documents.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\" Search documents for indexing models from ddionrails.publications app into Elasticsearch\n\n\nAuthors:\n * 2019 Heinz-Alexander F\u00fctterer (DIW Berlin)\n\nLicense:\n | **AGPL-3.0 GNU AFFERO GENERAL PUBLIC LICENSE (AGPL) 3.0**.\n | See LICENSE at the GitHub\n `repository <https://github.com/ddionrails/ddionrails/blob/master/LICENSE.md>`_\n | or at\n `<https://www.gnu.org/licenses/agpl-3.0.txt>`_.\n\"\"\"\n\nfrom django.conf import settings\nfrom django.db.models import QuerySet\nfrom django_elasticsearch_dsl import Document, fields\nfrom django_elasticsearch_dsl.registries import registry\n\nfrom .models import Publication\n\n\[email protected]_document\nclass PublicationDocument(Document):\n \"\"\" Search document for publications.Publication \"\"\"\n\n # doc_type was removed in Elasticsearch 7\n type = 
fields.KeywordField()\n\n @staticmethod\n def prepare_type(publication: Publication) -> str:\n return \"publication\"\n\n # facets\n sub_type = fields.KeywordField()\n study = fields.KeywordField()\n year = fields.IntegerField()\n\n # prepare_FIELD will be executed while indexing FIELD\n @staticmethod\n def prepare_study(publication: Publication) -> str:\n \"\"\" Return the related study \"\"\"\n return publication.study.title()\n\n class Index: # pylint: disable=missing-docstring,too-few-public-methods\n # Name of the Elasticsearch index\n name = f\"{settings.ELASTICSEARCH_DSL_INDEX_PREFIX}publications\"\n\n class Django: # pylint: disable=missing-docstring,too-few-public-methods\n model = Publication # The model associated with this Document\n\n # The fields of the model you want to be indexed in Elasticsearch\n fields = (\"abstract\", \"author\", \"cite\", \"doi\", \"name\", \"title\", \"url\")\n\n def get_queryset(self) -> QuerySet:\n \"\"\"\n Return the queryset that should be indexed by this doc type,\n with select related study.\n \"\"\"\n return super().get_queryset().select_related(\"study\")\n", "path": "ddionrails/publications/documents.py"}]} | 884 | 102 |
gh_patches_debug_21420 | rasdani/github-patches | git_diff | getsentry__sentry-python-1931 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Patched task factory in AsyncioIntegration loses task result.
### How do you use Sentry?
Sentry Saas (sentry.io)
### Version
1.14.0
### Steps to Reproduce
```python
import asyncio
import sentry_sdk
from sentry_sdk.integrations.asyncio import AsyncioIntegration
async def add(a, b):
return a + b
async def main():
sentry_sdk.init('dsn', integrations=[AsyncioIntegration()])
result = await asyncio.create_task(add(1, 2))
assert result == 3, result
asyncio.run(main())
```
### Expected Result
No `AssertionError`.
### Actual Result
```python
assert result == 3, result
AssertionError: None
```
Patched task factory always loses task result.
--- END ISSUE ---
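The root cause described in the issue above is that the patched task factory wraps the user coroutine in another coroutine that awaits it but never returns its value, so the resulting Task resolves to `None`. Below is a minimal, Sentry-free reproduction of that behaviour and of the fix; the helper names are illustrative.

```python
import asyncio


async def add(a, b):
    return a + b


async def wrapper_losing_result(coro):
    await coro              # the awaited value is silently dropped


async def wrapper_keeping_result(coro):
    return await coro       # propagate the value so Task.result() sees it


async def main():
    lost = await asyncio.create_task(wrapper_losing_result(add(1, 2)))
    kept = await asyncio.create_task(wrapper_keeping_result(add(1, 2)))
    print(lost, kept)       # prints: None 3


asyncio.run(main())
```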
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sentry_sdk/integrations/asyncio.py`
Content:
```
1 from __future__ import absolute_import
2 import sys
3
4 from sentry_sdk._compat import reraise
5 from sentry_sdk.consts import OP
6 from sentry_sdk.hub import Hub
7 from sentry_sdk.integrations import Integration, DidNotEnable
8 from sentry_sdk._types import MYPY
9 from sentry_sdk.utils import event_from_exception
10
11 try:
12 import asyncio
13 from asyncio.tasks import Task
14 except ImportError:
15 raise DidNotEnable("asyncio not available")
16
17
18 if MYPY:
19 from typing import Any
20
21 from sentry_sdk._types import ExcInfo
22
23
24 def patch_asyncio():
25 # type: () -> None
26 orig_task_factory = None
27 try:
28 loop = asyncio.get_running_loop()
29 orig_task_factory = loop.get_task_factory()
30
31 def _sentry_task_factory(loop, coro):
32 # type: (Any, Any) -> Any
33
34 async def _coro_creating_hub_and_span():
35 # type: () -> None
36 hub = Hub(Hub.current)
37 with hub:
38 with hub.start_span(op=OP.FUNCTION, description=coro.__qualname__):
39 try:
40 await coro
41 except Exception:
42 reraise(*_capture_exception(hub))
43
44 # Trying to use user set task factory (if there is one)
45 if orig_task_factory:
46 return orig_task_factory(loop, _coro_creating_hub_and_span()) # type: ignore
47
48 # The default task factory in `asyncio` does not have its own function
49 # but is just a couple of lines in `asyncio.base_events.create_task()`
50 # Those lines are copied here.
51
52 # WARNING:
53 # If the default behavior of the task creation in asyncio changes,
54 # this will break!
55 task = Task(_coro_creating_hub_and_span(), loop=loop)
56 if task._source_traceback: # type: ignore
57 del task._source_traceback[-1] # type: ignore
58
59 return task
60
61 loop.set_task_factory(_sentry_task_factory)
62 except RuntimeError:
63 # When there is no running loop, we have nothing to patch.
64 pass
65
66
67 def _capture_exception(hub):
68 # type: (Hub) -> ExcInfo
69 exc_info = sys.exc_info()
70
71 integration = hub.get_integration(AsyncioIntegration)
72 if integration is not None:
73 # If an integration is there, a client has to be there.
74 client = hub.client # type: Any
75
76 event, hint = event_from_exception(
77 exc_info,
78 client_options=client.options,
79 mechanism={"type": "asyncio", "handled": False},
80 )
81 hub.capture_event(event, hint=hint)
82
83 return exc_info
84
85
86 class AsyncioIntegration(Integration):
87 identifier = "asyncio"
88
89 @staticmethod
90 def setup_once():
91 # type: () -> None
92 patch_asyncio()
93
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/sentry_sdk/integrations/asyncio.py b/sentry_sdk/integrations/asyncio.py
--- a/sentry_sdk/integrations/asyncio.py
+++ b/sentry_sdk/integrations/asyncio.py
@@ -32,15 +32,19 @@
# type: (Any, Any) -> Any
async def _coro_creating_hub_and_span():
- # type: () -> None
+ # type: () -> Any
hub = Hub(Hub.current)
+ result = None
+
with hub:
with hub.start_span(op=OP.FUNCTION, description=coro.__qualname__):
try:
- await coro
+ result = await coro
except Exception:
reraise(*_capture_exception(hub))
+ return result
+
# Trying to use user set task factory (if there is one)
if orig_task_factory:
return orig_task_factory(loop, _coro_creating_hub_and_span()) # type: ignore
| {"golden_diff": "diff --git a/sentry_sdk/integrations/asyncio.py b/sentry_sdk/integrations/asyncio.py\n--- a/sentry_sdk/integrations/asyncio.py\n+++ b/sentry_sdk/integrations/asyncio.py\n@@ -32,15 +32,19 @@\n # type: (Any, Any) -> Any\n \n async def _coro_creating_hub_and_span():\n- # type: () -> None\n+ # type: () -> Any\n hub = Hub(Hub.current)\n+ result = None\n+\n with hub:\n with hub.start_span(op=OP.FUNCTION, description=coro.__qualname__):\n try:\n- await coro\n+ result = await coro\n except Exception:\n reraise(*_capture_exception(hub))\n \n+ return result\n+\n # Trying to use user set task factory (if there is one)\n if orig_task_factory:\n return orig_task_factory(loop, _coro_creating_hub_and_span()) # type: ignore\n", "issue": "Patched task factory in AsyncioIntegration loses task result.\n### How do you use Sentry?\n\nSentry Saas (sentry.io)\n\n### Version\n\n1.14.0\n\n### Steps to Reproduce\n\n```python\r\nimport asyncio\r\nimport sentry_sdk\r\nfrom sentry_sdk.integrations.asyncio import AsyncioIntegration\r\n\r\nasync def add(a, b):\r\n return a + b\r\n\r\nasync def main():\r\n sentry_sdk.init('dsn', integrations=[AsyncioIntegration()])\r\n result = await asyncio.create_task(add(1, 2))\r\n assert result == 3, result\r\n\r\nasyncio.run(main())\r\n```\n\n### Expected Result\n\nNo `AssertionError`.\n\n### Actual Result\n\n```python\r\n assert result == 3, result\r\nAssertionError: None\r\n```\r\n\r\nPatched task factory always loses task result.\n", "before_files": [{"content": "from __future__ import absolute_import\nimport sys\n\nfrom sentry_sdk._compat import reraise\nfrom sentry_sdk.consts import OP\nfrom sentry_sdk.hub import Hub\nfrom sentry_sdk.integrations import Integration, DidNotEnable\nfrom sentry_sdk._types import MYPY\nfrom sentry_sdk.utils import event_from_exception\n\ntry:\n import asyncio\n from asyncio.tasks import Task\nexcept ImportError:\n raise DidNotEnable(\"asyncio not available\")\n\n\nif MYPY:\n from typing import Any\n\n from sentry_sdk._types import ExcInfo\n\n\ndef patch_asyncio():\n # type: () -> None\n orig_task_factory = None\n try:\n loop = asyncio.get_running_loop()\n orig_task_factory = loop.get_task_factory()\n\n def _sentry_task_factory(loop, coro):\n # type: (Any, Any) -> Any\n\n async def _coro_creating_hub_and_span():\n # type: () -> None\n hub = Hub(Hub.current)\n with hub:\n with hub.start_span(op=OP.FUNCTION, description=coro.__qualname__):\n try:\n await coro\n except Exception:\n reraise(*_capture_exception(hub))\n\n # Trying to use user set task factory (if there is one)\n if orig_task_factory:\n return orig_task_factory(loop, _coro_creating_hub_and_span()) # type: ignore\n\n # The default task factory in `asyncio` does not have its own function\n # but is just a couple of lines in `asyncio.base_events.create_task()`\n # Those lines are copied here.\n\n # WARNING:\n # If the default behavior of the task creation in asyncio changes,\n # this will break!\n task = Task(_coro_creating_hub_and_span(), loop=loop)\n if task._source_traceback: # type: ignore\n del task._source_traceback[-1] # type: ignore\n\n return task\n\n loop.set_task_factory(_sentry_task_factory)\n except RuntimeError:\n # When there is no running loop, we have nothing to patch.\n pass\n\n\ndef _capture_exception(hub):\n # type: (Hub) -> ExcInfo\n exc_info = sys.exc_info()\n\n integration = hub.get_integration(AsyncioIntegration)\n if integration is not None:\n # If an integration is there, a client has to be there.\n client = hub.client # type: Any\n\n event, hint = 
event_from_exception(\n exc_info,\n client_options=client.options,\n mechanism={\"type\": \"asyncio\", \"handled\": False},\n )\n hub.capture_event(event, hint=hint)\n\n return exc_info\n\n\nclass AsyncioIntegration(Integration):\n identifier = \"asyncio\"\n\n @staticmethod\n def setup_once():\n # type: () -> None\n patch_asyncio()\n", "path": "sentry_sdk/integrations/asyncio.py"}], "after_files": [{"content": "from __future__ import absolute_import\nimport sys\n\nfrom sentry_sdk._compat import reraise\nfrom sentry_sdk.consts import OP\nfrom sentry_sdk.hub import Hub\nfrom sentry_sdk.integrations import Integration, DidNotEnable\nfrom sentry_sdk._types import MYPY\nfrom sentry_sdk.utils import event_from_exception\n\ntry:\n import asyncio\n from asyncio.tasks import Task\nexcept ImportError:\n raise DidNotEnable(\"asyncio not available\")\n\n\nif MYPY:\n from typing import Any\n\n from sentry_sdk._types import ExcInfo\n\n\ndef patch_asyncio():\n # type: () -> None\n orig_task_factory = None\n try:\n loop = asyncio.get_running_loop()\n orig_task_factory = loop.get_task_factory()\n\n def _sentry_task_factory(loop, coro):\n # type: (Any, Any) -> Any\n\n async def _coro_creating_hub_and_span():\n # type: () -> Any\n hub = Hub(Hub.current)\n result = None\n\n with hub:\n with hub.start_span(op=OP.FUNCTION, description=coro.__qualname__):\n try:\n result = await coro\n except Exception:\n reraise(*_capture_exception(hub))\n\n return result\n\n # Trying to use user set task factory (if there is one)\n if orig_task_factory:\n return orig_task_factory(loop, _coro_creating_hub_and_span()) # type: ignore\n\n # The default task factory in `asyncio` does not have its own function\n # but is just a couple of lines in `asyncio.base_events.create_task()`\n # Those lines are copied here.\n\n # WARNING:\n # If the default behavior of the task creation in asyncio changes,\n # this will break!\n task = Task(_coro_creating_hub_and_span(), loop=loop)\n if task._source_traceback: # type: ignore\n del task._source_traceback[-1] # type: ignore\n\n return task\n\n loop.set_task_factory(_sentry_task_factory)\n except RuntimeError:\n # When there is no running loop, we have nothing to patch.\n pass\n\n\ndef _capture_exception(hub):\n # type: (Hub) -> ExcInfo\n exc_info = sys.exc_info()\n\n integration = hub.get_integration(AsyncioIntegration)\n if integration is not None:\n # If an integration is there, a client has to be there.\n client = hub.client # type: Any\n\n event, hint = event_from_exception(\n exc_info,\n client_options=client.options,\n mechanism={\"type\": \"asyncio\", \"handled\": False},\n )\n hub.capture_event(event, hint=hint)\n\n return exc_info\n\n\nclass AsyncioIntegration(Integration):\n identifier = \"asyncio\"\n\n @staticmethod\n def setup_once():\n # type: () -> None\n patch_asyncio()\n", "path": "sentry_sdk/integrations/asyncio.py"}]} | 1,253 | 229 |
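A minimal, self-contained sketch of the behavior this record's golden diff addresses — a wrapper coroutine must return the awaited value, otherwise the outer `Task` resolves to `None`. It uses only the standard library; the wrapper helpers here are illustrative and are not `sentry_sdk`'s actual task factory:

```python
import asyncio


async def add(a: int, b: int) -> int:
    return a + b


def buggy_wrapper(coro):
    # Pre-patch shape: the inner coroutine is awaited but its result is discarded.
    async def wrapper():
        await coro
    return wrapper()


def fixed_wrapper(coro):
    # Patched shape: the inner result is propagated to the outer Task.
    async def wrapper():
        return await coro
    return wrapper()


async def main() -> None:
    assert await asyncio.create_task(buggy_wrapper(add(1, 2))) is None   # the reported bug
    assert await asyncio.create_task(fixed_wrapper(add(1, 2))) == 3      # post-patch behavior


asyncio.run(main())
```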
gh_patches_debug_19032 | rasdani/github-patches | git_diff | streamlit__streamlit-3975 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Slider value and description text overlap for slider in sidebar with long description
### Summary
If you use a slider in the sidebar with a long description text, the slider value and the description text overlap. See screenshot:

### Steps to reproduce
Code snippet:
```python
import streamlit as st
topn_ranking = st.sidebar.slider(
"Select the maximum amount of words for classification (higher value adds additional less frequent words to results)",
10,
1000,
(100),
)
```
**Expected behavior:**
To have non-overlapping slider value and description text.
For example, this is how it should look (with Streamlit version 0.78.0):

[screenshot: non-overlapping slider value and description text, as rendered by Streamlit 0.78.0]
**Actual behavior:**
When I start the example of the code snippet, the current value and the description text of the slider overlap.
### Is this a regression?
Yes, it was looking good with Streamlit 0.78.0. My tests showed that it changed in version 0.83.0.
### Debug info
- Streamlit version: 0.88.0
- Python version: 3.8.10
- Using poetry with pyenv
- OS version: Ubuntu 20.04
- Browser version: Google Chrome 93.0.4577.63
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `e2e/scripts/st_slider.py`
Content:
```
1 # Copyright 2018-2021 Streamlit Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import streamlit as st
16
17 w1 = st.slider("Label 1", 0, 100, 25, 1)
18 st.write("Value 1:", w1)
19
20 w2 = st.slider("Label 2", 0.0, 100.0, (25.0, 75.0), 0.5)
21 st.write("Value 2:", w2)
22
23 if st._is_running_with_streamlit:
24
25 def on_change():
26 st.session_state.slider_changed = True
27
28 st.slider(
29 "Label 3",
30 min_value=0,
31 max_value=100,
32 value=25,
33 step=1,
34 key="slider3",
35 on_change=on_change,
36 )
37 st.write("Value 3:", st.session_state.slider3)
38 st.write("Slider changed:", "slider_changed" in st.session_state)
39
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/e2e/scripts/st_slider.py b/e2e/scripts/st_slider.py
--- a/e2e/scripts/st_slider.py
+++ b/e2e/scripts/st_slider.py
@@ -20,19 +20,28 @@
w2 = st.slider("Label 2", 0.0, 100.0, (25.0, 75.0), 0.5)
st.write("Value 2:", w2)
+w3 = st.slider(
+ "Label 3 - This is a very very very very very very very very very very very very very very very very very very very very long label",
+ 0.0,
+ 100.0,
+ (25.0, 75.0),
+ 0.5,
+)
+st.write("Value 3:", w3)
+
if st._is_running_with_streamlit:
def on_change():
st.session_state.slider_changed = True
st.slider(
- "Label 3",
+ "Label 4",
min_value=0,
max_value=100,
value=25,
step=1,
- key="slider3",
+ key="slider4",
on_change=on_change,
)
- st.write("Value 3:", st.session_state.slider3)
+ st.write("Value 4:", st.session_state.slider4)
st.write("Slider changed:", "slider_changed" in st.session_state)
| {"golden_diff": "diff --git a/e2e/scripts/st_slider.py b/e2e/scripts/st_slider.py\n--- a/e2e/scripts/st_slider.py\n+++ b/e2e/scripts/st_slider.py\n@@ -20,19 +20,28 @@\n w2 = st.slider(\"Label 2\", 0.0, 100.0, (25.0, 75.0), 0.5)\n st.write(\"Value 2:\", w2)\n \n+w3 = st.slider(\n+ \"Label 3 - This is a very very very very very very very very very very very very very very very very very very very very long label\",\n+ 0.0,\n+ 100.0,\n+ (25.0, 75.0),\n+ 0.5,\n+)\n+st.write(\"Value 3:\", w3)\n+\n if st._is_running_with_streamlit:\n \n def on_change():\n st.session_state.slider_changed = True\n \n st.slider(\n- \"Label 3\",\n+ \"Label 4\",\n min_value=0,\n max_value=100,\n value=25,\n step=1,\n- key=\"slider3\",\n+ key=\"slider4\",\n on_change=on_change,\n )\n- st.write(\"Value 3:\", st.session_state.slider3)\n+ st.write(\"Value 4:\", st.session_state.slider4)\n st.write(\"Slider changed:\", \"slider_changed\" in st.session_state)\n", "issue": "Slider value and description text overlap for slider in sidebar with long description\n### Summary\r\n\r\nIf you use a slider in the sidebar with a long description text, the slider value and the description text overlap. See screenshot:\r\n\r\n\r\n\r\n\r\n### Steps to reproduce\r\n\r\nCode snippet:\r\n\r\n```python\r\nimport streamlit as st\r\n\r\ntopn_ranking = st.sidebar.slider(\r\n \"Select the maximum amount of words for classification (higher value adds additional less frequent words to results)\",\r\n 10,\r\n 1000,\r\n (100),\r\n)\r\n```\r\n\r\n**Expected behavior:**\r\n\r\nTo have non-overlapping slider value and description text.\r\n\r\nFor example, this is how it should look like (with Streamlit version 0.78.0):\r\n\r\n\r\n**Actual behavior:**\r\n\r\nWhen I start the example of the code snippet, the current value and the description text of the slider overlap.\r\n\r\n### Is this a regression?\r\n\r\nYes, it was looking good with Streamlit 0.78.0. 
My tests showed, that it changed in version 0.83.0.\r\n\r\n### Debug info\r\n\r\n- Streamlit version: 0.88.0\r\n- Python version: 3.8.10\r\n- Using poetry with pyenv\r\n- OS version: Ubuntu 20.04\r\n- Browser version: Google Chrome 93.0.4577.63\r\n\n", "before_files": [{"content": "# Copyright 2018-2021 Streamlit Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport streamlit as st\n\nw1 = st.slider(\"Label 1\", 0, 100, 25, 1)\nst.write(\"Value 1:\", w1)\n\nw2 = st.slider(\"Label 2\", 0.0, 100.0, (25.0, 75.0), 0.5)\nst.write(\"Value 2:\", w2)\n\nif st._is_running_with_streamlit:\n\n def on_change():\n st.session_state.slider_changed = True\n\n st.slider(\n \"Label 3\",\n min_value=0,\n max_value=100,\n value=25,\n step=1,\n key=\"slider3\",\n on_change=on_change,\n )\n st.write(\"Value 3:\", st.session_state.slider3)\n st.write(\"Slider changed:\", \"slider_changed\" in st.session_state)\n", "path": "e2e/scripts/st_slider.py"}], "after_files": [{"content": "# Copyright 2018-2021 Streamlit Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport streamlit as st\n\nw1 = st.slider(\"Label 1\", 0, 100, 25, 1)\nst.write(\"Value 1:\", w1)\n\nw2 = st.slider(\"Label 2\", 0.0, 100.0, (25.0, 75.0), 0.5)\nst.write(\"Value 2:\", w2)\n\nw3 = st.slider(\n \"Label 3 - This is a very very very very very very very very very very very very very very very very very very very very long label\",\n 0.0,\n 100.0,\n (25.0, 75.0),\n 0.5,\n)\nst.write(\"Value 3:\", w3)\n\nif st._is_running_with_streamlit:\n\n def on_change():\n st.session_state.slider_changed = True\n\n st.slider(\n \"Label 4\",\n min_value=0,\n max_value=100,\n value=25,\n step=1,\n key=\"slider4\",\n on_change=on_change,\n )\n st.write(\"Value 4:\", st.session_state.slider4)\n st.write(\"Slider changed:\", \"slider_changed\" in st.session_state)\n", "path": "e2e/scripts/st_slider.py"}]} | 1,096 | 330 |
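For quick manual verification of this record, a small reproduction script (it assumes a local Streamlit install and is launched with `streamlit run`; the label text is the one from the issue, shown in both the sidebar and the main area for comparison):

```python
import streamlit as st

LONG_LABEL = (
    "Select the maximum amount of words for classification "
    "(higher value adds additional less frequent words to results)"
)

# The overlap was reported for the sidebar slider; the main-area slider is a reference.
sidebar_value = st.sidebar.slider(LONG_LABEL, 10, 1000, 100)
main_value = st.slider(LONG_LABEL, 10, 1000, 100)

st.write("Sidebar value:", sidebar_value)
st.write("Main value:", main_value)
```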
gh_patches_debug_12152 | rasdani/github-patches | git_diff | Pylons__pyramid-3657 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Sphinx doesn't like the reify decorator
> Failed to get a method signature for kinto.core.resource.Resource.timestamp: <pyramid.decorator.reify object at 0x7f175ae45640> is not a callable object
If I understand correctly this is because the reify decorator doesn't pass along the function signature.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/pyramid/decorator.py`
Content:
```
1 from functools import update_wrapper
2
3
4 class reify:
5 """Use as a class method decorator. It operates almost exactly like the
6 Python ``@property`` decorator, but it puts the result of the method it
7 decorates into the instance dict after the first call, effectively
8 replacing the function it decorates with an instance variable. It is, in
9 Python parlance, a non-data descriptor. The following is an example and
10 its usage:
11
12 .. doctest::
13
14 >>> from pyramid.decorator import reify
15
16 >>> class Foo:
17 ... @reify
18 ... def jammy(self):
19 ... print('jammy called')
20 ... return 1
21
22 >>> f = Foo()
23 >>> v = f.jammy
24 jammy called
25 >>> print(v)
26 1
27 >>> f.jammy
28 1
29 >>> # jammy func not called the second time; it replaced itself with 1
30 >>> # Note: reassignment is possible
31 >>> f.jammy = 2
32 >>> f.jammy
33 2
34 """
35
36 def __init__(self, wrapped):
37 self.wrapped = wrapped
38 update_wrapper(self, wrapped)
39
40 def __get__(self, inst, objtype=None):
41 if inst is None:
42 return self
43 val = self.wrapped(inst)
44 # reify is a non-data-descriptor which is leveraging the fact
45 # that it is not invoked if the equivalent attribute is defined in the
46 # object's dict, so the setattr here effectively hides this descriptor
47 # from subsequent lookups
48 setattr(inst, self.wrapped.__name__, val)
49 return val
50
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/pyramid/decorator.py b/src/pyramid/decorator.py
--- a/src/pyramid/decorator.py
+++ b/src/pyramid/decorator.py
@@ -1,6 +1,3 @@
-from functools import update_wrapper
-
-
class reify:
"""Use as a class method decorator. It operates almost exactly like the
Python ``@property`` decorator, but it puts the result of the method it
@@ -35,7 +32,7 @@
def __init__(self, wrapped):
self.wrapped = wrapped
- update_wrapper(self, wrapped)
+ self.__doc__ = wrapped.__doc__
def __get__(self, inst, objtype=None):
if inst is None:
| {"golden_diff": "diff --git a/src/pyramid/decorator.py b/src/pyramid/decorator.py\n--- a/src/pyramid/decorator.py\n+++ b/src/pyramid/decorator.py\n@@ -1,6 +1,3 @@\n-from functools import update_wrapper\n-\n-\n class reify:\n \"\"\"Use as a class method decorator. It operates almost exactly like the\n Python ``@property`` decorator, but it puts the result of the method it\n@@ -35,7 +32,7 @@\n \n def __init__(self, wrapped):\n self.wrapped = wrapped\n- update_wrapper(self, wrapped)\n+ self.__doc__ = wrapped.__doc__\n \n def __get__(self, inst, objtype=None):\n if inst is None:\n", "issue": "Sphinx doesn't like the reify decorator\n> Failed to get a method signature for kinto.core.resource.Resource.timestamp: <pyramid.decorator.reify object at 0x7f175ae45640> is not a callable object\r\n\r\nIf I understand correctly this is because the reify decorator doesn't pass along the function signature.\r\n\n", "before_files": [{"content": "from functools import update_wrapper\n\n\nclass reify:\n \"\"\"Use as a class method decorator. It operates almost exactly like the\n Python ``@property`` decorator, but it puts the result of the method it\n decorates into the instance dict after the first call, effectively\n replacing the function it decorates with an instance variable. It is, in\n Python parlance, a non-data descriptor. The following is an example and\n its usage:\n\n .. doctest::\n\n >>> from pyramid.decorator import reify\n\n >>> class Foo:\n ... @reify\n ... def jammy(self):\n ... print('jammy called')\n ... return 1\n\n >>> f = Foo()\n >>> v = f.jammy\n jammy called\n >>> print(v)\n 1\n >>> f.jammy\n 1\n >>> # jammy func not called the second time; it replaced itself with 1\n >>> # Note: reassignment is possible\n >>> f.jammy = 2\n >>> f.jammy\n 2\n \"\"\"\n\n def __init__(self, wrapped):\n self.wrapped = wrapped\n update_wrapper(self, wrapped)\n\n def __get__(self, inst, objtype=None):\n if inst is None:\n return self\n val = self.wrapped(inst)\n # reify is a non-data-descriptor which is leveraging the fact\n # that it is not invoked if the equivalent attribute is defined in the\n # object's dict, so the setattr here effectively hides this descriptor\n # from subsequent lookups\n setattr(inst, self.wrapped.__name__, val)\n return val\n", "path": "src/pyramid/decorator.py"}], "after_files": [{"content": "class reify:\n \"\"\"Use as a class method decorator. It operates almost exactly like the\n Python ``@property`` decorator, but it puts the result of the method it\n decorates into the instance dict after the first call, effectively\n replacing the function it decorates with an instance variable. It is, in\n Python parlance, a non-data descriptor. The following is an example and\n its usage:\n\n .. doctest::\n\n >>> from pyramid.decorator import reify\n\n >>> class Foo:\n ... @reify\n ... def jammy(self):\n ... print('jammy called')\n ... 
return 1\n\n >>> f = Foo()\n >>> v = f.jammy\n jammy called\n >>> print(v)\n 1\n >>> f.jammy\n 1\n >>> # jammy func not called the second time; it replaced itself with 1\n >>> # Note: reassignment is possible\n >>> f.jammy = 2\n >>> f.jammy\n 2\n \"\"\"\n\n def __init__(self, wrapped):\n self.wrapped = wrapped\n self.__doc__ = wrapped.__doc__\n\n def __get__(self, inst, objtype=None):\n if inst is None:\n return self\n val = self.wrapped(inst)\n # reify is a non-data-descriptor which is leveraging the fact\n # that it is not invoked if the equivalent attribute is defined in the\n # object's dict, so the setattr here effectively hides this descriptor\n # from subsequent lookups\n setattr(inst, self.wrapped.__name__, val)\n return val\n", "path": "src/pyramid/decorator.py"}]} | 802 | 165 |
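The patched descriptor from this record, pulled out as a standalone sketch with a few quick checks (the `Foo`/`jammy` names follow the record's doctest; the assertions are illustrative):

```python
class reify:
    """Cache a method's result in the instance dict (non-data descriptor)."""

    def __init__(self, wrapped):
        self.wrapped = wrapped
        # Post-patch: only the docstring is copied; functools.update_wrapper is gone.
        self.__doc__ = wrapped.__doc__

    def __get__(self, inst, objtype=None):
        if inst is None:
            return self
        val = self.wrapped(inst)
        # Writing into the instance dict hides this descriptor on later lookups.
        setattr(inst, self.wrapped.__name__, val)
        return val


class Foo:
    @reify
    def jammy(self):
        """Computed once, then cached."""
        return 1


f = Foo()
assert f.jammy == 1
assert "jammy" in vars(f)  # cached on the instance
assert Foo.jammy.__doc__ == "Computed once, then cached."
```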
gh_patches_debug_12432 | rasdani/github-patches | git_diff | pyca__cryptography-8403 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
ignore FRP256v1 in wycheproof tests
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/cryptography/hazmat/backends/openssl/utils.py`
Content:
```
1 # This file is dual licensed under the terms of the Apache License, Version
2 # 2.0, and the BSD License. See the LICENSE file in the root of this repository
3 # for complete details.
4
5 import typing
6
7 from cryptography.hazmat.primitives import hashes
8 from cryptography.hazmat.primitives.asymmetric.utils import Prehashed
9
10 if typing.TYPE_CHECKING:
11 from cryptography.hazmat.backends.openssl.backend import Backend
12
13
14 def _evp_pkey_derive(backend: "Backend", evp_pkey, peer_public_key) -> bytes:
15 ctx = backend._lib.EVP_PKEY_CTX_new(evp_pkey, backend._ffi.NULL)
16 backend.openssl_assert(ctx != backend._ffi.NULL)
17 ctx = backend._ffi.gc(ctx, backend._lib.EVP_PKEY_CTX_free)
18 res = backend._lib.EVP_PKEY_derive_init(ctx)
19 backend.openssl_assert(res == 1)
20 res = backend._lib.EVP_PKEY_derive_set_peer(ctx, peer_public_key._evp_pkey)
21 backend.openssl_assert(res == 1)
22 keylen = backend._ffi.new("size_t *")
23 res = backend._lib.EVP_PKEY_derive(ctx, backend._ffi.NULL, keylen)
24 backend.openssl_assert(res == 1)
25 backend.openssl_assert(keylen[0] > 0)
26 buf = backend._ffi.new("unsigned char[]", keylen[0])
27 res = backend._lib.EVP_PKEY_derive(ctx, buf, keylen)
28 if res != 1:
29 errors_with_text = backend._consume_errors_with_text()
30 raise ValueError("Error computing shared key.", errors_with_text)
31
32 return backend._ffi.buffer(buf, keylen[0])[:]
33
34
35 def _calculate_digest_and_algorithm(
36 data: bytes,
37 algorithm: typing.Union[Prehashed, hashes.HashAlgorithm],
38 ) -> typing.Tuple[bytes, hashes.HashAlgorithm]:
39 if not isinstance(algorithm, Prehashed):
40 hash_ctx = hashes.Hash(algorithm)
41 hash_ctx.update(data)
42 data = hash_ctx.finalize()
43 else:
44 algorithm = algorithm._algorithm
45
46 if len(data) != algorithm.digest_size:
47 raise ValueError(
48 "The provided data must be the same length as the hash "
49 "algorithm's digest size."
50 )
51
52 return (data, algorithm)
53
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/cryptography/hazmat/backends/openssl/utils.py b/src/cryptography/hazmat/backends/openssl/utils.py
--- a/src/cryptography/hazmat/backends/openssl/utils.py
+++ b/src/cryptography/hazmat/backends/openssl/utils.py
@@ -18,7 +18,10 @@
res = backend._lib.EVP_PKEY_derive_init(ctx)
backend.openssl_assert(res == 1)
res = backend._lib.EVP_PKEY_derive_set_peer(ctx, peer_public_key._evp_pkey)
- backend.openssl_assert(res == 1)
+ if res != 1:
+ errors_with_text = backend._consume_errors_with_text()
+ raise ValueError("Error computing shared key.", errors_with_text)
+
keylen = backend._ffi.new("size_t *")
res = backend._lib.EVP_PKEY_derive(ctx, backend._ffi.NULL, keylen)
backend.openssl_assert(res == 1)
| {"golden_diff": "diff --git a/src/cryptography/hazmat/backends/openssl/utils.py b/src/cryptography/hazmat/backends/openssl/utils.py\n--- a/src/cryptography/hazmat/backends/openssl/utils.py\n+++ b/src/cryptography/hazmat/backends/openssl/utils.py\n@@ -18,7 +18,10 @@\n res = backend._lib.EVP_PKEY_derive_init(ctx)\n backend.openssl_assert(res == 1)\n res = backend._lib.EVP_PKEY_derive_set_peer(ctx, peer_public_key._evp_pkey)\n- backend.openssl_assert(res == 1)\n+ if res != 1:\n+ errors_with_text = backend._consume_errors_with_text()\n+ raise ValueError(\"Error computing shared key.\", errors_with_text)\n+\n keylen = backend._ffi.new(\"size_t *\")\n res = backend._lib.EVP_PKEY_derive(ctx, backend._ffi.NULL, keylen)\n backend.openssl_assert(res == 1)\n", "issue": "ignore FRP256v1 in wycheproof tests\n\n", "before_files": [{"content": "# This file is dual licensed under the terms of the Apache License, Version\n# 2.0, and the BSD License. See the LICENSE file in the root of this repository\n# for complete details.\n\nimport typing\n\nfrom cryptography.hazmat.primitives import hashes\nfrom cryptography.hazmat.primitives.asymmetric.utils import Prehashed\n\nif typing.TYPE_CHECKING:\n from cryptography.hazmat.backends.openssl.backend import Backend\n\n\ndef _evp_pkey_derive(backend: \"Backend\", evp_pkey, peer_public_key) -> bytes:\n ctx = backend._lib.EVP_PKEY_CTX_new(evp_pkey, backend._ffi.NULL)\n backend.openssl_assert(ctx != backend._ffi.NULL)\n ctx = backend._ffi.gc(ctx, backend._lib.EVP_PKEY_CTX_free)\n res = backend._lib.EVP_PKEY_derive_init(ctx)\n backend.openssl_assert(res == 1)\n res = backend._lib.EVP_PKEY_derive_set_peer(ctx, peer_public_key._evp_pkey)\n backend.openssl_assert(res == 1)\n keylen = backend._ffi.new(\"size_t *\")\n res = backend._lib.EVP_PKEY_derive(ctx, backend._ffi.NULL, keylen)\n backend.openssl_assert(res == 1)\n backend.openssl_assert(keylen[0] > 0)\n buf = backend._ffi.new(\"unsigned char[]\", keylen[0])\n res = backend._lib.EVP_PKEY_derive(ctx, buf, keylen)\n if res != 1:\n errors_with_text = backend._consume_errors_with_text()\n raise ValueError(\"Error computing shared key.\", errors_with_text)\n\n return backend._ffi.buffer(buf, keylen[0])[:]\n\n\ndef _calculate_digest_and_algorithm(\n data: bytes,\n algorithm: typing.Union[Prehashed, hashes.HashAlgorithm],\n) -> typing.Tuple[bytes, hashes.HashAlgorithm]:\n if not isinstance(algorithm, Prehashed):\n hash_ctx = hashes.Hash(algorithm)\n hash_ctx.update(data)\n data = hash_ctx.finalize()\n else:\n algorithm = algorithm._algorithm\n\n if len(data) != algorithm.digest_size:\n raise ValueError(\n \"The provided data must be the same length as the hash \"\n \"algorithm's digest size.\"\n )\n\n return (data, algorithm)\n", "path": "src/cryptography/hazmat/backends/openssl/utils.py"}], "after_files": [{"content": "# This file is dual licensed under the terms of the Apache License, Version\n# 2.0, and the BSD License. 
See the LICENSE file in the root of this repository\n# for complete details.\n\nimport typing\n\nfrom cryptography.hazmat.primitives import hashes\nfrom cryptography.hazmat.primitives.asymmetric.utils import Prehashed\n\nif typing.TYPE_CHECKING:\n from cryptography.hazmat.backends.openssl.backend import Backend\n\n\ndef _evp_pkey_derive(backend: \"Backend\", evp_pkey, peer_public_key) -> bytes:\n ctx = backend._lib.EVP_PKEY_CTX_new(evp_pkey, backend._ffi.NULL)\n backend.openssl_assert(ctx != backend._ffi.NULL)\n ctx = backend._ffi.gc(ctx, backend._lib.EVP_PKEY_CTX_free)\n res = backend._lib.EVP_PKEY_derive_init(ctx)\n backend.openssl_assert(res == 1)\n res = backend._lib.EVP_PKEY_derive_set_peer(ctx, peer_public_key._evp_pkey)\n if res != 1:\n errors_with_text = backend._consume_errors_with_text()\n raise ValueError(\"Error computing shared key.\", errors_with_text)\n\n keylen = backend._ffi.new(\"size_t *\")\n res = backend._lib.EVP_PKEY_derive(ctx, backend._ffi.NULL, keylen)\n backend.openssl_assert(res == 1)\n backend.openssl_assert(keylen[0] > 0)\n buf = backend._ffi.new(\"unsigned char[]\", keylen[0])\n res = backend._lib.EVP_PKEY_derive(ctx, buf, keylen)\n if res != 1:\n errors_with_text = backend._consume_errors_with_text()\n raise ValueError(\"Error computing shared key.\", errors_with_text)\n\n return backend._ffi.buffer(buf, keylen[0])[:]\n\n\ndef _calculate_digest_and_algorithm(\n data: bytes,\n algorithm: typing.Union[Prehashed, hashes.HashAlgorithm],\n) -> typing.Tuple[bytes, hashes.HashAlgorithm]:\n if not isinstance(algorithm, Prehashed):\n hash_ctx = hashes.Hash(algorithm)\n hash_ctx.update(data)\n data = hash_ctx.finalize()\n else:\n algorithm = algorithm._algorithm\n\n if len(data) != algorithm.digest_size:\n raise ValueError(\n \"The provided data must be the same length as the hash \"\n \"algorithm's digest size.\"\n )\n\n return (data, algorithm)\n", "path": "src/cryptography/hazmat/backends/openssl/utils.py"}]} | 882 | 215 |
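From the caller's side, the change in this record means an OpenSSL-level rejection of the peer key during key agreement surfaces as `ValueError("Error computing shared key.", ...)` instead of an internal assertion. A hedged sketch via the public EC API (it assumes a recent pyca/cryptography build whose OpenSSL backend routes `exchange()` through `_evp_pkey_derive`; a well-formed exchange like this one does not hit the error path):

```python
from cryptography.hazmat.primitives.asymmetric import ec

private_key = ec.generate_private_key(ec.SECP256R1())
peer_public_key = ec.generate_private_key(ec.SECP256R1()).public_key()

try:
    shared_key = private_key.exchange(ec.ECDH(), peer_public_key)
except ValueError as exc:
    # Post-patch, failures in EVP_PKEY_derive_set_peer are reported here with context.
    print("key agreement failed:", exc)
else:
    assert isinstance(shared_key, bytes) and len(shared_key) > 0
```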
gh_patches_debug_331 | rasdani/github-patches | git_diff | InternLM__lmdeploy-205 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Serving gradio fails with an error

[screenshot: error message "no module named lmdeploy.serve.gradio"]

Error: no module named lmdeploy.serve.gradio

The lmdeploy 0.0.2 Python package is already installed in the environment.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lmdeploy/version.py`
Content:
```
1 # Copyright (c) OpenMMLab. All rights reserved.
2 from typing import Tuple
3
4 __version__ = '0.0.2'
5 short_version = __version__
6
7
8 def parse_version_info(version_str: str) -> Tuple:
9 """Parse version from a string.
10
11 Args:
12 version_str (str): A string represents a version info.
13
14 Returns:
15 tuple: A sequence of integer and string represents version.
16 """
17 _version_info = []
18 for x in version_str.split('.'):
19 if x.isdigit():
20 _version_info.append(int(x))
21 elif x.find('rc') != -1:
22 patch_version = x.split('rc')
23 _version_info.append(int(patch_version[0]))
24 _version_info.append(f'rc{patch_version[1]}')
25 return tuple(_version_info)
26
27
28 version_info = parse_version_info(__version__)
29
30 __all__ = ['__version__', 'version_info', 'parse_version_info']
31
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/lmdeploy/version.py b/lmdeploy/version.py
--- a/lmdeploy/version.py
+++ b/lmdeploy/version.py
@@ -1,7 +1,7 @@
# Copyright (c) OpenMMLab. All rights reserved.
from typing import Tuple
-__version__ = '0.0.2'
+__version__ = '0.0.3'
short_version = __version__
| {"golden_diff": "diff --git a/lmdeploy/version.py b/lmdeploy/version.py\n--- a/lmdeploy/version.py\n+++ b/lmdeploy/version.py\n@@ -1,7 +1,7 @@\n # Copyright (c) OpenMMLab. All rights reserved.\n from typing import Tuple\n \n-__version__ = '0.0.2'\n+__version__ = '0.0.3'\n short_version = __version__\n", "issue": "Serving gradio\u62a5\u9519\n\r\n\r\n\u62a5\u9519: no module named lmdeploy.serve.gradio\r\n\r\n\u73af\u5883\u91cc\u5df2\u5b89\u88c5\u4e86lmdeploy 0.0.2 python\u5305\u3002\n", "before_files": [{"content": "# Copyright (c) OpenMMLab. All rights reserved.\nfrom typing import Tuple\n\n__version__ = '0.0.2'\nshort_version = __version__\n\n\ndef parse_version_info(version_str: str) -> Tuple:\n \"\"\"Parse version from a string.\n\n Args:\n version_str (str): A string represents a version info.\n\n Returns:\n tuple: A sequence of integer and string represents version.\n \"\"\"\n _version_info = []\n for x in version_str.split('.'):\n if x.isdigit():\n _version_info.append(int(x))\n elif x.find('rc') != -1:\n patch_version = x.split('rc')\n _version_info.append(int(patch_version[0]))\n _version_info.append(f'rc{patch_version[1]}')\n return tuple(_version_info)\n\n\nversion_info = parse_version_info(__version__)\n\n__all__ = ['__version__', 'version_info', 'parse_version_info']\n", "path": "lmdeploy/version.py"}], "after_files": [{"content": "# Copyright (c) OpenMMLab. All rights reserved.\nfrom typing import Tuple\n\n__version__ = '0.0.3'\nshort_version = __version__\n\n\ndef parse_version_info(version_str: str) -> Tuple:\n \"\"\"Parse version from a string.\n\n Args:\n version_str (str): A string represents a version info.\n\n Returns:\n tuple: A sequence of integer and string represents version.\n \"\"\"\n _version_info = []\n for x in version_str.split('.'):\n if x.isdigit():\n _version_info.append(int(x))\n elif x.find('rc') != -1:\n patch_version = x.split('rc')\n _version_info.append(int(patch_version[0]))\n _version_info.append(f'rc{patch_version[1]}')\n return tuple(_version_info)\n\n\nversion_info = parse_version_info(__version__)\n\n__all__ = ['__version__', 'version_info', 'parse_version_info']\n", "path": "lmdeploy/version.py"}]} | 604 | 91 |
gh_patches_debug_935 | rasdani/github-patches | git_diff | e-valuation__EvaP-817 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
+x on update.sh, earlier apache restart
update_production.sh is missing the executable (x) bit; also, because of the cache clearing, Apache is only restarted about 2 minutes after the code has changed.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `evap/evaluation/management/commands/refresh_results_cache.py`
Content:
```
1 from django.core.management.base import BaseCommand
2 from django.core.serializers.base import ProgressBar
3 from django.core.cache import cache
4
5 from evap.evaluation.models import Course
6 from evap.evaluation.tools import calculate_results
7
8
9 class Command(BaseCommand):
10 args = ''
11 help = 'Clears the cache and pre-warms it with the results of all courses'
12
13 def handle(self, *args, **options):
14 self.stdout.write("Clearing cache...")
15 cache.clear()
16 total_count = Course.objects.count()
17
18 self.stdout.write("Calculating results for all courses...")
19
20 self.stdout.ending = None
21 progress_bar = ProgressBar(self.stdout, total_count)
22
23 for counter, course in enumerate(Course.objects.all()):
24 progress_bar.update(counter + 1)
25 calculate_results(course)
26
27 self.stdout.write("Done with updating cache.\n")
28
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/evap/evaluation/management/commands/refresh_results_cache.py b/evap/evaluation/management/commands/refresh_results_cache.py
--- a/evap/evaluation/management/commands/refresh_results_cache.py
+++ b/evap/evaluation/management/commands/refresh_results_cache.py
@@ -24,4 +24,4 @@
progress_bar.update(counter + 1)
calculate_results(course)
- self.stdout.write("Done with updating cache.\n")
+ self.stdout.write("Results cache has been refreshed.\n")
| {"golden_diff": "diff --git a/evap/evaluation/management/commands/refresh_results_cache.py b/evap/evaluation/management/commands/refresh_results_cache.py\n--- a/evap/evaluation/management/commands/refresh_results_cache.py\n+++ b/evap/evaluation/management/commands/refresh_results_cache.py\n@@ -24,4 +24,4 @@\n progress_bar.update(counter + 1)\n calculate_results(course)\n \n- self.stdout.write(\"Done with updating cache.\\n\")\n+ self.stdout.write(\"Results cache has been refreshed.\\n\")\n", "issue": "+x on update.sh, earlier apache restart\nupdate_production.sh is missing the x bit, also because of the cache clearing the apache is restarted 2min after the code has changed.\n\n", "before_files": [{"content": "from django.core.management.base import BaseCommand\nfrom django.core.serializers.base import ProgressBar\nfrom django.core.cache import cache\n\nfrom evap.evaluation.models import Course\nfrom evap.evaluation.tools import calculate_results\n\n\nclass Command(BaseCommand):\n args = ''\n help = 'Clears the cache and pre-warms it with the results of all courses'\n\n def handle(self, *args, **options):\n self.stdout.write(\"Clearing cache...\")\n cache.clear()\n total_count = Course.objects.count()\n\n self.stdout.write(\"Calculating results for all courses...\")\n\n self.stdout.ending = None\n progress_bar = ProgressBar(self.stdout, total_count)\n\n for counter, course in enumerate(Course.objects.all()):\n progress_bar.update(counter + 1)\n calculate_results(course)\n\n self.stdout.write(\"Done with updating cache.\\n\")\n", "path": "evap/evaluation/management/commands/refresh_results_cache.py"}], "after_files": [{"content": "from django.core.management.base import BaseCommand\nfrom django.core.serializers.base import ProgressBar\nfrom django.core.cache import cache\n\nfrom evap.evaluation.models import Course\nfrom evap.evaluation.tools import calculate_results\n\n\nclass Command(BaseCommand):\n args = ''\n help = 'Clears the cache and pre-warms it with the results of all courses'\n\n def handle(self, *args, **options):\n self.stdout.write(\"Clearing cache...\")\n cache.clear()\n total_count = Course.objects.count()\n\n self.stdout.write(\"Calculating results for all courses...\")\n\n self.stdout.ending = None\n progress_bar = ProgressBar(self.stdout, total_count)\n\n for counter, course in enumerate(Course.objects.all()):\n progress_bar.update(counter + 1)\n calculate_results(course)\n\n self.stdout.write(\"Results cache has been refreshed.\\n\")\n", "path": "evap/evaluation/management/commands/refresh_results_cache.py"}]} | 533 | 122 |
gh_patches_debug_7052 | rasdani/github-patches | git_diff | hpcaitech__ColossalAI-2712 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[tensor] fix some unittests
[tensor] fix some unittests
[tensor] fix some unittests
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `colossalai/context/process_group_initializer/initializer_sequence.py`
Content:
```
1 #!/usr/bin/env python
2 # -*- encoding: utf-8 -*-
3 import torch.distributed as dist
4
5 from colossalai.registry import DIST_GROUP_INITIALIZER
6 from .initializer_tensor import Initializer_Tensor
7 from .process_group_initializer import ProcessGroupInitializer
8 from ..parallel_mode import ParallelMode
9
10
11 @DIST_GROUP_INITIALIZER.register_module
12 class Initializer_Sequence_DP(ProcessGroupInitializer):
13 """A ProcessGroupInitializer for sequence parallelism all-reduce.
14
15 In Sequence Parallelism, each GPU holds the full copy of model weights,
16 thus, gradient all-reduce occurs across all processes in the same pipeline stage
17
18 Args:
19 rank (int): The rank of current process
20 world_size (int): Size of whole communication world
21 config (Config): Running configuration
22 data_parallel_size (int): Size of data parallel
23 pipeline_parallel_size (int): Size of pipeline parallel
24 tensor_parallel_size (int): Size of tensor parallel
25 """
26
27 def __init__(self, *args, **kwargs):
28 super().__init__(*args, **kwargs)
29 self.dp_size = self.world_size // self.pipeline_parallel_size
30 self.num_group = self.pipeline_parallel_size
31
32 def init_dist_group(self):
33 """Initialize Sequence Parallel process groups used for gradient all-reduce.
34
35 Returns:
36 Tuple: A tuple (local_rank, group_world_size, process_group, ranks_in_group, mode).
37 """
38 local_rank = None
39 ranks_in_group = None
40 process_group = None
41 cpu_group = None
42 group_world_size = None
43 mode = ParallelMode.SEQUENCE_DP
44
45 for i in range(self.num_group):
46 ranks = [i * self.dp_size + j for j in range(self.dp_size)]
47 group = dist.new_group(ranks)
48 group_cpu = dist.new_group(ranks, backend='gloo') if dist.get_backend() != 'gloo' else group
49
50 if self.rank in ranks:
51 local_rank = ranks.index(self.rank)
52 group_world_size = len(ranks)
53 process_group = group
54 cpu_group = group_cpu
55 ranks_in_group = ranks
56
57 return local_rank, group_world_size, process_group, cpu_group, ranks_in_group, mode
58
59
60 @DIST_GROUP_INITIALIZER.register_module
61 class Initializer_Sequence(ProcessGroupInitializer):
62 """A ProcessGroupInitializer for sequence parallelism.
63
64 Args:
65 rank (int): The rank of current process.
66 world_size (int): Size of whole communication world.
67 config (Config): Running configuration.
68 data_parallel_size (int): Size of data parallel.
69 pipeline_parallel_size (int): Size of pipeline parallel.
70 tensor_parallel_size (int): Size of tensor parallel.
71 """
72
73 def __init__(self, *args, **kwargs):
74 super().__init__(*args, **kwargs)
75 # reuse tensor parallel initializer code
76 self._sequence_initializer = Initializer_Tensor(*args, **kwargs)
77 self._sequence_dp_initializer = Initializer_Sequence_DP(*args, **kwargs)
78
79 def init_dist_group(self):
80 """Initialize Sequence parallel process groups and assign local_ranks and groups to each gpu.
81
82 Sequence parallelism requires 2 process groups. The first is for model forward where several processes
83 exchange partial query, key and value embedding to compute self attention values. The second is for
84 all-reduce to synchronize the model parameters.
85
86 Returns:
87 List[Tuple (local_rank, group_world_size, process_group, ranks_in_group, mode)]:
88 A Sequence parallelism's information in list of tuples.
89 """
90
91 parallel_setting = []
92
93 local_rank, group_world_size, process_group, cpu_grop, ranks_in_group, mode = \
94 self._sequence_initializer.init_dist_group()
95 # change mode to sequence
96 mode = ParallelMode.SEQUENCE
97
98 parallel_setting.append((local_rank, group_world_size, process_group, cpu_grop, ranks_in_group, mode))
99 parallel_setting.append(self._sequence_dp_initializer.init_dist_group())
100 return parallel_setting
101
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/colossalai/context/process_group_initializer/initializer_sequence.py b/colossalai/context/process_group_initializer/initializer_sequence.py
--- a/colossalai/context/process_group_initializer/initializer_sequence.py
+++ b/colossalai/context/process_group_initializer/initializer_sequence.py
@@ -3,9 +3,10 @@
import torch.distributed as dist
from colossalai.registry import DIST_GROUP_INITIALIZER
+
+from ..parallel_mode import ParallelMode
from .initializer_tensor import Initializer_Tensor
from .process_group_initializer import ProcessGroupInitializer
-from ..parallel_mode import ParallelMode
@DIST_GROUP_INITIALIZER.register_module
| {"golden_diff": "diff --git a/colossalai/context/process_group_initializer/initializer_sequence.py b/colossalai/context/process_group_initializer/initializer_sequence.py\n--- a/colossalai/context/process_group_initializer/initializer_sequence.py\n+++ b/colossalai/context/process_group_initializer/initializer_sequence.py\n@@ -3,9 +3,10 @@\n import torch.distributed as dist\n \n from colossalai.registry import DIST_GROUP_INITIALIZER\n+\n+from ..parallel_mode import ParallelMode\n from .initializer_tensor import Initializer_Tensor\n from .process_group_initializer import ProcessGroupInitializer\n-from ..parallel_mode import ParallelMode\n \n \n @DIST_GROUP_INITIALIZER.register_module\n", "issue": "[tensor] fix some unittests\n\n[tensor] fix some unittests\n\n[tensor] fix some unittests\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- encoding: utf-8 -*-\nimport torch.distributed as dist\n\nfrom colossalai.registry import DIST_GROUP_INITIALIZER\nfrom .initializer_tensor import Initializer_Tensor\nfrom .process_group_initializer import ProcessGroupInitializer\nfrom ..parallel_mode import ParallelMode\n\n\n@DIST_GROUP_INITIALIZER.register_module\nclass Initializer_Sequence_DP(ProcessGroupInitializer):\n \"\"\"A ProcessGroupInitializer for sequence parallelism all-reduce.\n\n In Sequence Parallelism, each GPU holds the full copy of model weights,\n thus, gradient all-reduce occurs across all processes in the same pipeline stage\n\n Args:\n rank (int): The rank of current process\n world_size (int): Size of whole communication world\n config (Config): Running configuration\n data_parallel_size (int): Size of data parallel\n pipeline_parallel_size (int): Size of pipeline parallel\n tensor_parallel_size (int): Size of tensor parallel\n \"\"\"\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.dp_size = self.world_size // self.pipeline_parallel_size\n self.num_group = self.pipeline_parallel_size\n\n def init_dist_group(self):\n \"\"\"Initialize Sequence Parallel process groups used for gradient all-reduce.\n\n Returns:\n Tuple: A tuple (local_rank, group_world_size, process_group, ranks_in_group, mode).\n \"\"\"\n local_rank = None\n ranks_in_group = None\n process_group = None\n cpu_group = None\n group_world_size = None\n mode = ParallelMode.SEQUENCE_DP\n\n for i in range(self.num_group):\n ranks = [i * self.dp_size + j for j in range(self.dp_size)]\n group = dist.new_group(ranks)\n group_cpu = dist.new_group(ranks, backend='gloo') if dist.get_backend() != 'gloo' else group\n\n if self.rank in ranks:\n local_rank = ranks.index(self.rank)\n group_world_size = len(ranks)\n process_group = group\n cpu_group = group_cpu\n ranks_in_group = ranks\n\n return local_rank, group_world_size, process_group, cpu_group, ranks_in_group, mode\n\n\n@DIST_GROUP_INITIALIZER.register_module\nclass Initializer_Sequence(ProcessGroupInitializer):\n \"\"\"A ProcessGroupInitializer for sequence parallelism.\n\n Args:\n rank (int): The rank of current process.\n world_size (int): Size of whole communication world.\n config (Config): Running configuration.\n data_parallel_size (int): Size of data parallel.\n pipeline_parallel_size (int): Size of pipeline parallel.\n tensor_parallel_size (int): Size of tensor parallel.\n \"\"\"\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n # reuse tensor parallel initializer code\n self._sequence_initializer = Initializer_Tensor(*args, **kwargs)\n self._sequence_dp_initializer = Initializer_Sequence_DP(*args, 
**kwargs)\n\n def init_dist_group(self):\n \"\"\"Initialize Sequence parallel process groups and assign local_ranks and groups to each gpu.\n\n Sequence parallelism requires 2 process groups. The first is for model forward where several processes\n exchange partial query, key and value embedding to compute self attention values. The second is for\n all-reduce to synchronize the model parameters.\n\n Returns:\n List[Tuple (local_rank, group_world_size, process_group, ranks_in_group, mode)]:\n A Sequence parallelism's information in list of tuples.\n \"\"\"\n\n parallel_setting = []\n\n local_rank, group_world_size, process_group, cpu_grop, ranks_in_group, mode = \\\n self._sequence_initializer.init_dist_group()\n # change mode to sequence\n mode = ParallelMode.SEQUENCE\n\n parallel_setting.append((local_rank, group_world_size, process_group, cpu_grop, ranks_in_group, mode))\n parallel_setting.append(self._sequence_dp_initializer.init_dist_group())\n return parallel_setting\n", "path": "colossalai/context/process_group_initializer/initializer_sequence.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# -*- encoding: utf-8 -*-\nimport torch.distributed as dist\n\nfrom colossalai.registry import DIST_GROUP_INITIALIZER\n\nfrom ..parallel_mode import ParallelMode\nfrom .initializer_tensor import Initializer_Tensor\nfrom .process_group_initializer import ProcessGroupInitializer\n\n\n@DIST_GROUP_INITIALIZER.register_module\nclass Initializer_Sequence_DP(ProcessGroupInitializer):\n \"\"\"A ProcessGroupInitializer for sequence parallelism all-reduce.\n\n In Sequence Parallelism, each GPU holds the full copy of model weights,\n thus, gradient all-reduce occurs across all processes in the same pipeline stage\n\n Args:\n rank (int): The rank of current process\n world_size (int): Size of whole communication world\n config (Config): Running configuration\n data_parallel_size (int): Size of data parallel\n pipeline_parallel_size (int): Size of pipeline parallel\n tensor_parallel_size (int): Size of tensor parallel\n \"\"\"\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.dp_size = self.world_size // self.pipeline_parallel_size\n self.num_group = self.pipeline_parallel_size\n\n def init_dist_group(self):\n \"\"\"Initialize Sequence Parallel process groups used for gradient all-reduce.\n\n Returns:\n Tuple: A tuple (local_rank, group_world_size, process_group, ranks_in_group, mode).\n \"\"\"\n local_rank = None\n ranks_in_group = None\n process_group = None\n cpu_group = None\n group_world_size = None\n mode = ParallelMode.SEQUENCE_DP\n\n for i in range(self.num_group):\n ranks = [i * self.dp_size + j for j in range(self.dp_size)]\n group = dist.new_group(ranks)\n group_cpu = dist.new_group(ranks, backend='gloo') if dist.get_backend() != 'gloo' else group\n\n if self.rank in ranks:\n local_rank = ranks.index(self.rank)\n group_world_size = len(ranks)\n process_group = group\n cpu_group = group_cpu\n ranks_in_group = ranks\n\n return local_rank, group_world_size, process_group, cpu_group, ranks_in_group, mode\n\n\n@DIST_GROUP_INITIALIZER.register_module\nclass Initializer_Sequence(ProcessGroupInitializer):\n \"\"\"A ProcessGroupInitializer for sequence parallelism.\n\n Args:\n rank (int): The rank of current process.\n world_size (int): Size of whole communication world.\n config (Config): Running configuration.\n data_parallel_size (int): Size of data parallel.\n pipeline_parallel_size (int): Size of pipeline parallel.\n tensor_parallel_size (int): Size of tensor 
parallel.\n \"\"\"\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n # reuse tensor parallel initializer code\n self._sequence_initializer = Initializer_Tensor(*args, **kwargs)\n self._sequence_dp_initializer = Initializer_Sequence_DP(*args, **kwargs)\n\n def init_dist_group(self):\n \"\"\"Initialize Sequence parallel process groups and assign local_ranks and groups to each gpu.\n\n Sequence parallelism requires 2 process groups. The first is for model forward where several processes\n exchange partial query, key and value embedding to compute self attention values. The second is for\n all-reduce to synchronize the model parameters.\n\n Returns:\n List[Tuple (local_rank, group_world_size, process_group, ranks_in_group, mode)]:\n A Sequence parallelism's information in list of tuples.\n \"\"\"\n\n parallel_setting = []\n\n local_rank, group_world_size, process_group, cpu_grop, ranks_in_group, mode = \\\n self._sequence_initializer.init_dist_group()\n # change mode to sequence\n mode = ParallelMode.SEQUENCE\n\n parallel_setting.append((local_rank, group_world_size, process_group, cpu_grop, ranks_in_group, mode))\n parallel_setting.append(self._sequence_dp_initializer.init_dist_group())\n return parallel_setting\n", "path": "colossalai/context/process_group_initializer/initializer_sequence.py"}]} | 1,339 | 133 |
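The golden diff only reorders imports, but the rank grouping built by `Initializer_Sequence_DP` can be sanity-checked without `torch.distributed`. A pure-Python sketch of the same arithmetic (the function name is illustrative):

```python
def sequence_dp_groups(world_size: int, pipeline_parallel_size: int):
    """Ranks that all-reduce together: one group per pipeline stage."""
    dp_size = world_size // pipeline_parallel_size
    return [
        [i * dp_size + j for j in range(dp_size)]
        for i in range(pipeline_parallel_size)
    ]


# 8 ranks split into 2 pipeline stages -> two data-parallel groups of 4 ranks each.
assert sequence_dp_groups(8, 2) == [[0, 1, 2, 3], [4, 5, 6, 7]]
```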
gh_patches_debug_10437 | rasdani/github-patches | git_diff | Project-MONAI__MONAI-2060 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
ignore `label` in the workflow if it's not tensor
**Is your feature request related to a problem? Please describe.**
During evaluation, `prepare_batch` currently extracts the `image` and `label` fields whenever those keys exist. But for inference we may not want to load `label` at all (and may not have applied transforms to it), so it needs to be ignored in that case.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `monai/engines/utils.py`
Content:
```
1 # Copyright 2020 - 2021 MONAI Consortium
2 # Licensed under the Apache License, Version 2.0 (the "License");
3 # you may not use this file except in compliance with the License.
4 # You may obtain a copy of the License at
5 # http://www.apache.org/licenses/LICENSE-2.0
6 # Unless required by applicable law or agreed to in writing, software
7 # distributed under the License is distributed on an "AS IS" BASIS,
8 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
9 # See the License for the specific language governing permissions and
10 # limitations under the License.
11
12 from typing import TYPE_CHECKING, Dict, List, Optional, Sequence, Tuple, Union
13
14 import torch
15
16 from monai.utils import exact_version, optional_import
17 from monai.utils.enums import CommonKeys
18
19 if TYPE_CHECKING:
20 from ignite.engine import EventEnum
21 else:
22 EventEnum, _ = optional_import("ignite.engine", "0.4.4", exact_version, "EventEnum")
23
24 __all__ = [
25 "IterationEvents",
26 "GanKeys",
27 "get_devices_spec",
28 "default_prepare_batch",
29 "default_make_latent",
30 ]
31
32
33 class IterationEvents(EventEnum):
34 """
35 Additional Events engine can register and trigger in the iteration process.
36 Refer to the example in ignite: https://github.com/pytorch/ignite/blob/master/ignite/engine/events.py#L146
37 These Events can be triggered during training iteration:
38 `FORWARD_COMPLETED` is the Event when `network(image, label)` completed.
39 `LOSS_COMPLETED` is the Event when `loss(pred, label)` completed.
40 `BACKWARD_COMPLETED` is the Event when `loss.backward()` completed.
41 `MODEL_COMPLETED` is the Event when all the model related operations completed.
42
43 """
44
45 FORWARD_COMPLETED = "forward_completed"
46 LOSS_COMPLETED = "loss_completed"
47 BACKWARD_COMPLETED = "backward_completed"
48 MODEL_COMPLETED = "model_completed"
49
50
51 class GanKeys:
52 """
53 A set of common keys for generative adversarial networks.
54
55 """
56
57 REALS = "reals"
58 FAKES = "fakes"
59 LATENTS = "latents"
60 GLOSS = "g_loss"
61 DLOSS = "d_loss"
62
63
64 def get_devices_spec(devices: Optional[Sequence[torch.device]] = None) -> List[torch.device]:
65 """
66 Get a valid specification for one or more devices. If `devices` is None get devices for all CUDA devices available.
67 If `devices` is and zero-length structure a single CPU compute device is returned. In any other cases `devices` is
68 returned unchanged.
69
70 Args:
71 devices: list of devices to request, None for all GPU devices, [] for CPU.
72
73 Raises:
74 RuntimeError: When all GPUs are selected (``devices=None``) but no GPUs are available.
75
76 Returns:
77 list of torch.device: list of devices.
78
79 """
80 if devices is None:
81 devices = [torch.device(f"cuda:{d:d}") for d in range(torch.cuda.device_count())]
82
83 if len(devices) == 0:
84 raise RuntimeError("No GPU devices available.")
85
86 elif len(devices) == 0:
87 devices = [torch.device("cpu")]
88
89 else:
90 devices = list(devices)
91
92 return devices
93
94
95 def default_prepare_batch(
96 batchdata: Dict[str, torch.Tensor],
97 device: Optional[Union[str, torch.device]] = None,
98 non_blocking: bool = False,
99 ) -> Union[Tuple[torch.Tensor, Optional[torch.Tensor]], torch.Tensor]:
100 """
101 Default function to prepare the data for current iteration.
102 Refer to ignite: https://github.com/pytorch/ignite/blob/v0.4.2/ignite/engine/__init__.py#L28.
103
104 Returns:
105 image, label(optional).
106
107 """
108 if not isinstance(batchdata, dict):
109 raise AssertionError("default prepare_batch expects dictionary input data.")
110 if CommonKeys.LABEL in batchdata:
111 return (
112 batchdata[CommonKeys.IMAGE].to(device=device, non_blocking=non_blocking),
113 batchdata[CommonKeys.LABEL].to(device=device, non_blocking=non_blocking),
114 )
115 if GanKeys.REALS in batchdata:
116 return batchdata[GanKeys.REALS].to(device=device, non_blocking=non_blocking)
117 return batchdata[CommonKeys.IMAGE].to(device=device, non_blocking=non_blocking), None
118
119
120 def default_make_latent(
121 num_latents: int,
122 latent_size: int,
123 device: Optional[Union[str, torch.device]] = None,
124 non_blocking: bool = False,
125 ) -> torch.Tensor:
126 return torch.randn(num_latents, latent_size).to(device=device, non_blocking=non_blocking)
127
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/monai/engines/utils.py b/monai/engines/utils.py
--- a/monai/engines/utils.py
+++ b/monai/engines/utils.py
@@ -107,7 +107,7 @@
"""
if not isinstance(batchdata, dict):
raise AssertionError("default prepare_batch expects dictionary input data.")
- if CommonKeys.LABEL in batchdata:
+ if isinstance(batchdata.get(CommonKeys.LABEL, None), torch.Tensor):
return (
batchdata[CommonKeys.IMAGE].to(device=device, non_blocking=non_blocking),
batchdata[CommonKeys.LABEL].to(device=device, non_blocking=non_blocking),
| {"golden_diff": "diff --git a/monai/engines/utils.py b/monai/engines/utils.py\n--- a/monai/engines/utils.py\n+++ b/monai/engines/utils.py\n@@ -107,7 +107,7 @@\n \"\"\"\n if not isinstance(batchdata, dict):\n raise AssertionError(\"default prepare_batch expects dictionary input data.\")\n- if CommonKeys.LABEL in batchdata:\n+ if isinstance(batchdata.get(CommonKeys.LABEL, None), torch.Tensor):\n return (\n batchdata[CommonKeys.IMAGE].to(device=device, non_blocking=non_blocking),\n batchdata[CommonKeys.LABEL].to(device=device, non_blocking=non_blocking),\n", "issue": "ignore `label` in the workflow if it's not tensor\n**Is your feature request related to a problem? Please describe.**\r\nDuring evaluation, now we use `prepare_batch` to extract `image` and `label` fields if existing the key. But maybe we don't want to load `label` for inference and didn't apply transforms, so need to ignore it for this case.\r\n\n", "before_files": [{"content": "# Copyright 2020 - 2021 MONAI Consortium\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n# http://www.apache.org/licenses/LICENSE-2.0\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom typing import TYPE_CHECKING, Dict, List, Optional, Sequence, Tuple, Union\n\nimport torch\n\nfrom monai.utils import exact_version, optional_import\nfrom monai.utils.enums import CommonKeys\n\nif TYPE_CHECKING:\n from ignite.engine import EventEnum\nelse:\n EventEnum, _ = optional_import(\"ignite.engine\", \"0.4.4\", exact_version, \"EventEnum\")\n\n__all__ = [\n \"IterationEvents\",\n \"GanKeys\",\n \"get_devices_spec\",\n \"default_prepare_batch\",\n \"default_make_latent\",\n]\n\n\nclass IterationEvents(EventEnum):\n \"\"\"\n Additional Events engine can register and trigger in the iteration process.\n Refer to the example in ignite: https://github.com/pytorch/ignite/blob/master/ignite/engine/events.py#L146\n These Events can be triggered during training iteration:\n `FORWARD_COMPLETED` is the Event when `network(image, label)` completed.\n `LOSS_COMPLETED` is the Event when `loss(pred, label)` completed.\n `BACKWARD_COMPLETED` is the Event when `loss.backward()` completed.\n `MODEL_COMPLETED` is the Event when all the model related operations completed.\n\n \"\"\"\n\n FORWARD_COMPLETED = \"forward_completed\"\n LOSS_COMPLETED = \"loss_completed\"\n BACKWARD_COMPLETED = \"backward_completed\"\n MODEL_COMPLETED = \"model_completed\"\n\n\nclass GanKeys:\n \"\"\"\n A set of common keys for generative adversarial networks.\n\n \"\"\"\n\n REALS = \"reals\"\n FAKES = \"fakes\"\n LATENTS = \"latents\"\n GLOSS = \"g_loss\"\n DLOSS = \"d_loss\"\n\n\ndef get_devices_spec(devices: Optional[Sequence[torch.device]] = None) -> List[torch.device]:\n \"\"\"\n Get a valid specification for one or more devices. If `devices` is None get devices for all CUDA devices available.\n If `devices` is and zero-length structure a single CPU compute device is returned. 
In any other cases `devices` is\n returned unchanged.\n\n Args:\n devices: list of devices to request, None for all GPU devices, [] for CPU.\n\n Raises:\n RuntimeError: When all GPUs are selected (``devices=None``) but no GPUs are available.\n\n Returns:\n list of torch.device: list of devices.\n\n \"\"\"\n if devices is None:\n devices = [torch.device(f\"cuda:{d:d}\") for d in range(torch.cuda.device_count())]\n\n if len(devices) == 0:\n raise RuntimeError(\"No GPU devices available.\")\n\n elif len(devices) == 0:\n devices = [torch.device(\"cpu\")]\n\n else:\n devices = list(devices)\n\n return devices\n\n\ndef default_prepare_batch(\n batchdata: Dict[str, torch.Tensor],\n device: Optional[Union[str, torch.device]] = None,\n non_blocking: bool = False,\n) -> Union[Tuple[torch.Tensor, Optional[torch.Tensor]], torch.Tensor]:\n \"\"\"\n Default function to prepare the data for current iteration.\n Refer to ignite: https://github.com/pytorch/ignite/blob/v0.4.2/ignite/engine/__init__.py#L28.\n\n Returns:\n image, label(optional).\n\n \"\"\"\n if not isinstance(batchdata, dict):\n raise AssertionError(\"default prepare_batch expects dictionary input data.\")\n if CommonKeys.LABEL in batchdata:\n return (\n batchdata[CommonKeys.IMAGE].to(device=device, non_blocking=non_blocking),\n batchdata[CommonKeys.LABEL].to(device=device, non_blocking=non_blocking),\n )\n if GanKeys.REALS in batchdata:\n return batchdata[GanKeys.REALS].to(device=device, non_blocking=non_blocking)\n return batchdata[CommonKeys.IMAGE].to(device=device, non_blocking=non_blocking), None\n\n\ndef default_make_latent(\n num_latents: int,\n latent_size: int,\n device: Optional[Union[str, torch.device]] = None,\n non_blocking: bool = False,\n) -> torch.Tensor:\n return torch.randn(num_latents, latent_size).to(device=device, non_blocking=non_blocking)\n", "path": "monai/engines/utils.py"}], "after_files": [{"content": "# Copyright 2020 - 2021 MONAI Consortium\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n# http://www.apache.org/licenses/LICENSE-2.0\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom typing import TYPE_CHECKING, Dict, List, Optional, Sequence, Tuple, Union\n\nimport torch\n\nfrom monai.utils import exact_version, optional_import\nfrom monai.utils.enums import CommonKeys\n\nif TYPE_CHECKING:\n from ignite.engine import EventEnum\nelse:\n EventEnum, _ = optional_import(\"ignite.engine\", \"0.4.4\", exact_version, \"EventEnum\")\n\n__all__ = [\n \"IterationEvents\",\n \"GanKeys\",\n \"get_devices_spec\",\n \"default_prepare_batch\",\n \"default_make_latent\",\n]\n\n\nclass IterationEvents(EventEnum):\n \"\"\"\n Additional Events engine can register and trigger in the iteration process.\n Refer to the example in ignite: https://github.com/pytorch/ignite/blob/master/ignite/engine/events.py#L146\n These Events can be triggered during training iteration:\n `FORWARD_COMPLETED` is the Event when `network(image, label)` completed.\n `LOSS_COMPLETED` is the Event when `loss(pred, label)` completed.\n `BACKWARD_COMPLETED` is the Event when `loss.backward()` completed.\n `MODEL_COMPLETED` is the Event when all the model 
related operations completed.\n\n \"\"\"\n\n FORWARD_COMPLETED = \"forward_completed\"\n LOSS_COMPLETED = \"loss_completed\"\n BACKWARD_COMPLETED = \"backward_completed\"\n MODEL_COMPLETED = \"model_completed\"\n\n\nclass GanKeys:\n \"\"\"\n A set of common keys for generative adversarial networks.\n\n \"\"\"\n\n REALS = \"reals\"\n FAKES = \"fakes\"\n LATENTS = \"latents\"\n GLOSS = \"g_loss\"\n DLOSS = \"d_loss\"\n\n\ndef get_devices_spec(devices: Optional[Sequence[torch.device]] = None) -> List[torch.device]:\n \"\"\"\n Get a valid specification for one or more devices. If `devices` is None get devices for all CUDA devices available.\n If `devices` is and zero-length structure a single CPU compute device is returned. In any other cases `devices` is\n returned unchanged.\n\n Args:\n devices: list of devices to request, None for all GPU devices, [] for CPU.\n\n Raises:\n RuntimeError: When all GPUs are selected (``devices=None``) but no GPUs are available.\n\n Returns:\n list of torch.device: list of devices.\n\n \"\"\"\n if devices is None:\n devices = [torch.device(f\"cuda:{d:d}\") for d in range(torch.cuda.device_count())]\n\n if len(devices) == 0:\n raise RuntimeError(\"No GPU devices available.\")\n\n elif len(devices) == 0:\n devices = [torch.device(\"cpu\")]\n\n else:\n devices = list(devices)\n\n return devices\n\n\ndef default_prepare_batch(\n batchdata: Dict[str, torch.Tensor],\n device: Optional[Union[str, torch.device]] = None,\n non_blocking: bool = False,\n) -> Union[Tuple[torch.Tensor, Optional[torch.Tensor]], torch.Tensor]:\n \"\"\"\n Default function to prepare the data for current iteration.\n Refer to ignite: https://github.com/pytorch/ignite/blob/v0.4.2/ignite/engine/__init__.py#L28.\n\n Returns:\n image, label(optional).\n\n \"\"\"\n if not isinstance(batchdata, dict):\n raise AssertionError(\"default prepare_batch expects dictionary input data.\")\n if isinstance(batchdata.get(CommonKeys.LABEL, None), torch.Tensor):\n return (\n batchdata[CommonKeys.IMAGE].to(device=device, non_blocking=non_blocking),\n batchdata[CommonKeys.LABEL].to(device=device, non_blocking=non_blocking),\n )\n if GanKeys.REALS in batchdata:\n return batchdata[GanKeys.REALS].to(device=device, non_blocking=non_blocking)\n return batchdata[CommonKeys.IMAGE].to(device=device, non_blocking=non_blocking), None\n\n\ndef default_make_latent(\n num_latents: int,\n latent_size: int,\n device: Optional[Union[str, torch.device]] = None,\n non_blocking: bool = False,\n) -> torch.Tensor:\n return torch.randn(num_latents, latent_size).to(device=device, non_blocking=non_blocking)\n", "path": "monai/engines/utils.py"}]} | 1,646 | 149 |
gh_patches_debug_33202 | rasdani/github-patches | git_diff | DataDog__dd-trace-py-439 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
TracePlugin not working for bottle + python 2.7.x
Hi, we have a backend using python 2.7.x, Im integrating Datadog APM following the documentation and I have the following error:
```
Traceback (most recent call last):
File "/usr/local/lib/python2.7/site-packages/bottle.py", line 764, in _handle
return route.call(**args)
File "/usr/local/lib/python2.7/site-packages/bottle.py", line 1575, in wrapper
rv = callback(*a, **ka)
File "/usr/local/lib/python2.7/site-packages/ddtrace/contrib/bottle/trace.py", line 32, in wrapped
resource = "%s %s" % (request.method, request.route.rule)
File "/usr/local/lib/python2.7/site-packages/bottle.py", line 1237, in __getattr__
raise AttributeError('Attribute %r not defined.' % name)
```
when I go to trace.py line 32, I find this:
`resource = "%s %s" % (request.method, request.route.rule)`
Somehow `request.route` doesn't exist; I think that is probably related to a different Python or bottle version being used on our end.
Anyway, `route` is already provided as a parameter for this method (`TracePlugin.apply(self, callback, route):`), so there's no need to call `request.route.rule`; you can call `route.rule` instead:
`resource = "%s %s" % (request.method, route.rule)`
Otherwise let me know if you have more details about this error and what's the right way to solve it,
thanks
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ddtrace/contrib/bottle/trace.py`
Content:
```
1
2 # 3p
3 from bottle import response, request
4
5 # stdlib
6 import ddtrace
7 from ddtrace.ext import http, AppTypes
8
9 # project
10 from ...propagation.http import HTTPPropagator
11
12 class TracePlugin(object):
13
14 name = 'trace'
15 api = 2
16
17 def __init__(self, service="bottle", tracer=None, distributed_tracing=None):
18 self.service = service
19 self.tracer = tracer or ddtrace.tracer
20 self.tracer.set_service_info(
21 service=service,
22 app="bottle",
23 app_type=AppTypes.web)
24 self.distributed_tracing = distributed_tracing
25
26 def apply(self, callback, route):
27
28 def wrapped(*args, **kwargs):
29 if not self.tracer or not self.tracer.enabled:
30 return callback(*args, **kwargs)
31
32 resource = "%s %s" % (request.method, request.route.rule)
33
34 # Propagate headers such as x-datadog-trace-id.
35 if self.distributed_tracing:
36 propagator = HTTPPropagator()
37 context = propagator.extract(request.headers)
38 if context.trace_id:
39 self.tracer.context_provider.activate(context)
40
41 with self.tracer.trace("bottle.request", service=self.service, resource=resource) as s:
42 code = 0
43 try:
44 return callback(*args, **kwargs)
45 except Exception:
46 # bottle doesn't always translate unhandled exceptions, so
47 # we mark it here.
48 code = 500
49 raise
50 finally:
51 s.set_tag(http.STATUS_CODE, code or response.status_code)
52 s.set_tag(http.URL, request.path)
53 s.set_tag(http.METHOD, request.method)
54
55 return wrapped
56
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ddtrace/contrib/bottle/trace.py b/ddtrace/contrib/bottle/trace.py
--- a/ddtrace/contrib/bottle/trace.py
+++ b/ddtrace/contrib/bottle/trace.py
@@ -1,4 +1,3 @@
-
# 3p
from bottle import response, request
@@ -10,18 +9,18 @@
from ...propagation.http import HTTPPropagator
class TracePlugin(object):
-
name = 'trace'
api = 2
- def __init__(self, service="bottle", tracer=None, distributed_tracing=None):
+ def __init__(self, service='bottle', tracer=None, distributed_tracing=None):
self.service = service
self.tracer = tracer or ddtrace.tracer
+ self.distributed_tracing = distributed_tracing
self.tracer.set_service_info(
service=service,
- app="bottle",
- app_type=AppTypes.web)
- self.distributed_tracing = distributed_tracing
+ app='bottle',
+ app_type=AppTypes.web,
+ )
def apply(self, callback, route):
@@ -29,7 +28,7 @@
if not self.tracer or not self.tracer.enabled:
return callback(*args, **kwargs)
- resource = "%s %s" % (request.method, request.route.rule)
+ resource = '{} {}'.format(request.method, route.rule)
# Propagate headers such as x-datadog-trace-id.
if self.distributed_tracing:
@@ -38,7 +37,7 @@
if context.trace_id:
self.tracer.context_provider.activate(context)
- with self.tracer.trace("bottle.request", service=self.service, resource=resource) as s:
+ with self.tracer.trace('bottle.request', service=self.service, resource=resource) as s:
code = 0
try:
return callback(*args, **kwargs)
| {"golden_diff": "diff --git a/ddtrace/contrib/bottle/trace.py b/ddtrace/contrib/bottle/trace.py\n--- a/ddtrace/contrib/bottle/trace.py\n+++ b/ddtrace/contrib/bottle/trace.py\n@@ -1,4 +1,3 @@\n-\n # 3p\n from bottle import response, request\n \n@@ -10,18 +9,18 @@\n from ...propagation.http import HTTPPropagator\n \n class TracePlugin(object):\n-\n name = 'trace'\n api = 2\n \n- def __init__(self, service=\"bottle\", tracer=None, distributed_tracing=None):\n+ def __init__(self, service='bottle', tracer=None, distributed_tracing=None):\n self.service = service\n self.tracer = tracer or ddtrace.tracer\n+ self.distributed_tracing = distributed_tracing\n self.tracer.set_service_info(\n service=service,\n- app=\"bottle\",\n- app_type=AppTypes.web)\n- self.distributed_tracing = distributed_tracing\n+ app='bottle',\n+ app_type=AppTypes.web,\n+ )\n \n def apply(self, callback, route):\n \n@@ -29,7 +28,7 @@\n if not self.tracer or not self.tracer.enabled:\n return callback(*args, **kwargs)\n \n- resource = \"%s %s\" % (request.method, request.route.rule)\n+ resource = '{} {}'.format(request.method, route.rule)\n \n # Propagate headers such as x-datadog-trace-id.\n if self.distributed_tracing:\n@@ -38,7 +37,7 @@\n if context.trace_id:\n self.tracer.context_provider.activate(context)\n \n- with self.tracer.trace(\"bottle.request\", service=self.service, resource=resource) as s:\n+ with self.tracer.trace('bottle.request', service=self.service, resource=resource) as s:\n code = 0\n try:\n return callback(*args, **kwargs)\n", "issue": "TracePlugin not working for bottle + python 2.7.x\nHi, we have a backend using python 2.7.x, Im integrating Datadog APM following the documentation and I have the following error:\r\n```\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python2.7/site-packages/bottle.py\", line 764, in _handle\r\n return route.call(**args)\r\n File \"/usr/local/lib/python2.7/site-packages/bottle.py\", line 1575, in wrapper\r\n rv = callback(*a, **ka)\r\n File \"/usr/local/lib/python2.7/site-packages/ddtrace/contrib/bottle/trace.py\", line 32, in wrapped\r\n resource = \"%s %s\" % (request.method, request.route.rule)\r\n File \"/usr/local/lib/python2.7/site-packages/bottle.py\", line 1237, in __getattr__\r\n raise AttributeError('Attribute %r not defined.' 
% name)\r\n```\r\n\r\nwhen I go to trace.py line 32, I find this:\r\n`resource = \"%s %s\" % (request.method, request.route.rule)`\r\n\r\nsomehow request.route doesn't exist, I think that probably is related to a different python or bottle version being used on our end.\r\nAnyway `route` is already provided as a parameter for this method (`TracePlugin.apply(self, callback, route):`) so there's no need to call `request.route.rule`, you can call `route.rule` instead,\r\n`resource = \"%s %s\" % (request.method, route.rule)`\r\n\r\nOtherwise let me know if you have more details about this error and what's the right way to solve it,\r\nthanks\r\n\n", "before_files": [{"content": "\n# 3p\nfrom bottle import response, request\n\n# stdlib\nimport ddtrace\nfrom ddtrace.ext import http, AppTypes\n\n# project\nfrom ...propagation.http import HTTPPropagator\n\nclass TracePlugin(object):\n\n name = 'trace'\n api = 2\n\n def __init__(self, service=\"bottle\", tracer=None, distributed_tracing=None):\n self.service = service\n self.tracer = tracer or ddtrace.tracer\n self.tracer.set_service_info(\n service=service,\n app=\"bottle\",\n app_type=AppTypes.web)\n self.distributed_tracing = distributed_tracing\n\n def apply(self, callback, route):\n\n def wrapped(*args, **kwargs):\n if not self.tracer or not self.tracer.enabled:\n return callback(*args, **kwargs)\n\n resource = \"%s %s\" % (request.method, request.route.rule)\n\n # Propagate headers such as x-datadog-trace-id.\n if self.distributed_tracing:\n propagator = HTTPPropagator()\n context = propagator.extract(request.headers)\n if context.trace_id:\n self.tracer.context_provider.activate(context)\n\n with self.tracer.trace(\"bottle.request\", service=self.service, resource=resource) as s:\n code = 0\n try:\n return callback(*args, **kwargs)\n except Exception:\n # bottle doesn't always translate unhandled exceptions, so\n # we mark it here.\n code = 500\n raise\n finally:\n s.set_tag(http.STATUS_CODE, code or response.status_code)\n s.set_tag(http.URL, request.path)\n s.set_tag(http.METHOD, request.method)\n\n return wrapped\n", "path": "ddtrace/contrib/bottle/trace.py"}], "after_files": [{"content": "# 3p\nfrom bottle import response, request\n\n# stdlib\nimport ddtrace\nfrom ddtrace.ext import http, AppTypes\n\n# project\nfrom ...propagation.http import HTTPPropagator\n\nclass TracePlugin(object):\n name = 'trace'\n api = 2\n\n def __init__(self, service='bottle', tracer=None, distributed_tracing=None):\n self.service = service\n self.tracer = tracer or ddtrace.tracer\n self.distributed_tracing = distributed_tracing\n self.tracer.set_service_info(\n service=service,\n app='bottle',\n app_type=AppTypes.web,\n )\n\n def apply(self, callback, route):\n\n def wrapped(*args, **kwargs):\n if not self.tracer or not self.tracer.enabled:\n return callback(*args, **kwargs)\n\n resource = '{} {}'.format(request.method, route.rule)\n\n # Propagate headers such as x-datadog-trace-id.\n if self.distributed_tracing:\n propagator = HTTPPropagator()\n context = propagator.extract(request.headers)\n if context.trace_id:\n self.tracer.context_provider.activate(context)\n\n with self.tracer.trace('bottle.request', service=self.service, resource=resource) as s:\n code = 0\n try:\n return callback(*args, **kwargs)\n except Exception:\n # bottle doesn't always translate unhandled exceptions, so\n # we mark it here.\n code = 500\n raise\n finally:\n s.set_tag(http.STATUS_CODE, code or response.status_code)\n s.set_tag(http.URL, request.path)\n s.set_tag(http.METHOD, 
request.method)\n\n return wrapped\n", "path": "ddtrace/contrib/bottle/trace.py"}]} | 1,105 | 438 |
gh_patches_debug_3827 | rasdani/github-patches | git_diff | kivy__python-for-android-1427 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Pin pyjnius version
This pull request adds a version pin for pyjnius as requested here: #1415
Please note I am proposing this as a **permanent measure** because this is such a core component - not this specific version of course, but that it is always pinned.
Even if you just randomly bump the version up in any random commit without checking, this is a huge improvement: it will prevent p4a master builds from randomly failing out of the blue *when not even changing the commit*, and people will be able to go back to an earlier p4a master commit to avoid sudden pyjnius breakages instead of patching around in the recipes folder (which depending on the build pipeline might be quite a time waster to do, especially compared to just going back to a known working p4a master commit).
Summed up, please pin this, carelessly bump it whenever, and have less unhappy users. :smile:
(And I'm not proposing pinning *everything*, I know you have way too many recipes and people would forget to bump it, I understand - but at least the core components like pyjnius, would that possibly sound feasible?)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pythonforandroid/recipes/pyjnius/__init__.py`
Content:
```
1 from pythonforandroid.recipe import CythonRecipe
2 from pythonforandroid.toolchain import shprint, current_directory, info
3 from pythonforandroid.patching import will_build
4 import sh
5 from os.path import join
6
7
8 class PyjniusRecipe(CythonRecipe):
9 version = 'master'
10 url = 'https://github.com/kivy/pyjnius/archive/{version}.zip'
11 name = 'pyjnius'
12 depends = [('python2', 'python3crystax'), ('genericndkbuild', 'sdl2', 'sdl'), 'six']
13 site_packages_name = 'jnius'
14
15 patches = [('sdl2_jnienv_getter.patch', will_build('sdl2')),
16 ('genericndkbuild_jnienv_getter.patch', will_build('genericndkbuild'))]
17
18 def postbuild_arch(self, arch):
19 super(PyjniusRecipe, self).postbuild_arch(arch)
20 info('Copying pyjnius java class to classes build dir')
21 with current_directory(self.get_build_dir(arch.arch)):
22 shprint(sh.cp, '-a', join('jnius', 'src', 'org'), self.ctx.javaclass_dir)
23
24
25 recipe = PyjniusRecipe()
26
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pythonforandroid/recipes/pyjnius/__init__.py b/pythonforandroid/recipes/pyjnius/__init__.py
--- a/pythonforandroid/recipes/pyjnius/__init__.py
+++ b/pythonforandroid/recipes/pyjnius/__init__.py
@@ -6,7 +6,7 @@
class PyjniusRecipe(CythonRecipe):
- version = 'master'
+ version = '1.1.3'
url = 'https://github.com/kivy/pyjnius/archive/{version}.zip'
name = 'pyjnius'
depends = [('python2', 'python3crystax'), ('genericndkbuild', 'sdl2', 'sdl'), 'six']
| {"golden_diff": "diff --git a/pythonforandroid/recipes/pyjnius/__init__.py b/pythonforandroid/recipes/pyjnius/__init__.py\n--- a/pythonforandroid/recipes/pyjnius/__init__.py\n+++ b/pythonforandroid/recipes/pyjnius/__init__.py\n@@ -6,7 +6,7 @@\n \n \n class PyjniusRecipe(CythonRecipe):\n- version = 'master'\n+ version = '1.1.3'\n url = 'https://github.com/kivy/pyjnius/archive/{version}.zip'\n name = 'pyjnius'\n depends = [('python2', 'python3crystax'), ('genericndkbuild', 'sdl2', 'sdl'), 'six']\n", "issue": "Pin pyjnius version\nThis pull request adds a version pin for pyjnius as requested here: #1415 \r\n\r\nPlease note I am proposing this as a **permanent measure** because this is such a core component - not this specific version of course, but that it is always pinned.\r\n\r\nEven if you just randomly bump the version up in any random commit without checking, this is a huge improvement: it will prevent p4a master builds from randomly failing out of the blue *when not even changing the commit*, and people will be able to go back to an earlier p4a master commit to avoid sudden pyjnius breakages instead of patching around in the recipes folder (which depending on the build pipeline might be quite a time waster to do, especially compared to just going back to a known working p4a master commit).\r\n\r\nSummed up, please pin this, carelessly bump it whenever, and have less unhappy users. :smile:\r\n\r\n(And I'm not proposing pinning *everything*, I know you have way too many recipes and people would forget to bump it, I understand - but at least the core components like pyjnius, would that possibly sound feasible?)\n", "before_files": [{"content": "from pythonforandroid.recipe import CythonRecipe\nfrom pythonforandroid.toolchain import shprint, current_directory, info\nfrom pythonforandroid.patching import will_build\nimport sh\nfrom os.path import join\n\n\nclass PyjniusRecipe(CythonRecipe):\n version = 'master'\n url = 'https://github.com/kivy/pyjnius/archive/{version}.zip'\n name = 'pyjnius'\n depends = [('python2', 'python3crystax'), ('genericndkbuild', 'sdl2', 'sdl'), 'six']\n site_packages_name = 'jnius'\n\n patches = [('sdl2_jnienv_getter.patch', will_build('sdl2')),\n ('genericndkbuild_jnienv_getter.patch', will_build('genericndkbuild'))]\n\n def postbuild_arch(self, arch):\n super(PyjniusRecipe, self).postbuild_arch(arch)\n info('Copying pyjnius java class to classes build dir')\n with current_directory(self.get_build_dir(arch.arch)):\n shprint(sh.cp, '-a', join('jnius', 'src', 'org'), self.ctx.javaclass_dir)\n\n\nrecipe = PyjniusRecipe()\n", "path": "pythonforandroid/recipes/pyjnius/__init__.py"}], "after_files": [{"content": "from pythonforandroid.recipe import CythonRecipe\nfrom pythonforandroid.toolchain import shprint, current_directory, info\nfrom pythonforandroid.patching import will_build\nimport sh\nfrom os.path import join\n\n\nclass PyjniusRecipe(CythonRecipe):\n version = '1.1.3'\n url = 'https://github.com/kivy/pyjnius/archive/{version}.zip'\n name = 'pyjnius'\n depends = [('python2', 'python3crystax'), ('genericndkbuild', 'sdl2', 'sdl'), 'six']\n site_packages_name = 'jnius'\n\n patches = [('sdl2_jnienv_getter.patch', will_build('sdl2')),\n ('genericndkbuild_jnienv_getter.patch', will_build('genericndkbuild'))]\n\n def postbuild_arch(self, arch):\n super(PyjniusRecipe, self).postbuild_arch(arch)\n info('Copying pyjnius java class to classes build dir')\n with current_directory(self.get_build_dir(arch.arch)):\n shprint(sh.cp, '-a', join('jnius', 'src', 'org'), 
self.ctx.javaclass_dir)\n\n\nrecipe = PyjniusRecipe()\n", "path": "pythonforandroid/recipes/pyjnius/__init__.py"}]} | 812 | 157 |
gh_patches_debug_39067 | rasdani/github-patches | git_diff | ultrabug__py3status-958 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
selinux module install check refers to binary not in user path
The selinux module tries to check whether selinux is installed.
`if not self.py3.check_commands(['getenforce']):`
This effectively runs `which getenforce`. The getenforce binary is installed under /usr/sbin (Gentoo, and also Fedora as far as I read), which is not in the regular user path and therefore not found, resulting in the message `selinux: isn't installed`.
Removing the check makes the module work as expected.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `py3status/modules/selinux.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 """
3 Display SELinux state.
4
5 This module displays the state of SELinux on your machine:
6 Enforcing (good), Permissive (bad), or Disabled (bad).
7
8 Configuration parameters:
9 cache_timeout: refresh interval for this module (default 10)
10 format: display format for this module (default 'selinux: {state}')
11 state_disabled: show when no SELinux policy is loaded.
12 (default 'disabled')
13 state_enforcing: show when SELinux security policy is enforced.
14 (default 'enforcing')
15 state_permissive: show when SELinux prints warnings instead of enforcing.
16 (default 'permissive')
17
18 Format placeholders:
19 {state} SELinux state
20
21 Color options:
22 color_bad: Enforcing
23 color_degraded: Permissive
24 color_good: Disabled
25
26 Requires:
27 libselinux-python: SELinux python bindings for libselinux
28
29 @author bstinsonmhk
30 @license BSD
31
32 SAMPLE OUTPUT
33 {'full_text': 'selinux: enforcing', 'color': '#00FF00'}
34
35 permissive
36 {'full_text': 'selinux: permissive', 'color': '#FFFF00'}
37
38 disabled
39 {'full_text': 'selinux: disabled', 'color': '#FF0000'}
40 """
41 from __future__ import absolute_import
42 import selinux
43 STRING_UNAVAILABLE = "selinux: isn't installed"
44
45
46 class Py3status:
47 """
48 """
49 # available configuration parameters
50 cache_timeout = 10
51 format = 'selinux: {state}'
52 state_disabled = 'disabled'
53 state_enforcing = 'enforcing'
54 state_permissive = 'permissive'
55
56 def selinux(self):
57 if not self.py3.check_commands(['getenforce']):
58 return {'cache_until': self.py3.CACHE_FOREVER,
59 'color': self.py3.COLOR_BAD,
60 'full_text': STRING_UNAVAILABLE}
61 try:
62 if selinux.security_getenforce():
63 state = self.state_enforcing
64 color = self.py3.COLOR_GOOD
65 else:
66 state = self.state_permissive
67 color = self.py3.COLOR_BAD
68 except:
69 state = self.state_disabled
70 color = self.py3.COLOR_BAD
71
72 return {'cached_until': self.py3.time_in(self.cache_timeout),
73 'full_text': self.py3.safe_format(self.format, {'state': state}),
74 'color': color}
75
76
77 if __name__ == '__main__':
78 """
79 Run module in test mode.
80 """
81 from py3status.module_test import module_test
82 module_test(Py3status)
83
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/py3status/modules/selinux.py b/py3status/modules/selinux.py
--- a/py3status/modules/selinux.py
+++ b/py3status/modules/selinux.py
@@ -3,11 +3,11 @@
Display SELinux state.
This module displays the state of SELinux on your machine:
-Enforcing (good), Permissive (bad), or Disabled (bad).
+Enforcing (good), Permissive (degraded), or Disabled (bad).
Configuration parameters:
cache_timeout: refresh interval for this module (default 10)
- format: display format for this module (default 'selinux: {state}')
+ format: display format for this module (default 'SELinux: {state}')
state_disabled: show when no SELinux policy is loaded.
(default 'disabled')
state_enforcing: show when SELinux security policy is enforced.
@@ -30,17 +30,17 @@
@license BSD
SAMPLE OUTPUT
-{'full_text': 'selinux: enforcing', 'color': '#00FF00'}
+{'full_text': 'SELinux: enforcing', 'color': '#00FF00'}
permissive
-{'full_text': 'selinux: permissive', 'color': '#FFFF00'}
+{'full_text': 'SELinux: permissive', 'color': '#FFFF00'}
disabled
-{'full_text': 'selinux: disabled', 'color': '#FF0000'}
+{'full_text': 'SELinux: disabled', 'color': '#FF0000'}
"""
+
from __future__ import absolute_import
import selinux
-STRING_UNAVAILABLE = "selinux: isn't installed"
class Py3status:
@@ -48,30 +48,28 @@
"""
# available configuration parameters
cache_timeout = 10
- format = 'selinux: {state}'
+ format = 'SELinux: {state}'
state_disabled = 'disabled'
state_enforcing = 'enforcing'
state_permissive = 'permissive'
def selinux(self):
- if not self.py3.check_commands(['getenforce']):
- return {'cache_until': self.py3.CACHE_FOREVER,
- 'color': self.py3.COLOR_BAD,
- 'full_text': STRING_UNAVAILABLE}
try:
if selinux.security_getenforce():
state = self.state_enforcing
color = self.py3.COLOR_GOOD
else:
state = self.state_permissive
- color = self.py3.COLOR_BAD
+ color = self.py3.COLOR_DEGRADED
except:
state = self.state_disabled
color = self.py3.COLOR_BAD
- return {'cached_until': self.py3.time_in(self.cache_timeout),
- 'full_text': self.py3.safe_format(self.format, {'state': state}),
- 'color': color}
+ return {
+ 'cached_until': self.py3.time_in(self.cache_timeout),
+ 'full_text': self.py3.safe_format(self.format, {'state': state}),
+ 'color': color
+ }
if __name__ == '__main__':
| {"golden_diff": "diff --git a/py3status/modules/selinux.py b/py3status/modules/selinux.py\n--- a/py3status/modules/selinux.py\n+++ b/py3status/modules/selinux.py\n@@ -3,11 +3,11 @@\n Display SELinux state.\n \n This module displays the state of SELinux on your machine:\n-Enforcing (good), Permissive (bad), or Disabled (bad).\n+Enforcing (good), Permissive (degraded), or Disabled (bad).\n \n Configuration parameters:\n cache_timeout: refresh interval for this module (default 10)\n- format: display format for this module (default 'selinux: {state}')\n+ format: display format for this module (default 'SELinux: {state}')\n state_disabled: show when no SELinux policy is loaded.\n (default 'disabled')\n state_enforcing: show when SELinux security policy is enforced.\n@@ -30,17 +30,17 @@\n @license BSD\n \n SAMPLE OUTPUT\n-{'full_text': 'selinux: enforcing', 'color': '#00FF00'}\n+{'full_text': 'SELinux: enforcing', 'color': '#00FF00'}\n \n permissive\n-{'full_text': 'selinux: permissive', 'color': '#FFFF00'}\n+{'full_text': 'SELinux: permissive', 'color': '#FFFF00'}\n \n disabled\n-{'full_text': 'selinux: disabled', 'color': '#FF0000'}\n+{'full_text': 'SELinux: disabled', 'color': '#FF0000'}\n \"\"\"\n+\n from __future__ import absolute_import\n import selinux\n-STRING_UNAVAILABLE = \"selinux: isn't installed\"\n \n \n class Py3status:\n@@ -48,30 +48,28 @@\n \"\"\"\n # available configuration parameters\n cache_timeout = 10\n- format = 'selinux: {state}'\n+ format = 'SELinux: {state}'\n state_disabled = 'disabled'\n state_enforcing = 'enforcing'\n state_permissive = 'permissive'\n \n def selinux(self):\n- if not self.py3.check_commands(['getenforce']):\n- return {'cache_until': self.py3.CACHE_FOREVER,\n- 'color': self.py3.COLOR_BAD,\n- 'full_text': STRING_UNAVAILABLE}\n try:\n if selinux.security_getenforce():\n state = self.state_enforcing\n color = self.py3.COLOR_GOOD\n else:\n state = self.state_permissive\n- color = self.py3.COLOR_BAD\n+ color = self.py3.COLOR_DEGRADED\n except:\n state = self.state_disabled\n color = self.py3.COLOR_BAD\n \n- return {'cached_until': self.py3.time_in(self.cache_timeout),\n- 'full_text': self.py3.safe_format(self.format, {'state': state}),\n- 'color': color}\n+ return {\n+ 'cached_until': self.py3.time_in(self.cache_timeout),\n+ 'full_text': self.py3.safe_format(self.format, {'state': state}),\n+ 'color': color\n+ }\n \n \n if __name__ == '__main__':\n", "issue": "selinux module install check refers to binary not in user path\nThe selinux module tries to check whether selinux is installed.\r\n\r\n`if not self.py3.check_commands(['getenforce']):`\r\n\r\nthis effectively runs `which getenforce`. 
The getenforce binary is installed under /usr/sbin (gentoo, also fedora as far as i read) which is not in the regular user path and therefore not found, resulting in the message `selinux: isn't installed`.\r\nRemoving the check makes the module work as expected.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\nDisplay SELinux state.\n\nThis module displays the state of SELinux on your machine:\nEnforcing (good), Permissive (bad), or Disabled (bad).\n\nConfiguration parameters:\n cache_timeout: refresh interval for this module (default 10)\n format: display format for this module (default 'selinux: {state}')\n state_disabled: show when no SELinux policy is loaded.\n (default 'disabled')\n state_enforcing: show when SELinux security policy is enforced.\n (default 'enforcing')\n state_permissive: show when SELinux prints warnings instead of enforcing.\n (default 'permissive')\n\nFormat placeholders:\n {state} SELinux state\n\nColor options:\n color_bad: Enforcing\n color_degraded: Permissive\n color_good: Disabled\n\nRequires:\n libselinux-python: SELinux python bindings for libselinux\n\n@author bstinsonmhk\n@license BSD\n\nSAMPLE OUTPUT\n{'full_text': 'selinux: enforcing', 'color': '#00FF00'}\n\npermissive\n{'full_text': 'selinux: permissive', 'color': '#FFFF00'}\n\ndisabled\n{'full_text': 'selinux: disabled', 'color': '#FF0000'}\n\"\"\"\nfrom __future__ import absolute_import\nimport selinux\nSTRING_UNAVAILABLE = \"selinux: isn't installed\"\n\n\nclass Py3status:\n \"\"\"\n \"\"\"\n # available configuration parameters\n cache_timeout = 10\n format = 'selinux: {state}'\n state_disabled = 'disabled'\n state_enforcing = 'enforcing'\n state_permissive = 'permissive'\n\n def selinux(self):\n if not self.py3.check_commands(['getenforce']):\n return {'cache_until': self.py3.CACHE_FOREVER,\n 'color': self.py3.COLOR_BAD,\n 'full_text': STRING_UNAVAILABLE}\n try:\n if selinux.security_getenforce():\n state = self.state_enforcing\n color = self.py3.COLOR_GOOD\n else:\n state = self.state_permissive\n color = self.py3.COLOR_BAD\n except:\n state = self.state_disabled\n color = self.py3.COLOR_BAD\n\n return {'cached_until': self.py3.time_in(self.cache_timeout),\n 'full_text': self.py3.safe_format(self.format, {'state': state}),\n 'color': color}\n\n\nif __name__ == '__main__':\n \"\"\"\n Run module in test mode.\n \"\"\"\n from py3status.module_test import module_test\n module_test(Py3status)\n", "path": "py3status/modules/selinux.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\nDisplay SELinux state.\n\nThis module displays the state of SELinux on your machine:\nEnforcing (good), Permissive (degraded), or Disabled (bad).\n\nConfiguration parameters:\n cache_timeout: refresh interval for this module (default 10)\n format: display format for this module (default 'SELinux: {state}')\n state_disabled: show when no SELinux policy is loaded.\n (default 'disabled')\n state_enforcing: show when SELinux security policy is enforced.\n (default 'enforcing')\n state_permissive: show when SELinux prints warnings instead of enforcing.\n (default 'permissive')\n\nFormat placeholders:\n {state} SELinux state\n\nColor options:\n color_bad: Enforcing\n color_degraded: Permissive\n color_good: Disabled\n\nRequires:\n libselinux-python: SELinux python bindings for libselinux\n\n@author bstinsonmhk\n@license BSD\n\nSAMPLE OUTPUT\n{'full_text': 'SELinux: enforcing', 'color': '#00FF00'}\n\npermissive\n{'full_text': 'SELinux: permissive', 'color': '#FFFF00'}\n\ndisabled\n{'full_text': 
'SELinux: disabled', 'color': '#FF0000'}\n\"\"\"\n\nfrom __future__ import absolute_import\nimport selinux\n\n\nclass Py3status:\n \"\"\"\n \"\"\"\n # available configuration parameters\n cache_timeout = 10\n format = 'SELinux: {state}'\n state_disabled = 'disabled'\n state_enforcing = 'enforcing'\n state_permissive = 'permissive'\n\n def selinux(self):\n try:\n if selinux.security_getenforce():\n state = self.state_enforcing\n color = self.py3.COLOR_GOOD\n else:\n state = self.state_permissive\n color = self.py3.COLOR_DEGRADED\n except:\n state = self.state_disabled\n color = self.py3.COLOR_BAD\n\n return {\n 'cached_until': self.py3.time_in(self.cache_timeout),\n 'full_text': self.py3.safe_format(self.format, {'state': state}),\n 'color': color\n }\n\n\nif __name__ == '__main__':\n \"\"\"\n Run module in test mode.\n \"\"\"\n from py3status.module_test import module_test\n module_test(Py3status)\n", "path": "py3status/modules/selinux.py"}]} | 1,091 | 694 |
gh_patches_debug_1998 | rasdani/github-patches | git_diff | PrefectHQ__prefect-2609 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Consider promoting `case` to the top level
## Current behavior
*Please describe how the feature works today*
Currently, the `case` context manager must be imported from `prefect.tasks.control_flow.case`.
## Proposed behavior
*Please describe your proposed change to the current behavior*
I think we should consider promoting `case` to being importable as `prefect.case`, since it forms a fundamental part of the Python API. Other control flow utilities have "task-like" semantics (even if they are called as functions), and it's more appropriate for them to live in a `tasks` submodule. However, like `task`, `Flow`, `tags`, and `unmapped`, I believe `case` represents a significant component of Prefect's Python syntax and warrants top-level availability.
## Example
*Please give an example of how the enhancement would be useful*
```
from prefect import Flow, case
with Flow("example"):
with case(is_this_easy, True):
do_stuff()
with prefect.tasks.control_flow.case(is_this_easy, False):
do_other_stuff()
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/prefect/__init__.py`
Content:
```
1 import prefect.utilities
2 from prefect.configuration import config
3
4 from prefect.utilities.context import context
5
6 from prefect.client import Client
7 import prefect.schedules
8 import prefect.triggers
9 import prefect.environments
10
11 from prefect.core import Task, Flow, Parameter
12 import prefect.engine
13 import prefect.tasks
14 from prefect.utilities.tasks import task, tags, unmapped
15
16 import prefect.serialization
17
18 import prefect.agent
19
20 from ._version import get_versions
21
22 __version__ = get_versions()["version"] # type: ignore
23 del get_versions
24
25 try:
26 import signal as _signal
27 from ._siginfo import sig_handler as _sig_handler
28
29 _signal.signal(29, _sig_handler)
30 except:
31 pass
32
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/prefect/__init__.py b/src/prefect/__init__.py
--- a/src/prefect/__init__.py
+++ b/src/prefect/__init__.py
@@ -11,6 +11,7 @@
from prefect.core import Task, Flow, Parameter
import prefect.engine
import prefect.tasks
+from prefect.tasks.control_flow import case
from prefect.utilities.tasks import task, tags, unmapped
import prefect.serialization
| {"golden_diff": "diff --git a/src/prefect/__init__.py b/src/prefect/__init__.py\n--- a/src/prefect/__init__.py\n+++ b/src/prefect/__init__.py\n@@ -11,6 +11,7 @@\n from prefect.core import Task, Flow, Parameter\n import prefect.engine\n import prefect.tasks\n+from prefect.tasks.control_flow import case\n from prefect.utilities.tasks import task, tags, unmapped\n \n import prefect.serialization\n", "issue": "Consider promoting `case` to the top level\n## Current behavior\r\n*Please describe how the feature works today*\r\nCurrently, the `case` context manager must be imported from `prefect.tasks.control_flow.case`.\r\n\r\n\r\n\r\n## Proposed behavior\r\n*Please describe your proposed change to the current behavior*\r\nI think we should consider promoting `case` to being importable as `prefect.case`, since it forms a fundamental part of the Python API. Other control flow utilities have \"task-like\" semantics (even if they are called as functions), and it's more appropriate for them to live in a `tasks` submodule. However, like `task`, `Flow`, `tags`, and `unmapped`, I believe `case` represents a significant component of Prefect's Python syntax and warrants top-level availability.\r\n\r\n\r\n\r\n\r\n## Example\r\n*Please give an example of how the enhancement would be useful*\r\n```\r\nfrom prefect import Flow, case\r\n\r\nwith Flow(\"example\"):\r\n with case(is_this_easy, True):\r\n do_stuff()\r\n\r\n with prefect.tasks.control_flow.case(is_this_easy, False):\r\n do_other_stuff()\r\n```\n", "before_files": [{"content": "import prefect.utilities\nfrom prefect.configuration import config\n\nfrom prefect.utilities.context import context\n\nfrom prefect.client import Client\nimport prefect.schedules\nimport prefect.triggers\nimport prefect.environments\n\nfrom prefect.core import Task, Flow, Parameter\nimport prefect.engine\nimport prefect.tasks\nfrom prefect.utilities.tasks import task, tags, unmapped\n\nimport prefect.serialization\n\nimport prefect.agent\n\nfrom ._version import get_versions\n\n__version__ = get_versions()[\"version\"] # type: ignore\ndel get_versions\n\ntry:\n import signal as _signal\n from ._siginfo import sig_handler as _sig_handler\n\n _signal.signal(29, _sig_handler)\nexcept:\n pass\n", "path": "src/prefect/__init__.py"}], "after_files": [{"content": "import prefect.utilities\nfrom prefect.configuration import config\n\nfrom prefect.utilities.context import context\n\nfrom prefect.client import Client\nimport prefect.schedules\nimport prefect.triggers\nimport prefect.environments\n\nfrom prefect.core import Task, Flow, Parameter\nimport prefect.engine\nimport prefect.tasks\nfrom prefect.tasks.control_flow import case\nfrom prefect.utilities.tasks import task, tags, unmapped\n\nimport prefect.serialization\n\nimport prefect.agent\n\nfrom ._version import get_versions\n\n__version__ = get_versions()[\"version\"] # type: ignore\ndel get_versions\n\ntry:\n import signal as _signal\n from ._siginfo import sig_handler as _sig_handler\n\n _signal.signal(29, _sig_handler)\nexcept:\n pass\n", "path": "src/prefect/__init__.py"}]} | 694 | 99 |
gh_patches_debug_20034 | rasdani/github-patches | git_diff | python-discord__bot-790 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Filters should not apply to staff members
Currently, we have certain channels whitelisted for certain filters, and some filters apply to all members; even Owners cannot post filetypes that aren't on the whitelist!
Please change this so that absolutely all filters will ignore all staff members. It is not necessary to whitelist staff channels or to keep any other kind of exceptions to the filters once this very simple exception has been added, so please clean up any such exceptions while you're at it.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bot/cogs/antimalware.py`
Content:
```
1 import logging
2
3 from discord import Embed, Message, NotFound
4 from discord.ext.commands import Cog
5
6 from bot.bot import Bot
7 from bot.constants import AntiMalware as AntiMalwareConfig, Channels, URLs
8
9 log = logging.getLogger(__name__)
10
11
12 class AntiMalware(Cog):
13 """Delete messages which contain attachments with non-whitelisted file extensions."""
14
15 def __init__(self, bot: Bot):
16 self.bot = bot
17
18 @Cog.listener()
19 async def on_message(self, message: Message) -> None:
20 """Identify messages with prohibited attachments."""
21 if not message.attachments:
22 return
23
24 embed = Embed()
25 for attachment in message.attachments:
26 filename = attachment.filename.lower()
27 if filename.endswith('.py'):
28 embed.description = (
29 f"It looks like you tried to attach a Python file - please "
30 f"use a code-pasting service such as {URLs.site_schema}{URLs.site_paste}"
31 )
32 break # Other detections irrelevant because we prioritize the .py message.
33 if not filename.endswith(tuple(AntiMalwareConfig.whitelist)):
34 whitelisted_types = ', '.join(AntiMalwareConfig.whitelist)
35 meta_channel = self.bot.get_channel(Channels.meta)
36 embed.description = (
37 f"It looks like you tried to attach a file type that we "
38 f"do not allow. We currently allow the following file "
39 f"types: **{whitelisted_types}**. \n\n Feel free to ask "
40 f"in {meta_channel.mention} if you think this is a mistake."
41 )
42 if embed.description:
43 await message.channel.send(f"Hey {message.author.mention}!", embed=embed)
44
45 # Delete the offending message:
46 try:
47 await message.delete()
48 except NotFound:
49 log.info(f"Tried to delete message `{message.id}`, but message could not be found.")
50
51
52 def setup(bot: Bot) -> None:
53 """Load the AntiMalware cog."""
54 bot.add_cog(AntiMalware(bot))
55
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/bot/cogs/antimalware.py b/bot/cogs/antimalware.py
--- a/bot/cogs/antimalware.py
+++ b/bot/cogs/antimalware.py
@@ -4,7 +4,7 @@
from discord.ext.commands import Cog
from bot.bot import Bot
-from bot.constants import AntiMalware as AntiMalwareConfig, Channels, URLs
+from bot.constants import AntiMalware as AntiMalwareConfig, Channels, STAFF_ROLES, URLs
log = logging.getLogger(__name__)
@@ -18,7 +18,13 @@
@Cog.listener()
async def on_message(self, message: Message) -> None:
"""Identify messages with prohibited attachments."""
- if not message.attachments:
+ # Return when message don't have attachment and don't moderate DMs
+ if not message.attachments or not message.guild:
+ return
+
+ # Check if user is staff, if is, return
+ # Since we only care that roles exist to iterate over, check for the attr rather than a User/Member instance
+ if hasattr(message.author, "roles") and any(role.id in STAFF_ROLES for role in message.author.roles):
return
embed = Embed()
| {"golden_diff": "diff --git a/bot/cogs/antimalware.py b/bot/cogs/antimalware.py\n--- a/bot/cogs/antimalware.py\n+++ b/bot/cogs/antimalware.py\n@@ -4,7 +4,7 @@\n from discord.ext.commands import Cog\n \n from bot.bot import Bot\n-from bot.constants import AntiMalware as AntiMalwareConfig, Channels, URLs\n+from bot.constants import AntiMalware as AntiMalwareConfig, Channels, STAFF_ROLES, URLs\n \n log = logging.getLogger(__name__)\n \n@@ -18,7 +18,13 @@\n @Cog.listener()\n async def on_message(self, message: Message) -> None:\n \"\"\"Identify messages with prohibited attachments.\"\"\"\n- if not message.attachments:\n+ # Return when message don't have attachment and don't moderate DMs\n+ if not message.attachments or not message.guild:\n+ return\n+\n+ # Check if user is staff, if is, return\n+ # Since we only care that roles exist to iterate over, check for the attr rather than a User/Member instance\n+ if hasattr(message.author, \"roles\") and any(role.id in STAFF_ROLES for role in message.author.roles):\n return\n \n embed = Embed()\n", "issue": "Filters should not apply to staff members\nCurrently, we have certain channels whitelisted for certain filters and some filters apply to all members, even Owners cannot post filetypes that aren't on the whitelist!\r\n\r\nPlease change this so that absolutely all filters will ignore all staff members. It is not necessary to whitelist staff channels or to keep any other kind of exceptions to the filters once this very simple exception has been added, so please clean up any such exceptions while you're at it.\n", "before_files": [{"content": "import logging\n\nfrom discord import Embed, Message, NotFound\nfrom discord.ext.commands import Cog\n\nfrom bot.bot import Bot\nfrom bot.constants import AntiMalware as AntiMalwareConfig, Channels, URLs\n\nlog = logging.getLogger(__name__)\n\n\nclass AntiMalware(Cog):\n \"\"\"Delete messages which contain attachments with non-whitelisted file extensions.\"\"\"\n\n def __init__(self, bot: Bot):\n self.bot = bot\n\n @Cog.listener()\n async def on_message(self, message: Message) -> None:\n \"\"\"Identify messages with prohibited attachments.\"\"\"\n if not message.attachments:\n return\n\n embed = Embed()\n for attachment in message.attachments:\n filename = attachment.filename.lower()\n if filename.endswith('.py'):\n embed.description = (\n f\"It looks like you tried to attach a Python file - please \"\n f\"use a code-pasting service such as {URLs.site_schema}{URLs.site_paste}\"\n )\n break # Other detections irrelevant because we prioritize the .py message.\n if not filename.endswith(tuple(AntiMalwareConfig.whitelist)):\n whitelisted_types = ', '.join(AntiMalwareConfig.whitelist)\n meta_channel = self.bot.get_channel(Channels.meta)\n embed.description = (\n f\"It looks like you tried to attach a file type that we \"\n f\"do not allow. We currently allow the following file \"\n f\"types: **{whitelisted_types}**. 
\\n\\n Feel free to ask \"\n f\"in {meta_channel.mention} if you think this is a mistake.\"\n )\n if embed.description:\n await message.channel.send(f\"Hey {message.author.mention}!\", embed=embed)\n\n # Delete the offending message:\n try:\n await message.delete()\n except NotFound:\n log.info(f\"Tried to delete message `{message.id}`, but message could not be found.\")\n\n\ndef setup(bot: Bot) -> None:\n \"\"\"Load the AntiMalware cog.\"\"\"\n bot.add_cog(AntiMalware(bot))\n", "path": "bot/cogs/antimalware.py"}], "after_files": [{"content": "import logging\n\nfrom discord import Embed, Message, NotFound\nfrom discord.ext.commands import Cog\n\nfrom bot.bot import Bot\nfrom bot.constants import AntiMalware as AntiMalwareConfig, Channels, STAFF_ROLES, URLs\n\nlog = logging.getLogger(__name__)\n\n\nclass AntiMalware(Cog):\n \"\"\"Delete messages which contain attachments with non-whitelisted file extensions.\"\"\"\n\n def __init__(self, bot: Bot):\n self.bot = bot\n\n @Cog.listener()\n async def on_message(self, message: Message) -> None:\n \"\"\"Identify messages with prohibited attachments.\"\"\"\n # Return when message don't have attachment and don't moderate DMs\n if not message.attachments or not message.guild:\n return\n\n # Check if user is staff, if is, return\n # Since we only care that roles exist to iterate over, check for the attr rather than a User/Member instance\n if hasattr(message.author, \"roles\") and any(role.id in STAFF_ROLES for role in message.author.roles):\n return\n\n embed = Embed()\n for attachment in message.attachments:\n filename = attachment.filename.lower()\n if filename.endswith('.py'):\n embed.description = (\n f\"It looks like you tried to attach a Python file - please \"\n f\"use a code-pasting service such as {URLs.site_schema}{URLs.site_paste}\"\n )\n break # Other detections irrelevant because we prioritize the .py message.\n if not filename.endswith(tuple(AntiMalwareConfig.whitelist)):\n whitelisted_types = ', '.join(AntiMalwareConfig.whitelist)\n meta_channel = self.bot.get_channel(Channels.meta)\n embed.description = (\n f\"It looks like you tried to attach a file type that we \"\n f\"do not allow. We currently allow the following file \"\n f\"types: **{whitelisted_types}**. \\n\\n Feel free to ask \"\n f\"in {meta_channel.mention} if you think this is a mistake.\"\n )\n if embed.description:\n await message.channel.send(f\"Hey {message.author.mention}!\", embed=embed)\n\n # Delete the offending message:\n try:\n await message.delete()\n except NotFound:\n log.info(f\"Tried to delete message `{message.id}`, but message could not be found.\")\n\n\ndef setup(bot: Bot) -> None:\n \"\"\"Load the AntiMalware cog.\"\"\"\n bot.add_cog(AntiMalware(bot))\n", "path": "bot/cogs/antimalware.py"}]} | 897 | 277 |
gh_patches_debug_19708 | rasdani/github-patches | git_diff | translate__pootle-4422 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Migrations that alter sensitive schema fail
I am using `mysql Ver 14.14 Distrib 5.5.46, for debian-linux-gnu (x86_64) using readline 6.3`.
``` pytb
./manage.py migrate
2016-01-22 09:33:09,204 INFO Using Python PO
Operations to perform:
Synchronize unmigrated apps: django_rq, pootle_profile, staticfiles, pootle_terminology, debug_toolbar, pootle, allauth, humanize, contact, django_extensions, import_export, django_assets, pootle_misc, overextends
Apply all migrations: account, pootle_store, pootle_language, virtualfolder, pootle_app, pootle_project, sites, auth, reports, contenttypes, pootle_translationproject, accounts, pootle_statistics, sessions, staticpages, socialaccount
Synchronizing apps without migrations:
Creating tables...
Running deferred SQL...
Installing custom SQL...
Running migrations:
Rendering model states... DONE
Applying pootle_app.0005_case_sensitive_schema...Traceback (most recent call last):
File "./manage.py", line 22, in <module>
execute_from_command_line()
File "/home/leo/.virtualenvs/coreptl/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 354, in execute_from_command_line
utility.execute()
File "/home/leo/.virtualenvs/coreptl/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 346, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/home/leo/.virtualenvs/coreptl/local/lib/python2.7/site-packages/django/core/management/base.py", line 394, in run_from_argv
self.execute(*args, **cmd_options)
File "/home/leo/.virtualenvs/coreptl/local/lib/python2.7/site-packages/django/core/management/base.py", line 445, in execute
output = self.handle(*args, **options)
File "/home/leo/.virtualenvs/coreptl/local/lib/python2.7/site-packages/django/core/management/commands/migrate.py", line 222, in handle
executor.migrate(targets, plan, fake=fake, fake_initial=fake_initial)
File "/home/leo/.virtualenvs/coreptl/local/lib/python2.7/site-packages/django/db/migrations/executor.py", line 110, in migrate
self.apply_migration(states[migration], migration, fake=fake, fake_initial=fake_initial)
File "/home/leo/.virtualenvs/coreptl/local/lib/python2.7/site-packages/django/db/migrations/executor.py", line 148, in apply_migration
state = migration.apply(state, schema_editor)
File "/home/leo/.virtualenvs/coreptl/local/lib/python2.7/site-packages/django/db/migrations/migration.py", line 112, in apply
operation.database_forwards(self.app_label, schema_editor, old_state, project_state)
File "/home/leo/.virtualenvs/coreptl/local/lib/python2.7/site-packages/django/db/migrations/operations/special.py", line 183, in database_forwards
self.code(from_state.apps, schema_editor)
File "/home/leo/Escritorio/core_pootle/pootle/apps/pootle_app/migrations/0005_case_sensitive_schema.py", line 17, in make_directory_paths_cs
"varchar(255)")
File "/home/leo/Escritorio/core_pootle/pootle/core/utils/db.py", line 44, in set_mysql_collation_for_column
column, schema, collation))
File "/home/leo/.virtualenvs/coreptl/local/lib/python2.7/site-packages/django/db/backends/utils.py", line 79, in execute
return super(CursorDebugWrapper, self).execute(sql, params)
File "/home/leo/.virtualenvs/coreptl/local/lib/python2.7/site-packages/django/db/backends/utils.py", line 64, in execute
return self.cursor.execute(sql, params)
File "/home/leo/.virtualenvs/coreptl/local/lib/python2.7/site-packages/django/db/utils.py", line 98, in __exit__
six.reraise(dj_exc_type, dj_exc_value, traceback)
File "/home/leo/.virtualenvs/coreptl/local/lib/python2.7/site-packages/django/db/backends/utils.py", line 62, in execute
return self.cursor.execute(sql)
File "/home/leo/.virtualenvs/coreptl/local/lib/python2.7/site-packages/django/db/backends/mysql/base.py", line 124, in execute
return self.cursor.execute(query, args)
File "/home/leo/.virtualenvs/coreptl/local/lib/python2.7/site-packages/MySQLdb/cursors.py", line 205, in execute
self.errorhandler(self, exc, value)
File "/home/leo/.virtualenvs/coreptl/local/lib/python2.7/site-packages/MySQLdb/connections.py", line 36, in defaulterrorhandler
raise errorclass, errorvalue
django.db.utils.ProgrammingError: (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '-core-django-17.pootle_app_directory MODIFY pootle_path varchar(255) character' at line 1")
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pootle/core/utils/db.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 #
3 # Copyright (C) Pootle contributors.
4 #
5 # This file is a part of the Pootle project. It is distributed under the GPL3
6 # or later license. See the LICENSE file for a copy of the license and the
7 # AUTHORS file for copyright and authorship information.
8
9
10 def set_mysql_collation_for_column(apps, cursor, model, column, collation, schema):
11 """Set the collation for a mysql column if it is not set already
12 """
13
14 # Check its mysql - should probs check its not too old.
15 if not hasattr(cursor.db, "mysql_version"):
16 return
17
18 # Get the db_name
19 db_name = cursor.db.get_connection_params()['db']
20
21 # Get table_name
22 table_name = apps.get_model(model)._meta.db_table
23
24 # Get the current collation
25 cursor.execute(
26 "SELECT COLLATION_NAME"
27 " FROM information_schema.columns"
28 " WHERE TABLE_SCHEMA = '%s'"
29 " AND TABLE_NAME = '%s'"
30 " AND COLUMN_NAME = '%s';"
31 % (db_name, table_name, column))
32 current_collation = cursor.fetchone()[0]
33
34 if current_collation != collation:
35 # set collation
36 cursor.execute(
37 "ALTER TABLE %s.%s"
38 " MODIFY %s"
39 " %s"
40 " character set utf8"
41 " collate %s"
42 " NOT NULL;"
43 % (db_name, table_name,
44 column, schema, collation))
45
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pootle/core/utils/db.py b/pootle/core/utils/db.py
--- a/pootle/core/utils/db.py
+++ b/pootle/core/utils/db.py
@@ -24,7 +24,7 @@
# Get the current collation
cursor.execute(
"SELECT COLLATION_NAME"
- " FROM information_schema.columns"
+ " FROM `information_schema`.`columns`"
" WHERE TABLE_SCHEMA = '%s'"
" AND TABLE_NAME = '%s'"
" AND COLUMN_NAME = '%s';"
@@ -34,11 +34,11 @@
if current_collation != collation:
# set collation
cursor.execute(
- "ALTER TABLE %s.%s"
- " MODIFY %s"
+ "ALTER TABLE `%s`.`%s`"
+ " MODIFY `%s`"
" %s"
- " character set utf8"
- " collate %s"
+ " CHARACTER SET utf8"
+ " COLLATE %s"
" NOT NULL;"
% (db_name, table_name,
column, schema, collation))
| {"golden_diff": "diff --git a/pootle/core/utils/db.py b/pootle/core/utils/db.py\n--- a/pootle/core/utils/db.py\n+++ b/pootle/core/utils/db.py\n@@ -24,7 +24,7 @@\n # Get the current collation\n cursor.execute(\n \"SELECT COLLATION_NAME\"\n- \" FROM information_schema.columns\"\n+ \" FROM `information_schema`.`columns`\"\n \" WHERE TABLE_SCHEMA = '%s'\"\n \" AND TABLE_NAME = '%s'\"\n \" AND COLUMN_NAME = '%s';\"\n@@ -34,11 +34,11 @@\n if current_collation != collation:\n # set collation\n cursor.execute(\n- \"ALTER TABLE %s.%s\"\n- \" MODIFY %s\"\n+ \"ALTER TABLE `%s`.`%s`\"\n+ \" MODIFY `%s`\"\n \" %s\"\n- \" character set utf8\"\n- \" collate %s\"\n+ \" CHARACTER SET utf8\"\n+ \" COLLATE %s\"\n \" NOT NULL;\"\n % (db_name, table_name,\n column, schema, collation))\n", "issue": "Migrations that alter sensitive schema fail\nI am using `mysql Ver 14.14 Distrib 5.5.46, for debian-linux-gnu (x86_64) using readline 6.3`.\n\n``` pytb\n./manage.py migrate \n2016-01-22 09:33:09,204 INFO Using Python PO\nOperations to perform:\n Synchronize unmigrated apps: django_rq, pootle_profile, staticfiles, pootle_terminology, debug_toolbar, pootle, allauth, humanize, contact, django_extensions, import_export, django_assets, pootle_misc, overextends\n Apply all migrations: account, pootle_store, pootle_language, virtualfolder, pootle_app, pootle_project, sites, auth, reports, contenttypes, pootle_translationproject, accounts, pootle_statistics, sessions, staticpages, socialaccount\nSynchronizing apps without migrations:\n Creating tables...\n Running deferred SQL...\n Installing custom SQL...\nRunning migrations:\n Rendering model states... DONE\n Applying pootle_app.0005_case_sensitive_schema...Traceback (most recent call last):\n File \"./manage.py\", line 22, in <module>\n execute_from_command_line()\n File \"/home/leo/.virtualenvs/coreptl/local/lib/python2.7/site-packages/django/core/management/__init__.py\", line 354, in execute_from_command_line\n utility.execute()\n File \"/home/leo/.virtualenvs/coreptl/local/lib/python2.7/site-packages/django/core/management/__init__.py\", line 346, in execute\n self.fetch_command(subcommand).run_from_argv(self.argv)\n File \"/home/leo/.virtualenvs/coreptl/local/lib/python2.7/site-packages/django/core/management/base.py\", line 394, in run_from_argv\n self.execute(*args, **cmd_options)\n File \"/home/leo/.virtualenvs/coreptl/local/lib/python2.7/site-packages/django/core/management/base.py\", line 445, in execute\n output = self.handle(*args, **options)\n File \"/home/leo/.virtualenvs/coreptl/local/lib/python2.7/site-packages/django/core/management/commands/migrate.py\", line 222, in handle\n executor.migrate(targets, plan, fake=fake, fake_initial=fake_initial)\n File \"/home/leo/.virtualenvs/coreptl/local/lib/python2.7/site-packages/django/db/migrations/executor.py\", line 110, in migrate\n self.apply_migration(states[migration], migration, fake=fake, fake_initial=fake_initial)\n File \"/home/leo/.virtualenvs/coreptl/local/lib/python2.7/site-packages/django/db/migrations/executor.py\", line 148, in apply_migration\n state = migration.apply(state, schema_editor)\n File \"/home/leo/.virtualenvs/coreptl/local/lib/python2.7/site-packages/django/db/migrations/migration.py\", line 112, in apply\n operation.database_forwards(self.app_label, schema_editor, old_state, project_state)\n File \"/home/leo/.virtualenvs/coreptl/local/lib/python2.7/site-packages/django/db/migrations/operations/special.py\", line 183, in database_forwards\n self.code(from_state.apps, schema_editor)\n 
File \"/home/leo/Escritorio/core_pootle/pootle/apps/pootle_app/migrations/0005_case_sensitive_schema.py\", line 17, in make_directory_paths_cs\n \"varchar(255)\")\n File \"/home/leo/Escritorio/core_pootle/pootle/core/utils/db.py\", line 44, in set_mysql_collation_for_column\n column, schema, collation))\n File \"/home/leo/.virtualenvs/coreptl/local/lib/python2.7/site-packages/django/db/backends/utils.py\", line 79, in execute\n return super(CursorDebugWrapper, self).execute(sql, params)\n File \"/home/leo/.virtualenvs/coreptl/local/lib/python2.7/site-packages/django/db/backends/utils.py\", line 64, in execute\n return self.cursor.execute(sql, params)\n File \"/home/leo/.virtualenvs/coreptl/local/lib/python2.7/site-packages/django/db/utils.py\", line 98, in __exit__\n six.reraise(dj_exc_type, dj_exc_value, traceback)\n File \"/home/leo/.virtualenvs/coreptl/local/lib/python2.7/site-packages/django/db/backends/utils.py\", line 62, in execute\n return self.cursor.execute(sql)\n File \"/home/leo/.virtualenvs/coreptl/local/lib/python2.7/site-packages/django/db/backends/mysql/base.py\", line 124, in execute\n return self.cursor.execute(query, args)\n File \"/home/leo/.virtualenvs/coreptl/local/lib/python2.7/site-packages/MySQLdb/cursors.py\", line 205, in execute\n self.errorhandler(self, exc, value)\n File \"/home/leo/.virtualenvs/coreptl/local/lib/python2.7/site-packages/MySQLdb/connections.py\", line 36, in defaulterrorhandler\n raise errorclass, errorvalue\ndjango.db.utils.ProgrammingError: (1064, \"You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '-core-django-17.pootle_app_directory MODIFY pootle_path varchar(255) character' at line 1\")\n```\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\n\ndef set_mysql_collation_for_column(apps, cursor, model, column, collation, schema):\n \"\"\"Set the collation for a mysql column if it is not set already\n \"\"\"\n\n # Check its mysql - should probs check its not too old.\n if not hasattr(cursor.db, \"mysql_version\"):\n return\n\n # Get the db_name\n db_name = cursor.db.get_connection_params()['db']\n\n # Get table_name\n table_name = apps.get_model(model)._meta.db_table\n\n # Get the current collation\n cursor.execute(\n \"SELECT COLLATION_NAME\"\n \" FROM information_schema.columns\"\n \" WHERE TABLE_SCHEMA = '%s'\"\n \" AND TABLE_NAME = '%s'\"\n \" AND COLUMN_NAME = '%s';\"\n % (db_name, table_name, column))\n current_collation = cursor.fetchone()[0]\n\n if current_collation != collation:\n # set collation\n cursor.execute(\n \"ALTER TABLE %s.%s\"\n \" MODIFY %s\"\n \" %s\"\n \" character set utf8\"\n \" collate %s\"\n \" NOT NULL;\"\n % (db_name, table_name,\n column, schema, collation))\n", "path": "pootle/core/utils/db.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. 
See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\n\ndef set_mysql_collation_for_column(apps, cursor, model, column, collation, schema):\n \"\"\"Set the collation for a mysql column if it is not set already\n \"\"\"\n\n # Check its mysql - should probs check its not too old.\n if not hasattr(cursor.db, \"mysql_version\"):\n return\n\n # Get the db_name\n db_name = cursor.db.get_connection_params()['db']\n\n # Get table_name\n table_name = apps.get_model(model)._meta.db_table\n\n # Get the current collation\n cursor.execute(\n \"SELECT COLLATION_NAME\"\n \" FROM `information_schema`.`columns`\"\n \" WHERE TABLE_SCHEMA = '%s'\"\n \" AND TABLE_NAME = '%s'\"\n \" AND COLUMN_NAME = '%s';\"\n % (db_name, table_name, column))\n current_collation = cursor.fetchone()[0]\n\n if current_collation != collation:\n # set collation\n cursor.execute(\n \"ALTER TABLE `%s`.`%s`\"\n \" MODIFY `%s`\"\n \" %s\"\n \" CHARACTER SET utf8\"\n \" COLLATE %s\"\n \" NOT NULL;\"\n % (db_name, table_name,\n column, schema, collation))\n", "path": "pootle/core/utils/db.py"}]} | 1,945 | 255 |
gh_patches_debug_265 | rasdani/github-patches | git_diff | Nitrate__Nitrate-603 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Upgrade celery to 4.3.0
As per title. Remove `skipIf` from test `test_uses_celery`.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 from setuptools import setup, find_packages
4
5
6 with open('VERSION.txt', 'r') as f:
7 pkg_version = f.read().strip()
8
9
10 def get_long_description():
11 with open('README.rst', 'r') as f:
12 return f.read()
13
14
15 install_requires = [
16 'beautifulsoup4 >= 4.1.1',
17 'django >= 2.1,<3.0',
18 'django-contrib-comments == 1.9.1',
19 'django-tinymce == 2.7.0',
20 'django-uuslug == 1.1.8',
21 'html2text',
22 'odfpy >= 0.9.6',
23 'python-bugzilla',
24 'xmltodict',
25 'kobo == 0.9.0'
26 ]
27
28 extras_require = {
29 'mysql': ['mysqlclient >= 1.2.3'],
30 'pgsql': ['psycopg2 == 2.7.5'],
31
32 # Required for tcms.auth.backends.KerberosBackend
33 'krbauth': [
34 'kerberos == 1.2.5'
35 ],
36
37 # Packages for building documentation
38 'docs': [
39 'Sphinx >= 1.1.2',
40 'sphinx_rtd_theme',
41 ],
42
43 # Necessary packages for running tests
44 'tests': [
45 'beautifulsoup4',
46 'coverage',
47 'factory_boy',
48 'flake8',
49 'pytest',
50 'pytest-cov',
51 'pytest-django',
52 ],
53
54 # Contain tools that assists the development
55 'devtools': [
56 'django-debug-toolbar',
57 'tox',
58 'django-extensions',
59 'pygraphviz',
60 ],
61
62 # Required packages required to run async tasks
63 'async': [
64 'celery == 4.2.0',
65 ],
66
67 'multiauth': [
68 'social-auth-app-django == 3.1.0',
69 ]
70 }
71
72 setup(
73 name='nitrate-tcms',
74 version=pkg_version,
75 description='A full-featured Test Case Management System',
76 long_description=get_long_description(),
77 author='Nitrate Team',
78 maintainer='Chenxiong Qi',
79 maintainer_email='[email protected]',
80 url='https://github.com/Nitrate/Nitrate/',
81 license='GPLv2+',
82 keywords='test case',
83 install_requires=install_requires,
84 extras_require=extras_require,
85 python_requires='>=3.6',
86 package_dir={'': 'src'},
87 packages=find_packages('src', exclude=['test*']),
88 include_package_data=True,
89 zip_safe=False,
90 classifiers=[
91 'Framework :: Django',
92 'Framework :: Django :: 2.0',
93 'Framework :: Django :: 2.1',
94 'Framework :: Django :: 2.2',
95 'Intended Audience :: Developers',
96 'License :: OSI Approved :: GNU General Public License v2 or later (GPLv2+)',
97 'Programming Language :: Python :: 3',
98 'Programming Language :: Python :: 3.6',
99 'Programming Language :: Python :: 3.7',
100 'Programming Language :: Python :: 3 :: Only',
101 'Topic :: Software Development :: Quality Assurance',
102 'Topic :: Software Development :: Testing',
103 ],
104 project_urls={
105 'Issue Tracker': 'https://github.com/Nitrate/Nitrate/issues',
106 'Source Code': 'https://github.com/Nitrate/Nitrate',
107 'Documentation': 'https://nitrate.readthedocs.io/',
108 },
109 )
110
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -61,7 +61,7 @@
# Required packages required to run async tasks
'async': [
- 'celery == 4.2.0',
+ 'celery == 4.4.2',
],
'multiauth': [
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -61,7 +61,7 @@\n \n # Required packages required to run async tasks\n 'async': [\n- 'celery == 4.2.0',\n+ 'celery == 4.4.2',\n ],\n \n 'multiauth': [\n", "issue": "Upgrade celery to 4.3.0\nAs per title. Remove `skipIf` from test `test_uses_celery`.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\nfrom setuptools import setup, find_packages\n\n\nwith open('VERSION.txt', 'r') as f:\n pkg_version = f.read().strip()\n\n\ndef get_long_description():\n with open('README.rst', 'r') as f:\n return f.read()\n\n\ninstall_requires = [\n 'beautifulsoup4 >= 4.1.1',\n 'django >= 2.1,<3.0',\n 'django-contrib-comments == 1.9.1',\n 'django-tinymce == 2.7.0',\n 'django-uuslug == 1.1.8',\n 'html2text',\n 'odfpy >= 0.9.6',\n 'python-bugzilla',\n 'xmltodict',\n 'kobo == 0.9.0'\n]\n\nextras_require = {\n 'mysql': ['mysqlclient >= 1.2.3'],\n 'pgsql': ['psycopg2 == 2.7.5'],\n\n # Required for tcms.auth.backends.KerberosBackend\n 'krbauth': [\n 'kerberos == 1.2.5'\n ],\n\n # Packages for building documentation\n 'docs': [\n 'Sphinx >= 1.1.2',\n 'sphinx_rtd_theme',\n ],\n\n # Necessary packages for running tests\n 'tests': [\n 'beautifulsoup4',\n 'coverage',\n 'factory_boy',\n 'flake8',\n 'pytest',\n 'pytest-cov',\n 'pytest-django',\n ],\n\n # Contain tools that assists the development\n 'devtools': [\n 'django-debug-toolbar',\n 'tox',\n 'django-extensions',\n 'pygraphviz',\n ],\n\n # Required packages required to run async tasks\n 'async': [\n 'celery == 4.2.0',\n ],\n\n 'multiauth': [\n 'social-auth-app-django == 3.1.0',\n ]\n}\n\nsetup(\n name='nitrate-tcms',\n version=pkg_version,\n description='A full-featured Test Case Management System',\n long_description=get_long_description(),\n author='Nitrate Team',\n maintainer='Chenxiong Qi',\n maintainer_email='[email protected]',\n url='https://github.com/Nitrate/Nitrate/',\n license='GPLv2+',\n keywords='test case',\n install_requires=install_requires,\n extras_require=extras_require,\n python_requires='>=3.6',\n package_dir={'': 'src'},\n packages=find_packages('src', exclude=['test*']),\n include_package_data=True,\n zip_safe=False,\n classifiers=[\n 'Framework :: Django',\n 'Framework :: Django :: 2.0',\n 'Framework :: Django :: 2.1',\n 'Framework :: Django :: 2.2',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: GNU General Public License v2 or later (GPLv2+)',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3 :: Only',\n 'Topic :: Software Development :: Quality Assurance',\n 'Topic :: Software Development :: Testing',\n ],\n project_urls={\n 'Issue Tracker': 'https://github.com/Nitrate/Nitrate/issues',\n 'Source Code': 'https://github.com/Nitrate/Nitrate',\n 'Documentation': 'https://nitrate.readthedocs.io/',\n },\n)\n", "path": "setup.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\nfrom setuptools import setup, find_packages\n\n\nwith open('VERSION.txt', 'r') as f:\n pkg_version = f.read().strip()\n\n\ndef get_long_description():\n with open('README.rst', 'r') as f:\n return f.read()\n\n\ninstall_requires = [\n 'beautifulsoup4 >= 4.1.1',\n 'django >= 2.1,<3.0',\n 'django-contrib-comments == 1.9.1',\n 'django-tinymce == 2.7.0',\n 'django-uuslug == 1.1.8',\n 'html2text',\n 'odfpy >= 0.9.6',\n 'python-bugzilla',\n 'xmltodict',\n 'kobo == 0.9.0'\n]\n\nextras_require = {\n 'mysql': ['mysqlclient >= 1.2.3'],\n 'pgsql': 
['psycopg2 == 2.7.5'],\n\n # Required for tcms.auth.backends.KerberosBackend\n 'krbauth': [\n 'kerberos == 1.2.5'\n ],\n\n # Packages for building documentation\n 'docs': [\n 'Sphinx >= 1.1.2',\n 'sphinx_rtd_theme',\n ],\n\n # Necessary packages for running tests\n 'tests': [\n 'beautifulsoup4',\n 'coverage',\n 'factory_boy',\n 'flake8',\n 'pytest',\n 'pytest-cov',\n 'pytest-django',\n ],\n\n # Contain tools that assists the development\n 'devtools': [\n 'django-debug-toolbar',\n 'tox',\n 'django-extensions',\n 'pygraphviz',\n ],\n\n # Required packages required to run async tasks\n 'async': [\n 'celery == 4.4.2',\n ],\n\n 'multiauth': [\n 'social-auth-app-django == 3.1.0',\n ]\n}\n\nsetup(\n name='nitrate-tcms',\n version=pkg_version,\n description='A full-featured Test Case Management System',\n long_description=get_long_description(),\n author='Nitrate Team',\n maintainer='Chenxiong Qi',\n maintainer_email='[email protected]',\n url='https://github.com/Nitrate/Nitrate/',\n license='GPLv2+',\n keywords='test case',\n install_requires=install_requires,\n extras_require=extras_require,\n python_requires='>=3.6',\n package_dir={'': 'src'},\n packages=find_packages('src', exclude=['test*']),\n include_package_data=True,\n zip_safe=False,\n classifiers=[\n 'Framework :: Django',\n 'Framework :: Django :: 2.0',\n 'Framework :: Django :: 2.1',\n 'Framework :: Django :: 2.2',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: GNU General Public License v2 or later (GPLv2+)',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3 :: Only',\n 'Topic :: Software Development :: Quality Assurance',\n 'Topic :: Software Development :: Testing',\n ],\n project_urls={\n 'Issue Tracker': 'https://github.com/Nitrate/Nitrate/issues',\n 'Source Code': 'https://github.com/Nitrate/Nitrate',\n 'Documentation': 'https://nitrate.readthedocs.io/',\n },\n)\n", "path": "setup.py"}]} | 1,289 | 84 |
gh_patches_debug_1675 | rasdani/github-patches | git_diff | translate__pootle-4882 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Make `pootle webpack` not require system checks
`pootle webpack` fails if eg the db is not set up/correctly. It would be helpful if it didnt
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pootle/apps/pootle_app/management/commands/webpack.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 #
3 # Copyright (C) Pootle contributors.
4 #
5 # This file is a part of the Pootle project. It is distributed under the GPL3
6 # or later license. See the LICENSE file for a copy of the license and the
7 # AUTHORS file for copyright and authorship information.
8
9 import os
10 os.environ['DJANGO_SETTINGS_MODULE'] = 'pootle.settings'
11 import subprocess
12 import sys
13
14 from django.conf import settings
15 from django.core.management.base import BaseCommand, CommandError
16
17 from pootle_misc.baseurl import l
18
19
20 class Command(BaseCommand):
21 help = 'Builds and bundles static assets using webpack'
22
23 def add_arguments(self, parser):
24 parser.add_argument(
25 '--dev',
26 action='store_true',
27 dest='dev',
28 default=False,
29 help='Enable development builds and watch for changes.',
30 )
31 parser.add_argument(
32 '--nowatch',
33 action='store_false',
34 dest='watch',
35 default=True,
36 help='Disable watching for changes.',
37 )
38 parser.add_argument(
39 '--progress',
40 action='store_true',
41 default=False,
42 help='Show progress (implied if --dev is present).',
43 )
44 parser.add_argument(
45 '--extra',
46 action='append',
47 default=[],
48 help='Additional options to pass to the JavaScript webpack tool.',
49 )
50
51 def handle(self, **options):
52 default_static_dir = os.path.join(settings.WORKING_DIR, 'static')
53 custom_static_dirs = filter(lambda x: x != default_static_dir,
54 settings.STATICFILES_DIRS)
55 default_js_dir = os.path.join(default_static_dir, 'js')
56
57 webpack_config_file = os.path.join(default_js_dir, 'webpack.config.js')
58
59 webpack_bin = os.path.join(default_js_dir, 'node_modules/.bin/webpack')
60 if os.name == 'nt':
61 webpack_bin = '%s.cmd' % webpack_bin
62
63 webpack_progress = (
64 '--progress' if options['progress'] or options['dev'] else ''
65 )
66 webpack_colors = '--colors' if not options['no_color'] else ''
67
68 webpack_args = [webpack_bin, '--config=%s' % webpack_config_file]
69 if webpack_progress:
70 webpack_args.append(webpack_progress)
71 if webpack_colors:
72 webpack_args.append(webpack_colors)
73
74 if options['dev']:
75 watch = '--watch' if options['watch'] else ''
76 webpack_args.extend([watch, '--display-error-details'])
77 else:
78 os.environ['NODE_ENV'] = 'production'
79 webpack_args.append("--bail")
80
81 webpack_args.extend(options['extra'])
82
83 static_base = l(settings.STATIC_URL)
84 suffix = 'js/' if static_base.endswith('/') else '/js/'
85 os.environ['WEBPACK_PUBLIC_PATH'] = static_base + suffix
86
87 if custom_static_dirs:
88 # XXX: review this for css
89 # Append `js/` so that it's not necessary to reference it from the
90 # `webpack.config.js` file
91 custom_static_dirs = map(lambda x: os.path.join(x, 'js/'),
92 custom_static_dirs)
93 os.environ['WEBPACK_ROOT'] = ':'.join(custom_static_dirs)
94
95 try:
96 subprocess.call(webpack_args)
97 except OSError:
98 raise CommandError(
99 'webpack executable not found.\n'
100 'Make sure to install it by running '
101 '`cd %s && npm install`' % default_js_dir
102 )
103 sys.exit(0)
104
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pootle/apps/pootle_app/management/commands/webpack.py b/pootle/apps/pootle_app/management/commands/webpack.py
--- a/pootle/apps/pootle_app/management/commands/webpack.py
+++ b/pootle/apps/pootle_app/management/commands/webpack.py
@@ -19,6 +19,7 @@
class Command(BaseCommand):
help = 'Builds and bundles static assets using webpack'
+ requires_system_checks = False
def add_arguments(self, parser):
parser.add_argument(
| {"golden_diff": "diff --git a/pootle/apps/pootle_app/management/commands/webpack.py b/pootle/apps/pootle_app/management/commands/webpack.py\n--- a/pootle/apps/pootle_app/management/commands/webpack.py\n+++ b/pootle/apps/pootle_app/management/commands/webpack.py\n@@ -19,6 +19,7 @@\n \n class Command(BaseCommand):\n help = 'Builds and bundles static assets using webpack'\n+ requires_system_checks = False\n \n def add_arguments(self, parser):\n parser.add_argument(\n", "issue": "Make `pootle webpack` not require system checks\n`pootle webpack` fails if eg the db is not set up/correctly. It would be helpful if it didnt\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\nimport os\nos.environ['DJANGO_SETTINGS_MODULE'] = 'pootle.settings'\nimport subprocess\nimport sys\n\nfrom django.conf import settings\nfrom django.core.management.base import BaseCommand, CommandError\n\nfrom pootle_misc.baseurl import l\n\n\nclass Command(BaseCommand):\n help = 'Builds and bundles static assets using webpack'\n\n def add_arguments(self, parser):\n parser.add_argument(\n '--dev',\n action='store_true',\n dest='dev',\n default=False,\n help='Enable development builds and watch for changes.',\n )\n parser.add_argument(\n '--nowatch',\n action='store_false',\n dest='watch',\n default=True,\n help='Disable watching for changes.',\n )\n parser.add_argument(\n '--progress',\n action='store_true',\n default=False,\n help='Show progress (implied if --dev is present).',\n )\n parser.add_argument(\n '--extra',\n action='append',\n default=[],\n help='Additional options to pass to the JavaScript webpack tool.',\n )\n\n def handle(self, **options):\n default_static_dir = os.path.join(settings.WORKING_DIR, 'static')\n custom_static_dirs = filter(lambda x: x != default_static_dir,\n settings.STATICFILES_DIRS)\n default_js_dir = os.path.join(default_static_dir, 'js')\n\n webpack_config_file = os.path.join(default_js_dir, 'webpack.config.js')\n\n webpack_bin = os.path.join(default_js_dir, 'node_modules/.bin/webpack')\n if os.name == 'nt':\n webpack_bin = '%s.cmd' % webpack_bin\n\n webpack_progress = (\n '--progress' if options['progress'] or options['dev'] else ''\n )\n webpack_colors = '--colors' if not options['no_color'] else ''\n\n webpack_args = [webpack_bin, '--config=%s' % webpack_config_file]\n if webpack_progress:\n webpack_args.append(webpack_progress)\n if webpack_colors:\n webpack_args.append(webpack_colors)\n\n if options['dev']:\n watch = '--watch' if options['watch'] else ''\n webpack_args.extend([watch, '--display-error-details'])\n else:\n os.environ['NODE_ENV'] = 'production'\n webpack_args.append(\"--bail\")\n\n webpack_args.extend(options['extra'])\n\n static_base = l(settings.STATIC_URL)\n suffix = 'js/' if static_base.endswith('/') else '/js/'\n os.environ['WEBPACK_PUBLIC_PATH'] = static_base + suffix\n\n if custom_static_dirs:\n # XXX: review this for css\n # Append `js/` so that it's not necessary to reference it from the\n # `webpack.config.js` file\n custom_static_dirs = map(lambda x: os.path.join(x, 'js/'),\n custom_static_dirs)\n os.environ['WEBPACK_ROOT'] = ':'.join(custom_static_dirs)\n\n try:\n subprocess.call(webpack_args)\n except OSError:\n raise CommandError(\n 'webpack executable not found.\\n'\n 'Make sure to install it by 
running '\n '`cd %s && npm install`' % default_js_dir\n )\n sys.exit(0)\n", "path": "pootle/apps/pootle_app/management/commands/webpack.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\nimport os\nos.environ['DJANGO_SETTINGS_MODULE'] = 'pootle.settings'\nimport subprocess\nimport sys\n\nfrom django.conf import settings\nfrom django.core.management.base import BaseCommand, CommandError\n\nfrom pootle_misc.baseurl import l\n\n\nclass Command(BaseCommand):\n help = 'Builds and bundles static assets using webpack'\n requires_system_checks = False\n\n def add_arguments(self, parser):\n parser.add_argument(\n '--dev',\n action='store_true',\n dest='dev',\n default=False,\n help='Enable development builds and watch for changes.',\n )\n parser.add_argument(\n '--nowatch',\n action='store_false',\n dest='watch',\n default=True,\n help='Disable watching for changes.',\n )\n parser.add_argument(\n '--progress',\n action='store_true',\n default=False,\n help='Show progress (implied if --dev is present).',\n )\n parser.add_argument(\n '--extra',\n action='append',\n default=[],\n help='Additional options to pass to the JavaScript webpack tool.',\n )\n\n def handle(self, **options):\n default_static_dir = os.path.join(settings.WORKING_DIR, 'static')\n custom_static_dirs = filter(lambda x: x != default_static_dir,\n settings.STATICFILES_DIRS)\n default_js_dir = os.path.join(default_static_dir, 'js')\n\n webpack_config_file = os.path.join(default_js_dir, 'webpack.config.js')\n\n webpack_bin = os.path.join(default_js_dir, 'node_modules/.bin/webpack')\n if os.name == 'nt':\n webpack_bin = '%s.cmd' % webpack_bin\n\n webpack_progress = (\n '--progress' if options['progress'] or options['dev'] else ''\n )\n webpack_colors = '--colors' if not options['no_color'] else ''\n\n webpack_args = [webpack_bin, '--config=%s' % webpack_config_file]\n if webpack_progress:\n webpack_args.append(webpack_progress)\n if webpack_colors:\n webpack_args.append(webpack_colors)\n\n if options['dev']:\n watch = '--watch' if options['watch'] else ''\n webpack_args.extend([watch, '--display-error-details'])\n else:\n os.environ['NODE_ENV'] = 'production'\n webpack_args.append(\"--bail\")\n\n webpack_args.extend(options['extra'])\n\n static_base = l(settings.STATIC_URL)\n suffix = 'js/' if static_base.endswith('/') else '/js/'\n os.environ['WEBPACK_PUBLIC_PATH'] = static_base + suffix\n\n if custom_static_dirs:\n # XXX: review this for css\n # Append `js/` so that it's not necessary to reference it from the\n # `webpack.config.js` file\n custom_static_dirs = map(lambda x: os.path.join(x, 'js/'),\n custom_static_dirs)\n os.environ['WEBPACK_ROOT'] = ':'.join(custom_static_dirs)\n\n try:\n subprocess.call(webpack_args)\n except OSError:\n raise CommandError(\n 'webpack executable not found.\\n'\n 'Make sure to install it by running '\n '`cd %s && npm install`' % default_js_dir\n )\n sys.exit(0)\n", "path": "pootle/apps/pootle_app/management/commands/webpack.py"}]} | 1,259 | 124 |
gh_patches_debug_39408 | rasdani/github-patches | git_diff | deepset-ai__haystack-6396 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`PyPDFToDocument` 2.0 is not JSON-serializable due to its `DefaultConverter`
**Describe the bug**
Pipelines containing a PyPDFConverter are not JSON-serializable due to the `DefaultConverter` set in its init.
**Error message**
`TypeError: Object of type DefaultConverter is not JSON serializable`
**Expected behavior**
JSON-serializing and deserializing a pipeline containing a PyPDFConverter should be possible.
**Additional context**
Similarly, the following should be possible:
```python
from haystack.preview.components.file_converters import PyPDFToDocument
from haystack.preview import Pipeline
p = Pipeline()
p.add_component(instance=PyPDFToDocument(), name="pdf_file_converter")
x = Pipeline.loads(p.dumps())
```
but it currently raises:
```
yaml.constructor.ConstructorError: could not determine a constructor for the tag 'tag:yaml.org,2002:python/object:haystack.preview.components.file_converters.pypdf.DefaultConverter'
in "<unicode string>", line 4, column 18:
converter: !!python/object:haystack.preview ...
```
**To Reproduce**
```python
from haystack.preview.components.file_converters import PyPDFToDocument
from haystack.preview import Pipeline
import json
p = Pipeline()
p.add_component(instance=PyPDFToDocument(), name="pdf_file_converter")
print(json.dumps(p.to_dict(), indent=4))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/opt/homebrew/Caskroom/miniforge/base/envs/py310-Oct/lib/python3.10/json/__init__.py", line 238, in dumps
**kw).encode(obj)
File "/opt/homebrew/Caskroom/miniforge/base/envs/py310-Oct/lib/python3.10/json/encoder.py", line 201, in encode
chunks = list(chunks)
File "/opt/homebrew/Caskroom/miniforge/base/envs/py310-Oct/lib/python3.10/json/encoder.py", line 431, in _iterencode
yield from _iterencode_dict(o, _current_indent_level)
File "/opt/homebrew/Caskroom/miniforge/base/envs/py310-Oct/lib/python3.10/json/encoder.py", line 405, in _iterencode_dict
yield from chunks
File "/opt/homebrew/Caskroom/miniforge/base/envs/py310-Oct/lib/python3.10/json/encoder.py", line 405, in _iterencode_dict
yield from chunks
File "/opt/homebrew/Caskroom/miniforge/base/envs/py310-Oct/lib/python3.10/json/encoder.py", line 405, in _iterencode_dict
yield from chunks
[Previous line repeated 1 more time]
File "/opt/homebrew/Caskroom/miniforge/base/envs/py310-Oct/lib/python3.10/json/encoder.py", line 438, in _iterencode
o = _default(o)
File "/opt/homebrew/Caskroom/miniforge/base/envs/py310-Oct/lib/python3.10/json/encoder.py", line 179, in default
raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type DefaultConverter is not JSON serializable
```
**FAQ Check**
- [ ] Have you had a look at [our new FAQ page](https://docs.haystack.deepset.ai/docs/faq)?
**System:**
- OS:
- GPU/CPU:
- Haystack version (commit or version number):
- DocumentStore:
- Reader:
- Retriever:
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `haystack/preview/components/converters/pypdf.py`
Content:
```
1 import io
2 import logging
3 from typing import List, Union, Optional, Protocol
4 from pathlib import Path
5
6 from haystack.preview.dataclasses import ByteStream
7 from haystack.preview.lazy_imports import LazyImport
8 from haystack.preview import Document, component
9
10 with LazyImport("Run 'pip install pypdf'") as pypdf_import:
11 from pypdf import PdfReader
12
13
14 logger = logging.getLogger(__name__)
15
16
17 class PyPDFConverter(Protocol):
18 """
19 A protocol that defines a converter which takes a PdfReader object and converts it into a Document object.
20 """
21
22 def convert(self, reader: "PdfReader") -> Document:
23 ...
24
25
26 class DefaultConverter:
27 """
28 The default converter class that extracts text from a PdfReader object's pages and returns a Document.
29 """
30
31 def convert(self, reader: "PdfReader") -> Document:
32 """Extract text from the PDF and return a Document object with the text content."""
33 text = "".join(page.extract_text() for page in reader.pages if page.extract_text())
34 return Document(content=text)
35
36
37 @component
38 class PyPDFToDocument:
39 """
40 Converts PDF files to Document objects.
41 It uses a converter that follows the PyPDFConverter protocol to perform the conversion.
42 A default text extraction converter is used if no custom converter is provided.
43 """
44
45 def __init__(self, converter: Optional[PyPDFConverter] = None):
46 """
47 Initializes the PyPDFToDocument component with an optional custom converter.
48 :param converter: A converter instance that adheres to the PyPDFConverter protocol.
49 If None, the DefaultConverter is used.
50 """
51 pypdf_import.check()
52 self.converter: PyPDFConverter = converter or DefaultConverter()
53
54 @component.output_types(documents=List[Document])
55 def run(self, sources: List[Union[str, Path, ByteStream]]):
56 """
57 Converts a list of PDF sources into Document objects using the configured converter.
58
59 :param sources: A list of PDF data sources, which can be file paths or ByteStream objects.
60 :return: A dictionary containing a list of Document objects under the 'documents' key.
61 """
62 documents = []
63 for source in sources:
64 try:
65 pdf_reader = self._get_pdf_reader(source)
66 document = self.converter.convert(pdf_reader)
67 except Exception as e:
68 logger.warning("Could not read %s and convert it to Document, skipping. %s", source, e)
69 continue
70 documents.append(document)
71
72 return {"documents": documents}
73
74 def _get_pdf_reader(self, source: Union[str, Path, ByteStream]) -> "PdfReader":
75 """
76 Creates a PdfReader object from a given source, which can be a file path or a ByteStream object.
77
78 :param source: The source of the PDF data.
79 :return: A PdfReader instance initialized with the PDF data from the source.
80 :raises ValueError: If the source type is not supported.
81 """
82 if isinstance(source, (str, Path)):
83 return PdfReader(str(source))
84 elif isinstance(source, ByteStream):
85 return PdfReader(io.BytesIO(source.data))
86 else:
87 raise ValueError(f"Unsupported source type: {type(source)}")
88
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/haystack/preview/components/converters/pypdf.py b/haystack/preview/components/converters/pypdf.py
--- a/haystack/preview/components/converters/pypdf.py
+++ b/haystack/preview/components/converters/pypdf.py
@@ -1,11 +1,11 @@
import io
import logging
-from typing import List, Union, Optional, Protocol
+from typing import List, Union, Protocol, Dict
from pathlib import Path
from haystack.preview.dataclasses import ByteStream
from haystack.preview.lazy_imports import LazyImport
-from haystack.preview import Document, component
+from haystack.preview import Document, component, default_to_dict
with LazyImport("Run 'pip install pypdf'") as pypdf_import:
from pypdf import PdfReader
@@ -34,6 +34,11 @@
return Document(content=text)
+# This registry is used to store converters names and instances.
+# It can be used to register custom converters.
+CONVERTERS_REGISTRY: Dict[str, PyPDFConverter] = {"default": DefaultConverter()}
+
+
@component
class PyPDFToDocument:
"""
@@ -42,14 +47,27 @@
A default text extraction converter is used if no custom converter is provided.
"""
- def __init__(self, converter: Optional[PyPDFConverter] = None):
+ def __init__(self, converter_name: str = "default"):
"""
Initializes the PyPDFToDocument component with an optional custom converter.
- :param converter: A converter instance that adheres to the PyPDFConverter protocol.
- If None, the DefaultConverter is used.
+ :param converter_name: A converter name that is registered in the CONVERTERS_REGISTRY.
+ Defaults to 'default'.
"""
pypdf_import.check()
- self.converter: PyPDFConverter = converter or DefaultConverter()
+
+ try:
+ converter = CONVERTERS_REGISTRY[converter_name]
+ except KeyError:
+ msg = (
+ f"Invalid converter_name: {converter_name}.\n Available converters: {list(CONVERTERS_REGISTRY.keys())}"
+ )
+ raise ValueError(msg) from KeyError
+ self.converter_name = converter_name
+ self._converter: PyPDFConverter = converter
+
+ def to_dict(self):
+ # do not serialize the _converter instance
+ return default_to_dict(self, converter_name=self.converter_name)
@component.output_types(documents=List[Document])
def run(self, sources: List[Union[str, Path, ByteStream]]):
@@ -63,7 +81,7 @@
for source in sources:
try:
pdf_reader = self._get_pdf_reader(source)
- document = self.converter.convert(pdf_reader)
+ document = self._converter.convert(pdf_reader)
except Exception as e:
logger.warning("Could not read %s and convert it to Document, skipping. %s", source, e)
continue
| {"golden_diff": "diff --git a/haystack/preview/components/converters/pypdf.py b/haystack/preview/components/converters/pypdf.py\n--- a/haystack/preview/components/converters/pypdf.py\n+++ b/haystack/preview/components/converters/pypdf.py\n@@ -1,11 +1,11 @@\n import io\n import logging\n-from typing import List, Union, Optional, Protocol\n+from typing import List, Union, Protocol, Dict\n from pathlib import Path\n \n from haystack.preview.dataclasses import ByteStream\n from haystack.preview.lazy_imports import LazyImport\n-from haystack.preview import Document, component\n+from haystack.preview import Document, component, default_to_dict\n \n with LazyImport(\"Run 'pip install pypdf'\") as pypdf_import:\n from pypdf import PdfReader\n@@ -34,6 +34,11 @@\n return Document(content=text)\n \n \n+# This registry is used to store converters names and instances.\n+# It can be used to register custom converters.\n+CONVERTERS_REGISTRY: Dict[str, PyPDFConverter] = {\"default\": DefaultConverter()}\n+\n+\n @component\n class PyPDFToDocument:\n \"\"\"\n@@ -42,14 +47,27 @@\n A default text extraction converter is used if no custom converter is provided.\n \"\"\"\n \n- def __init__(self, converter: Optional[PyPDFConverter] = None):\n+ def __init__(self, converter_name: str = \"default\"):\n \"\"\"\n Initializes the PyPDFToDocument component with an optional custom converter.\n- :param converter: A converter instance that adheres to the PyPDFConverter protocol.\n- If None, the DefaultConverter is used.\n+ :param converter_name: A converter name that is registered in the CONVERTERS_REGISTRY.\n+ Defaults to 'default'.\n \"\"\"\n pypdf_import.check()\n- self.converter: PyPDFConverter = converter or DefaultConverter()\n+\n+ try:\n+ converter = CONVERTERS_REGISTRY[converter_name]\n+ except KeyError:\n+ msg = (\n+ f\"Invalid converter_name: {converter_name}.\\n Available converters: {list(CONVERTERS_REGISTRY.keys())}\"\n+ )\n+ raise ValueError(msg) from KeyError\n+ self.converter_name = converter_name\n+ self._converter: PyPDFConverter = converter\n+\n+ def to_dict(self):\n+ # do not serialize the _converter instance\n+ return default_to_dict(self, converter_name=self.converter_name)\n \n @component.output_types(documents=List[Document])\n def run(self, sources: List[Union[str, Path, ByteStream]]):\n@@ -63,7 +81,7 @@\n for source in sources:\n try:\n pdf_reader = self._get_pdf_reader(source)\n- document = self.converter.convert(pdf_reader)\n+ document = self._converter.convert(pdf_reader)\n except Exception as e:\n logger.warning(\"Could not read %s and convert it to Document, skipping. %s\", source, e)\n continue\n", "issue": "`PyPDFToDocument` 2.0 is not JSON-serializable due to its `DefaultConverter`\n**Describe the bug**\r\nPipelines containing a PyPDFConverter are not JSON-serializable due to the `DefaultConverter` set in its init. \r\n\r\n**Error message**\r\n`TypeError: Object of type DefaultConverter is not JSON serializable`\r\n\r\n**Expected behavior**\r\nJSON-serializing and deserializing a pipeline containing a PyPDFConverter should be possible. 
\r\n\r\n**Additional context**\r\nSimilarly, the following should be possible:\r\n```python\r\nfrom haystack.preview.components.file_converters import PyPDFToDocument\r\nfrom haystack.preview import Pipeline\r\np = Pipeline()\r\np.add_component(instance=PyPDFToDocument(), name=\"pdf_file_converter\")\r\nx = Pipeline.loads(p.dumps())\r\n```\r\nbut it currently raises:\r\n```\r\nyaml.constructor.ConstructorError: could not determine a constructor for the tag 'tag:yaml.org,2002:python/object:haystack.preview.components.file_converters.pypdf.DefaultConverter'\r\n in \"<unicode string>\", line 4, column 18:\r\n converter: !!python/object:haystack.preview ...\r\n```\r\n\r\n**To Reproduce**\r\n```python\r\nfrom haystack.preview.components.file_converters import PyPDFToDocument\r\nfrom haystack.preview import Pipeline\r\nimport json\r\np = Pipeline()\r\np.add_component(instance=PyPDFToDocument(), name=\"pdf_file_converter\")\r\nprint(json.dumps(p.to_dict(), indent=4))\r\n\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/opt/homebrew/Caskroom/miniforge/base/envs/py310-Oct/lib/python3.10/json/__init__.py\", line 238, in dumps\r\n **kw).encode(obj)\r\n File \"/opt/homebrew/Caskroom/miniforge/base/envs/py310-Oct/lib/python3.10/json/encoder.py\", line 201, in encode\r\n chunks = list(chunks)\r\n File \"/opt/homebrew/Caskroom/miniforge/base/envs/py310-Oct/lib/python3.10/json/encoder.py\", line 431, in _iterencode\r\n yield from _iterencode_dict(o, _current_indent_level)\r\n File \"/opt/homebrew/Caskroom/miniforge/base/envs/py310-Oct/lib/python3.10/json/encoder.py\", line 405, in _iterencode_dict\r\n yield from chunks\r\n File \"/opt/homebrew/Caskroom/miniforge/base/envs/py310-Oct/lib/python3.10/json/encoder.py\", line 405, in _iterencode_dict\r\n yield from chunks\r\n File \"/opt/homebrew/Caskroom/miniforge/base/envs/py310-Oct/lib/python3.10/json/encoder.py\", line 405, in _iterencode_dict\r\n yield from chunks\r\n [Previous line repeated 1 more time]\r\n File \"/opt/homebrew/Caskroom/miniforge/base/envs/py310-Oct/lib/python3.10/json/encoder.py\", line 438, in _iterencode\r\n o = _default(o)\r\n File \"/opt/homebrew/Caskroom/miniforge/base/envs/py310-Oct/lib/python3.10/json/encoder.py\", line 179, in default\r\n raise TypeError(f'Object of type {o.__class__.__name__} '\r\nTypeError: Object of type DefaultConverter is not JSON serializable\r\n```\r\n\r\n**FAQ Check**\r\n- [ ] Have you had a look at [our new FAQ page](https://docs.haystack.deepset.ai/docs/faq)?\r\n\r\n**System:**\r\n - OS:\r\n - GPU/CPU:\r\n - Haystack version (commit or version number):\r\n - DocumentStore:\r\n - Reader:\r\n - Retriever:\r\n\n", "before_files": [{"content": "import io\nimport logging\nfrom typing import List, Union, Optional, Protocol\nfrom pathlib import Path\n\nfrom haystack.preview.dataclasses import ByteStream\nfrom haystack.preview.lazy_imports import LazyImport\nfrom haystack.preview import Document, component\n\nwith LazyImport(\"Run 'pip install pypdf'\") as pypdf_import:\n from pypdf import PdfReader\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass PyPDFConverter(Protocol):\n \"\"\"\n A protocol that defines a converter which takes a PdfReader object and converts it into a Document object.\n \"\"\"\n\n def convert(self, reader: \"PdfReader\") -> Document:\n ...\n\n\nclass DefaultConverter:\n \"\"\"\n The default converter class that extracts text from a PdfReader object's pages and returns a Document.\n \"\"\"\n\n def convert(self, reader: \"PdfReader\") -> 
Document:\n \"\"\"Extract text from the PDF and return a Document object with the text content.\"\"\"\n text = \"\".join(page.extract_text() for page in reader.pages if page.extract_text())\n return Document(content=text)\n\n\n@component\nclass PyPDFToDocument:\n \"\"\"\n Converts PDF files to Document objects.\n It uses a converter that follows the PyPDFConverter protocol to perform the conversion.\n A default text extraction converter is used if no custom converter is provided.\n \"\"\"\n\n def __init__(self, converter: Optional[PyPDFConverter] = None):\n \"\"\"\n Initializes the PyPDFToDocument component with an optional custom converter.\n :param converter: A converter instance that adheres to the PyPDFConverter protocol.\n If None, the DefaultConverter is used.\n \"\"\"\n pypdf_import.check()\n self.converter: PyPDFConverter = converter or DefaultConverter()\n\n @component.output_types(documents=List[Document])\n def run(self, sources: List[Union[str, Path, ByteStream]]):\n \"\"\"\n Converts a list of PDF sources into Document objects using the configured converter.\n\n :param sources: A list of PDF data sources, which can be file paths or ByteStream objects.\n :return: A dictionary containing a list of Document objects under the 'documents' key.\n \"\"\"\n documents = []\n for source in sources:\n try:\n pdf_reader = self._get_pdf_reader(source)\n document = self.converter.convert(pdf_reader)\n except Exception as e:\n logger.warning(\"Could not read %s and convert it to Document, skipping. %s\", source, e)\n continue\n documents.append(document)\n\n return {\"documents\": documents}\n\n def _get_pdf_reader(self, source: Union[str, Path, ByteStream]) -> \"PdfReader\":\n \"\"\"\n Creates a PdfReader object from a given source, which can be a file path or a ByteStream object.\n\n :param source: The source of the PDF data.\n :return: A PdfReader instance initialized with the PDF data from the source.\n :raises ValueError: If the source type is not supported.\n \"\"\"\n if isinstance(source, (str, Path)):\n return PdfReader(str(source))\n elif isinstance(source, ByteStream):\n return PdfReader(io.BytesIO(source.data))\n else:\n raise ValueError(f\"Unsupported source type: {type(source)}\")\n", "path": "haystack/preview/components/converters/pypdf.py"}], "after_files": [{"content": "import io\nimport logging\nfrom typing import List, Union, Protocol, Dict\nfrom pathlib import Path\n\nfrom haystack.preview.dataclasses import ByteStream\nfrom haystack.preview.lazy_imports import LazyImport\nfrom haystack.preview import Document, component, default_to_dict\n\nwith LazyImport(\"Run 'pip install pypdf'\") as pypdf_import:\n from pypdf import PdfReader\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass PyPDFConverter(Protocol):\n \"\"\"\n A protocol that defines a converter which takes a PdfReader object and converts it into a Document object.\n \"\"\"\n\n def convert(self, reader: \"PdfReader\") -> Document:\n ...\n\n\nclass DefaultConverter:\n \"\"\"\n The default converter class that extracts text from a PdfReader object's pages and returns a Document.\n \"\"\"\n\n def convert(self, reader: \"PdfReader\") -> Document:\n \"\"\"Extract text from the PDF and return a Document object with the text content.\"\"\"\n text = \"\".join(page.extract_text() for page in reader.pages if page.extract_text())\n return Document(content=text)\n\n\n# This registry is used to store converters names and instances.\n# It can be used to register custom converters.\nCONVERTERS_REGISTRY: Dict[str, PyPDFConverter] 
= {\"default\": DefaultConverter()}\n\n\n@component\nclass PyPDFToDocument:\n \"\"\"\n Converts PDF files to Document objects.\n It uses a converter that follows the PyPDFConverter protocol to perform the conversion.\n A default text extraction converter is used if no custom converter is provided.\n \"\"\"\n\n def __init__(self, converter_name: str = \"default\"):\n \"\"\"\n Initializes the PyPDFToDocument component with an optional custom converter.\n :param converter_name: A converter name that is registered in the CONVERTERS_REGISTRY.\n Defaults to 'default'.\n \"\"\"\n pypdf_import.check()\n\n try:\n converter = CONVERTERS_REGISTRY[converter_name]\n except KeyError:\n msg = (\n f\"Invalid converter_name: {converter_name}.\\n Available converters: {list(CONVERTERS_REGISTRY.keys())}\"\n )\n raise ValueError(msg) from KeyError\n self.converter_name = converter_name\n self._converter: PyPDFConverter = converter\n\n def to_dict(self):\n # do not serialize the _converter instance\n return default_to_dict(self, converter_name=self.converter_name)\n\n @component.output_types(documents=List[Document])\n def run(self, sources: List[Union[str, Path, ByteStream]]):\n \"\"\"\n Converts a list of PDF sources into Document objects using the configured converter.\n\n :param sources: A list of PDF data sources, which can be file paths or ByteStream objects.\n :return: A dictionary containing a list of Document objects under the 'documents' key.\n \"\"\"\n documents = []\n for source in sources:\n try:\n pdf_reader = self._get_pdf_reader(source)\n document = self._converter.convert(pdf_reader)\n except Exception as e:\n logger.warning(\"Could not read %s and convert it to Document, skipping. %s\", source, e)\n continue\n documents.append(document)\n\n return {\"documents\": documents}\n\n def _get_pdf_reader(self, source: Union[str, Path, ByteStream]) -> \"PdfReader\":\n \"\"\"\n Creates a PdfReader object from a given source, which can be a file path or a ByteStream object.\n\n :param source: The source of the PDF data.\n :return: A PdfReader instance initialized with the PDF data from the source.\n :raises ValueError: If the source type is not supported.\n \"\"\"\n if isinstance(source, (str, Path)):\n return PdfReader(str(source))\n elif isinstance(source, ByteStream):\n return PdfReader(io.BytesIO(source.data))\n else:\n raise ValueError(f\"Unsupported source type: {type(source)}\")\n", "path": "haystack/preview/components/converters/pypdf.py"}]} | 1,929 | 650 |
gh_patches_debug_16748 | rasdani/github-patches | git_diff | Lightning-Universe__lightning-flash-375 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`val_split` + `overfit_batches` gives infinite recursion error
## 🐛 Bug
`val_split` + `overfit_batches` gives infinite recursion error on master
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `flash/core/data/splits.py`
Content:
```
1 from typing import Any, List
2
3 import numpy as np
4 from pytorch_lightning.utilities.exceptions import MisconfigurationException
5 from torch.utils.data import Dataset
6
7
8 class SplitDataset(Dataset):
9 """
10 SplitDataset is used to create Dataset Subset using indices.
11
12 Args:
13
14 dataset: A dataset to be splitted
15 indices: List of indices to expose from the dataset
16 use_duplicated_indices: Wether to allow duplicated indices.
17
18 Example::
19
20 split_ds = SplitDataset(dataset, indices=[10, 14, 25])
21
22 split_ds = SplitDataset(dataset, indices=[10, 10, 10, 14, 25], use_duplicated_indices=True)
23
24 """
25
26 _INTERNAL_KEYS = ("dataset", "indices", "data")
27
28 def __init__(self, dataset: Any, indices: List[int] = [], use_duplicated_indices: bool = False) -> None:
29 if not isinstance(indices, list):
30 raise MisconfigurationException("indices should be a list")
31
32 if use_duplicated_indices:
33 indices = list(indices)
34 else:
35 indices = list(np.unique(indices))
36
37 if np.max(indices) >= len(dataset) or np.min(indices) < 0:
38 raise MisconfigurationException(f"`indices` should be within [0, {len(dataset) -1}].")
39
40 self.dataset = dataset
41 self.indices = indices
42
43 def __getattr__(self, key: str):
44 if key in self._INTERNAL_KEYS:
45 return getattr(self, key)
46 return getattr(self.dataset, key)
47
48 def __setattr__(self, name: str, value: Any) -> None:
49 if name in self._INTERNAL_KEYS:
50 self.__dict__[name] = value
51 else:
52 setattr(self.dataset, name, value)
53
54 def __getitem__(self, index: int) -> Any:
55 return self.dataset[self.indices[index]]
56
57 def __len__(self) -> int:
58 return len(self.indices)
59
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/flash/core/data/splits.py b/flash/core/data/splits.py
--- a/flash/core/data/splits.py
+++ b/flash/core/data/splits.py
@@ -13,7 +13,7 @@
dataset: A dataset to be splitted
indices: List of indices to expose from the dataset
- use_duplicated_indices: Wether to allow duplicated indices.
+ use_duplicated_indices: Whether to allow duplicated indices.
Example::
@@ -41,9 +41,9 @@
self.indices = indices
def __getattr__(self, key: str):
- if key in self._INTERNAL_KEYS:
- return getattr(self, key)
- return getattr(self.dataset, key)
+ if key not in self._INTERNAL_KEYS:
+ return self.dataset.__getattribute__(key)
+ raise AttributeError
def __setattr__(self, name: str, value: Any) -> None:
if name in self._INTERNAL_KEYS:
| {"golden_diff": "diff --git a/flash/core/data/splits.py b/flash/core/data/splits.py\n--- a/flash/core/data/splits.py\n+++ b/flash/core/data/splits.py\n@@ -13,7 +13,7 @@\n \n dataset: A dataset to be splitted\n indices: List of indices to expose from the dataset\n- use_duplicated_indices: Wether to allow duplicated indices.\n+ use_duplicated_indices: Whether to allow duplicated indices.\n \n Example::\n \n@@ -41,9 +41,9 @@\n self.indices = indices\n \n def __getattr__(self, key: str):\n- if key in self._INTERNAL_KEYS:\n- return getattr(self, key)\n- return getattr(self.dataset, key)\n+ if key not in self._INTERNAL_KEYS:\n+ return self.dataset.__getattribute__(key)\n+ raise AttributeError\n \n def __setattr__(self, name: str, value: Any) -> None:\n if name in self._INTERNAL_KEYS:\n", "issue": "`val_split` + `overfit_batches` gives infinite recursion error\n## \ud83d\udc1b Bug\r\n\r\n`val_split` + `overfit_batches` gives infinite recursion error on master\n", "before_files": [{"content": "from typing import Any, List\n\nimport numpy as np\nfrom pytorch_lightning.utilities.exceptions import MisconfigurationException\nfrom torch.utils.data import Dataset\n\n\nclass SplitDataset(Dataset):\n \"\"\"\n SplitDataset is used to create Dataset Subset using indices.\n\n Args:\n\n dataset: A dataset to be splitted\n indices: List of indices to expose from the dataset\n use_duplicated_indices: Wether to allow duplicated indices.\n\n Example::\n\n split_ds = SplitDataset(dataset, indices=[10, 14, 25])\n\n split_ds = SplitDataset(dataset, indices=[10, 10, 10, 14, 25], use_duplicated_indices=True)\n\n \"\"\"\n\n _INTERNAL_KEYS = (\"dataset\", \"indices\", \"data\")\n\n def __init__(self, dataset: Any, indices: List[int] = [], use_duplicated_indices: bool = False) -> None:\n if not isinstance(indices, list):\n raise MisconfigurationException(\"indices should be a list\")\n\n if use_duplicated_indices:\n indices = list(indices)\n else:\n indices = list(np.unique(indices))\n\n if np.max(indices) >= len(dataset) or np.min(indices) < 0:\n raise MisconfigurationException(f\"`indices` should be within [0, {len(dataset) -1}].\")\n\n self.dataset = dataset\n self.indices = indices\n\n def __getattr__(self, key: str):\n if key in self._INTERNAL_KEYS:\n return getattr(self, key)\n return getattr(self.dataset, key)\n\n def __setattr__(self, name: str, value: Any) -> None:\n if name in self._INTERNAL_KEYS:\n self.__dict__[name] = value\n else:\n setattr(self.dataset, name, value)\n\n def __getitem__(self, index: int) -> Any:\n return self.dataset[self.indices[index]]\n\n def __len__(self) -> int:\n return len(self.indices)\n", "path": "flash/core/data/splits.py"}], "after_files": [{"content": "from typing import Any, List\n\nimport numpy as np\nfrom pytorch_lightning.utilities.exceptions import MisconfigurationException\nfrom torch.utils.data import Dataset\n\n\nclass SplitDataset(Dataset):\n \"\"\"\n SplitDataset is used to create Dataset Subset using indices.\n\n Args:\n\n dataset: A dataset to be splitted\n indices: List of indices to expose from the dataset\n use_duplicated_indices: Whether to allow duplicated indices.\n\n Example::\n\n split_ds = SplitDataset(dataset, indices=[10, 14, 25])\n\n split_ds = SplitDataset(dataset, indices=[10, 10, 10, 14, 25], use_duplicated_indices=True)\n\n \"\"\"\n\n _INTERNAL_KEYS = (\"dataset\", \"indices\", \"data\")\n\n def __init__(self, dataset: Any, indices: List[int] = [], use_duplicated_indices: bool = False) -> None:\n if not isinstance(indices, list):\n raise 
MisconfigurationException(\"indices should be a list\")\n\n if use_duplicated_indices:\n indices = list(indices)\n else:\n indices = list(np.unique(indices))\n\n if np.max(indices) >= len(dataset) or np.min(indices) < 0:\n raise MisconfigurationException(f\"`indices` should be within [0, {len(dataset) -1}].\")\n\n self.dataset = dataset\n self.indices = indices\n\n def __getattr__(self, key: str):\n if key not in self._INTERNAL_KEYS:\n return self.dataset.__getattribute__(key)\n raise AttributeError\n\n def __setattr__(self, name: str, value: Any) -> None:\n if name in self._INTERNAL_KEYS:\n self.__dict__[name] = value\n else:\n setattr(self.dataset, name, value)\n\n def __getitem__(self, index: int) -> Any:\n return self.dataset[self.indices[index]]\n\n def __len__(self) -> int:\n return len(self.indices)\n", "path": "flash/core/data/splits.py"}]} | 839 | 216 |
gh_patches_debug_7021 | rasdani/github-patches | git_diff | zulip__zulip-26839 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Update documentation on previews permissions
The help center should be updated following #27783.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `zerver/lib/url_redirects.py`
Content:
```
1 from dataclasses import dataclass
2 from typing import List
3
4
5 @dataclass
6 class URLRedirect:
7 old_url: str
8 new_url: str
9
10
11 API_DOCUMENTATION_REDIRECTS: List[URLRedirect] = [
12 # Add URL redirects for REST API documentation here:
13 URLRedirect("/api/delete-stream", "/api/archive-stream"),
14 ]
15
16 POLICY_DOCUMENTATION_REDIRECTS: List[URLRedirect] = [
17 # Add URL redirects for policy documentation here:
18 URLRedirect("/privacy/", "/policies/privacy"),
19 URLRedirect("/terms/", "/policies/terms"),
20 ]
21
22 HELP_DOCUMENTATION_REDIRECTS: List[URLRedirect] = [
23 # Add URL redirects for help center documentation here:
24 URLRedirect("/help/pm-mention-alert-notifications", "/help/dm-mention-alert-notifications"),
25 URLRedirect("/help/restrict-private-messages", "/help/restrict-direct-messages"),
26 URLRedirect("/help/reading-pms", "/help/reading-dms"),
27 URLRedirect("/help/private-messages", "/help/direct-messages"),
28 URLRedirect("/help/configure-who-can-edit-topics", "/help/restrict-moving-messages"),
29 URLRedirect(
30 "/help/configure-message-editing-and-deletion",
31 "/help/restrict-message-editing-and-deletion",
32 ),
33 URLRedirect("/help/restrict-visibility-of-email-addresses", "/help/configure-email-visibility"),
34 URLRedirect("/help/change-default-view", "/help/configure-default-view"),
35 URLRedirect("/help/recent-topics", "/help/recent-conversations"),
36 URLRedirect(
37 "/help/add-custom-profile-fields",
38 "/help/custom-profile-fields",
39 ),
40 URLRedirect(
41 "/help/enable-enter-to-send",
42 "/help/mastering-the-compose-box#toggle-between-ctrl-enter-and-enter-to-send-a-message",
43 ),
44 URLRedirect(
45 "/help/change-the-default-language-for-your-organization",
46 "/help/configure-organization-language",
47 ),
48 URLRedirect("/help/delete-a-stream", "/help/archive-a-stream"),
49 URLRedirect("/help/change-the-topic-of-a-message", "/help/rename-a-topic"),
50 URLRedirect("/help/configure-missed-message-emails", "/help/email-notifications"),
51 URLRedirect("/help/add-an-alert-word", "/help/dm-mention-alert-notifications#alert-words"),
52 URLRedirect("/help/test-mobile-notifications", "/help/mobile-notifications"),
53 URLRedirect(
54 "/help/troubleshooting-desktop-notifications",
55 "/help/desktop-notifications#troubleshooting-desktop-notifications",
56 ),
57 URLRedirect(
58 "/help/change-notification-sound", "/help/desktop-notifications#change-notification-sound"
59 ),
60 URLRedirect("/help/configure-message-notification-emails", "/help/email-notifications"),
61 URLRedirect("/help/disable-new-login-emails", "/help/email-notifications#new-login-emails"),
62 # The `help/about-streams-and-topics` redirect is particularly important,
63 # because the old URL appears in links from Welcome Bot messages.
64 URLRedirect("/help/about-streams-and-topics", "/help/streams-and-topics"),
65 URLRedirect("/help/community-topic-edits", "/help/restrict-moving-messages"),
66 URLRedirect(
67 "/help/only-allow-admins-to-add-emoji", "/help/custom-emoji#change-who-can-add-custom-emoji"
68 ),
69 URLRedirect(
70 "/help/configure-who-can-add-custom-emoji",
71 "/help/custom-emoji#change-who-can-add-custom-emoji",
72 ),
73 URLRedirect("/help/add-custom-emoji", "/help/custom-emoji"),
74 URLRedirect("/help/night-mode", "/help/dark-theme"),
75 URLRedirect("/help/enable-emoticon-translations", "/help/configure-emoticon-translations"),
76 URLRedirect("/help/web-public-streams", "/help/public-access-option"),
77 URLRedirect("/help/starting-a-new-private-thread", "/help/starting-a-new-direct-message"),
78 URLRedirect("/help/edit-or-delete-a-message", "/help/delete-a-message"),
79 URLRedirect("/help/start-a-new-topic", "/help/starting-a-new-topic"),
80 URLRedirect("/help/configure-default-view", "/help/configure-home-view"),
81 URLRedirect("/help/reading-topics", "/help/reading-conversations"),
82 URLRedirect("/help/finding-a-topic-to-read", "/help/finding-a-conversation-to-read"),
83 ]
84
85 LANDING_PAGE_REDIRECTS = [
86 # Add URL redirects for corporate landing pages here.
87 URLRedirect("/new-user/", "/hello/"),
88 URLRedirect("/developer-community/", "/development-community"),
89 URLRedirect("/for/companies/", "/for/business"),
90 URLRedirect("/for/working-groups-and-communities/", "/for/communities"),
91 ]
92
93 DOCUMENTATION_REDIRECTS = (
94 API_DOCUMENTATION_REDIRECTS + POLICY_DOCUMENTATION_REDIRECTS + HELP_DOCUMENTATION_REDIRECTS
95 )
96
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/zerver/lib/url_redirects.py b/zerver/lib/url_redirects.py
--- a/zerver/lib/url_redirects.py
+++ b/zerver/lib/url_redirects.py
@@ -80,6 +80,7 @@
URLRedirect("/help/configure-default-view", "/help/configure-home-view"),
URLRedirect("/help/reading-topics", "/help/reading-conversations"),
URLRedirect("/help/finding-a-topic-to-read", "/help/finding-a-conversation-to-read"),
+ URLRedirect("/help/view-and-browse-images", "/help/view-images-and-videos"),
]
LANDING_PAGE_REDIRECTS = [
| {"golden_diff": "diff --git a/zerver/lib/url_redirects.py b/zerver/lib/url_redirects.py\n--- a/zerver/lib/url_redirects.py\n+++ b/zerver/lib/url_redirects.py\n@@ -80,6 +80,7 @@\n URLRedirect(\"/help/configure-default-view\", \"/help/configure-home-view\"),\n URLRedirect(\"/help/reading-topics\", \"/help/reading-conversations\"),\n URLRedirect(\"/help/finding-a-topic-to-read\", \"/help/finding-a-conversation-to-read\"),\n+ URLRedirect(\"/help/view-and-browse-images\", \"/help/view-images-and-videos\"),\n ]\n \n LANDING_PAGE_REDIRECTS = [\n", "issue": "Update documentation on previews permissions\nThe help center should be updated following #27783.\n", "before_files": [{"content": "from dataclasses import dataclass\nfrom typing import List\n\n\n@dataclass\nclass URLRedirect:\n old_url: str\n new_url: str\n\n\nAPI_DOCUMENTATION_REDIRECTS: List[URLRedirect] = [\n # Add URL redirects for REST API documentation here:\n URLRedirect(\"/api/delete-stream\", \"/api/archive-stream\"),\n]\n\nPOLICY_DOCUMENTATION_REDIRECTS: List[URLRedirect] = [\n # Add URL redirects for policy documentation here:\n URLRedirect(\"/privacy/\", \"/policies/privacy\"),\n URLRedirect(\"/terms/\", \"/policies/terms\"),\n]\n\nHELP_DOCUMENTATION_REDIRECTS: List[URLRedirect] = [\n # Add URL redirects for help center documentation here:\n URLRedirect(\"/help/pm-mention-alert-notifications\", \"/help/dm-mention-alert-notifications\"),\n URLRedirect(\"/help/restrict-private-messages\", \"/help/restrict-direct-messages\"),\n URLRedirect(\"/help/reading-pms\", \"/help/reading-dms\"),\n URLRedirect(\"/help/private-messages\", \"/help/direct-messages\"),\n URLRedirect(\"/help/configure-who-can-edit-topics\", \"/help/restrict-moving-messages\"),\n URLRedirect(\n \"/help/configure-message-editing-and-deletion\",\n \"/help/restrict-message-editing-and-deletion\",\n ),\n URLRedirect(\"/help/restrict-visibility-of-email-addresses\", \"/help/configure-email-visibility\"),\n URLRedirect(\"/help/change-default-view\", \"/help/configure-default-view\"),\n URLRedirect(\"/help/recent-topics\", \"/help/recent-conversations\"),\n URLRedirect(\n \"/help/add-custom-profile-fields\",\n \"/help/custom-profile-fields\",\n ),\n URLRedirect(\n \"/help/enable-enter-to-send\",\n \"/help/mastering-the-compose-box#toggle-between-ctrl-enter-and-enter-to-send-a-message\",\n ),\n URLRedirect(\n \"/help/change-the-default-language-for-your-organization\",\n \"/help/configure-organization-language\",\n ),\n URLRedirect(\"/help/delete-a-stream\", \"/help/archive-a-stream\"),\n URLRedirect(\"/help/change-the-topic-of-a-message\", \"/help/rename-a-topic\"),\n URLRedirect(\"/help/configure-missed-message-emails\", \"/help/email-notifications\"),\n URLRedirect(\"/help/add-an-alert-word\", \"/help/dm-mention-alert-notifications#alert-words\"),\n URLRedirect(\"/help/test-mobile-notifications\", \"/help/mobile-notifications\"),\n URLRedirect(\n \"/help/troubleshooting-desktop-notifications\",\n \"/help/desktop-notifications#troubleshooting-desktop-notifications\",\n ),\n URLRedirect(\n \"/help/change-notification-sound\", \"/help/desktop-notifications#change-notification-sound\"\n ),\n URLRedirect(\"/help/configure-message-notification-emails\", \"/help/email-notifications\"),\n URLRedirect(\"/help/disable-new-login-emails\", \"/help/email-notifications#new-login-emails\"),\n # The `help/about-streams-and-topics` redirect is particularly important,\n # because the old URL appears in links from Welcome Bot messages.\n URLRedirect(\"/help/about-streams-and-topics\", 
\"/help/streams-and-topics\"),\n URLRedirect(\"/help/community-topic-edits\", \"/help/restrict-moving-messages\"),\n URLRedirect(\n \"/help/only-allow-admins-to-add-emoji\", \"/help/custom-emoji#change-who-can-add-custom-emoji\"\n ),\n URLRedirect(\n \"/help/configure-who-can-add-custom-emoji\",\n \"/help/custom-emoji#change-who-can-add-custom-emoji\",\n ),\n URLRedirect(\"/help/add-custom-emoji\", \"/help/custom-emoji\"),\n URLRedirect(\"/help/night-mode\", \"/help/dark-theme\"),\n URLRedirect(\"/help/enable-emoticon-translations\", \"/help/configure-emoticon-translations\"),\n URLRedirect(\"/help/web-public-streams\", \"/help/public-access-option\"),\n URLRedirect(\"/help/starting-a-new-private-thread\", \"/help/starting-a-new-direct-message\"),\n URLRedirect(\"/help/edit-or-delete-a-message\", \"/help/delete-a-message\"),\n URLRedirect(\"/help/start-a-new-topic\", \"/help/starting-a-new-topic\"),\n URLRedirect(\"/help/configure-default-view\", \"/help/configure-home-view\"),\n URLRedirect(\"/help/reading-topics\", \"/help/reading-conversations\"),\n URLRedirect(\"/help/finding-a-topic-to-read\", \"/help/finding-a-conversation-to-read\"),\n]\n\nLANDING_PAGE_REDIRECTS = [\n # Add URL redirects for corporate landing pages here.\n URLRedirect(\"/new-user/\", \"/hello/\"),\n URLRedirect(\"/developer-community/\", \"/development-community\"),\n URLRedirect(\"/for/companies/\", \"/for/business\"),\n URLRedirect(\"/for/working-groups-and-communities/\", \"/for/communities\"),\n]\n\nDOCUMENTATION_REDIRECTS = (\n API_DOCUMENTATION_REDIRECTS + POLICY_DOCUMENTATION_REDIRECTS + HELP_DOCUMENTATION_REDIRECTS\n)\n", "path": "zerver/lib/url_redirects.py"}], "after_files": [{"content": "from dataclasses import dataclass\nfrom typing import List\n\n\n@dataclass\nclass URLRedirect:\n old_url: str\n new_url: str\n\n\nAPI_DOCUMENTATION_REDIRECTS: List[URLRedirect] = [\n # Add URL redirects for REST API documentation here:\n URLRedirect(\"/api/delete-stream\", \"/api/archive-stream\"),\n]\n\nPOLICY_DOCUMENTATION_REDIRECTS: List[URLRedirect] = [\n # Add URL redirects for policy documentation here:\n URLRedirect(\"/privacy/\", \"/policies/privacy\"),\n URLRedirect(\"/terms/\", \"/policies/terms\"),\n]\n\nHELP_DOCUMENTATION_REDIRECTS: List[URLRedirect] = [\n # Add URL redirects for help center documentation here:\n URLRedirect(\"/help/pm-mention-alert-notifications\", \"/help/dm-mention-alert-notifications\"),\n URLRedirect(\"/help/restrict-private-messages\", \"/help/restrict-direct-messages\"),\n URLRedirect(\"/help/reading-pms\", \"/help/reading-dms\"),\n URLRedirect(\"/help/private-messages\", \"/help/direct-messages\"),\n URLRedirect(\"/help/configure-who-can-edit-topics\", \"/help/restrict-moving-messages\"),\n URLRedirect(\n \"/help/configure-message-editing-and-deletion\",\n \"/help/restrict-message-editing-and-deletion\",\n ),\n URLRedirect(\"/help/restrict-visibility-of-email-addresses\", \"/help/configure-email-visibility\"),\n URLRedirect(\"/help/change-default-view\", \"/help/configure-default-view\"),\n URLRedirect(\"/help/recent-topics\", \"/help/recent-conversations\"),\n URLRedirect(\n \"/help/add-custom-profile-fields\",\n \"/help/custom-profile-fields\",\n ),\n URLRedirect(\n \"/help/enable-enter-to-send\",\n \"/help/mastering-the-compose-box#toggle-between-ctrl-enter-and-enter-to-send-a-message\",\n ),\n URLRedirect(\n \"/help/change-the-default-language-for-your-organization\",\n \"/help/configure-organization-language\",\n ),\n URLRedirect(\"/help/delete-a-stream\", 
\"/help/archive-a-stream\"),\n URLRedirect(\"/help/change-the-topic-of-a-message\", \"/help/rename-a-topic\"),\n URLRedirect(\"/help/configure-missed-message-emails\", \"/help/email-notifications\"),\n URLRedirect(\"/help/add-an-alert-word\", \"/help/dm-mention-alert-notifications#alert-words\"),\n URLRedirect(\"/help/test-mobile-notifications\", \"/help/mobile-notifications\"),\n URLRedirect(\n \"/help/troubleshooting-desktop-notifications\",\n \"/help/desktop-notifications#troubleshooting-desktop-notifications\",\n ),\n URLRedirect(\n \"/help/change-notification-sound\", \"/help/desktop-notifications#change-notification-sound\"\n ),\n URLRedirect(\"/help/configure-message-notification-emails\", \"/help/email-notifications\"),\n URLRedirect(\"/help/disable-new-login-emails\", \"/help/email-notifications#new-login-emails\"),\n # The `help/about-streams-and-topics` redirect is particularly important,\n # because the old URL appears in links from Welcome Bot messages.\n URLRedirect(\"/help/about-streams-and-topics\", \"/help/streams-and-topics\"),\n URLRedirect(\"/help/community-topic-edits\", \"/help/restrict-moving-messages\"),\n URLRedirect(\n \"/help/only-allow-admins-to-add-emoji\", \"/help/custom-emoji#change-who-can-add-custom-emoji\"\n ),\n URLRedirect(\n \"/help/configure-who-can-add-custom-emoji\",\n \"/help/custom-emoji#change-who-can-add-custom-emoji\",\n ),\n URLRedirect(\"/help/add-custom-emoji\", \"/help/custom-emoji\"),\n URLRedirect(\"/help/night-mode\", \"/help/dark-theme\"),\n URLRedirect(\"/help/enable-emoticon-translations\", \"/help/configure-emoticon-translations\"),\n URLRedirect(\"/help/web-public-streams\", \"/help/public-access-option\"),\n URLRedirect(\"/help/starting-a-new-private-thread\", \"/help/starting-a-new-direct-message\"),\n URLRedirect(\"/help/edit-or-delete-a-message\", \"/help/delete-a-message\"),\n URLRedirect(\"/help/start-a-new-topic\", \"/help/starting-a-new-topic\"),\n URLRedirect(\"/help/configure-default-view\", \"/help/configure-home-view\"),\n URLRedirect(\"/help/reading-topics\", \"/help/reading-conversations\"),\n URLRedirect(\"/help/finding-a-topic-to-read\", \"/help/finding-a-conversation-to-read\"),\n URLRedirect(\"/help/view-and-browse-images\", \"/help/view-images-and-videos\"),\n]\n\nLANDING_PAGE_REDIRECTS = [\n # Add URL redirects for corporate landing pages here.\n URLRedirect(\"/new-user/\", \"/hello/\"),\n URLRedirect(\"/developer-community/\", \"/development-community\"),\n URLRedirect(\"/for/companies/\", \"/for/business\"),\n URLRedirect(\"/for/working-groups-and-communities/\", \"/for/communities\"),\n]\n\nDOCUMENTATION_REDIRECTS = (\n API_DOCUMENTATION_REDIRECTS + POLICY_DOCUMENTATION_REDIRECTS + HELP_DOCUMENTATION_REDIRECTS\n)\n", "path": "zerver/lib/url_redirects.py"}]} | 1,482 | 137 |
gh_patches_debug_32222 | rasdani/github-patches | git_diff | explosion__spaCy-3281 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Incorrect tokenization of dash punctuation in Spanish
In Spanish text, the conventions for using dashes and em-dashes as punctuation seem to be considerably different from those in English. spaCy often does not tokenize the dash or em-dash as a separate token, instead keeping it attached to the closest word.
For example, the Spanish sentence:
—Yo me llamo... –murmuró el niño– Emilio Sánchez Pérez.
English Translation:
"My name is...", murmured the boy, "Emilio Sanchez Perez."
Here, the Spanish dash is used like a comma. The em-dash at the beginning of the sentence is used like a double quote. I believe that the fact that there is no space between the dash and word is throwing off the tokenizer.
The Spanish sentence above is tokenized as:
—Yo
me
llamo
...
–murmuró
el
niño–
Emilio
Sánchez
Pérez
.
I would expect the tokenization to be
—
Yo
me
llamo
...
–
murmuró
el
niño
–
Emilio
Sánchez
Pérez
.
## Your Environment
* **spaCy version:** 2.0.12
* **Platform:** Darwin-18.0.0-x86_64-i386-64bit
* **Python version:** 3.7.0
* **Models:** de, es, en
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `spacy/lang/punctuation.py`
Content:
```
1 # coding: utf8
2 from __future__ import unicode_literals
3
4 from .char_classes import LIST_PUNCT, LIST_ELLIPSES, LIST_QUOTES, LIST_CURRENCY, LIST_ICONS
5 from .char_classes import HYPHENS
6 from .char_classes import CURRENCY, UNITS
7 from .char_classes import CONCAT_QUOTES, ALPHA_LOWER, ALPHA_UPPER, ALPHA
8
9
10 _prefixes = (
11 ["§", "%", "=", r"\+(?![0-9])"]
12 + LIST_PUNCT
13 + LIST_ELLIPSES
14 + LIST_QUOTES
15 + LIST_CURRENCY
16 + LIST_ICONS
17 )
18
19
20 _suffixes = (
21 LIST_PUNCT
22 + LIST_ELLIPSES
23 + LIST_QUOTES
24 + LIST_ICONS
25 + ["'s", "'S", "’s", "’S"]
26 + [
27 r"(?<=[0-9])\+",
28 r"(?<=°[FfCcKk])\.",
29 r"(?<=[0-9])(?:{c})".format(c=CURRENCY),
30 r"(?<=[0-9])(?:{u})".format(u=UNITS),
31 r"(?<=[0-9{al}{e}(?:{q})])\.".format(al=ALPHA_LOWER, e=r"%²\-\+", q=CONCAT_QUOTES),
32 r"(?<=[{au}][{au}])\.".format(au=ALPHA_UPPER),
33 ]
34 )
35
36 _infixes = (
37 LIST_ELLIPSES
38 + LIST_ICONS
39 + [
40 r"(?<=[0-9])[+\-\*^](?=[0-9-])",
41 r"(?<=[{al}])\.(?=[{au}])".format(al=ALPHA_LOWER, au=ALPHA_UPPER),
42 r"(?<=[{a}]),(?=[{a}])".format(a=ALPHA),
43 r'(?<=[{a}])(?:{h})(?=[{a}])'.format(a=ALPHA, h=HYPHENS),
44 r'(?<=[{a}])[:<>=/](?=[{a}])'.format(a=ALPHA),
45 ]
46 )
47
48 TOKENIZER_PREFIXES = _prefixes
49 TOKENIZER_SUFFIXES = _suffixes
50 TOKENIZER_INFIXES = _infixes
51
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/spacy/lang/punctuation.py b/spacy/lang/punctuation.py
--- a/spacy/lang/punctuation.py
+++ b/spacy/lang/punctuation.py
@@ -1,14 +1,13 @@
# coding: utf8
from __future__ import unicode_literals
-from .char_classes import LIST_PUNCT, LIST_ELLIPSES, LIST_QUOTES, LIST_CURRENCY, LIST_ICONS
-from .char_classes import HYPHENS
-from .char_classes import CURRENCY, UNITS
+from .char_classes import LIST_PUNCT, LIST_ELLIPSES, LIST_QUOTES, LIST_CURRENCY
+from .char_classes import LIST_ICONS, HYPHENS, CURRENCY, UNITS
from .char_classes import CONCAT_QUOTES, ALPHA_LOWER, ALPHA_UPPER, ALPHA
_prefixes = (
- ["§", "%", "=", r"\+(?![0-9])"]
+ ["§", "%", "=", "—", "–", r"\+(?![0-9])"]
+ LIST_PUNCT
+ LIST_ELLIPSES
+ LIST_QUOTES
@@ -22,13 +21,15 @@
+ LIST_ELLIPSES
+ LIST_QUOTES
+ LIST_ICONS
- + ["'s", "'S", "’s", "’S"]
+ + ["'s", "'S", "’s", "’S", "—", "–"]
+ [
r"(?<=[0-9])\+",
r"(?<=°[FfCcKk])\.",
r"(?<=[0-9])(?:{c})".format(c=CURRENCY),
r"(?<=[0-9])(?:{u})".format(u=UNITS),
- r"(?<=[0-9{al}{e}(?:{q})])\.".format(al=ALPHA_LOWER, e=r"%²\-\+", q=CONCAT_QUOTES),
+ r"(?<=[0-9{al}{e}(?:{q})])\.".format(
+ al=ALPHA_LOWER, e=r"%²\-\+", q=CONCAT_QUOTES
+ ),
r"(?<=[{au}][{au}])\.".format(au=ALPHA_UPPER),
]
)
@@ -40,8 +41,8 @@
r"(?<=[0-9])[+\-\*^](?=[0-9-])",
r"(?<=[{al}])\.(?=[{au}])".format(al=ALPHA_LOWER, au=ALPHA_UPPER),
r"(?<=[{a}]),(?=[{a}])".format(a=ALPHA),
- r'(?<=[{a}])(?:{h})(?=[{a}])'.format(a=ALPHA, h=HYPHENS),
- r'(?<=[{a}])[:<>=/](?=[{a}])'.format(a=ALPHA),
+ r"(?<=[{a}])(?:{h})(?=[{a}])".format(a=ALPHA, h=HYPHENS),
+ r"(?<=[{a}])[:<>=/](?=[{a}])".format(a=ALPHA),
]
)
| {"golden_diff": "diff --git a/spacy/lang/punctuation.py b/spacy/lang/punctuation.py\n--- a/spacy/lang/punctuation.py\n+++ b/spacy/lang/punctuation.py\n@@ -1,14 +1,13 @@\n # coding: utf8\n from __future__ import unicode_literals\n \n-from .char_classes import LIST_PUNCT, LIST_ELLIPSES, LIST_QUOTES, LIST_CURRENCY, LIST_ICONS\n-from .char_classes import HYPHENS\n-from .char_classes import CURRENCY, UNITS\n+from .char_classes import LIST_PUNCT, LIST_ELLIPSES, LIST_QUOTES, LIST_CURRENCY\n+from .char_classes import LIST_ICONS, HYPHENS, CURRENCY, UNITS\n from .char_classes import CONCAT_QUOTES, ALPHA_LOWER, ALPHA_UPPER, ALPHA\n \n \n _prefixes = (\n- [\"\u00a7\", \"%\", \"=\", r\"\\+(?![0-9])\"]\n+ [\"\u00a7\", \"%\", \"=\", \"\u2014\", \"\u2013\", r\"\\+(?![0-9])\"]\n + LIST_PUNCT\n + LIST_ELLIPSES\n + LIST_QUOTES\n@@ -22,13 +21,15 @@\n + LIST_ELLIPSES\n + LIST_QUOTES\n + LIST_ICONS\n- + [\"'s\", \"'S\", \"\u2019s\", \"\u2019S\"]\n+ + [\"'s\", \"'S\", \"\u2019s\", \"\u2019S\", \"\u2014\", \"\u2013\"]\n + [\n r\"(?<=[0-9])\\+\",\n r\"(?<=\u00b0[FfCcKk])\\.\",\n r\"(?<=[0-9])(?:{c})\".format(c=CURRENCY),\n r\"(?<=[0-9])(?:{u})\".format(u=UNITS),\n- r\"(?<=[0-9{al}{e}(?:{q})])\\.\".format(al=ALPHA_LOWER, e=r\"%\u00b2\\-\\+\", q=CONCAT_QUOTES),\n+ r\"(?<=[0-9{al}{e}(?:{q})])\\.\".format(\n+ al=ALPHA_LOWER, e=r\"%\u00b2\\-\\+\", q=CONCAT_QUOTES\n+ ),\n r\"(?<=[{au}][{au}])\\.\".format(au=ALPHA_UPPER),\n ]\n )\n@@ -40,8 +41,8 @@\n r\"(?<=[0-9])[+\\-\\*^](?=[0-9-])\",\n r\"(?<=[{al}])\\.(?=[{au}])\".format(al=ALPHA_LOWER, au=ALPHA_UPPER),\n r\"(?<=[{a}]),(?=[{a}])\".format(a=ALPHA),\n- r'(?<=[{a}])(?:{h})(?=[{a}])'.format(a=ALPHA, h=HYPHENS),\n- r'(?<=[{a}])[:<>=/](?=[{a}])'.format(a=ALPHA),\n+ r\"(?<=[{a}])(?:{h})(?=[{a}])\".format(a=ALPHA, h=HYPHENS),\n+ r\"(?<=[{a}])[:<>=/](?=[{a}])\".format(a=ALPHA),\n ]\n )\n", "issue": "Incorrect tokenization of dash punctuation in Spanish\nIn Spanish text, the conventions for using dashes and em-dashes as punctuation seems to be considerably different than in English. Spacy often does not tokenize the dash or em-dash as a separate token, instead keeping it attached to the closest word.\r\n\r\nFor example, the Spanish sentence:\r\n\u2014Yo me llamo... \u2013murmur\u00f3 el ni\u00f1o\u2013 Emilio S\u00e1nchez P\u00e9rez.\r\nEnglish Translation:\r\n\"My name is...\", murmured the boy, \"Emilio Sanchez Perez.\"\r\n\r\nHere, the Spanish dash is used like a comma. The em-dash at the beginning of the sentence is used like a double quote. 
I believe that the fact that there is no space between the dash and word is throwing off the tokenizer.\r\n\r\nThe Spanish sentence above is tokenized as:\r\n\u2014Yo\r\nme\r\nllamo\r\n...\r\n\u2013murmur\u00f3\r\nel\r\nni\u00f1o\u2013\r\nEmilio\r\nS\u00e1nchez\r\nP\u00e9rez\r\n.\r\n\r\nI would expect the tokenization to be\r\n\u2014\r\nYo\r\nme\r\nllamo\r\n...\r\n\u2013\r\nmurmur\u00f3\r\nel\r\nni\u00f1o\r\n\u2013\r\nEmilio\r\nS\u00e1nchez\r\nP\u00e9rez\r\n.\r\n\r\n## Your Environment\r\n* **spaCy version:** 2.0.12\r\n* **Platform:** Darwin-18.0.0-x86_64-i386-64bit\r\n* **Python version:** 3.7.0\r\n* **Models:** de, es, en\r\n\n", "before_files": [{"content": "# coding: utf8\nfrom __future__ import unicode_literals\n\nfrom .char_classes import LIST_PUNCT, LIST_ELLIPSES, LIST_QUOTES, LIST_CURRENCY, LIST_ICONS\nfrom .char_classes import HYPHENS\nfrom .char_classes import CURRENCY, UNITS\nfrom .char_classes import CONCAT_QUOTES, ALPHA_LOWER, ALPHA_UPPER, ALPHA\n\n\n_prefixes = (\n [\"\u00a7\", \"%\", \"=\", r\"\\+(?![0-9])\"]\n + LIST_PUNCT\n + LIST_ELLIPSES\n + LIST_QUOTES\n + LIST_CURRENCY\n + LIST_ICONS\n)\n\n\n_suffixes = (\n LIST_PUNCT\n + LIST_ELLIPSES\n + LIST_QUOTES\n + LIST_ICONS\n + [\"'s\", \"'S\", \"\u2019s\", \"\u2019S\"]\n + [\n r\"(?<=[0-9])\\+\",\n r\"(?<=\u00b0[FfCcKk])\\.\",\n r\"(?<=[0-9])(?:{c})\".format(c=CURRENCY),\n r\"(?<=[0-9])(?:{u})\".format(u=UNITS),\n r\"(?<=[0-9{al}{e}(?:{q})])\\.\".format(al=ALPHA_LOWER, e=r\"%\u00b2\\-\\+\", q=CONCAT_QUOTES),\n r\"(?<=[{au}][{au}])\\.\".format(au=ALPHA_UPPER),\n ]\n)\n\n_infixes = (\n LIST_ELLIPSES\n + LIST_ICONS\n + [\n r\"(?<=[0-9])[+\\-\\*^](?=[0-9-])\",\n r\"(?<=[{al}])\\.(?=[{au}])\".format(al=ALPHA_LOWER, au=ALPHA_UPPER),\n r\"(?<=[{a}]),(?=[{a}])\".format(a=ALPHA),\n r'(?<=[{a}])(?:{h})(?=[{a}])'.format(a=ALPHA, h=HYPHENS),\n r'(?<=[{a}])[:<>=/](?=[{a}])'.format(a=ALPHA),\n ]\n)\n\nTOKENIZER_PREFIXES = _prefixes\nTOKENIZER_SUFFIXES = _suffixes\nTOKENIZER_INFIXES = _infixes\n", "path": "spacy/lang/punctuation.py"}], "after_files": [{"content": "# coding: utf8\nfrom __future__ import unicode_literals\n\nfrom .char_classes import LIST_PUNCT, LIST_ELLIPSES, LIST_QUOTES, LIST_CURRENCY\nfrom .char_classes import LIST_ICONS, HYPHENS, CURRENCY, UNITS\nfrom .char_classes import CONCAT_QUOTES, ALPHA_LOWER, ALPHA_UPPER, ALPHA\n\n\n_prefixes = (\n [\"\u00a7\", \"%\", \"=\", \"\u2014\", \"\u2013\", r\"\\+(?![0-9])\"]\n + LIST_PUNCT\n + LIST_ELLIPSES\n + LIST_QUOTES\n + LIST_CURRENCY\n + LIST_ICONS\n)\n\n\n_suffixes = (\n LIST_PUNCT\n + LIST_ELLIPSES\n + LIST_QUOTES\n + LIST_ICONS\n + [\"'s\", \"'S\", \"\u2019s\", \"\u2019S\", \"\u2014\", \"\u2013\"]\n + [\n r\"(?<=[0-9])\\+\",\n r\"(?<=\u00b0[FfCcKk])\\.\",\n r\"(?<=[0-9])(?:{c})\".format(c=CURRENCY),\n r\"(?<=[0-9])(?:{u})\".format(u=UNITS),\n r\"(?<=[0-9{al}{e}(?:{q})])\\.\".format(\n al=ALPHA_LOWER, e=r\"%\u00b2\\-\\+\", q=CONCAT_QUOTES\n ),\n r\"(?<=[{au}][{au}])\\.\".format(au=ALPHA_UPPER),\n ]\n)\n\n_infixes = (\n LIST_ELLIPSES\n + LIST_ICONS\n + [\n r\"(?<=[0-9])[+\\-\\*^](?=[0-9-])\",\n r\"(?<=[{al}])\\.(?=[{au}])\".format(al=ALPHA_LOWER, au=ALPHA_UPPER),\n r\"(?<=[{a}]),(?=[{a}])\".format(a=ALPHA),\n r\"(?<=[{a}])(?:{h})(?=[{a}])\".format(a=ALPHA, h=HYPHENS),\n r\"(?<=[{a}])[:<>=/](?=[{a}])\".format(a=ALPHA),\n ]\n)\n\nTOKENIZER_PREFIXES = _prefixes\nTOKENIZER_SUFFIXES = _suffixes\nTOKENIZER_INFIXES = _infixes\n", "path": "spacy/lang/punctuation.py"}]} | 1,171 | 710 |
gh_patches_debug_47933 | rasdani/github-patches | git_diff | liqd__a4-opin-583 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
when I log out while I am on my profile page I get a 500
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `euth/follows/templatetags/follow_tags.py`
Content:
```
1 from django import template
2
3 from .. import models
4
5 register = template.Library()
6
7
8 @register.assignment_tag()
9 def is_following(user, project):
10 return models.Follow.objects.filter(
11 enabled=True,
12 project=project,
13 creator=user
14 ).exists()
15
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/euth/follows/templatetags/follow_tags.py b/euth/follows/templatetags/follow_tags.py
--- a/euth/follows/templatetags/follow_tags.py
+++ b/euth/follows/templatetags/follow_tags.py
@@ -7,8 +7,11 @@
@register.assignment_tag()
def is_following(user, project):
- return models.Follow.objects.filter(
- enabled=True,
- project=project,
- creator=user
- ).exists()
+ if not user.is_anonymous():
+ return models.Follow.objects.filter(
+ enabled=True,
+ project=project,
+ creator=user
+ ).exists()
+ else:
+ return False
| {"golden_diff": "diff --git a/euth/follows/templatetags/follow_tags.py b/euth/follows/templatetags/follow_tags.py\n--- a/euth/follows/templatetags/follow_tags.py\n+++ b/euth/follows/templatetags/follow_tags.py\n@@ -7,8 +7,11 @@\n \n @register.assignment_tag()\n def is_following(user, project):\n- return models.Follow.objects.filter(\n- enabled=True,\n- project=project,\n- creator=user\n- ).exists()\n+ if not user.is_anonymous():\n+ return models.Follow.objects.filter(\n+ enabled=True,\n+ project=project,\n+ creator=user\n+ ).exists()\n+ else:\n+ return False\n", "issue": "when I log out while I am on my profile page I get a 500\n\n", "before_files": [{"content": "from django import template\n\nfrom .. import models\n\nregister = template.Library()\n\n\[email protected]_tag()\ndef is_following(user, project):\n return models.Follow.objects.filter(\n enabled=True,\n project=project,\n creator=user\n ).exists()\n", "path": "euth/follows/templatetags/follow_tags.py"}], "after_files": [{"content": "from django import template\n\nfrom .. import models\n\nregister = template.Library()\n\n\[email protected]_tag()\ndef is_following(user, project):\n if not user.is_anonymous():\n return models.Follow.objects.filter(\n enabled=True,\n project=project,\n creator=user\n ).exists()\n else:\n return False\n", "path": "euth/follows/templatetags/follow_tags.py"}]} | 366 | 171 |
gh_patches_debug_10257 | rasdani/github-patches | git_diff | getsentry__sentry-48159 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Unable to disable issue alert notifications for specific projects on old sentry accounts.
### Environment
SaaS (https://sentry.io/)
### Version
_No response_
### Link
_No response_
### DSN
_No response_
### Steps to Reproduce
In old accounts the default option for Fine tuning issue alert notifications was `--`, which no longer exists. (User Settings -> Notifications -> Issue alert notifications -> Fine Tune).
It seems to have been replaced with "Default", and in cases where a user wants to disable the notifications for a project that currently has `--` selected, the change to "Off" is not saved and notifications continue to be sent. 
There is no workaround, as turning issue alert notifications to OFF completely disables the notifications for all projects in all organisations, and does not allow enabling them only for the projects one wants to receive notifications from. 
As a side note: This design is not optimal, as having "On" and "Default" is redundant -> you can only set the default to On, and when it is off there are no options available (Default(Off) does not exist). 
There is a recording available in this [internal ticket (contains personal information)](https://sentry.zendesk.com/agent/tickets/84240)
### Expected Result
Changing the notification from `--` to `Off` saves and stop sending emails.
### Actual Result
The change is not saved.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/sentry/incidents/endpoints/bases.py`
Content:
```
1 from rest_framework.exceptions import PermissionDenied
2 from rest_framework.request import Request
3
4 from sentry import features
5 from sentry.api.bases.organization import OrganizationAlertRulePermission, OrganizationEndpoint
6 from sentry.api.bases.project import ProjectAlertRulePermission, ProjectEndpoint
7 from sentry.api.exceptions import ResourceDoesNotExist
8 from sentry.incidents.models import AlertRule, AlertRuleTrigger, AlertRuleTriggerAction
9
10
11 class ProjectAlertRuleEndpoint(ProjectEndpoint):
12 permission_classes = (ProjectAlertRulePermission,)
13
14 def convert_args(self, request: Request, alert_rule_id, *args, **kwargs):
15 args, kwargs = super().convert_args(request, *args, **kwargs)
16 project = kwargs["project"]
17
18 if not features.has("organizations:incidents", project.organization, actor=request.user):
19 raise ResourceDoesNotExist
20
21 if not request.access.has_project_access(project):
22 raise PermissionDenied
23
24 try:
25 kwargs["alert_rule"] = AlertRule.objects.get(
26 snuba_query__subscriptions__project=project, id=alert_rule_id
27 )
28 except AlertRule.DoesNotExist:
29 raise ResourceDoesNotExist
30
31 return args, kwargs
32
33
34 class OrganizationAlertRuleEndpoint(OrganizationEndpoint):
35 permission_classes = (OrganizationAlertRulePermission,)
36
37 def convert_args(self, request: Request, alert_rule_id, *args, **kwargs):
38 args, kwargs = super().convert_args(request, *args, **kwargs)
39 organization = kwargs["organization"]
40
41 # Allow orgs that have downgraded plans to delete metric alerts
42 if request.method != "DELETE" and not features.has(
43 "organizations:incidents", organization, actor=request.user
44 ):
45 raise ResourceDoesNotExist
46
47 try:
48 kwargs["alert_rule"] = AlertRule.objects.get(
49 organization=organization, id=alert_rule_id
50 )
51 except AlertRule.DoesNotExist:
52 raise ResourceDoesNotExist
53
54 return args, kwargs
55
56
57 class OrganizationAlertRuleTriggerEndpoint(OrganizationAlertRuleEndpoint):
58 def convert_args(self, request: Request, alert_rule_trigger_id, *args, **kwargs):
59 args, kwargs = super().convert_args(request, *args, **kwargs)
60 organization = kwargs["organization"]
61 alert_rule = kwargs["alert_rule"]
62
63 if not features.has("organizations:incidents", organization, actor=request.user):
64 raise ResourceDoesNotExist
65
66 try:
67 kwargs["alert_rule_trigger"] = AlertRuleTrigger.objects.get(
68 alert_rule=alert_rule, id=alert_rule_trigger_id
69 )
70 except AlertRuleTrigger.DoesNotExist:
71 raise ResourceDoesNotExist
72
73 return args, kwargs
74
75
76 class OrganizationAlertRuleTriggerActionEndpoint(OrganizationAlertRuleTriggerEndpoint):
77 def convert_args(self, request: Request, alert_rule_trigger_action_id, *args, **kwargs):
78 args, kwargs = super().convert_args(request, *args, **kwargs)
79 organization = kwargs["organization"]
80 trigger = kwargs["alert_rule_trigger"]
81
82 if not features.has("organizations:incidents", organization, actor=request.user):
83 raise ResourceDoesNotExist
84
85 try:
86 kwargs["alert_rule_trigger_action"] = AlertRuleTriggerAction.objects.get(
87 alert_rule_trigger=trigger, id=alert_rule_trigger_action_id
88 )
89 except AlertRuleTriggerAction.DoesNotExist:
90 raise ResourceDoesNotExist
91
92 return args, kwargs
93
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/sentry/incidents/endpoints/bases.py b/src/sentry/incidents/endpoints/bases.py
--- a/src/sentry/incidents/endpoints/bases.py
+++ b/src/sentry/incidents/endpoints/bases.py
@@ -15,7 +15,10 @@
args, kwargs = super().convert_args(request, *args, **kwargs)
project = kwargs["project"]
- if not features.has("organizations:incidents", project.organization, actor=request.user):
+ # Allow orgs that have downgraded plans to delete metric alerts
+ if request.method != "DELETE" and not features.has(
+ "organizations:incidents", project.organization, actor=request.user
+ ):
raise ResourceDoesNotExist
if not request.access.has_project_access(project):
| {"golden_diff": "diff --git a/src/sentry/incidents/endpoints/bases.py b/src/sentry/incidents/endpoints/bases.py\n--- a/src/sentry/incidents/endpoints/bases.py\n+++ b/src/sentry/incidents/endpoints/bases.py\n@@ -15,7 +15,10 @@\n args, kwargs = super().convert_args(request, *args, **kwargs)\n project = kwargs[\"project\"]\n \n- if not features.has(\"organizations:incidents\", project.organization, actor=request.user):\n+ # Allow orgs that have downgraded plans to delete metric alerts\n+ if request.method != \"DELETE\" and not features.has(\n+ \"organizations:incidents\", project.organization, actor=request.user\n+ ):\n raise ResourceDoesNotExist\n \n if not request.access.has_project_access(project):\n", "issue": "Unable to disable issue alert notifications for specific projects on old sentry accounts.\n### Environment\n\nSaaS (https://sentry.io/)\n\n### Version\n\n_No response_\n\n### Link\n\n_No response_\n\n### DSN\n\n_No response_\n\n### Steps to Reproduce\n\nIn old accounts the default option for Fine tuning issue alert notifications was `--`, which no longer exists. (User Settings -> Notifications -> Issue alert notifications -> Fine Tune).\r\n\r\nIt seems to have been replaced with \"Default\" and in cases a user wants to disable the notifications for a project that currently has `--` selected, the changes to \"Off\" is not saved and notifications continue to be sent. \r\n\r\nThere is no workaround as turning issue alerts notifications to OFF completely disable the notifications for all projects in all organisation, not allowing to enable for projects that man wants to receive notifications from. \r\n\r\nAs a side note: This design is not optimal as having \"On\" and \"Default\" is redundant -> you can only set default to On, and when it is off there are no options available (Default(Off) does not exist). 
\r\n\r\nThere is a recoding available in this [internal ticket (contains personal information)](https://sentry.zendesk.com/agent/tickets/84240)\n\n### Expected Result\n\nChanging the notification from `--` to `Off` saves and stop sending emails.\n\n### Actual Result\n\nThe change is not saved.\n", "before_files": [{"content": "from rest_framework.exceptions import PermissionDenied\nfrom rest_framework.request import Request\n\nfrom sentry import features\nfrom sentry.api.bases.organization import OrganizationAlertRulePermission, OrganizationEndpoint\nfrom sentry.api.bases.project import ProjectAlertRulePermission, ProjectEndpoint\nfrom sentry.api.exceptions import ResourceDoesNotExist\nfrom sentry.incidents.models import AlertRule, AlertRuleTrigger, AlertRuleTriggerAction\n\n\nclass ProjectAlertRuleEndpoint(ProjectEndpoint):\n permission_classes = (ProjectAlertRulePermission,)\n\n def convert_args(self, request: Request, alert_rule_id, *args, **kwargs):\n args, kwargs = super().convert_args(request, *args, **kwargs)\n project = kwargs[\"project\"]\n\n if not features.has(\"organizations:incidents\", project.organization, actor=request.user):\n raise ResourceDoesNotExist\n\n if not request.access.has_project_access(project):\n raise PermissionDenied\n\n try:\n kwargs[\"alert_rule\"] = AlertRule.objects.get(\n snuba_query__subscriptions__project=project, id=alert_rule_id\n )\n except AlertRule.DoesNotExist:\n raise ResourceDoesNotExist\n\n return args, kwargs\n\n\nclass OrganizationAlertRuleEndpoint(OrganizationEndpoint):\n permission_classes = (OrganizationAlertRulePermission,)\n\n def convert_args(self, request: Request, alert_rule_id, *args, **kwargs):\n args, kwargs = super().convert_args(request, *args, **kwargs)\n organization = kwargs[\"organization\"]\n\n # Allow orgs that have downgraded plans to delete metric alerts\n if request.method != \"DELETE\" and not features.has(\n \"organizations:incidents\", organization, actor=request.user\n ):\n raise ResourceDoesNotExist\n\n try:\n kwargs[\"alert_rule\"] = AlertRule.objects.get(\n organization=organization, id=alert_rule_id\n )\n except AlertRule.DoesNotExist:\n raise ResourceDoesNotExist\n\n return args, kwargs\n\n\nclass OrganizationAlertRuleTriggerEndpoint(OrganizationAlertRuleEndpoint):\n def convert_args(self, request: Request, alert_rule_trigger_id, *args, **kwargs):\n args, kwargs = super().convert_args(request, *args, **kwargs)\n organization = kwargs[\"organization\"]\n alert_rule = kwargs[\"alert_rule\"]\n\n if not features.has(\"organizations:incidents\", organization, actor=request.user):\n raise ResourceDoesNotExist\n\n try:\n kwargs[\"alert_rule_trigger\"] = AlertRuleTrigger.objects.get(\n alert_rule=alert_rule, id=alert_rule_trigger_id\n )\n except AlertRuleTrigger.DoesNotExist:\n raise ResourceDoesNotExist\n\n return args, kwargs\n\n\nclass OrganizationAlertRuleTriggerActionEndpoint(OrganizationAlertRuleTriggerEndpoint):\n def convert_args(self, request: Request, alert_rule_trigger_action_id, *args, **kwargs):\n args, kwargs = super().convert_args(request, *args, **kwargs)\n organization = kwargs[\"organization\"]\n trigger = kwargs[\"alert_rule_trigger\"]\n\n if not features.has(\"organizations:incidents\", organization, actor=request.user):\n raise ResourceDoesNotExist\n\n try:\n kwargs[\"alert_rule_trigger_action\"] = AlertRuleTriggerAction.objects.get(\n alert_rule_trigger=trigger, id=alert_rule_trigger_action_id\n )\n except AlertRuleTriggerAction.DoesNotExist:\n raise ResourceDoesNotExist\n\n return args, 
kwargs\n", "path": "src/sentry/incidents/endpoints/bases.py"}], "after_files": [{"content": "from rest_framework.exceptions import PermissionDenied\nfrom rest_framework.request import Request\n\nfrom sentry import features\nfrom sentry.api.bases.organization import OrganizationAlertRulePermission, OrganizationEndpoint\nfrom sentry.api.bases.project import ProjectAlertRulePermission, ProjectEndpoint\nfrom sentry.api.exceptions import ResourceDoesNotExist\nfrom sentry.incidents.models import AlertRule, AlertRuleTrigger, AlertRuleTriggerAction\n\n\nclass ProjectAlertRuleEndpoint(ProjectEndpoint):\n permission_classes = (ProjectAlertRulePermission,)\n\n def convert_args(self, request: Request, alert_rule_id, *args, **kwargs):\n args, kwargs = super().convert_args(request, *args, **kwargs)\n project = kwargs[\"project\"]\n\n # Allow orgs that have downgraded plans to delete metric alerts\n if request.method != \"DELETE\" and not features.has(\n \"organizations:incidents\", project.organization, actor=request.user\n ):\n raise ResourceDoesNotExist\n\n if not request.access.has_project_access(project):\n raise PermissionDenied\n\n try:\n kwargs[\"alert_rule\"] = AlertRule.objects.get(\n snuba_query__subscriptions__project=project, id=alert_rule_id\n )\n except AlertRule.DoesNotExist:\n raise ResourceDoesNotExist\n\n return args, kwargs\n\n\nclass OrganizationAlertRuleEndpoint(OrganizationEndpoint):\n permission_classes = (OrganizationAlertRulePermission,)\n\n def convert_args(self, request: Request, alert_rule_id, *args, **kwargs):\n args, kwargs = super().convert_args(request, *args, **kwargs)\n organization = kwargs[\"organization\"]\n\n # Allow orgs that have downgraded plans to delete metric alerts\n if request.method != \"DELETE\" and not features.has(\n \"organizations:incidents\", organization, actor=request.user\n ):\n raise ResourceDoesNotExist\n\n try:\n kwargs[\"alert_rule\"] = AlertRule.objects.get(\n organization=organization, id=alert_rule_id\n )\n except AlertRule.DoesNotExist:\n raise ResourceDoesNotExist\n\n return args, kwargs\n\n\nclass OrganizationAlertRuleTriggerEndpoint(OrganizationAlertRuleEndpoint):\n def convert_args(self, request: Request, alert_rule_trigger_id, *args, **kwargs):\n args, kwargs = super().convert_args(request, *args, **kwargs)\n organization = kwargs[\"organization\"]\n alert_rule = kwargs[\"alert_rule\"]\n\n if not features.has(\"organizations:incidents\", organization, actor=request.user):\n raise ResourceDoesNotExist\n\n try:\n kwargs[\"alert_rule_trigger\"] = AlertRuleTrigger.objects.get(\n alert_rule=alert_rule, id=alert_rule_trigger_id\n )\n except AlertRuleTrigger.DoesNotExist:\n raise ResourceDoesNotExist\n\n return args, kwargs\n\n\nclass OrganizationAlertRuleTriggerActionEndpoint(OrganizationAlertRuleTriggerEndpoint):\n def convert_args(self, request: Request, alert_rule_trigger_action_id, *args, **kwargs):\n args, kwargs = super().convert_args(request, *args, **kwargs)\n organization = kwargs[\"organization\"]\n trigger = kwargs[\"alert_rule_trigger\"]\n\n if not features.has(\"organizations:incidents\", organization, actor=request.user):\n raise ResourceDoesNotExist\n\n try:\n kwargs[\"alert_rule_trigger_action\"] = AlertRuleTriggerAction.objects.get(\n alert_rule_trigger=trigger, id=alert_rule_trigger_action_id\n )\n except AlertRuleTriggerAction.DoesNotExist:\n raise ResourceDoesNotExist\n\n return args, kwargs\n", "path": "src/sentry/incidents/endpoints/bases.py"}]} | 1,437 | 173 |
gh_patches_debug_5127 | rasdani/github-patches | git_diff | comic__grand-challenge.org-1064 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Duplicate email to algorithm user that is also an editor when job fails
A tiny improvement request: When an algorithm job fails, the editors and the job creator are emailed (#1018), but there is apparently no check whether the job creator is one of the algorithm editors and would receive an email anyway - I was testing a new algorithm and always received two emails when a job failed.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `app/grandchallenge/algorithms/emails.py`
Content:
```
1 from django.conf import settings
2 from django.contrib.sites.models import Site
3 from django.core.mail import send_mail
4
5 from grandchallenge.core.utils.email import send_templated_email
6 from grandchallenge.evaluation.templatetags.evaluation_extras import user_error
7
8
9 def send_permission_request_email(obj):
10 """
11 Emails the editors that someone has requested to view an algorithm.
12
13 Parameters
14 ----------
15 obj:
16 AlgorithmPermissionRequest object containing info on which
17 user requested access to which algorithm.
18 """
19 title = f"[{obj.algorithm.title}] New access request"
20 kwargs = {
21 "user": obj.user,
22 "site": Site.objects.get_current(),
23 "algorithm": obj.algorithm,
24 }
25 for editor in obj.algorithm.editors_group.user_set.all():
26 kwargs["editor"] = editor
27 send_templated_email(
28 title,
29 "algorithms/emails/access_request.html",
30 kwargs,
31 [editor.email],
32 )
33
34
35 def send_permission_granted_email(obj):
36 """
37 Emails the requester that their request has been approved.
38
39 Parameters
40 ----------
41 obj:
42 AlgorithmPermissionRequest object containing info on which
43 user requested access to which algorithm.
44 """
45 title = f"[{obj.algorithm.title}] Access granted"
46 kwargs = {
47 "user": obj.user,
48 "site": Site.objects.get_current(),
49 "algorithm": obj.algorithm,
50 }
51 send_templated_email(
52 title,
53 "algorithms/emails/access_granted.html",
54 kwargs,
55 [obj.user.email],
56 )
57
58
59 def send_permission_denied_email(obj):
60 """
61 Emails the requester that their request has been approved.
62
63 Parameters
64 ----------
65 obj:
66 AlgorithmPermissionRequest object containing info on which
67 user requested access to which algorithm and optionally the
68 reason for rejection.
69 """
70 title = f"[{obj.algorithm.title}] Access denied"
71 kwargs = {
72 "user": obj.user,
73 "site": Site.objects.get_current(),
74 "algorithm": obj.algorithm,
75 "permission_request": obj,
76 }
77 send_templated_email(
78 title,
79 "algorithms/emails/access_denied.html",
80 kwargs,
81 [obj.user.email],
82 )
83
84
85 def send_failed_job_email(job):
86 algorithm = job.algorithm_image.algorithm
87 message = (
88 f"Unfortunately your job for algorithm "
89 f"'{algorithm.title}' failed with an error. "
90 f"The error message is:\n\n"
91 f"{user_error(job.output)}\n\n"
92 f"You may wish to try and correct this, or contact the challenge "
93 f"organizers. The following information may help them:\n"
94 f"User: {job.creator.username}\n"
95 f"Job ID: {job.pk}\n"
96 f"Submission ID: {job.pk}"
97 )
98 recipient_emails = [
99 o.email for o in algorithm.editors_group.user_set.all()
100 ]
101 recipient_emails.append(job.creator.email)
102 for email in recipient_emails:
103 send_mail(
104 subject=(
105 f"[{Site.objects.get_current().domain.lower()}] "
106 f"[{algorithm.title.lower()}] "
107 f"Job Failed"
108 ),
109 message=message,
110 from_email=settings.DEFAULT_FROM_EMAIL,
111 recipient_list=[email],
112 )
113
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/app/grandchallenge/algorithms/emails.py b/app/grandchallenge/algorithms/emails.py
--- a/app/grandchallenge/algorithms/emails.py
+++ b/app/grandchallenge/algorithms/emails.py
@@ -99,7 +99,8 @@
o.email for o in algorithm.editors_group.user_set.all()
]
recipient_emails.append(job.creator.email)
- for email in recipient_emails:
+
+ for email in {*recipient_emails}:
send_mail(
subject=(
f"[{Site.objects.get_current().domain.lower()}] "
| {"golden_diff": "diff --git a/app/grandchallenge/algorithms/emails.py b/app/grandchallenge/algorithms/emails.py\n--- a/app/grandchallenge/algorithms/emails.py\n+++ b/app/grandchallenge/algorithms/emails.py\n@@ -99,7 +99,8 @@\n o.email for o in algorithm.editors_group.user_set.all()\n ]\n recipient_emails.append(job.creator.email)\n- for email in recipient_emails:\n+\n+ for email in {*recipient_emails}:\n send_mail(\n subject=(\n f\"[{Site.objects.get_current().domain.lower()}] \"\n", "issue": "Duplicate email to algorithm user that is also an editor when job fails\nA tiny improvement request: When an algorithm job fails, the editors and the job creator are emailed (#1018), but there is apparently no check whether the job creator is one of the algorithm editor and would receive an email anyway - I was testing a new algorithm and always received two emails when a job failed.\n", "before_files": [{"content": "from django.conf import settings\nfrom django.contrib.sites.models import Site\nfrom django.core.mail import send_mail\n\nfrom grandchallenge.core.utils.email import send_templated_email\nfrom grandchallenge.evaluation.templatetags.evaluation_extras import user_error\n\n\ndef send_permission_request_email(obj):\n \"\"\"\n Emails the editors that someone has requested to view an algorithm.\n\n Parameters\n ----------\n obj:\n AlgorithmPermissionRequest object containing info on which\n user requested access to which algorithm.\n \"\"\"\n title = f\"[{obj.algorithm.title}] New access request\"\n kwargs = {\n \"user\": obj.user,\n \"site\": Site.objects.get_current(),\n \"algorithm\": obj.algorithm,\n }\n for editor in obj.algorithm.editors_group.user_set.all():\n kwargs[\"editor\"] = editor\n send_templated_email(\n title,\n \"algorithms/emails/access_request.html\",\n kwargs,\n [editor.email],\n )\n\n\ndef send_permission_granted_email(obj):\n \"\"\"\n Emails the requester that their request has been approved.\n\n Parameters\n ----------\n obj:\n AlgorithmPermissionRequest object containing info on which\n user requested access to which algorithm.\n \"\"\"\n title = f\"[{obj.algorithm.title}] Access granted\"\n kwargs = {\n \"user\": obj.user,\n \"site\": Site.objects.get_current(),\n \"algorithm\": obj.algorithm,\n }\n send_templated_email(\n title,\n \"algorithms/emails/access_granted.html\",\n kwargs,\n [obj.user.email],\n )\n\n\ndef send_permission_denied_email(obj):\n \"\"\"\n Emails the requester that their request has been approved.\n\n Parameters\n ----------\n obj:\n AlgorithmPermissionRequest object containing info on which\n user requested access to which algorithm and optionally the\n reason for rejection.\n \"\"\"\n title = f\"[{obj.algorithm.title}] Access denied\"\n kwargs = {\n \"user\": obj.user,\n \"site\": Site.objects.get_current(),\n \"algorithm\": obj.algorithm,\n \"permission_request\": obj,\n }\n send_templated_email(\n title,\n \"algorithms/emails/access_denied.html\",\n kwargs,\n [obj.user.email],\n )\n\n\ndef send_failed_job_email(job):\n algorithm = job.algorithm_image.algorithm\n message = (\n f\"Unfortunately your job for algorithm \"\n f\"'{algorithm.title}' failed with an error. \"\n f\"The error message is:\\n\\n\"\n f\"{user_error(job.output)}\\n\\n\"\n f\"You may wish to try and correct this, or contact the challenge \"\n f\"organizers. 
The following information may help them:\\n\"\n f\"User: {job.creator.username}\\n\"\n f\"Job ID: {job.pk}\\n\"\n f\"Submission ID: {job.pk}\"\n )\n recipient_emails = [\n o.email for o in algorithm.editors_group.user_set.all()\n ]\n recipient_emails.append(job.creator.email)\n for email in recipient_emails:\n send_mail(\n subject=(\n f\"[{Site.objects.get_current().domain.lower()}] \"\n f\"[{algorithm.title.lower()}] \"\n f\"Job Failed\"\n ),\n message=message,\n from_email=settings.DEFAULT_FROM_EMAIL,\n recipient_list=[email],\n )\n", "path": "app/grandchallenge/algorithms/emails.py"}], "after_files": [{"content": "from django.conf import settings\nfrom django.contrib.sites.models import Site\nfrom django.core.mail import send_mail\n\nfrom grandchallenge.core.utils.email import send_templated_email\nfrom grandchallenge.evaluation.templatetags.evaluation_extras import user_error\n\n\ndef send_permission_request_email(obj):\n \"\"\"\n Emails the editors that someone has requested to view an algorithm.\n\n Parameters\n ----------\n obj:\n AlgorithmPermissionRequest object containing info on which\n user requested access to which algorithm.\n \"\"\"\n title = f\"[{obj.algorithm.title}] New access request\"\n kwargs = {\n \"user\": obj.user,\n \"site\": Site.objects.get_current(),\n \"algorithm\": obj.algorithm,\n }\n for editor in obj.algorithm.editors_group.user_set.all():\n kwargs[\"editor\"] = editor\n send_templated_email(\n title,\n \"algorithms/emails/access_request.html\",\n kwargs,\n [editor.email],\n )\n\n\ndef send_permission_granted_email(obj):\n \"\"\"\n Emails the requester that their request has been approved.\n\n Parameters\n ----------\n obj:\n AlgorithmPermissionRequest object containing info on which\n user requested access to which algorithm.\n \"\"\"\n title = f\"[{obj.algorithm.title}] Access granted\"\n kwargs = {\n \"user\": obj.user,\n \"site\": Site.objects.get_current(),\n \"algorithm\": obj.algorithm,\n }\n send_templated_email(\n title,\n \"algorithms/emails/access_granted.html\",\n kwargs,\n [obj.user.email],\n )\n\n\ndef send_permission_denied_email(obj):\n \"\"\"\n Emails the requester that their request has been approved.\n\n Parameters\n ----------\n obj:\n AlgorithmPermissionRequest object containing info on which\n user requested access to which algorithm and optionally the\n reason for rejection.\n \"\"\"\n title = f\"[{obj.algorithm.title}] Access denied\"\n kwargs = {\n \"user\": obj.user,\n \"site\": Site.objects.get_current(),\n \"algorithm\": obj.algorithm,\n \"permission_request\": obj,\n }\n send_templated_email(\n title,\n \"algorithms/emails/access_denied.html\",\n kwargs,\n [obj.user.email],\n )\n\n\ndef send_failed_job_email(job):\n algorithm = job.algorithm_image.algorithm\n message = (\n f\"Unfortunately your job for algorithm \"\n f\"'{algorithm.title}' failed with an error. \"\n f\"The error message is:\\n\\n\"\n f\"{user_error(job.output)}\\n\\n\"\n f\"You may wish to try and correct this, or contact the challenge \"\n f\"organizers. 
The following information may help them:\\n\"\n f\"User: {job.creator.username}\\n\"\n f\"Job ID: {job.pk}\\n\"\n f\"Submission ID: {job.pk}\"\n )\n recipient_emails = [\n o.email for o in algorithm.editors_group.user_set.all()\n ]\n recipient_emails.append(job.creator.email)\n\n for email in {*recipient_emails}:\n send_mail(\n subject=(\n f\"[{Site.objects.get_current().domain.lower()}] \"\n f\"[{algorithm.title.lower()}] \"\n f\"Job Failed\"\n ),\n message=message,\n from_email=settings.DEFAULT_FROM_EMAIL,\n recipient_list=[email],\n )\n", "path": "app/grandchallenge/algorithms/emails.py"}]} | 1,268 | 124 |
gh_patches_debug_23994 | rasdani/github-patches | git_diff | jupyter__docker-stacks-2074 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[BUG] Healthcheck fails when using a custom runtime dir
### What docker image(s) are you using?
scipy-notebook (but applies to all images based on the `base-notebook` image)
### Host OS system
RHEL 8.0
### Host architecture
x86_64
### What Docker command are you running?
The following command DOES work as expected (default runtime dir):
```
docker run --rm -p 8888:8888 --name jupyter quay.io/jupyter/scipy-notebook:2023-12-25 start-notebook.sh
```
The following command does NOT work as expected (customized runtime dir):
```
docker run --rm -p 8888:8888 --name jupyter -e JUPYTER_RUNTIME_DIR=/home/jovyan/custom-runtime quay.io/jupyter/scipy-notebook:2023-12-25 start-notebook.sh
```
### How to Reproduce the problem?
1. Start the Jupyter container using the commands above.
2. In another terminal, run the healthcheck script: `docker exec jupyter /etc/jupyter/docker_healthcheck.py`
3. Observe the healthcheck script failing due to server state JSON file(s) not being found.
### Command output
```bash session
$ docker run --rm -p 8888:8888 --name jupyter quay.io/jupyter/scipy-notebook:2023-12-25 start-notebook.sh
$ docker exec jupyter /etc/jupyter/docker_healthcheck.py
b'{"version": "2.12.1"}'
$ docker run --rm -p 8888:8888 --name jupyter -e JUPYTER_RUNTIME_DIR=/home/jovyan/custom-runtime quay.io/jupyter/scipy-notebook:2023-12-25 start-notebook.sh
$ docker exec jupyter /etc/jupyter/docker_healthcheck.py
Traceback (most recent call last):
File "/etc/jupyter/docker_healthcheck.py", line 14, in <module>
json_file = next(runtime_dir.glob("*server-*.json"))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
StopIteration
```
### Expected behavior
The healthcheck script should not fail; e.g., it should display `b'{"version": "2.12.1"}'` even with a customized runtime dir.
### Actual behavior
The healthcheck script fails because it cannot find server state JSON files in the hard-coded default runtime dir.
### Anything else?
The problem is that the `/etc/jupyter/docker_healthcheck.py` healthcheck script hard-codes the default runtime directory to search for server JSON state files as below:
https://github.com/jupyter/docker-stacks/blob/fcb20a914ed20e44a96053caf43eef6e12fb4c04/images/base-notebook/docker_healthcheck.py#L13
When this directory is customized, for example via `JUPYTER_RUNTIME_DIR`, the healthcheck script does not work.
The actual problem is when deploying Jupyter containers as services.
The Jupyter images have a default healthcheck configured as below:
https://github.com/jupyter/docker-stacks/blob/fcb20a914ed20e44a96053caf43eef6e12fb4c04/images/base-notebook/Dockerfile#L66-L70
When the healthcheck fails due to a custom runtime dir, the service is restarted continuously.
I think the healthcheck script should use the output of `jupyter --runtime-dir` which respects customizations:
```
$ docker run --rm -e JUPYTER_RUNTIME_DIR=/home/jovyan/custom-runtime quay.io/jupyter/scipy-notebook:2023-12-25 jupyter --runtime-dir
/home/jovyan/custom-runtime
```
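A rough, untested sketch of what the healthcheck could do instead of hard-coding the path — it simply asks Jupyter for the runtime dir, so `JUPYTER_RUNTIME_DIR` is honoured:

```python
import json
import subprocess
from pathlib import Path

# Ask Jupyter itself where the runtime dir is, instead of assuming the default
result = subprocess.run(
    ["jupyter", "--runtime-dir"], check=True, capture_output=True, text=True
)
runtime_dir = Path(result.stdout.strip())

json_file = next(runtime_dir.glob("*server-*.json"))
url = json.loads(json_file.read_bytes())["url"] + "api"
```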
If you agree with the above, I can send a PR with this fix.
### Latest Docker version
- [X] I've updated my Docker version to the latest available, and the issue persists
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `images/base-notebook/docker_healthcheck.py`
Content:
```
1 #!/usr/bin/env python3
2 # Copyright (c) Jupyter Development Team.
3 # Distributed under the terms of the Modified BSD License.
4 import json
5 import os
6 from pathlib import Path
7
8 import requests
9
10 # Several operations below deliberately don't check for possible errors
11 # As this is a healthcheck, it should succeed or raise an exception on error
12
13 runtime_dir = Path("/home/") / os.environ["NB_USER"] / ".local/share/jupyter/runtime/"
14 json_file = next(runtime_dir.glob("*server-*.json"))
15
16 url = json.loads(json_file.read_bytes())["url"]
17 url = url + "api"
18
19 proxies = {
20 "http": "",
21 "https": "",
22 }
23
24 r = requests.get(url, proxies=proxies, verify=False) # request without SSL verification
25 r.raise_for_status()
26 print(r.content)
27
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/images/base-notebook/docker_healthcheck.py b/images/base-notebook/docker_healthcheck.py
--- a/images/base-notebook/docker_healthcheck.py
+++ b/images/base-notebook/docker_healthcheck.py
@@ -3,6 +3,7 @@
# Distributed under the terms of the Modified BSD License.
import json
import os
+import subprocess
from pathlib import Path
import requests
@@ -10,7 +11,19 @@
# Several operations below deliberately don't check for possible errors
# As this is a healthcheck, it should succeed or raise an exception on error
-runtime_dir = Path("/home/") / os.environ["NB_USER"] / ".local/share/jupyter/runtime/"
+# Docker runs healtchecks using an exec
+# It uses the default user configured when running the image: root for the case of a custom NB_USER or jovyan for the case of the default image user.
+# We manually change HOME to make `jupyter --runtime-dir` report a correct path
+# More information: <https://github.com/jupyter/docker-stacks/pull/2074#issuecomment-1879778409>
+result = subprocess.run(
+ ["jupyter", "--runtime-dir"],
+ check=True,
+ capture_output=True,
+ text=True,
+ env=dict(os.environ) | {"HOME": "/home/" + os.environ["NB_USER"]},
+)
+runtime_dir = Path(result.stdout.rstrip())
+
json_file = next(runtime_dir.glob("*server-*.json"))
url = json.loads(json_file.read_bytes())["url"]
| {"golden_diff": "diff --git a/images/base-notebook/docker_healthcheck.py b/images/base-notebook/docker_healthcheck.py\n--- a/images/base-notebook/docker_healthcheck.py\n+++ b/images/base-notebook/docker_healthcheck.py\n@@ -3,6 +3,7 @@\n # Distributed under the terms of the Modified BSD License.\n import json\n import os\n+import subprocess\n from pathlib import Path\n \n import requests\n@@ -10,7 +11,19 @@\n # Several operations below deliberately don't check for possible errors\n # As this is a healthcheck, it should succeed or raise an exception on error\n \n-runtime_dir = Path(\"/home/\") / os.environ[\"NB_USER\"] / \".local/share/jupyter/runtime/\"\n+# Docker runs healtchecks using an exec\n+# It uses the default user configured when running the image: root for the case of a custom NB_USER or jovyan for the case of the default image user.\n+# We manually change HOME to make `jupyter --runtime-dir` report a correct path\n+# More information: <https://github.com/jupyter/docker-stacks/pull/2074#issuecomment-1879778409>\n+result = subprocess.run(\n+ [\"jupyter\", \"--runtime-dir\"],\n+ check=True,\n+ capture_output=True,\n+ text=True,\n+ env=dict(os.environ) | {\"HOME\": \"/home/\" + os.environ[\"NB_USER\"]},\n+)\n+runtime_dir = Path(result.stdout.rstrip())\n+\n json_file = next(runtime_dir.glob(\"*server-*.json\"))\n \n url = json.loads(json_file.read_bytes())[\"url\"]\n", "issue": "[BUG] Healthcheck fails when using a custom runtime dir\n### What docker image(s) are you using?\r\n\r\nscipy-notebook (but applies to all images based on the `base-notebook` image)\r\n\r\n### Host OS system\r\n\r\nRHEL 8.0\r\n\r\n### Host architecture\r\n\r\nx86_64\r\n\r\n### What Docker command are you running?\r\n\r\nThe following command DOES work as expected (default runtime dir):\r\n```\r\ndocker run --rm -p 8888:8888 --name jupyter quay.io/jupyter/scipy-notebook:2023-12-25 start-notebook.sh\r\n```\r\n\r\nThe following command does NOT work as expected (customized runtime dir):\r\n```\r\ndocker run --rm -p 8888:8888 --name jupyter -e JUPYTER_RUNTIME_DIR=/home/jovyan/custom-runtime quay.io/jupyter/scipy-notebook:2023-12-25 start-notebook.sh\r\n```\r\n\r\n### How to Reproduce the problem?\r\n\r\n1. Start the Jupyter container using the commands above.\r\n2. In another terminal, run the healtcheck script: `docker exec jupyter /etc/jupyter/docker_healthcheck.py`\r\n3. Observe the healthcheck script failing due to server state JSON file(s) not being found.\r\n\r\n### Command output\r\n\r\n```bash session\r\n$ docker run --rm -p 8888:8888 --name jupyter quay.io/jupyter/scipy-notebook:2023-12-25 start-notebook.sh\r\n$ docker exec jupyter /etc/jupyter/docker_healthcheck.py\r\nb'{\"version\": \"2.12.1\"}'\r\n\r\n$ docker run --rm -p 8888:8888 --name jupyter -e JUPYTER_RUNTIME_DIR=/home/jovyan/custom-runtime quay.io/jupyter/scipy-notebook:2023-12-25 start-notebook.sh\r\n$ docker exec jupyter /etc/jupyter/docker_healthcheck.py\r\nTraceback (most recent call last):\r\n File \"/etc/jupyter/docker_healthcheck.py\", line 14, in <module>\r\n json_file = next(runtime_dir.glob(\"*server-*.json\"))\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nStopIteration\r\n```\r\n\r\n\r\n### Expected behavior\r\n\r\nHealthcheck script to not fail, e.g. 
to display `b'{\"version\": \"2.12.1\"}'`, even with a customized runtime dir.\r\n\r\n### Actual behavior\r\n\r\nThe healthcheck script fails because it cannot find server state JSON files in the hard-coded default runtime dir.\r\n\r\n### Anything else?\r\n\r\nThe problem is that the `/etc/jupyter/docker_healthcheck.py` healtcheck script hard-codes the default runtime directory to search for server JSON state files as below:\r\nhttps://github.com/jupyter/docker-stacks/blob/fcb20a914ed20e44a96053caf43eef6e12fb4c04/images/base-notebook/docker_healthcheck.py#L13\r\n\r\nWhen this directory is customized for example via `JUPYTER_RUNTIME_DIR`, then the healthcheck script does not work.\r\n\r\nThe actual problem is when deploying Jupyter containers as services.\r\nThe Jupyter images have a default healthcheck configured as below:\r\nhttps://github.com/jupyter/docker-stacks/blob/fcb20a914ed20e44a96053caf43eef6e12fb4c04/images/base-notebook/Dockerfile#L66-L70\r\n\r\nWhen the healthcheck fails due to a custom runtime dir, the service is restarted continuously.\r\n\r\nI think the healthcheck script should use the output of `jupyter --runtime-dir` which respects customizations:\r\n```\r\n$ docker run --rm -e JUPYTER_RUNTIME_DIR=/home/jovyan/custom-runtime quay.io/jupyter/scipy-notebook:2023-12-25 jupyter --runtime-dir\r\n/home/jovyan/custom-runtime\r\n```\r\n\r\nIf you agree with the above, I can send a PR with this fix.\r\n\r\n### Latest Docker version\r\n\r\n- [X] I've updated my Docker version to the latest available, and the issue persists\n", "before_files": [{"content": "#!/usr/bin/env python3\n# Copyright (c) Jupyter Development Team.\n# Distributed under the terms of the Modified BSD License.\nimport json\nimport os\nfrom pathlib import Path\n\nimport requests\n\n# Several operations below deliberately don't check for possible errors\n# As this is a healthcheck, it should succeed or raise an exception on error\n\nruntime_dir = Path(\"/home/\") / os.environ[\"NB_USER\"] / \".local/share/jupyter/runtime/\"\njson_file = next(runtime_dir.glob(\"*server-*.json\"))\n\nurl = json.loads(json_file.read_bytes())[\"url\"]\nurl = url + \"api\"\n\nproxies = {\n \"http\": \"\",\n \"https\": \"\",\n}\n\nr = requests.get(url, proxies=proxies, verify=False) # request without SSL verification\nr.raise_for_status()\nprint(r.content)\n", "path": "images/base-notebook/docker_healthcheck.py"}], "after_files": [{"content": "#!/usr/bin/env python3\n# Copyright (c) Jupyter Development Team.\n# Distributed under the terms of the Modified BSD License.\nimport json\nimport os\nimport subprocess\nfrom pathlib import Path\n\nimport requests\n\n# Several operations below deliberately don't check for possible errors\n# As this is a healthcheck, it should succeed or raise an exception on error\n\n# Docker runs healtchecks using an exec\n# It uses the default user configured when running the image: root for the case of a custom NB_USER or jovyan for the case of the default image user.\n# We manually change HOME to make `jupyter --runtime-dir` report a correct path\n# More information: <https://github.com/jupyter/docker-stacks/pull/2074#issuecomment-1879778409>\nresult = subprocess.run(\n [\"jupyter\", \"--runtime-dir\"],\n check=True,\n capture_output=True,\n text=True,\n env=dict(os.environ) | {\"HOME\": \"/home/\" + os.environ[\"NB_USER\"]},\n)\nruntime_dir = Path(result.stdout.rstrip())\n\njson_file = next(runtime_dir.glob(\"*server-*.json\"))\n\nurl = json.loads(json_file.read_bytes())[\"url\"]\nurl = url + 
\"api\"\n\nproxies = {\n \"http\": \"\",\n \"https\": \"\",\n}\n\nr = requests.get(url, proxies=proxies, verify=False) # request without SSL verification\nr.raise_for_status()\nprint(r.content)\n", "path": "images/base-notebook/docker_healthcheck.py"}]} | 1,395 | 338 |
gh_patches_debug_24008 | rasdani/github-patches | git_diff | fossasia__open-event-server-4806 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Change is-ticketing-enabled field of Event to dump_only
**I'm submitting a ...** (check one with "x")
- [ ] bug report
- [x] feature request
- [ ] support request => Please do not submit support requests here, instead ask your query in our Gitter channel at https://gitter.im/fossasia/open-event-server
**Current behavior:**
It is possible to create tickets for an event whose is-ticketing-enabled field is set to false.
**Expected behavior:**
Either the field should be made dump_only or appropriate constraints must be imposed to check this behaviour.
**Steps to reproduce:**
1. Create an event with the is-ticketing-enabled attribute set to false.
2. Create Ticket for that Event.
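If the constraint route is taken, a rough sketch of the check inside `TicketListPost.before_post` could look like the following (assuming `Event` and a suitable exception class are importable in `app/api/tickets.py`):

```python
# Reject ticket creation when ticketing is disabled for the target event
event = db.session.query(Event).filter_by(id=int(data['event'])).one_or_none()
if event is not None and not event.is_ticketing_enabled:
    raise MethodNotAllowed(
        {'parameter': 'event_id'}, "Ticketing is disabled for this Event"
    )
```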
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `app/api/tickets.py`
Content:
```
1 from flask_rest_jsonapi import ResourceDetail, ResourceList, ResourceRelationship
2 from flask_rest_jsonapi.exceptions import ObjectNotFound
3
4 from app.api.bootstrap import api
5 from app.api.helpers.db import safe_query
6 from app.api.helpers.permission_manager import has_access
7 from app.api.helpers.query import event_query
8 from app.api.helpers.utilities import require_relationship
9 from app.api.schema.tickets import TicketSchema, TicketSchemaPublic
10 from app.models import db
11 from app.models.access_code import AccessCode
12 from app.models.order import Order
13 from app.models.ticket import Ticket, TicketTag, ticket_tags_table
14 from app.models.ticket_holder import TicketHolder
15 from app.api.helpers.exceptions import ConflictException
16 from app.api.helpers.db import get_count
17
18
19 class TicketListPost(ResourceList):
20 """
21 Create and List Tickets
22 """
23 def before_post(self, args, kwargs, data):
24 """
25 before post method to check for required relationship and proper permission
26 :param args:
27 :param kwargs:
28 :param data:
29 :return:
30 """
31 require_relationship(['event'], data)
32 if not has_access('is_coorganizer', event_id=data['event']):
33 raise ObjectNotFound({'parameter': 'event_id'},
34 "Event: {} not found".format(data['event_id']))
35
36 if get_count(db.session.query(Ticket.id).filter_by(name=data['name'], event_id=int(data['event']))) > 0:
37 raise ConflictException({'pointer': '/data/attributes/name'}, "Ticket already exists")
38
39 schema = TicketSchema
40 methods = ['POST', ]
41 data_layer = {'session': db.session,
42 'model': Ticket}
43
44
45 class TicketList(ResourceList):
46 """
47 List Tickets based on different params
48 """
49 def before_get(self, args, view_kwargs):
50 """
51 before get method to get the resource id for assigning schema
52 :param args:
53 :param view_kwargs:
54 :return:
55 """
56 if view_kwargs.get('ticket_tag_id') or view_kwargs.get('access_code_id') or view_kwargs.get('order_identifier'):
57 self.schema = TicketSchemaPublic
58
59 def query(self, view_kwargs):
60 """
61 query method for resource list
62 :param view_kwargs:
63 :return:
64 """
65 query_ = self.session.query(Ticket).filter_by(is_hidden=False)
66 if view_kwargs.get('ticket_tag_id'):
67 ticket_tag = safe_query(self, TicketTag, 'id', view_kwargs['ticket_tag_id'], 'ticket_tag_id')
68 query_ = query_.join(ticket_tags_table).filter_by(ticket_tag_id=ticket_tag.id)
69 query_ = event_query(self, query_, view_kwargs)
70 if view_kwargs.get('access_code_id'):
71 access_code = safe_query(self, AccessCode, 'id', view_kwargs['access_code_id'], 'access_code_id')
72 # access_code - ticket :: many-to-many relationship
73 query_ = Ticket.query.filter(Ticket.access_codes.any(id=access_code.id))
74
75 if view_kwargs.get('order_identifier'):
76 order = safe_query(self, Order, 'identifier', view_kwargs['order_identifier'], 'order_identifier')
77 ticket_ids = []
78 for ticket in order.tickets:
79 ticket_ids.append(ticket.id)
80 query_ = query_.filter(Ticket.id.in_(tuple(ticket_ids)))
81
82 return query_
83
84 view_kwargs = True
85 methods = ['GET', ]
86 decorators = (api.has_permission('is_coorganizer', fetch='event_id',
87 fetch_as="event_id", model=Ticket, methods="POST",
88 check=lambda a: a.get('event_id') or a.get('event_identifier')),)
89 schema = TicketSchema
90 data_layer = {'session': db.session,
91 'model': Ticket,
92 'methods': {
93 'query': query,
94 }}
95
96
97 class TicketDetail(ResourceDetail):
98 """
99 Ticket Resource
100 """
101 def before_get(self, args, view_kwargs):
102 """
103 before get method to get the resource id for assigning schema
104 :param args:
105 :param view_kwargs:
106 :return:
107 """
108 if view_kwargs.get('attendee_id'):
109 self.schema = TicketSchemaPublic
110
111 def before_get_object(self, view_kwargs):
112 """
113 before get object method to get the resource id for fetching details
114 :param view_kwargs:
115 :return:
116 """
117 if view_kwargs.get('attendee_id') is not None:
118 attendee = safe_query(self, TicketHolder, 'id', view_kwargs['attendee_id'], 'attendee_id')
119 if attendee.ticket_id is not None:
120 view_kwargs['id'] = attendee.ticket_id
121 else:
122 view_kwargs['id'] = None
123
124 decorators = (api.has_permission('is_coorganizer', fetch='event_id',
125 fetch_as="event_id", model=Ticket, methods="PATCH,DELETE"),)
126 schema = TicketSchema
127 data_layer = {'session': db.session,
128 'model': Ticket,
129 'methods': {
130 'before_get_object': before_get_object
131 }}
132
133
134 class TicketRelationshipRequired(ResourceRelationship):
135 """
136 Tickets Relationship (Required)
137 """
138 decorators = (api.has_permission('is_coorganizer', fetch='event_id',
139 fetch_as="event_id", model=Ticket, methods="PATCH"),)
140 methods = ['GET', 'PATCH']
141 schema = TicketSchema
142 data_layer = {'session': db.session,
143 'model': Ticket}
144
145
146 class TicketRelationshipOptional(ResourceRelationship):
147 """
148 Tickets Relationship (Optional)
149 """
150 decorators = (api.has_permission('is_coorganizer', fetch='event_id',
151 fetch_as="event_id", model=Ticket, methods="PATCH,DELETE"),)
152 schema = TicketSchema
153 data_layer = {'session': db.session,
154 'model': Ticket}
155
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/app/api/tickets.py b/app/api/tickets.py
--- a/app/api/tickets.py
+++ b/app/api/tickets.py
@@ -11,8 +11,9 @@
from app.models.access_code import AccessCode
from app.models.order import Order
from app.models.ticket import Ticket, TicketTag, ticket_tags_table
+from app.models.event import Event
from app.models.ticket_holder import TicketHolder
-from app.api.helpers.exceptions import ConflictException
+from app.api.helpers.exceptions import ConflictException, MethodNotAllowed
from app.api.helpers.db import get_count
@@ -36,6 +37,9 @@
if get_count(db.session.query(Ticket.id).filter_by(name=data['name'], event_id=int(data['event']))) > 0:
raise ConflictException({'pointer': '/data/attributes/name'}, "Ticket already exists")
+ if get_count(db.session.query(Event).filter_by(id=int(data['event']), is_ticketing_enabled=False)) > 0:
+ raise MethodNotAllowed({'parameter': 'event_id'}, "Ticketing is disabled for this Event")
+
schema = TicketSchema
methods = ['POST', ]
data_layer = {'session': db.session,
| {"golden_diff": "diff --git a/app/api/tickets.py b/app/api/tickets.py\n--- a/app/api/tickets.py\n+++ b/app/api/tickets.py\n@@ -11,8 +11,9 @@\n from app.models.access_code import AccessCode\n from app.models.order import Order\n from app.models.ticket import Ticket, TicketTag, ticket_tags_table\n+from app.models.event import Event\n from app.models.ticket_holder import TicketHolder\n-from app.api.helpers.exceptions import ConflictException\n+from app.api.helpers.exceptions import ConflictException, MethodNotAllowed\n from app.api.helpers.db import get_count\n \n \n@@ -36,6 +37,9 @@\n if get_count(db.session.query(Ticket.id).filter_by(name=data['name'], event_id=int(data['event']))) > 0:\n raise ConflictException({'pointer': '/data/attributes/name'}, \"Ticket already exists\")\n \n+ if get_count(db.session.query(Event).filter_by(id=int(data['event']), is_ticketing_enabled=False)) > 0:\n+ raise MethodNotAllowed({'parameter': 'event_id'}, \"Ticketing is disabled for this Event\")\n+\n schema = TicketSchema\n methods = ['POST', ]\n data_layer = {'session': db.session,\n", "issue": "Change is-ticketing-enabled field of Event to dump_only\n**I'm submitting a ...** (check one with \"x\")\r\n- [ ] bug report\r\n- [x] feature request\r\n- [ ] support request => Please do not submit support requests here, instead ask your query in out Gitter channel at https://gitter.im/fossasia/open-event-server\r\n\r\n**Current behavior:**\r\nIt is possible to create tickets for an event in which is-ticketing-enabled field is set to false.\r\n\r\n**Expected behavior:**\r\nEither the field should be made dump_only or appropiate constraints must be imposed to ckeck this behaviour.\r\n\r\n**Steps to reproduce:**\r\n1. Create an event with is-ticketing-enabled attribute to false.\r\n2. 
Create Ticket for that Event.\r\n\n", "before_files": [{"content": "from flask_rest_jsonapi import ResourceDetail, ResourceList, ResourceRelationship\nfrom flask_rest_jsonapi.exceptions import ObjectNotFound\n\nfrom app.api.bootstrap import api\nfrom app.api.helpers.db import safe_query\nfrom app.api.helpers.permission_manager import has_access\nfrom app.api.helpers.query import event_query\nfrom app.api.helpers.utilities import require_relationship\nfrom app.api.schema.tickets import TicketSchema, TicketSchemaPublic\nfrom app.models import db\nfrom app.models.access_code import AccessCode\nfrom app.models.order import Order\nfrom app.models.ticket import Ticket, TicketTag, ticket_tags_table\nfrom app.models.ticket_holder import TicketHolder\nfrom app.api.helpers.exceptions import ConflictException\nfrom app.api.helpers.db import get_count\n\n\nclass TicketListPost(ResourceList):\n \"\"\"\n Create and List Tickets\n \"\"\"\n def before_post(self, args, kwargs, data):\n \"\"\"\n before post method to check for required relationship and proper permission\n :param args:\n :param kwargs:\n :param data:\n :return:\n \"\"\"\n require_relationship(['event'], data)\n if not has_access('is_coorganizer', event_id=data['event']):\n raise ObjectNotFound({'parameter': 'event_id'},\n \"Event: {} not found\".format(data['event_id']))\n\n if get_count(db.session.query(Ticket.id).filter_by(name=data['name'], event_id=int(data['event']))) > 0:\n raise ConflictException({'pointer': '/data/attributes/name'}, \"Ticket already exists\")\n\n schema = TicketSchema\n methods = ['POST', ]\n data_layer = {'session': db.session,\n 'model': Ticket}\n\n\nclass TicketList(ResourceList):\n \"\"\"\n List Tickets based on different params\n \"\"\"\n def before_get(self, args, view_kwargs):\n \"\"\"\n before get method to get the resource id for assigning schema\n :param args:\n :param view_kwargs:\n :return:\n \"\"\"\n if view_kwargs.get('ticket_tag_id') or view_kwargs.get('access_code_id') or view_kwargs.get('order_identifier'):\n self.schema = TicketSchemaPublic\n\n def query(self, view_kwargs):\n \"\"\"\n query method for resource list\n :param view_kwargs:\n :return:\n \"\"\"\n query_ = self.session.query(Ticket).filter_by(is_hidden=False)\n if view_kwargs.get('ticket_tag_id'):\n ticket_tag = safe_query(self, TicketTag, 'id', view_kwargs['ticket_tag_id'], 'ticket_tag_id')\n query_ = query_.join(ticket_tags_table).filter_by(ticket_tag_id=ticket_tag.id)\n query_ = event_query(self, query_, view_kwargs)\n if view_kwargs.get('access_code_id'):\n access_code = safe_query(self, AccessCode, 'id', view_kwargs['access_code_id'], 'access_code_id')\n # access_code - ticket :: many-to-many relationship\n query_ = Ticket.query.filter(Ticket.access_codes.any(id=access_code.id))\n\n if view_kwargs.get('order_identifier'):\n order = safe_query(self, Order, 'identifier', view_kwargs['order_identifier'], 'order_identifier')\n ticket_ids = []\n for ticket in order.tickets:\n ticket_ids.append(ticket.id)\n query_ = query_.filter(Ticket.id.in_(tuple(ticket_ids)))\n\n return query_\n\n view_kwargs = True\n methods = ['GET', ]\n decorators = (api.has_permission('is_coorganizer', fetch='event_id',\n fetch_as=\"event_id\", model=Ticket, methods=\"POST\",\n check=lambda a: a.get('event_id') or a.get('event_identifier')),)\n schema = TicketSchema\n data_layer = {'session': db.session,\n 'model': Ticket,\n 'methods': {\n 'query': query,\n }}\n\n\nclass TicketDetail(ResourceDetail):\n \"\"\"\n Ticket Resource\n \"\"\"\n def before_get(self, args, 
view_kwargs):\n \"\"\"\n before get method to get the resource id for assigning schema\n :param args:\n :param view_kwargs:\n :return:\n \"\"\"\n if view_kwargs.get('attendee_id'):\n self.schema = TicketSchemaPublic\n\n def before_get_object(self, view_kwargs):\n \"\"\"\n before get object method to get the resource id for fetching details\n :param view_kwargs:\n :return:\n \"\"\"\n if view_kwargs.get('attendee_id') is not None:\n attendee = safe_query(self, TicketHolder, 'id', view_kwargs['attendee_id'], 'attendee_id')\n if attendee.ticket_id is not None:\n view_kwargs['id'] = attendee.ticket_id\n else:\n view_kwargs['id'] = None\n\n decorators = (api.has_permission('is_coorganizer', fetch='event_id',\n fetch_as=\"event_id\", model=Ticket, methods=\"PATCH,DELETE\"),)\n schema = TicketSchema\n data_layer = {'session': db.session,\n 'model': Ticket,\n 'methods': {\n 'before_get_object': before_get_object\n }}\n\n\nclass TicketRelationshipRequired(ResourceRelationship):\n \"\"\"\n Tickets Relationship (Required)\n \"\"\"\n decorators = (api.has_permission('is_coorganizer', fetch='event_id',\n fetch_as=\"event_id\", model=Ticket, methods=\"PATCH\"),)\n methods = ['GET', 'PATCH']\n schema = TicketSchema\n data_layer = {'session': db.session,\n 'model': Ticket}\n\n\nclass TicketRelationshipOptional(ResourceRelationship):\n \"\"\"\n Tickets Relationship (Optional)\n \"\"\"\n decorators = (api.has_permission('is_coorganizer', fetch='event_id',\n fetch_as=\"event_id\", model=Ticket, methods=\"PATCH,DELETE\"),)\n schema = TicketSchema\n data_layer = {'session': db.session,\n 'model': Ticket}\n", "path": "app/api/tickets.py"}], "after_files": [{"content": "from flask_rest_jsonapi import ResourceDetail, ResourceList, ResourceRelationship\nfrom flask_rest_jsonapi.exceptions import ObjectNotFound\n\nfrom app.api.bootstrap import api\nfrom app.api.helpers.db import safe_query\nfrom app.api.helpers.permission_manager import has_access\nfrom app.api.helpers.query import event_query\nfrom app.api.helpers.utilities import require_relationship\nfrom app.api.schema.tickets import TicketSchema, TicketSchemaPublic\nfrom app.models import db\nfrom app.models.access_code import AccessCode\nfrom app.models.order import Order\nfrom app.models.ticket import Ticket, TicketTag, ticket_tags_table\nfrom app.models.event import Event\nfrom app.models.ticket_holder import TicketHolder\nfrom app.api.helpers.exceptions import ConflictException, MethodNotAllowed\nfrom app.api.helpers.db import get_count\n\n\nclass TicketListPost(ResourceList):\n \"\"\"\n Create and List Tickets\n \"\"\"\n def before_post(self, args, kwargs, data):\n \"\"\"\n before post method to check for required relationship and proper permission\n :param args:\n :param kwargs:\n :param data:\n :return:\n \"\"\"\n require_relationship(['event'], data)\n if not has_access('is_coorganizer', event_id=data['event']):\n raise ObjectNotFound({'parameter': 'event_id'},\n \"Event: {} not found\".format(data['event_id']))\n\n if get_count(db.session.query(Ticket.id).filter_by(name=data['name'], event_id=int(data['event']))) > 0:\n raise ConflictException({'pointer': '/data/attributes/name'}, \"Ticket already exists\")\n\n if get_count(db.session.query(Event).filter_by(id=int(data['event']), is_ticketing_enabled=False)) > 0:\n raise MethodNotAllowed({'parameter': 'event_id'}, \"Ticketing is disabled for this Event\")\n\n schema = TicketSchema\n methods = ['POST', ]\n data_layer = {'session': db.session,\n 'model': Ticket}\n\n\nclass TicketList(ResourceList):\n 
\"\"\"\n List Tickets based on different params\n \"\"\"\n def before_get(self, args, view_kwargs):\n \"\"\"\n before get method to get the resource id for assigning schema\n :param args:\n :param view_kwargs:\n :return:\n \"\"\"\n if view_kwargs.get('ticket_tag_id') or view_kwargs.get('access_code_id') or view_kwargs.get('order_identifier'):\n self.schema = TicketSchemaPublic\n\n def query(self, view_kwargs):\n \"\"\"\n query method for resource list\n :param view_kwargs:\n :return:\n \"\"\"\n query_ = self.session.query(Ticket).filter_by(is_hidden=False)\n if view_kwargs.get('ticket_tag_id'):\n ticket_tag = safe_query(self, TicketTag, 'id', view_kwargs['ticket_tag_id'], 'ticket_tag_id')\n query_ = query_.join(ticket_tags_table).filter_by(ticket_tag_id=ticket_tag.id)\n query_ = event_query(self, query_, view_kwargs)\n if view_kwargs.get('access_code_id'):\n access_code = safe_query(self, AccessCode, 'id', view_kwargs['access_code_id'], 'access_code_id')\n # access_code - ticket :: many-to-many relationship\n query_ = Ticket.query.filter(Ticket.access_codes.any(id=access_code.id))\n\n if view_kwargs.get('order_identifier'):\n order = safe_query(self, Order, 'identifier', view_kwargs['order_identifier'], 'order_identifier')\n ticket_ids = []\n for ticket in order.tickets:\n ticket_ids.append(ticket.id)\n query_ = query_.filter(Ticket.id.in_(tuple(ticket_ids)))\n\n return query_\n\n view_kwargs = True\n methods = ['GET', ]\n decorators = (api.has_permission('is_coorganizer', fetch='event_id',\n fetch_as=\"event_id\", model=Ticket, methods=\"POST\",\n check=lambda a: a.get('event_id') or a.get('event_identifier')),)\n schema = TicketSchema\n data_layer = {'session': db.session,\n 'model': Ticket,\n 'methods': {\n 'query': query,\n }}\n\n\nclass TicketDetail(ResourceDetail):\n \"\"\"\n Ticket Resource\n \"\"\"\n def before_get(self, args, view_kwargs):\n \"\"\"\n before get method to get the resource id for assigning schema\n :param args:\n :param view_kwargs:\n :return:\n \"\"\"\n if view_kwargs.get('attendee_id'):\n self.schema = TicketSchemaPublic\n\n def before_get_object(self, view_kwargs):\n \"\"\"\n before get object method to get the resource id for fetching details\n :param view_kwargs:\n :return:\n \"\"\"\n if view_kwargs.get('attendee_id') is not None:\n attendee = safe_query(self, TicketHolder, 'id', view_kwargs['attendee_id'], 'attendee_id')\n if attendee.ticket_id is not None:\n view_kwargs['id'] = attendee.ticket_id\n else:\n view_kwargs['id'] = None\n\n decorators = (api.has_permission('is_coorganizer', fetch='event_id',\n fetch_as=\"event_id\", model=Ticket, methods=\"PATCH,DELETE\"),)\n schema = TicketSchema\n data_layer = {'session': db.session,\n 'model': Ticket,\n 'methods': {\n 'before_get_object': before_get_object\n }}\n\n\nclass TicketRelationshipRequired(ResourceRelationship):\n \"\"\"\n Tickets Relationship (Required)\n \"\"\"\n decorators = (api.has_permission('is_coorganizer', fetch='event_id',\n fetch_as=\"event_id\", model=Ticket, methods=\"PATCH\"),)\n methods = ['GET', 'PATCH']\n schema = TicketSchema\n data_layer = {'session': db.session,\n 'model': Ticket}\n\n\nclass TicketRelationshipOptional(ResourceRelationship):\n \"\"\"\n Tickets Relationship (Optional)\n \"\"\"\n decorators = (api.has_permission('is_coorganizer', fetch='event_id',\n fetch_as=\"event_id\", model=Ticket, methods=\"PATCH,DELETE\"),)\n schema = TicketSchema\n data_layer = {'session': db.session,\n 'model': Ticket}\n", "path": "app/api/tickets.py"}]} | 1,997 | 257 |
gh_patches_debug_4691 | rasdani/github-patches | git_diff | mindsdb__lightwood-608 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
:wrench: Colored terminal output
## Task
Modify the lightwood [logger class](https://github.com/mindsdb/lightwood/blob/stable/lightwood/helpers/log.py) so that its output is colored, depending on the log level. Color scheme is not set in stone, but should be something that makes sense, e.g.: red for `CRITICAL`, orange for `ERROR`, yellow for `WARNING`, green for `INFO`, uncolored for `DEBUG`.
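For illustration only, a dependency-free sketch of a level-aware formatter using raw ANSI escape codes — the actual PR can use `colored`, `termcolor`, or any similar package instead, and the exact colors are only an approximation of the scheme above:

```python
import logging

LEVEL_COLORS = {
    logging.CRITICAL: "\033[31m",        # red
    logging.ERROR: "\033[38;5;208m",     # orange (256-color terminals)
    logging.WARNING: "\033[33m",         # yellow
    logging.INFO: "\033[32m",            # green
    # DEBUG intentionally left uncolored
}
RESET = "\033[0m"


class ColoredFormatter(logging.Formatter):
    def format(self, record):
        message = super().format(record)
        color = LEVEL_COLORS.get(record.levelno, "")
        return f"{color}{message}{RESET}" if color else message
```

Wiring it up would just mean attaching a `StreamHandler` with this formatter inside `initialize_log`.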
## Steps :male_detective: :female_detective:
- Fork the Lightwood repository, checkout the `staging` branch and from it create a new one.
- Implement the necessary changes. The package to achieve colored logging can be chosen as part of the PR, but we suggest using lightweight alternatives like [colored](https://pypi.org/project/colored/) or [termcolor](https://pypi.org/project/termcolor/).
- Check the logs are using the new color scheme. For this, you can run any of the integration tests, like [`test_boston_housing`](https://github.com/mindsdb/lightwood/blob/stable/tests/integration/basic/test_boston_housing.py), and analyze the output.
- Make the PR and address any comments that reviewers might make.
## Additional rewards :1st_place_medal:
Each documentation PR brings :one: point for entry into the draw for a :computer: Deep Learning Laptop powered by the NVIDIA RTX 3080 Max-Q GPU or other swag :shirt: :bear: . For more info check out https://mindsdb.com/hacktoberfest/
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lightwood/helpers/log.py`
Content:
```
1 import logging
2 import os
3
4
5 def initialize_log():
6 pid = os.getpid()
7 logging.basicConfig()
8 log = logging.getLogger(f'lightwood-{pid}')
9 log.setLevel(logging.DEBUG)
10 return log
11
12
13 log = initialize_log()
14
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/lightwood/helpers/log.py b/lightwood/helpers/log.py
--- a/lightwood/helpers/log.py
+++ b/lightwood/helpers/log.py
@@ -1,10 +1,15 @@
import logging
import os
+import colorlog
def initialize_log():
pid = os.getpid()
- logging.basicConfig()
+
+ handler = colorlog.StreamHandler()
+ handler.setFormatter(colorlog.ColoredFormatter())
+
+ logging.basicConfig(handlers=[handler])
log = logging.getLogger(f'lightwood-{pid}')
log.setLevel(logging.DEBUG)
return log
| {"golden_diff": "diff --git a/lightwood/helpers/log.py b/lightwood/helpers/log.py\n--- a/lightwood/helpers/log.py\n+++ b/lightwood/helpers/log.py\n@@ -1,10 +1,15 @@\n import logging\n import os\n+import colorlog\n \n \n def initialize_log():\n pid = os.getpid()\n- logging.basicConfig()\n+\n+ handler = colorlog.StreamHandler()\n+ handler.setFormatter(colorlog.ColoredFormatter())\n+\n+ logging.basicConfig(handlers=[handler])\n log = logging.getLogger(f'lightwood-{pid}')\n log.setLevel(logging.DEBUG)\n return log\n", "issue": ":wrench: Colored terminal output\n## Task\r\n\r\nModify the lightwood [logger class](https://github.com/mindsdb/lightwood/blob/stable/lightwood/helpers/log.py) so that its output is colored, depending on the log level. Color scheme is not set in stone, but should be something that makes sense, e.g.: red for `CRITICAL`, orange for `ERROR`, yellow for `WARNING`, green for `INFO`, uncolored for `DEBUG`.\r\n\r\n## Steps :male_detective: :female_detective: \r\n\r\n- Fork the Lightwood repository, checkout the `staging` branch and from it create a new one.\r\n- Implement the necessary changes. The package to achieve colored logging can be chosen as part of the PR, but we suggest using lightweight alternatives like [colored](https://pypi.org/project/colored/) or [termcolor](https://pypi.org/project/termcolor/).\r\n- Check the logs are using the new color scheme. For this, you can run any of the integration tests, like [`test_boston_housing`](https://github.com/mindsdb/lightwood/blob/stable/tests/integration/basic/test_boston_housing.py), and analyze the output.\r\n- Make the PR and address any comments that reviewers might make.\r\n\r\n## Additional rewards :1st_place_medal: \r\n\r\nEach documentation PR brings :one: point for entry into the draw for a :computer: Deep Learning Laptop powered by the NVIDIA RTX 3080 Max-Q GPU or other swag :shirt: :bear: . For more info check out https://mindsdb.com/hacktoberfest/\n", "before_files": [{"content": "import logging\nimport os\n\n\ndef initialize_log():\n pid = os.getpid()\n logging.basicConfig()\n log = logging.getLogger(f'lightwood-{pid}')\n log.setLevel(logging.DEBUG)\n return log\n\n\nlog = initialize_log()\n", "path": "lightwood/helpers/log.py"}], "after_files": [{"content": "import logging\nimport os\nimport colorlog\n\n\ndef initialize_log():\n pid = os.getpid()\n\n handler = colorlog.StreamHandler()\n handler.setFormatter(colorlog.ColoredFormatter())\n\n logging.basicConfig(handlers=[handler])\n log = logging.getLogger(f'lightwood-{pid}')\n log.setLevel(logging.DEBUG)\n return log\n\n\nlog = initialize_log()\n", "path": "lightwood/helpers/log.py"}]} | 669 | 125 |
gh_patches_debug_40903 | rasdani/github-patches | git_diff | mozilla__bugbug-916 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Tune the check for training metrics
We currently allow the latest result to be at most 5% worse than the previous one; this threshold is arbitrary and we might need to tune it.
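One option, sketched below, is to stop hard-coding the value and expose it as a command-line argument of the analysis script (the argument name is just a suggestion):

```python
parser.add_argument(
    "--warning-threshold",
    type=float,
    default=0.95,
    help="Fail when the latest metric drops below previous_value * threshold",
)
```

That keeps 5% as the default while letting us experiment with stricter or looser values per model.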
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scripts/analyze_training_metrics.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 # This Source Code Form is subject to the terms of the Mozilla Public
3 # License, v. 2.0. If a copy of the MPL was not distributed with this file,
4 # You can obtain one at http://mozilla.org/MPL/2.0/.
5 """ Given a directory containing training metrics, generate SVF graphs and check that the metrics are not getting worse than before.
6 """
7
8 import argparse
9 import json
10 import logging
11 import sys
12 from collections import defaultdict
13 from datetime import datetime, timezone
14 from pathlib import Path
15 from typing import Dict
16
17 import matplotlib.dates as mdates
18 import matplotlib.pyplot as plt
19
20 LOGGER = logging.getLogger(__name__)
21
22 logging.basicConfig(level=logging.INFO)
23
24 # If the latest metric point is 5% lower than the previous one, show a warning and exit with 1.
25 WARNING_THRESHOLD = 0.95
26
27 REPORT_METRICS = ["accuracy", "precision", "recall"]
28
29
30 def plot_graph(
31 model_name: str,
32 metric_name: str,
33 values_dict: Dict[datetime, float],
34 output_directory: Path,
35 ) -> bool:
36 sorted_metrics = sorted(values_dict.items())
37 x, y = zip(*sorted_metrics)
38
39 # Compute the threshold
40 if len(y) >= 2:
41 before_last_value = y[-2]
42 else:
43 before_last_value = y[-1]
44 metric_threshold = before_last_value * WARNING_THRESHOLD
45
46 figure = plt.figure()
47 axes = plt.axes()
48
49 # Formatting of the figure
50 figure.autofmt_xdate()
51 axes.fmt_xdata = mdates.DateFormatter("%Y-%m-%d-%H-%M")
52 axes.set_title(f"{model_name} {metric_name}")
53
54 # Display threshold
55 axes.axhline(y=metric_threshold, linestyle="--", color="red")
56 plt.annotate(
57 "{:.4f}".format(metric_threshold),
58 (x[-1], metric_threshold),
59 textcoords="offset points", # how to position the text
60 xytext=(-10, 10), # distance from text to points (x,y)
61 ha="center",
62 color="red",
63 )
64
65 # Display point values
66 for single_x, single_y in zip(x, y):
67 label = "{:.4f}".format(single_y)
68
69 plt.annotate(
70 label,
71 (single_x, single_y),
72 textcoords="offset points",
73 xytext=(0, 10),
74 ha="center",
75 )
76
77 axes.plot_date(x, y, marker=".", fmt="-")
78
79 output_file_path = output_directory.resolve() / f"{model_name}_{metric_name}.svg"
80 LOGGER.info("Saving %s figure", output_file_path)
81 plt.savefig(output_file_path)
82
83 plt.close(figure)
84
85 # Check if the threshold has been crossed
86 return y[-1] < metric_threshold
87
88
89 def analyze_metrics(metrics_directory: str, output_directory: str):
90 root = Path(metrics_directory)
91
92 metrics: Dict[str, Dict[str, Dict[datetime, float]]] = defaultdict(
93 lambda: defaultdict(dict)
94 )
95
96 threshold_ever_crossed = False
97
98 for metric_file_path in root.glob("metric*.json"):
99
100 # Load the metric
101 with open(metric_file_path, "r") as metric_file:
102 metric = json.load(metric_file)
103
104 # Get the model, date and version from the file
105 # TODO: Might be better storing it in the file
106 file_path_parts = metric_file_path.stem.split("_")
107
108 assert file_path_parts[:5] == ["metric", "project", "relman", "bugbug", "train"]
109 model_name = file_path_parts[5]
110 assert file_path_parts[6:8] == ["per", "date"]
111 date_parts = list(map(int, file_path_parts[8:14]))
112 date = datetime(
113 date_parts[0],
114 date_parts[1],
115 date_parts[2],
116 date_parts[3],
117 date_parts[4],
118 date_parts[5],
119 tzinfo=timezone.utc,
120 )
121 # version = file_path_parts[14:] # TODO: Use version
122
123 # Then process the report
124 for key, value in metric["report"]["average"].items():
125 if key not in REPORT_METRICS:
126 continue
127
128 metrics[model_name][key][date] = value
129
130 # Also process the test_* metrics
131 for key, value in metric.items():
132 if not key.startswith("test_"):
133 continue
134
135 metrics[model_name][f"{key}_mean"][date] = value["mean"]
136 metrics[model_name][f"{key}_std"][date] = value["std"]
137
138 for model_name in metrics:
139 for metric_name, values in metrics[model_name].items():
140 threshold_crossed = plot_graph(
141 model_name, metric_name, values, Path(output_directory)
142 )
143
144 if threshold_crossed:
145 LOGGER.warning(
146 "Last metric %r for model %s is 5%% worse than the previous one",
147 metric_name,
148 model_name,
149 )
150
151 threshold_ever_crossed = threshold_ever_crossed or threshold_crossed
152
153 if threshold_ever_crossed:
154 sys.exit(1)
155
156
157 def main():
158 parser = argparse.ArgumentParser(description=__doc__)
159
160 parser.add_argument(
161 "metrics_directory",
162 metavar="metrics-directory",
163 help="In which directory the script can find the metrics JSON files",
164 )
165 parser.add_argument(
166 "output_directory",
167 metavar="output-directory",
168 help="In which directory the script will save the generated graphs",
169 )
170
171 args = parser.parse_args()
172
173 analyze_metrics(args.metrics_directory, args.output_directory)
174
175
176 if __name__ == "__main__":
177 main()
178
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/scripts/analyze_training_metrics.py b/scripts/analyze_training_metrics.py
--- a/scripts/analyze_training_metrics.py
+++ b/scripts/analyze_training_metrics.py
@@ -21,7 +21,8 @@
logging.basicConfig(level=logging.INFO)
-# If the latest metric point is 5% lower than the previous one, show a warning and exit with 1.
+# By default, if the latest metric point is 5% lower than the previous one, show a warning and exit
+# with 1.
WARNING_THRESHOLD = 0.95
REPORT_METRICS = ["accuracy", "precision", "recall"]
@@ -32,6 +33,7 @@
metric_name: str,
values_dict: Dict[datetime, float],
output_directory: Path,
+ warning_threshold: float,
) -> bool:
sorted_metrics = sorted(values_dict.items())
x, y = zip(*sorted_metrics)
@@ -41,7 +43,7 @@
before_last_value = y[-2]
else:
before_last_value = y[-1]
- metric_threshold = before_last_value * WARNING_THRESHOLD
+ metric_threshold = before_last_value * warning_threshold
figure = plt.figure()
axes = plt.axes()
@@ -86,7 +88,9 @@
return y[-1] < metric_threshold
-def analyze_metrics(metrics_directory: str, output_directory: str):
+def analyze_metrics(
+ metrics_directory: str, output_directory: str, warning_threshold: float
+):
root = Path(metrics_directory)
metrics: Dict[str, Dict[str, Dict[datetime, float]]] = defaultdict(
@@ -138,14 +142,21 @@
for model_name in metrics:
for metric_name, values in metrics[model_name].items():
threshold_crossed = plot_graph(
- model_name, metric_name, values, Path(output_directory)
+ model_name,
+ metric_name,
+ values,
+ Path(output_directory),
+ warning_threshold,
)
+ diff = (1 - warning_threshold) * 100
+
if threshold_crossed:
LOGGER.warning(
- "Last metric %r for model %s is 5%% worse than the previous one",
+ "Last metric %r for model %s is %f%% worse than the previous one",
metric_name,
model_name,
+ diff,
)
threshold_ever_crossed = threshold_ever_crossed or threshold_crossed
@@ -167,10 +178,18 @@
metavar="output-directory",
help="In which directory the script will save the generated graphs",
)
+ parser.add_argument(
+ "--warning_threshold",
+ default=WARNING_THRESHOLD,
+ type=float,
+ help="If the last metric value is below the previous one*warning_threshold, fails. Default to 0.95",
+ )
args = parser.parse_args()
- analyze_metrics(args.metrics_directory, args.output_directory)
+ analyze_metrics(
+ args.metrics_directory, args.output_directory, args.warning_threshold
+ )
if __name__ == "__main__":
| {"golden_diff": "diff --git a/scripts/analyze_training_metrics.py b/scripts/analyze_training_metrics.py\n--- a/scripts/analyze_training_metrics.py\n+++ b/scripts/analyze_training_metrics.py\n@@ -21,7 +21,8 @@\n \n logging.basicConfig(level=logging.INFO)\n \n-# If the latest metric point is 5% lower than the previous one, show a warning and exit with 1.\n+# By default, if the latest metric point is 5% lower than the previous one, show a warning and exit\n+# with 1.\n WARNING_THRESHOLD = 0.95\n \n REPORT_METRICS = [\"accuracy\", \"precision\", \"recall\"]\n@@ -32,6 +33,7 @@\n metric_name: str,\n values_dict: Dict[datetime, float],\n output_directory: Path,\n+ warning_threshold: float,\n ) -> bool:\n sorted_metrics = sorted(values_dict.items())\n x, y = zip(*sorted_metrics)\n@@ -41,7 +43,7 @@\n before_last_value = y[-2]\n else:\n before_last_value = y[-1]\n- metric_threshold = before_last_value * WARNING_THRESHOLD\n+ metric_threshold = before_last_value * warning_threshold\n \n figure = plt.figure()\n axes = plt.axes()\n@@ -86,7 +88,9 @@\n return y[-1] < metric_threshold\n \n \n-def analyze_metrics(metrics_directory: str, output_directory: str):\n+def analyze_metrics(\n+ metrics_directory: str, output_directory: str, warning_threshold: float\n+):\n root = Path(metrics_directory)\n \n metrics: Dict[str, Dict[str, Dict[datetime, float]]] = defaultdict(\n@@ -138,14 +142,21 @@\n for model_name in metrics:\n for metric_name, values in metrics[model_name].items():\n threshold_crossed = plot_graph(\n- model_name, metric_name, values, Path(output_directory)\n+ model_name,\n+ metric_name,\n+ values,\n+ Path(output_directory),\n+ warning_threshold,\n )\n \n+ diff = (1 - warning_threshold) * 100\n+\n if threshold_crossed:\n LOGGER.warning(\n- \"Last metric %r for model %s is 5%% worse than the previous one\",\n+ \"Last metric %r for model %s is %f%% worse than the previous one\",\n metric_name,\n model_name,\n+ diff,\n )\n \n threshold_ever_crossed = threshold_ever_crossed or threshold_crossed\n@@ -167,10 +178,18 @@\n metavar=\"output-directory\",\n help=\"In which directory the script will save the generated graphs\",\n )\n+ parser.add_argument(\n+ \"--warning_threshold\",\n+ default=WARNING_THRESHOLD,\n+ type=float,\n+ help=\"If the last metric value is below the previous one*warning_threshold, fails. Default to 0.95\",\n+ )\n \n args = parser.parse_args()\n \n- analyze_metrics(args.metrics_directory, args.output_directory)\n+ analyze_metrics(\n+ args.metrics_directory, args.output_directory, args.warning_threshold\n+ )\n \n \n if __name__ == \"__main__\":\n", "issue": "Tune the check for training metrics\nWe currently set an arbitrary max 5% worse result, we might need to tune it\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# This Source Code Form is subject to the terms of the Mozilla Public\n# License, v. 2.0. 
If a copy of the MPL was not distributed with this file,\n# You can obtain one at http://mozilla.org/MPL/2.0/.\n\"\"\" Given a directory containing training metrics, generate SVF graphs and check that the metrics are not getting worse than before.\n\"\"\"\n\nimport argparse\nimport json\nimport logging\nimport sys\nfrom collections import defaultdict\nfrom datetime import datetime, timezone\nfrom pathlib import Path\nfrom typing import Dict\n\nimport matplotlib.dates as mdates\nimport matplotlib.pyplot as plt\n\nLOGGER = logging.getLogger(__name__)\n\nlogging.basicConfig(level=logging.INFO)\n\n# If the latest metric point is 5% lower than the previous one, show a warning and exit with 1.\nWARNING_THRESHOLD = 0.95\n\nREPORT_METRICS = [\"accuracy\", \"precision\", \"recall\"]\n\n\ndef plot_graph(\n model_name: str,\n metric_name: str,\n values_dict: Dict[datetime, float],\n output_directory: Path,\n) -> bool:\n sorted_metrics = sorted(values_dict.items())\n x, y = zip(*sorted_metrics)\n\n # Compute the threshold\n if len(y) >= 2:\n before_last_value = y[-2]\n else:\n before_last_value = y[-1]\n metric_threshold = before_last_value * WARNING_THRESHOLD\n\n figure = plt.figure()\n axes = plt.axes()\n\n # Formatting of the figure\n figure.autofmt_xdate()\n axes.fmt_xdata = mdates.DateFormatter(\"%Y-%m-%d-%H-%M\")\n axes.set_title(f\"{model_name} {metric_name}\")\n\n # Display threshold\n axes.axhline(y=metric_threshold, linestyle=\"--\", color=\"red\")\n plt.annotate(\n \"{:.4f}\".format(metric_threshold),\n (x[-1], metric_threshold),\n textcoords=\"offset points\", # how to position the text\n xytext=(-10, 10), # distance from text to points (x,y)\n ha=\"center\",\n color=\"red\",\n )\n\n # Display point values\n for single_x, single_y in zip(x, y):\n label = \"{:.4f}\".format(single_y)\n\n plt.annotate(\n label,\n (single_x, single_y),\n textcoords=\"offset points\",\n xytext=(0, 10),\n ha=\"center\",\n )\n\n axes.plot_date(x, y, marker=\".\", fmt=\"-\")\n\n output_file_path = output_directory.resolve() / f\"{model_name}_{metric_name}.svg\"\n LOGGER.info(\"Saving %s figure\", output_file_path)\n plt.savefig(output_file_path)\n\n plt.close(figure)\n\n # Check if the threshold has been crossed\n return y[-1] < metric_threshold\n\n\ndef analyze_metrics(metrics_directory: str, output_directory: str):\n root = Path(metrics_directory)\n\n metrics: Dict[str, Dict[str, Dict[datetime, float]]] = defaultdict(\n lambda: defaultdict(dict)\n )\n\n threshold_ever_crossed = False\n\n for metric_file_path in root.glob(\"metric*.json\"):\n\n # Load the metric\n with open(metric_file_path, \"r\") as metric_file:\n metric = json.load(metric_file)\n\n # Get the model, date and version from the file\n # TODO: Might be better storing it in the file\n file_path_parts = metric_file_path.stem.split(\"_\")\n\n assert file_path_parts[:5] == [\"metric\", \"project\", \"relman\", \"bugbug\", \"train\"]\n model_name = file_path_parts[5]\n assert file_path_parts[6:8] == [\"per\", \"date\"]\n date_parts = list(map(int, file_path_parts[8:14]))\n date = datetime(\n date_parts[0],\n date_parts[1],\n date_parts[2],\n date_parts[3],\n date_parts[4],\n date_parts[5],\n tzinfo=timezone.utc,\n )\n # version = file_path_parts[14:] # TODO: Use version\n\n # Then process the report\n for key, value in metric[\"report\"][\"average\"].items():\n if key not in REPORT_METRICS:\n continue\n\n metrics[model_name][key][date] = value\n\n # Also process the test_* metrics\n for key, value in metric.items():\n if not key.startswith(\"test_\"):\n 
continue\n\n metrics[model_name][f\"{key}_mean\"][date] = value[\"mean\"]\n metrics[model_name][f\"{key}_std\"][date] = value[\"std\"]\n\n for model_name in metrics:\n for metric_name, values in metrics[model_name].items():\n threshold_crossed = plot_graph(\n model_name, metric_name, values, Path(output_directory)\n )\n\n if threshold_crossed:\n LOGGER.warning(\n \"Last metric %r for model %s is 5%% worse than the previous one\",\n metric_name,\n model_name,\n )\n\n threshold_ever_crossed = threshold_ever_crossed or threshold_crossed\n\n if threshold_ever_crossed:\n sys.exit(1)\n\n\ndef main():\n parser = argparse.ArgumentParser(description=__doc__)\n\n parser.add_argument(\n \"metrics_directory\",\n metavar=\"metrics-directory\",\n help=\"In which directory the script can find the metrics JSON files\",\n )\n parser.add_argument(\n \"output_directory\",\n metavar=\"output-directory\",\n help=\"In which directory the script will save the generated graphs\",\n )\n\n args = parser.parse_args()\n\n analyze_metrics(args.metrics_directory, args.output_directory)\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "scripts/analyze_training_metrics.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n# This Source Code Form is subject to the terms of the Mozilla Public\n# License, v. 2.0. If a copy of the MPL was not distributed with this file,\n# You can obtain one at http://mozilla.org/MPL/2.0/.\n\"\"\" Given a directory containing training metrics, generate SVF graphs and check that the metrics are not getting worse than before.\n\"\"\"\n\nimport argparse\nimport json\nimport logging\nimport sys\nfrom collections import defaultdict\nfrom datetime import datetime, timezone\nfrom pathlib import Path\nfrom typing import Dict\n\nimport matplotlib.dates as mdates\nimport matplotlib.pyplot as plt\n\nLOGGER = logging.getLogger(__name__)\n\nlogging.basicConfig(level=logging.INFO)\n\n# By default, if the latest metric point is 5% lower than the previous one, show a warning and exit\n# with 1.\nWARNING_THRESHOLD = 0.95\n\nREPORT_METRICS = [\"accuracy\", \"precision\", \"recall\"]\n\n\ndef plot_graph(\n model_name: str,\n metric_name: str,\n values_dict: Dict[datetime, float],\n output_directory: Path,\n warning_threshold: float,\n) -> bool:\n sorted_metrics = sorted(values_dict.items())\n x, y = zip(*sorted_metrics)\n\n # Compute the threshold\n if len(y) >= 2:\n before_last_value = y[-2]\n else:\n before_last_value = y[-1]\n metric_threshold = before_last_value * warning_threshold\n\n figure = plt.figure()\n axes = plt.axes()\n\n # Formatting of the figure\n figure.autofmt_xdate()\n axes.fmt_xdata = mdates.DateFormatter(\"%Y-%m-%d-%H-%M\")\n axes.set_title(f\"{model_name} {metric_name}\")\n\n # Display threshold\n axes.axhline(y=metric_threshold, linestyle=\"--\", color=\"red\")\n plt.annotate(\n \"{:.4f}\".format(metric_threshold),\n (x[-1], metric_threshold),\n textcoords=\"offset points\", # how to position the text\n xytext=(-10, 10), # distance from text to points (x,y)\n ha=\"center\",\n color=\"red\",\n )\n\n # Display point values\n for single_x, single_y in zip(x, y):\n label = \"{:.4f}\".format(single_y)\n\n plt.annotate(\n label,\n (single_x, single_y),\n textcoords=\"offset points\",\n xytext=(0, 10),\n ha=\"center\",\n )\n\n axes.plot_date(x, y, marker=\".\", fmt=\"-\")\n\n output_file_path = output_directory.resolve() / f\"{model_name}_{metric_name}.svg\"\n LOGGER.info(\"Saving %s figure\", output_file_path)\n plt.savefig(output_file_path)\n\n plt.close(figure)\n\n # Check if 
the threshold has been crossed\n return y[-1] < metric_threshold\n\n\ndef analyze_metrics(\n metrics_directory: str, output_directory: str, warning_threshold: float\n):\n root = Path(metrics_directory)\n\n metrics: Dict[str, Dict[str, Dict[datetime, float]]] = defaultdict(\n lambda: defaultdict(dict)\n )\n\n threshold_ever_crossed = False\n\n for metric_file_path in root.glob(\"metric*.json\"):\n\n # Load the metric\n with open(metric_file_path, \"r\") as metric_file:\n metric = json.load(metric_file)\n\n # Get the model, date and version from the file\n # TODO: Might be better storing it in the file\n file_path_parts = metric_file_path.stem.split(\"_\")\n\n assert file_path_parts[:5] == [\"metric\", \"project\", \"relman\", \"bugbug\", \"train\"]\n model_name = file_path_parts[5]\n assert file_path_parts[6:8] == [\"per\", \"date\"]\n date_parts = list(map(int, file_path_parts[8:14]))\n date = datetime(\n date_parts[0],\n date_parts[1],\n date_parts[2],\n date_parts[3],\n date_parts[4],\n date_parts[5],\n tzinfo=timezone.utc,\n )\n # version = file_path_parts[14:] # TODO: Use version\n\n # Then process the report\n for key, value in metric[\"report\"][\"average\"].items():\n if key not in REPORT_METRICS:\n continue\n\n metrics[model_name][key][date] = value\n\n # Also process the test_* metrics\n for key, value in metric.items():\n if not key.startswith(\"test_\"):\n continue\n\n metrics[model_name][f\"{key}_mean\"][date] = value[\"mean\"]\n metrics[model_name][f\"{key}_std\"][date] = value[\"std\"]\n\n for model_name in metrics:\n for metric_name, values in metrics[model_name].items():\n threshold_crossed = plot_graph(\n model_name,\n metric_name,\n values,\n Path(output_directory),\n warning_threshold,\n )\n\n diff = (1 - warning_threshold) * 100\n\n if threshold_crossed:\n LOGGER.warning(\n \"Last metric %r for model %s is %f%% worse than the previous one\",\n metric_name,\n model_name,\n diff,\n )\n\n threshold_ever_crossed = threshold_ever_crossed or threshold_crossed\n\n if threshold_ever_crossed:\n sys.exit(1)\n\n\ndef main():\n parser = argparse.ArgumentParser(description=__doc__)\n\n parser.add_argument(\n \"metrics_directory\",\n metavar=\"metrics-directory\",\n help=\"In which directory the script can find the metrics JSON files\",\n )\n parser.add_argument(\n \"output_directory\",\n metavar=\"output-directory\",\n help=\"In which directory the script will save the generated graphs\",\n )\n parser.add_argument(\n \"--warning_threshold\",\n default=WARNING_THRESHOLD,\n type=float,\n help=\"If the last metric value is below the previous one*warning_threshold, fails. Default to 0.95\",\n )\n\n args = parser.parse_args()\n\n analyze_metrics(\n args.metrics_directory, args.output_directory, args.warning_threshold\n )\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "scripts/analyze_training_metrics.py"}]} | 1,961 | 686 |
gh_patches_debug_5795 | rasdani/github-patches | git_diff | sunpy__sunpy-7148 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Fido query for GBM data only matches files with v00.pha and misses those that have been version updated (e.g. v01.pha)
### Describe the bug
The way the source client for GBM is currently implemented, the scraper only looks for files ending with `v00.pha`, and hence won't return data if the file is instead `v01.pha`.
I'll make a PR now
### To Reproduce
```python
>>> from sunpy.net import Fido, attrs as a
>>> res_gbm = Fido.search(a.Time("2014-10-14 00:00", "2014-10-15"), a.Instrument.gbm, a.Resolution.cspec, a.Detector("n5"))
Results from 1 Provider:
1 Results from the GBMClient:
Source: https://heasarc.gsfc.nasa.gov/FTP/fermi/data/gbm/daily
Start Time End Time Instrument Physobs Source Provider Resolution Detector
----------------------- ----------------------- ---------- ------- ------ -------- ---------- --------
2014-10-15 00:00:00.000 2014-10-15 23:59:59.999 GBM flux FERMI NASA cspec n5
```
However, data exists for the 14th but is not returned here.
--- END ISSUE ---
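As an illustrative aside (not part of the original issue), the following minimal sketch shows why a version-pinned filename pattern drops version-bumped files. It assumes Python's `re` module, a simplified pattern with `\d{6}` standing in for the client's date placeholders, and made-up example filenames:
```python
import re

# Pattern pinned to version v00, mirroring the behaviour described above.
pinned = r"glg_(\w){5}_(\w){2}_\d{6}_v00\.pha"
# Relaxed pattern that accepts any version suffix.
relaxed = r"glg_(\w){5}_(\w){2}_\d{6}_.*\.pha"

for name in ("glg_cspec_n5_141014_v00.pha", "glg_cspec_n5_141014_v01.pha"):
    print(name, bool(re.fullmatch(pinned, name)), bool(re.fullmatch(relaxed, name)))

# Expected output:
# glg_cspec_n5_141014_v00.pha True True
# glg_cspec_n5_141014_v01.pha False True
```
Only the relaxed pattern matches the `v01` file, which is consistent with the behaviour reported above.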
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sunpy/net/dataretriever/sources/fermi_gbm.py`
Content:
```
1 from sunpy.net.dataretriever import GenericClient
2
3 __all__ = ['GBMClient']
4
5
6 class GBMClient(GenericClient):
7 """
8 Provides access to data from the Gamma-Ray Burst Monitor (GBM) instrument
9 on board the Fermi satellite.
10
11 Although GBMs primary objective is to detect gamma-ray bursts,
12 it provides high quality high energy solar flare observations.
13
14 The instrument consists of 12 Sodium Iodide (NaI) scintillation
15 detectors, which are sensitive to an energy range of 4keV to 1MeV.
16 At any one time, 6 of the NaI detectors are Sunward facing.
17 The detectors are numbered 'n1' to 'n11'. This client supports the user
18 to choose which detector to use through the `a.Detector <sunpy.net.attrs.Detector>` attribute.
19 The default detector is 'n5'.
20
21 The GBM data comes in daily version files in two formats:
22
23 * CSPEC - counts accumulated every 4.096 seconds in 128 energy channels for each detector.
24 * CTIME - counts accumulated every 0.256 seconds in 8 energy channels
25
26 Both of which can be accessed through the attrs `a.Resolution <sunpy.net.attrs.Resolution>`.
27 The default data type is CSPEC unless the user defines.
28
29 Examples
30 --------
31 >>> from sunpy.net import Fido, attrs as a
32 >>> res = Fido.search(a.Time('2015-06-21 00:00', '2015-06-23 23:59'),
33 ... a.Instrument.gbm, a.Detector.n3,
34 ... a.Resolution.ctime) # doctest: +REMOTE_DATA
35 >>> res # doctest: +REMOTE_DATA
36 <sunpy.net.fido_factory.UnifiedResponse object at ...>
37 Results from 1 Provider:
38 <BLANKLINE>
39 3 Results from the GBMClient:
40 Source: https://heasarc.gsfc.nasa.gov/FTP/fermi/data/gbm/daily
41 <BLANKLINE>
42 Start Time End Time ... Resolution Detector
43 ----------------------- ----------------------- ... ---------- --------
44 2015-06-21 00:00:00.000 2015-06-21 23:59:59.999 ... ctime n3
45 2015-06-22 00:00:00.000 2015-06-22 23:59:59.999 ... ctime n3
46 2015-06-23 00:00:00.000 2015-06-23 23:59:59.999 ... ctime n3
47 <BLANKLINE>
48 <BLANKLINE>
49
50 """
51 baseurl = r'https://heasarc.gsfc.nasa.gov/FTP/fermi/data/gbm/daily/%Y/%m/%d/current/glg_(\w){5}_(\w){2}_%y%m%d_v00.pha'
52 pattern = '{}/daily/{year:4d}/{month:2d}/{day:2d}/current/glg_{Resolution:5}_{Detector:2}_{:6d}{}'
53
54 @property
55 def info_url(self):
56 return 'https://heasarc.gsfc.nasa.gov/FTP/fermi/data/gbm/daily'
57
58 @classmethod
59 def register_values(cls):
60 from sunpy.net import attrs
61 adict = {attrs.Instrument: [('GBM', 'Gamma-Ray Burst Monitor on board the Fermi satellite.')],
62 attrs.Physobs: [('flux', 'a measure of the amount of radiation received by an object from a given source.')],
63 attrs.Source: [('FERMI', 'The Fermi Gamma-ray Space Telescope.')],
64 attrs.Provider: [('NASA', 'The National Aeronautics and Space Administration.')],
65 attrs.Resolution: [
66 ("cspec", "CSPEC 128 channel spectra every 4.096 seconds."),
67 ("ctime", "CTIME provides 8 channel spectra every 0.256 seconds.")],
68 attrs.Detector: [(f"n{x}", f"GBM Detector short name for the detector NAI_{x:02}") for x in range(12)]}
69 return adict
70
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/sunpy/net/dataretriever/sources/fermi_gbm.py b/sunpy/net/dataretriever/sources/fermi_gbm.py
--- a/sunpy/net/dataretriever/sources/fermi_gbm.py
+++ b/sunpy/net/dataretriever/sources/fermi_gbm.py
@@ -48,7 +48,8 @@
<BLANKLINE>
"""
- baseurl = r'https://heasarc.gsfc.nasa.gov/FTP/fermi/data/gbm/daily/%Y/%m/%d/current/glg_(\w){5}_(\w){2}_%y%m%d_v00.pha'
+
+ baseurl = r'https://heasarc.gsfc.nasa.gov/FTP/fermi/data/gbm/daily/%Y/%m/%d/current/glg_(\w){5}_(\w){2}_%y%m%d_.*\.pha'
pattern = '{}/daily/{year:4d}/{month:2d}/{day:2d}/current/glg_{Resolution:5}_{Detector:2}_{:6d}{}'
@property
| {"golden_diff": "diff --git a/sunpy/net/dataretriever/sources/fermi_gbm.py b/sunpy/net/dataretriever/sources/fermi_gbm.py\n--- a/sunpy/net/dataretriever/sources/fermi_gbm.py\n+++ b/sunpy/net/dataretriever/sources/fermi_gbm.py\n@@ -48,7 +48,8 @@\n <BLANKLINE>\n \n \"\"\"\n- baseurl = r'https://heasarc.gsfc.nasa.gov/FTP/fermi/data/gbm/daily/%Y/%m/%d/current/glg_(\\w){5}_(\\w){2}_%y%m%d_v00.pha'\n+\n+ baseurl = r'https://heasarc.gsfc.nasa.gov/FTP/fermi/data/gbm/daily/%Y/%m/%d/current/glg_(\\w){5}_(\\w){2}_%y%m%d_.*\\.pha'\n pattern = '{}/daily/{year:4d}/{month:2d}/{day:2d}/current/glg_{Resolution:5}_{Detector:2}_{:6d}{}'\n \n @property\n", "issue": "Fido query for GBM data only matches files with v00.pha and misses those that have been version updated (e.g. v01.pha)\n### Describe the bug\r\n\r\nThe way the source client for GBM is currently implemented, scraper only looks for files ending with `v00.pha`, and hence wont return data if its instead `v01.pha`\r\n\r\nI'll make a PR now\r\n\r\n\r\n\r\n### To Reproduce\r\n\r\n```python\r\n>>> from sunpy.net import Fido, attrs as a\r\n>>> res_gbm = Fido.search(a.Time(\"2014-10-14 00:00\", \"2014-10-15\"), a.Instrument.gbm, a.Resolution.cspec, a.Detector(\"n5\"))\r\nResults from 1 Provider:\r\n\r\n1 Results from the GBMClient:\r\nSource: https://heasarc.gsfc.nasa.gov/FTP/fermi/data/gbm/daily\r\n\r\n Start Time End Time Instrument Physobs Source Provider Resolution Detector\r\n----------------------- ----------------------- ---------- ------- ------ -------- ---------- --------\r\n2014-10-15 00:00:00.000 2014-10-15 23:59:59.999 GBM flux FERMI NASA cspec n5\r\n```\r\nhowever data exists for the 14th but not returned here.\r\n\r\n\n", "before_files": [{"content": "from sunpy.net.dataretriever import GenericClient\n\n__all__ = ['GBMClient']\n\n\nclass GBMClient(GenericClient):\n \"\"\"\n Provides access to data from the Gamma-Ray Burst Monitor (GBM) instrument\n on board the Fermi satellite.\n\n Although GBMs primary objective is to detect gamma-ray bursts,\n it provides high quality high energy solar flare observations.\n\n The instrument consists of 12 Sodium Iodide (NaI) scintillation\n detectors, which are sensitive to an energy range of 4keV to 1MeV.\n At any one time, 6 of the NaI detectors are Sunward facing.\n The detectors are numbered 'n1' to 'n11'. This client supports the user\n to choose which detector to use through the `a.Detector <sunpy.net.attrs.Detector>` attribute.\n The default detector is 'n5'.\n\n The GBM data comes in daily version files in two formats:\n\n * CSPEC - counts accumulated every 4.096 seconds in 128 energy channels for each detector.\n * CTIME - counts accumulated every 0.256 seconds in 8 energy channels\n\n Both of which can be accessed through the attrs `a.Resolution <sunpy.net.attrs.Resolution>`.\n The default data type is CSPEC unless the user defines.\n\n Examples\n --------\n >>> from sunpy.net import Fido, attrs as a\n >>> res = Fido.search(a.Time('2015-06-21 00:00', '2015-06-23 23:59'),\n ... a.Instrument.gbm, a.Detector.n3,\n ... a.Resolution.ctime) # doctest: +REMOTE_DATA\n >>> res # doctest: +REMOTE_DATA\n <sunpy.net.fido_factory.UnifiedResponse object at ...>\n Results from 1 Provider:\n <BLANKLINE>\n 3 Results from the GBMClient:\n Source: https://heasarc.gsfc.nasa.gov/FTP/fermi/data/gbm/daily\n <BLANKLINE>\n Start Time End Time ... Resolution Detector\n ----------------------- ----------------------- ... ---------- --------\n 2015-06-21 00:00:00.000 2015-06-21 23:59:59.999 ... 
ctime n3\n 2015-06-22 00:00:00.000 2015-06-22 23:59:59.999 ... ctime n3\n 2015-06-23 00:00:00.000 2015-06-23 23:59:59.999 ... ctime n3\n <BLANKLINE>\n <BLANKLINE>\n\n \"\"\"\n baseurl = r'https://heasarc.gsfc.nasa.gov/FTP/fermi/data/gbm/daily/%Y/%m/%d/current/glg_(\\w){5}_(\\w){2}_%y%m%d_v00.pha'\n pattern = '{}/daily/{year:4d}/{month:2d}/{day:2d}/current/glg_{Resolution:5}_{Detector:2}_{:6d}{}'\n\n @property\n def info_url(self):\n return 'https://heasarc.gsfc.nasa.gov/FTP/fermi/data/gbm/daily'\n\n @classmethod\n def register_values(cls):\n from sunpy.net import attrs\n adict = {attrs.Instrument: [('GBM', 'Gamma-Ray Burst Monitor on board the Fermi satellite.')],\n attrs.Physobs: [('flux', 'a measure of the amount of radiation received by an object from a given source.')],\n attrs.Source: [('FERMI', 'The Fermi Gamma-ray Space Telescope.')],\n attrs.Provider: [('NASA', 'The National Aeronautics and Space Administration.')],\n attrs.Resolution: [\n (\"cspec\", \"CSPEC 128 channel spectra every 4.096 seconds.\"),\n (\"ctime\", \"CTIME provides 8 channel spectra every 0.256 seconds.\")],\n attrs.Detector: [(f\"n{x}\", f\"GBM Detector short name for the detector NAI_{x:02}\") for x in range(12)]}\n return adict\n", "path": "sunpy/net/dataretriever/sources/fermi_gbm.py"}], "after_files": [{"content": "from sunpy.net.dataretriever import GenericClient\n\n__all__ = ['GBMClient']\n\n\nclass GBMClient(GenericClient):\n \"\"\"\n Provides access to data from the Gamma-Ray Burst Monitor (GBM) instrument\n on board the Fermi satellite.\n\n Although GBMs primary objective is to detect gamma-ray bursts,\n it provides high quality high energy solar flare observations.\n\n The instrument consists of 12 Sodium Iodide (NaI) scintillation\n detectors, which are sensitive to an energy range of 4keV to 1MeV.\n At any one time, 6 of the NaI detectors are Sunward facing.\n The detectors are numbered 'n1' to 'n11'. This client supports the user\n to choose which detector to use through the `a.Detector <sunpy.net.attrs.Detector>` attribute.\n The default detector is 'n5'.\n\n The GBM data comes in daily version files in two formats:\n\n * CSPEC - counts accumulated every 4.096 seconds in 128 energy channels for each detector.\n * CTIME - counts accumulated every 0.256 seconds in 8 energy channels\n\n Both of which can be accessed through the attrs `a.Resolution <sunpy.net.attrs.Resolution>`.\n The default data type is CSPEC unless the user defines.\n\n Examples\n --------\n >>> from sunpy.net import Fido, attrs as a\n >>> res = Fido.search(a.Time('2015-06-21 00:00', '2015-06-23 23:59'),\n ... a.Instrument.gbm, a.Detector.n3,\n ... a.Resolution.ctime) # doctest: +REMOTE_DATA\n >>> res # doctest: +REMOTE_DATA\n <sunpy.net.fido_factory.UnifiedResponse object at ...>\n Results from 1 Provider:\n <BLANKLINE>\n 3 Results from the GBMClient:\n Source: https://heasarc.gsfc.nasa.gov/FTP/fermi/data/gbm/daily\n <BLANKLINE>\n Start Time End Time ... Resolution Detector\n ----------------------- ----------------------- ... ---------- --------\n 2015-06-21 00:00:00.000 2015-06-21 23:59:59.999 ... ctime n3\n 2015-06-22 00:00:00.000 2015-06-22 23:59:59.999 ... ctime n3\n 2015-06-23 00:00:00.000 2015-06-23 23:59:59.999 ... 
ctime n3\n <BLANKLINE>\n <BLANKLINE>\n\n \"\"\"\n\n baseurl = r'https://heasarc.gsfc.nasa.gov/FTP/fermi/data/gbm/daily/%Y/%m/%d/current/glg_(\\w){5}_(\\w){2}_%y%m%d_.*\\.pha'\n pattern = '{}/daily/{year:4d}/{month:2d}/{day:2d}/current/glg_{Resolution:5}_{Detector:2}_{:6d}{}'\n\n @property\n def info_url(self):\n return 'https://heasarc.gsfc.nasa.gov/FTP/fermi/data/gbm/daily'\n\n @classmethod\n def register_values(cls):\n from sunpy.net import attrs\n adict = {attrs.Instrument: [('GBM', 'Gamma-Ray Burst Monitor on board the Fermi satellite.')],\n attrs.Physobs: [('flux', 'a measure of the amount of radiation received by an object from a given source.')],\n attrs.Source: [('FERMI', 'The Fermi Gamma-ray Space Telescope.')],\n attrs.Provider: [('NASA', 'The National Aeronautics and Space Administration.')],\n attrs.Resolution: [\n (\"cspec\", \"CSPEC 128 channel spectra every 4.096 seconds.\"),\n (\"ctime\", \"CTIME provides 8 channel spectra every 0.256 seconds.\")],\n attrs.Detector: [(f\"n{x}\", f\"GBM Detector short name for the detector NAI_{x:02}\") for x in range(12)]}\n return adict\n", "path": "sunpy/net/dataretriever/sources/fermi_gbm.py"}]} | 1,727 | 250 |